# Exposure Assessment for Wearable Patch Antenna Arrays at Millimeter Waves

Silvia Gallucci, Marta Bonato, Martina Benini, Marta Parazzini, Maxim Zhadobov

2023-03-21 · http://arxiv.org/abs/2303.11690v1
###### Abstract
With the spread of wearable systems and the implementation of the forthcoming 5G technology in many devices, assessing human exposure to millimeter waves under typical wearable usage conditions is crucial and timely. At such frequencies, power absorption becomes strongly superficial and involves only the most superficial tissue of the human body, i.e., the skin. Several models describing the layered structure of the skin are available in the literature but, to date, there is no consensus on which skin model to employ in computational exposure assessment studies. For these reasons, the present work simulated four models of the most superficial tissues, with different degrees of detail, exposed to two wearable patch antennas at different frequencies, i.e., 28 GHz and 39 GHz. This makes it possible to investigate the impact that choosing a layered model rather than a homogeneous one has on the exposure. Simulations were performed through the FDTD method, implemented in the Sim4Life platform, and the exposure was assessed with the absorbed power density averaged over 1 cm\({}^{2}\) and 4 cm\({}^{2}\) (S\({}_{\text{ab}}\)). The data showed that the homogeneous model underestimates the peak value of S\({}_{\text{ab}}\) obtained for multi-layer models in the stratum corneum (by 14% to 21%, depending on the number of layers of the model and the frequency). This finding was confirmed by an analytical approach with two impinging TEM-polarized plane waves at normal incidence, at 28 GHz and 39 GHz respectively. Conversely, there are no substantial differences in the exposure levels between the layered models.
Keywords: Wearable devices; computational dosimetry; skin models; millimeter waves.
## 1 Introduction
The use of wearable technologies is growing steadily. They are very attractive for various applications, spanning from healthcare to the smart home [1, 2]. Wearable technology is based on the concept of the Body-Area Network (BAN), consisting of a network of sensors/actuators/antennas able to exchange information about the user's health condition, position, environment, and so on with each other and with an external gateway [3]. This involves communication between the sensors, the central BAN unit, and an external node, e.g., a smartphone, at two levels: intra-BAN and inter-BAN [4]. The communication protocol of the wireless BAN (WBAN) is defined in the IEEE 802.15.6 standard [5], in which several frequency bands are mentioned. The operating frequencies include the 2.4 GHz Industrial-Scientific-Medical (ISM) band, which became standard for such systems due to the spread of the Bluetooth, BLE, and Zigbee protocols [6].
Because wearable devices are typically placed at a very short distance from the human body, the assessment of human exposure to the electromagnetic field (EMF) emitted by these devices is needed. Indeed, there is increasing public concern regarding safety issues for new and emerging wireless systems [7].
When the new 5G frequency bands, particularly the millimeter waves (mmWaves), are combined with wearable devices, exposure assessment becomes even more necessary. In this regard, the literature includes some studies on the design of novel mmWave wearable antennas in which human exposure is addressed (see, e.g., [8, 9]), but these studies do not appear exhaustive, either in terms of the quantities used to assess the exposure or of the human models employed.
Since, in the upper part of the microwave spectrum, the absorbed power is confined to the superficial tissues, the use of an appropriate tissue model is crucial for characterizing antenna/body interactions numerically, as it directly impacts the reliability and accuracy of the results. The anatomical human models typically used in dosimetric studies (e.g., [10]) do not represent the skin structure in sufficient detail for mmWave dosimetry. Indeed, in this type of model, the skin is typically modelled as a homogeneous tissue, disregarding its heterogeneous structure [11]. For this reason, stratified multi-layered cutaneous models were introduced in the literature [7, 12].
Alekseev et al. [13] derived the dielectric properties of two skin layers at various locations on the body, in the frequency range 37-78 GHz: the stratum corneum (SC), the external layer of skin, and the viable epidermis and dermis, the inner ones. In another study [14], the same group compared three stratified skin models exposed to a plane wave in the frequency range 30-300 GHz: the first one made of dermis, the second one made of SC and viable epidermis and dermis, and the third one based on three layers (SC, viable epidermis and dermis, and fat). In this study, the authors also investigated the impact of the skin thickness depending on the on-body location (forearm and palm). The results demonstrated that there are no differences in power density and specific absorption rate (SAR) between the homogeneous model and a layered model with a thin (0.015 mm) SC. Sasaki et al. [15] used a skin model where viable epidermis and dermis were modelled as two separate layers, followed by a subcutaneous adipose tissue and a muscle layer. By means of the Monte Carlo method, they showed that, for a plane wave with frequencies ranging from 10 GHz to 1 THz, thickness variation affects the power absorption. Sacco et al. [16] considered the age-dependent variations of the skin permittivity and thickness. The models consisted of four layers: SC, viable epidermis and dermis, fat, and muscle. The results showed that the skin thickness variations affect the exposure for the lowest frequency (i.e., 26 GHz), in particular for people <25 years old; in general, considering both the analysed variations, the power transmission coefficient increases with age for both the considered frequencies. Christ et al. [17] focused on the temperature increase induced in a five-layered model: SC, viable epidermis, dermis, fat, and muscle. The results showed that the homogeneous model underestimates the induced temperature increase by more than a factor of three with respect to the layered one, when the plane wave frequency (6-100 GHz) and the thickness of the SC (10-700 \(\upmu\)m) are varied. Later, Christ et al. [18] demonstrated that the homogeneous model with dermis properties underestimates the transmission coefficient at the air/skin interface (T(\(\theta\)), where \(\theta\) is the incidence angle of the plane wave) compared
to two multi-layered models made of SC, viable epidermis and dermis, fat, and muscle, when the model is impinged by a plane wave in the frequency range 6-300 GHz. These two layered models differ from each other in the thickness of the outer layer, i.e., the SC. Finally, Ziskin et al. [12] calculated the reflection coefficient, the power deposition, and the temperature increase in two different skin models: a three- and a four-layered one. The models were tested with an incident plane wave at frequencies in the 37-78 GHz range, and the results obtained by means of both the analytical and the computational approaches revealed that the power absorption is strongly localized in the most superficial layers, as expected. Regarding the type of model used, they stated that, in studies at such frequencies, the relevance of multi-layered models that also include the inner tissues (i.e., fat and muscle) is linked to thermal analysis, since heat propagates deeper than the EMF.
In light of the abovementioned literature studies, it is clear that there is no consensus in the literature on the skin modelling approach to employ in computational exposure assessment studies. Moreover, even the international organizations responsible for radiation protection regulations do not refer to a standard model for the skin. Indeed, the ICNIRP guidelines [19] refer to the absorbed power density as a dosimetric quantity at mmWaves without defining either the appropriate model to reproduce the skin or the skin layer to be considered for the comparison with the exposure limits. On the other hand, IEEE Std. C95.1 [20] suggests using the epithelial power density as the dosimetric quantity between 6 GHz and 300 GHz, where "epithelial" refers to the SC. In their recent review paper, Hirata et al. [7] reaffirmed the importance of using appropriate human models, because improving their degree of detail can make the results obtained through the computational approach even more reliable. This is valid especially at mmWave frequencies, where a realistic representation of the skin structure could strongly impact the exposure assessment.
For all these reasons, further investigations are needed to better understand how the exposure results vary with the approach employed to model the cutaneous tissue. To the best of our knowledge, the literature lacks exposure assessment studies at mmWave frequencies, and particularly for wearable antennas tuned to such frequencies, that evaluate skin tissue models with different stratifications.
This paper fits in this context, aiming to simulate different planar layered models of the most superficial tissues, with different degrees of detail in the description of the cutaneous tissue, exposed to two wearable patch antenna arrays at different frequencies, both belonging to the 5G bands, i.e., 28 GHz and 39 GHz. This allowed us to investigate the impact that the choice of a layered model rather than the homogeneous one has on the exposure assessment. More specifically, four models with increasing complexity were simulated: from a homogeneous model with dermis properties to a four-layered model composed of the SC, dermis, fat, and muscle. For each model, the exposure assessment was performed both with a computational approach based on the FDTD method and with an analytical approach estimating the absorbed power density when the skin is hit by a normally incident plane wave at 28 GHz and 39 GHz.
## 2 Materials & Methods
This section is organized as follows. First, the tissue models are introduced in terms of their geometrical and electromagnetic properties. Then antenna design is presented, and the numerical method and analytical approach are described.
### Anatomical Models
We considered four superficial tissue models of increasing complexity, from a homogeneous one to a stratified four-layered model. In more detail, the simulated models are: (i) a homogeneous single layer with dermis properties, (ii) a two-layered model, made of SC and dermis, (iii) a three-layered model made of SC, dermis, and fat, and (iv) a four-layered model, composed of SC, dermis, fat, and muscle [16]. The dielectric properties of each layer (Table 1) were chosen according to the data found in the literature at 30 GHz and 40 GHz [12, 17] and assigned here to 28 GHz and 39 GHz, respectively. The ranges of thickness of each layer were taken from the literature [12, 17]. More specifically, the thickness of the SC was chosen within the range of dry "thin skin", since most body regions belong to this category, with the exception of the palms and the soles of the feet. Table 2 reports the thicknesses used here: those of the fat, muscle, and viable epidermis and dermis layers belong to the realistic ranges, whereas the thickness of the stratum corneum was chosen within the ranges of variation found in the literature in order to optimize the models to have the same power transmission coefficient. The maximum difference among the multi-layered models was 4% for an impinging TEM-polarized plane wave with the incidence angle (\(\theta\)) varying from 0\({}^{\circ}\) up to 80\({}^{\circ}\).
The overall dimensions of the models are 150 x 150 mm, and the depth was chosen to be large enough that the possible contribution of the reflection at the deepest interface can be neglected.
### Antenna Models
Wearable antennas have to comply with constraints in terms of compact size, light weight, and low profile [21]. To satisfy these requirements, the two simulated antennas, inspired by Chahat et al. [22] and redesigned at 28 GHz and 39 GHz, are microstrip-fed four-patch antenna arrays. They consist of three different layers: ground plane, radiative element, and RT Duroid 5880 substrate (\(\varepsilon_{r}\) = 2.2, \(\sigma\) = 5\(\cdot\)10\({}^{-4}\) S/m). The overall antenna dimensions and inter-element distances are chosen to resonate at 28 GHz and 39 GHz, and they are detailed in Table 3. Figure 1 shows the geometry of the antennas.
### Computational Approach
The exposure was assessed for the antenna located 2 mm from the model. Fig. 2 represents the positioning of the antenna with respect to the human model. The exposure scenario was the same for both antennas and the accepted power was set in both cases to 100 mW.
The EMF was computed using the Finite-Difference Time-Domain (FDTD) solver. Briefly, the FDTD method involves both spatial and temporal discretization of the electric and magnetic fields over a period of time and a specific spatial domain delimited by the boundary conditions. Typically, the minimum spatial sampling is 10-20 samples per wavelength, and the temporal sampling is sufficiently small to maintain the stability of the algorithm [23]. All the simulations were performed in the software platform Sim4Life v.7 (ZMT Zurich Med Tech AG, Zurich, Switzerland, www.zmt.swiss, accessed on 9 February 2023).
The computational domain was discretized with an automatic non-uniform grid for the antenna and the surroundings of the phantom, with a sub-wavelength resolution of around 15 samples per wavelength. We set the mesh cell size to vary from 0.06 mm to 0.33 mm, depending on the dielectric properties of the model, in order to correctly discretize all the tissues and guarantee compliance with the \(\lambda\)/10 constraint imposed by the FDTD method for its stability. This resulted in a total of 8.537 and 162.310 MCells at 28 and 39 GHz, respectively. The computational domain was truncated with 8 layers of perfectly matched layer (PML) material, and 10 cells of free space were added around the computational domain at its boundaries.
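As a rough numerical check of this meshing constraint, the maximum admissible cell size follows directly from the in-tissue wavelength; the sketch below is for illustration only, and the relative permittivities are placeholders rather than the values of Table 1.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum (m/s)

def max_cell_size_mm(freq_hz: float, eps_r: float, samples_per_wavelength: int = 10) -> float:
    """Largest FDTD cell edge (mm) satisfying the lambda/N sampling rule in a dielectric."""
    wavelength_m = C0 / (freq_hz * math.sqrt(eps_r))
    return 1e3 * wavelength_m / samples_per_wavelength

# Placeholder permittivities for a low-permittivity (fat-like) and a
# high-permittivity (skin-like) tissue at mmWave frequencies.
for freq in (28e9, 39e9):
    for eps_r in (5.0, 20.0):
        print(f"{freq/1e9:.0f} GHz, eps_r={eps_r}: max cell ~ {max_cell_size_mm(freq, eps_r):.2f} mm")
```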
All the simulations were performed on a Z8 workstation (16-core processor @ 3.8 GHz, 512 GB RAM) equipped with an NVIDIA GeForce RTX5000 graphics card. To speed up simulations, the Sim4Life GPU accelerator aXware was used, and the maximum computational time was 30 minutes.
The Absorbed Power Density (S\({}_{\text{ab}}\), W/m\({}^{2}\)) averaged over a surface of 1 cm\({}^{2}\) or 4 cm\({}^{2}\) of tissue was calculated as:
\[\mathrm{S}_{\mathrm{ab}}=\frac{1}{A}\int_{A}\mathrm{Re}\left[\mathbf{E}\times\mathbf{H}^{*}\right]\cdot\mathrm{d}\mathbf{S} \tag{1}\]
where A is the averaging surface area, d\(\mathbf{S}\) is the surface element, and Re\(\left[\mathbf{E}\times\mathbf{H}^{*}\right]\) is the real part of the Poynting vector.
According to the ICNIRP Guidelines [19], the surface area A over which this quantity has to be averaged depends on the frequency. In more detail, the S\({}_{\text{ab}}\) averaged over 1 cm\({}^{2}\) is mandatory for frequencies greater than 30 GHz, since focal beam exposure can occur, whereas for lower frequencies the S\({}_{\text{ab}}\) averaged over 4 cm\({}^{2}\) is the adopted quantity. Indeed, in the ICNIRP guidelines the additional spatial average over 1 cm\({}^{2}\) is used to ensure that the operational adverse health effect thresholds are not exceeded even over smaller regions [19]. However, since the selected frequency of 28 GHz is close to 30 GHz, we opted to extract the S\({}_{\text{ab}}\) averaged over both the 1 cm\({}^{2}\) and 4 cm\({}^{2}\) areas, for both frequencies.
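The surface averaging in (1) can be sketched numerically as follows; this is a minimal illustration assuming the real part of the normal Poynting component is already sampled on a uniform grid at the evaluation surface (the synthetic Gaussian hot spot and the grid spacing are invented for the example and are not Sim4Life outputs).

```python
import numpy as np

def peak_sab(poynting_normal: np.ndarray, dx_m: float, avg_area_cm2: float) -> float:
    """Peak absorbed power density (W/m^2) averaged over a square window of avg_area_cm2.

    poynting_normal: 2-D array of Re[E x H*] . n on the evaluation surface (W/m^2).
    dx_m: uniform grid spacing in metres.
    """
    side_m = np.sqrt(avg_area_cm2 * 1e-4)          # side of the averaging square (m)
    n = max(1, int(round(side_m / dx_m)))          # window size in cells
    # Sliding-window mean via a cumulative sum (integral image).
    p = np.pad(poynting_normal, ((1, 0), (1, 0)))
    cs = p.cumsum(axis=0).cumsum(axis=1)
    window_sum = cs[n:, n:] - cs[:-n, n:] - cs[n:, :-n] + cs[:-n, :-n]
    return float(window_sum.max() / n**2)

# Example: a localized hot spot on a 150 mm x 150 mm surface sampled at 0.5 mm.
dx = 0.5e-3
x = np.arange(300) * dx
X, Y = np.meshgrid(x, x)
field = 15.0 * np.exp(-(((X - 0.075)**2 + (Y - 0.075)**2) / (2 * 0.01**2)))
print(peak_sab(field, dx, 1.0), peak_sab(field, dx, 4.0))  # 1 cm^2 peak > 4 cm^2 peak
```

The example reproduces the trend discussed in the Results: the more localized the peak, the lower the value obtained when averaging over the larger area.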
### Analytical Approach
Figure 2: Example of the simulated scenario with the antenna positioned 2 mm away from the center of the skin model (for the sake of readability, the phantom and antenna are not to scale).
In addition to the computational approach, the present study also exposed the abovementioned superficial tissue models to a plane wave. More specifically, all the tissue models described before were exposed to TEM-polarized plane waves, the first one at 28 GHz and the second one at 39 GHz, and the reflection coefficient was calculated at the air/most-external-layer interface following the generic formula (2) for an M-layered structure:
\[\Gamma_{i}=\frac{\rho_{i}+\Gamma_{i+1}e^{-2jk_{i}l_{i}}}{1+\rho_{i}\Gamma_{i+1}e^{-2jk_{i}l_{i}}} \tag{2}\]
where i = M, M-1, ..., 1 and the recursion is initialized by \(\Gamma_{M+1}=\rho_{M+1}\). The reflection coefficient \(\Gamma\) obtained in this way was then used to calculate the absorbed power density (S\({}_{\mathrm{ab}}\)) by applying formula (3) from the ICNIRP Guidelines [19]:
\[\mathrm{S}_{\mathrm{ab}}=\left(1-|\Gamma|^{2}\right)\cdot\mathrm{S}_{\mathrm{inc}} \tag{3}\]
where S\({}_{\mathrm{inc}}\) is the incident power density, here set to 10 W/m\({}^{2}\), which is the reference level for general public whole-body exposure, averaged over 30 minutes, for frequencies ranging from 2 to 300 GHz [19].
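For illustration, the sketch below chains the recursion (2) and formula (3) at normal incidence; the permittivities, conductivities, and thicknesses of the example stack are placeholders chosen only to exercise the code, not the values of Tables 1 and 2.

```python
import numpy as np

EPS0 = 8.8541878128e-12   # vacuum permittivity (F/m)
MU0 = 4e-7 * np.pi        # vacuum permeability (H/m)

def sab_normal_incidence(layers, freq_hz, s_inc=10.0):
    """Absorbed power density S_ab = (1 - |Gamma|^2) * S_inc for a planar stack.

    layers: list of (eps_r, sigma_S_per_m, thickness_m), from the outermost layer
    inwards; the last layer is treated as a semi-infinite half-space.
    """
    w = 2 * np.pi * freq_hz
    # Complex permittivity of air plus each layer (lossy-dielectric model).
    eps = [1.0 + 0j] + [er - 1j * sig / (w * EPS0) for er, sig, _ in layers]
    eta = [np.sqrt(MU0 / (EPS0 * e)) for e in eps]      # intrinsic impedances
    k = [w * np.sqrt(MU0 * EPS0 * e) for e in eps]      # complex wavenumbers
    # Fresnel reflection coefficients at each interface (normal incidence).
    rho = [(eta[i] - eta[i - 1]) / (eta[i] + eta[i - 1]) for i in range(1, len(eta))]
    # Recursion (2): start at the deepest interface and move outwards.
    gamma = rho[-1]
    thick = [t for _, _, t in layers]
    for i in range(len(rho) - 2, -1, -1):
        phase = np.exp(-2j * k[i + 1] * thick[i])       # round trip through the layer below interface i
        gamma = (rho[i] + gamma * phase) / (1 + rho[i] * gamma * phase)
    return (1 - abs(gamma) ** 2) * s_inc                # formula (3)

# Illustrative 4-layer stack (SC, dermis, fat, muscle); all values are placeholders.
stack = [(4.0, 1.5, 20e-6), (17.0, 25.0, 1.2e-3), (6.0, 3.0, 2.0e-3), (25.0, 30.0, 1.0)]
for f in (28e9, 39e9):
    print(f"{f/1e9:.0f} GHz: S_ab = {sab_normal_incidence(stack, f):.2f} W/m^2")
```

Removing or merging entries of `stack` gives the homogeneous, two-, and three-layered variants, so the same routine can compare the exposure across models as done in Section 3.2.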
## 3 Results
The first section summarizes the results estimated through the computational approach, with the averaged S\({}_{\mathrm{ab}}\) peaks of each layer of the models, whereas the second section reports the S\({}_{\mathrm{ab}}\) calculated by means of the analytical approach through the reflection coefficient (\(\Gamma\)) at the air/most-external-layer interface.
### Computational Approach
The computed peak values of the Sab averaged over 1 cm\({}^{2}\) and 4 cm\({}^{2}\) surface are presented in Fig. 3 for all the analyzed scenarios.
As expected, across all models and tissues the peak values of the S\({}_{\mathrm{ab}}\) averaged over 1 cm\({}^{2}\) (top row) were always higher than the corresponding ones averaged over 4 cm\({}^{2}\) (bottom row). This trend is justified by the averaging operation itself. Indeed, in this specific situation, where the peak is strongly localized, averaging over a larger surface means averaging over a larger number of low values, which reduces the averaged value. For the sake of brevity, the results commented on henceforth are the S\({}_{\mathrm{ab}}\) averaged over 1 cm\({}^{2}\), but the trend is the same for the peaks obtained by averaging over a surface of 4 cm\({}^{2}\).
As reported in the upper left panel of Fig. 3, the highest peak with the antenna at 28 GHz is found in the SC of the 4-layered model, i.e., 13.8 W/m\({}^{2}\), and the maximum variation of the peak S\({}_{\rm ab}\) in the SC across the multi-layered models is around 3%. The lowest value is obtained for the 2-layered model, whereas the exposure levels induced in the 3- and 4-layered models are almost identical (< 2% deviation). Moreover, the exposure levels of the dermis in the multi-layered models are almost identical (< 1% deviation), whereas for the fat the results showed a maximum variation of 9% between the 3- and 4-layered models. Furthermore, moving through the layers from the outer to the inner one, the peak S\({}_{\rm ab}\) decreases, confirming that the lowest values are always found in the innermost stratum, whatever it is.
As an example, the muscle, which is the fourth tissue in the 4-layered model, showed negligible exposure values, reduced by around 98% with respect to the maximum values at the SC.
The comparison between the responses of the models highlights that the homogeneous model strongly underestimates the peak value of S\({}_{\rm ab}\) obtained for multi-layer models in the SC, i.e., in the most superficial layer. Indeed, the peak S\({}_{\rm ab}\) obtained in the dermis of the homogeneous model is reduced by 18% to 21% depending on the number of layers, with respect to the highest exposure levels achieved in the SC of the multilayered models.
Nevertheless, from the comparison of the peak S\({}_{\rm ab}\) in the dermis across all the models, the difference between the peak in the homogeneous model (i.e., 10.9 W/m\({}^{2}\)) and the ones in the multi-layered models (i.e., 5.66 W/m\({}^{2}\), 5.63 W/m\({}^{2}\), and 5.44 W/m\({}^{2}\), from the four- to the two-layered models) is noteworthy. Certainly, this behavior is due to the presence, in the multilayered models, of the SC, which shields the EMF and thus reduces the power absorption in the inner strata.
The panels in the right column report the peak values of the S\({}_{\rm ab}\) averaged over 1 cm\({}^{2}\) (top panel) and 4 cm\({}^{2}\) (bottom panel) when the antenna tuned to 39 GHz is simulated.
The general trend observed in the scenarios with the antenna tuned to 28 GHz is still valid in the second scenario, in which the antenna tuned to 39 GHz is employed. First of all, the maximum peak is observed in the SC of the three-layered model, i.e., 13.8 W/m\({}^{2}\).
Figure 3: Peak values of the absorbed power density (S\({}_{\rm ab}\)) for all the analyzed scenarios: the left column reports the peaks of the S\({}_{\rm ab}\) in the scenarios with the antenna tuned to 28 GHz, and the right column the peaks obtained with the antenna tuned to 39 GHz; from top to bottom, the S\({}_{\rm ab}\) averaged over 1 cm\({}^{2}\) and 4 cm\({}^{2}\), respectively.
Nevertheless, comparing this peak value with those of the 2- and 4-layered models, there are no substantial differences; indeed, the maximum deviation is around 5%. As at 28 GHz, the lowest peak S\({}_{\rm ab}\) is observed in the 2-layered model, whereas the comparison between the 3- and 4-layered models did not reveal noteworthy variations (< 1% deviation). Moreover, the peak S\({}_{\rm ab}\) values in the dermis across the 2-, 3-, and 4-layered models are almost the same (< 1% deviation), whereas in the fat the variation between the 3- and 4-layered models is almost 2%.
Certainly, the decreasing trend of the exposure levels from the outer to the inner tissues is amplified in this case, where a higher frequency is involved; indeed, the peak S\({}_{\rm ab}\) drops from 13.7 W/m\({}^{2}\) in the SC to 0.07 W/m\({}^{2}\) in the muscle, a reduction of 99.5% of the exposure level.
Furthermore, the comparison between the multi-layered models and the homogeneous model showed the same behavior as in the 28 GHz case: the homogeneous model (11.3 W/m\({}^{2}\)) tends to underestimate the S\({}_{\rm ab}\) with respect to the peaks estimated in the SC of the multi-layered models; more specifically, the deviation varies from 14% to a maximum of around 18%. Conversely, the results obtained in the dermis of the multi-layered models (i.e., 6.17 W/m\({}^{2}\), 6.17 W/m\({}^{2}\), and 6.19 W/m\({}^{2}\) in the 2-, 3-, and 4-layered model respectively) are almost the same, showing a maximum variation of 0.3%.
### Analytical Approach
In this section, the peak absorbed power density (S\({}_{\rm ab}\)) calculated when a normally incident plane wave impinges on the models of the most superficial tissues is presented. The values are obtained for an incident power density (S\({}_{\rm inc}\)) of 10 W/m\({}^{2}\), according to the ICNIRP reference level for the general public in the frequency range 2-300 GHz [19].
Fig. 4 summarizes the peak values of S\({}_{\rm ab}\) calculated at the air/most-external-layer interface for both frequencies and in all the models. More specifically, the reported results refer to the S\({}_{\rm ab}\) absorbed by the SC for the 2-, 3-, and 4-layered models and by the dermis for the homogeneous one.
Figure 4: Peak values of the absorbed power density (S\({}_{\rm ab}\)) at the air/most-external-layer interface of each layered model for the normally incident TEM-polarized plane wave.
These results are in line with the previous ones obtained with the computational approach; indeed, for both frequencies the data show that the homogeneous model still underestimates the S\({}_{\rm ab}\) at the air/skin interface, by 35% and 30% at 28 and 39 GHz respectively, when compared with the multi-layered models. Moreover, as before, the exposure levels in the multi-layer models were almost identical (< 4% deviation across all the models for 28 GHz and < 5% deviation for 39 GHz).
## 4 Discussion
Wearable wireless technologies are attractive for various communication and sensing applications, including personal healthcare, smart home, sport, and so on. Healthcare has been the primary target application so far; however, wearable communicating devices have recently also demonstrated potential for other uses, such as military and entertainment applications [24]. Wearable devices may be part of Wireless Body-Area Networks (WBAN), introduced in the IEEE 802.15.6 standard [5]. In WBAN, information derived from the sensors is collected in a central unit and then transmitted to an external device (e.g., a smartphone) [4]. Recently, wearable networks have also included 5G technology. Indeed, the use of the 5G protocol enables, for example, augmented, mixed, and virtual reality applications [24]. For this reason, 5G bands are involved in wearable communication, particularly in the mm-wave band (>24 GHz).
Since wearable devices are necessarily positioned on the human body, the question of the power absorbed by human tissues is crucial and timely, particularly if mmWave wearable antennas are considered. Indeed, only a few studies (see, e.g., [25, 26, 27]) have aimed to assess the exposure generated specifically by wearable antennas in the 5G frequency bands, using both simplified and detailed anatomical human models. In particular, only one very recent paper by Gallucci et al. [27] computationally assessed the human exposure due to the EMF emitted by wearable antennas, each one tuned to a 5G band (one tuned to f = 3.5 GHz and the second one to 26.5 GHz), positioned on the trunk of four realistic human models of the Virtual Population [10].
However, to numerically characterize antenna/body interactions, the use of an appropriate tissue model is crucial, as it directly impacts the accuracy of the results. Indeed, particularly for mmWave frequencies up to 100 GHz, modelling the skin as a single layer of homogeneous dermis tissue with constant dielectric properties over its entire thickness, as is done in the most popular anatomical models [10], could be an oversimplification that does not realistically represent the skin structure. As a consequence, stratified multi-layered models were introduced in the literature [12, 13]. These models are typically composed of the stratum corneum, dermis, fat, and muscle. However, there is not yet a consensus in the literature about the approach to employ for modelling the cutaneous tissue in computational exposure assessment studies.
This work fits in this context, investigating the exposure levels induced by two wearable patch antennas tuned to the mmWave bands and using models with different stratifications to investigate the impact that the choice of a multi-layered model rather than the homogeneous one has on the exposure assessment. Specifically, four planar models with increasing complexity were considered: from a homogeneous model with dermis properties to a four-layered model composed of the SC, dermis, fat, and muscle. The exposure was quantified by the assessment of the \(\mathrm{S}_{\mathrm{ab}}\) averaged over both 1 cm\({}^{2}\) and 4 cm\({}^{2}\).
Analyzing the data of the peak value of the \(\mathrm{S}_{\mathrm{ab}}\) when the antenna is tuned to 28 GHz, it is observed that the use of the homogeneous skin model led to an underestimation of the exposure level in the most external layer of the model with respect to the multi-layered models, ranging from 18% to 21% depending on whether the 2-, 3-, or 4-layered model is considered. The trend is similar in the scenario with the antenna at 39 GHz, showing an underestimation ranging from 14% to 18%. In parallel, the analytical results confirmed the tendency found through the computational approach. Indeed, here the homogeneous model underestimates the exposure by 35.3% for the configuration with the antenna at 28 GHz and by 29.9% with the antenna at 39 GHz. Moreover, grouping these results by frequency shows that the lower the frequency, the more noticeable the underestimation by the homogeneous model with respect to the stratified models. This evidence is confirmed by the studies in the literature, even though their number is limited. Firstly, Bonato et al. [28] simulated the homogeneous, three-, and four-layered models in three different exposure configurations with a 5G mobile phone antenna at 27 GHz (by varying the antenna-user distance), showing that the homogeneous model tended to underestimate the exposure in all the scenarios. Sasaki et al. [29] used a Monte Carlo simulation approach, varying the tissue thicknesses of the homogeneous and two-layered models. All the planar multi-layered models were hit by plane waves at frequencies from 0.1 to 1 THz and with an incident power density of 1 W/m\({}^{2}\). In their work, they demonstrated that the power transmittance increases when the skin is modelled in more depth. Finally, Christ et al. [17] conducted a study in which incident plane waves at frequencies from 6 to 100 GHz impinge on several stratified models of the most superficial tissues. Here, the reflection coefficient and the temperature increase were studied. This work highlighted the same trend of underestimation by the homogeneous dermis model, by more than a factor of three, confirming the trend found in the present work and in the abovementioned studies.
Comparing the responses of the different multi-layered models, our data suggest that there are no substantial differences between them, particularly in the most external layer. In this regard, it was found that for the scenario with the lowest frequency the maximum variation of S\({}_{\text{ab}}\) is 9%, observed between the four- and three-layered models, precisely in the fat, whereas the greatest variation in the SC is 3%, between the two- and four-layered models. This means that, at this frequency, when choosing one stratified model rather than another, equally stratified, one, the maximum expected impact on the exposure is 9%, and in particular in the inner layers. As for the 39 GHz scenario, this maximum variation resides in the SC and amounts to 5%, reducing the impact that the choice of a given stratified model has on the exposure assessment, whereas for the inner strata the variation is almost 2%.
Finally, the comparison of the values reported in the left column of Fig. 3 with the peaks in the right column showed that the peaks in the inner tissues (i.e., muscle) assessed in the scenario with the lowest frequency are higher than the peak S\({}_{\text{ab}}\) values observed in the 39 GHz case. Indeed, the difference between the peaks with the antenna at 28 GHz and those at 39 GHz is more evident in the inner strata than in the outer layers, so much so that the variation in the SC is 0.7%, whereas in the muscle it is 66.4%. This is in line with the decrease of the penetration depth as the frequency increases.
Overall, from the comparison with the ICNIRP guidelines [19], in none of the studied configurations is the limit of 20 W/m\({}^{2}\) exceeded, either at the air/skin interface or in the inner layers. However, the study was focused on the question of the best way to represent the most superficial tissue and, in light of the findings presented here, the difference between the exposure levels in the first layer of the multi-layered models (i.e., the SC) and in the dermis of the homogeneous model is evident, with an underestimation of almost 20% by the homogeneous model, which has been used in most exposure assessment studies so far. This trend highlights the need to clearly define the layer in which it is appropriate to estimate the power absorption, because a correct exposure assessment derives from this definition.
Finally, the present study confirmed the results found in the literature, in which the homogeneous model underestimates the exposure levels, and, moreover, extends these findings to more complex scenarios, no longer with an impinging plane wave but with two real wearable antennas.
## 5 Conclusions
In conclusion, the present work aimed to assess the exposure due to two different mmWave wearable antennas, using four models of the most superficial tissues of increasing complexity to investigate their effect on the exposure level.
The problem was addressed through both the computational and the analytical approach, and both led to the same conclusions: comparing the \(\mathsf{S}_{\mathsf{ab}}\) estimated in the most external layer of all the models, for both frequencies, the peak in the homogeneous model is always lower than those of the layered models. This finding means that the use of the homogeneous skin model in exposure assessment studies at such high frequencies could underestimate the exposure compared with highly detailed skin models.
## Acknowledgement
The authors wish to thank ZMT Zurich MedTech AG (www.zmt.swiss, accessed on 21 February 2023) for providing the simulation software Sim4Life. The authors also wish to thank the European Defence Agency (EDA) for its support of this work in the context of project No. B 0987 IAP2 GP "Biological Effects of Radiofrequency Electromagnetic Fields (RFBIO)", funded by the Italian MoD.
# Empowering Practical Root Cause Analysis by Large Language Models for Cloud Incidents

Yinfang Chen, Huaibing Xie, Minghua Ma, Yu Kang, Xin Gao, Liu Shi, Yunjie Cao, Xuedong Gao, Hao Fan, Ming Wen, Jun Zeng, Supriyo Ghosh, Xuchao Zhang, Chaoyun Zhang, Qingwei Lin, Saravan Rajmohan, Dongmei Zhang, Tianyin Xu

2023-05-25 · http://arxiv.org/abs/2305.15778v4
###### Abstract.
Ensuring the reliability and availability of cloud services necessitates efficient root cause analysis (RCA) for cloud incidents. Traditional RCA methods, which rely on manual investigations of data sources such as logs and traces, are often laborious, error-prone, and challenging for on-call engineers. In this paper, we introduce RCACopilot, an innovative _on-call system empowered by the Large Language Model_ for automating RCA of cloud incidents. RCACopilot matches incoming incidents to corresponding handlers based on their alert types, aggregates the critical runtime diagnostic information, predicts the incident's root cause category, and provides an explanatory narrative. We evaluate RCACopilot using a real-world dataset consisting of a year's worth of incidents from Transport service in Microsoft. Our evaluation demonstrates that RCACopilot achieves RCA accuracy up to 0.766. Furthermore, the diagnostic information collection component of RCACopilot has been successfully in use at Microsoft for over four years.
Root Cause Analysis, Large Language Models, Cloud Systems

Footnote †: This research was primarily conducted during an internship at Microsoft Research Asia.
## 1. Introduction
Cloud computing serves as an indispensable infrastructure for numerous applications and services upon which people rely daily. As the adoption of cloud services continues to grow, ensuring their reliability, availability, and security becomes increasingly vital [(12; 26; 30)]. However, the complexity of cloud systems makes them vulnerable to a variety of incidents that could pose significant challenges to these crucial properties [(43)]. A typical incident life-cycle consists of four stages: (1) _Detection_[(31; 41; 42)]: When an anomalous system behavior is observed, an alert is raised by monitors or users of the service (internal engineers or external customers). (2) _Triaging_[(8; 9; 4)]: After the detection, the incident is assigned to the appropriate engineering team after an initial assessment. (3) _Diagnosis_[(28)]: Assigned on-call engineers (OCEs) inspect different aspects of the incident and have several rounds of back-and-forth communication to identify the root cause. (4) _Mitigation_[(1; 17)]: Several actions are taken by OCEs to mitigate the incident and to restore service health.
Root cause analysis (RCA) is pivotal in promptly and effectively addressing these incidents. By accurately diagnosing the underlying problem and preventing its recurrence, RCA not only restores service availability swiftly but also fortifies the overall reliability of cloud services. However, identifying the root causes of these incidents often represents a daunting and time-consuming task that requires significant human expertise and intervention [(30)].
Traditional approaches to cloud incident RCA typically involve the manual collection and analysis of various types of data, such as logs [(16; 22; 25; 46; 47)], metrics [(32)], traces [(45)], and incident tickets [(17; 36)]. This manual process is not only laborious and error-prone, but can also be challenging due to varying levels of available information - what we term as the 'information spectrum'. The 'information spectrum' describes a continuum of information availability, ranging from situations with too little information to those inundated with an excess. At either end of this spectrum, root cause analysis can become particularly challenging. The relevant information for RCA might be buried within the voluminous data, leading to an information overload for OCEs. OCEs may find it challenging to quickly pinpoint the relevant information amidst the sea of data, hindering efficient incident resolution. Conversely, OCEs could also encounter situations where they lack the necessary information to understand and address the root causes of incidents accurately. Beyond these challenges, the collected data itself is often noisy, incomplete and inconsistent, further complicating the RCA process.
Specifically, the engineering team documents the frequent troubleshooting steps in the form of troubleshooting guides (TSGs) to facilitate the handling of future incidents. However, the volume of TSGs is overwhelming for OCEs, making the search for the most relevant guide a time-consuming task that might cause system downtime. Moreover, TSGs struggle
to keep pace with the ever-evolving nature of cloud systems, thus often falling short when new incident types emerge. Even when a relevant TSG is located, it may not cover all the intricacies of the specific incident. This could be due to variations in system configurations, the presence of multiple interacting root causes, or previously unknown issues.
At the heart of RCA lies the fundamental challenge of _efficently collecting and interpreting comprehensive_, _incident-specific data_ within a limited time frame. OCEs must quickly discern the relevance of various data types to the incident at hand and interpret them correctly. However, the complexity and sheer volume of data generated by cloud systems often impede rapid decision-making. Furthermore, the expertise required to analyze various data types, along with the diverse range of possible incident causes, exacerbates the difficulty of the task. As a result, OCEs may spend an inordinate amount of time analyzing data and formulating hypotheses, detracting from time that could be better spent resolving the incident and restoring system functionality.
Data-driven and Artificial Intelligence (AI) techniques have been leveraged for automating incident management (Han et al., 2017; Chen et al., 2017). While there are existing techniques that recommend relevant TSGs (Kang et al., 2017) and automate the workflows of TSGs (Shi et al., 2018), their utility is limited by the inherent challenges associated with TSGs. Despite these automated processes, OCEs still find themselves investing significant manual effort in sifting through the vast amounts of information, interpreting the data, and identifying the root causes of incidents.
The recent advent and success of Generative Pretrained Transformer (GPT) models in performing complex tasks (Beng et al., 2017; Chen et al., 2017) suggests a promising avenue for enhancing RCA. Specifically, GPT models can be used to parse through high-volume data, discern relevant information, and produce succinct, insightful outputs. This significantly alleviates the burden on OCEs to manually sift through vast amounts of data, helping them focus on resolving the incident more quickly and effectively. Additionally, GPT models can adapt to new and evolving types of incidents, learning from previous data to improve future predictions. While GPT models can process and generate text efficiently, they lack intrinsic domain-specific knowledge, especially in specialized areas such as cloud incident management. This lack of understanding of specific contexts, such as cloud incidents, can limit their accuracy in predicting incident root causes and generating appropriate explanations.
Recently, Ahmed et al. (Ahmed et al., 2017) proposed to finetune a pretrained GPT model with a domain-specific dataset for generating root causes of an incident, just by leveraging the title and summary information available at the time of incident creation. While they demonstrated the promise of GPT models for incident root-cause analysis, finetuning poses several limitations: (1) As accurate root cause analysis requires various sources of complex unstructured data (e.g., logs, telemetry, traces), using only generic title and initial summary information might miss useful signals needed to reach conclusive diagnosis details; (2) Finetuning is costly and requires a huge volume of training samples, whereas we only have access to a few hundred high-quality, manually labeled category annotations; (3) It is challenging to continuously update a finetuned GPT model with the evolving nature and scope of incidents; therefore such models are prone to generating more hallucinated results over time.
In this paper, we introduce RCACopilot, a novel approach to cloud incident root cause analysis that shifts away from the traditional reliance on TSGs. RCACopilot operates as an on-call system, empowering OCEs to construct 'handlers' - automated workflows tailored to each alert type defined by monitors, made up of reusable actions defined by their expertise. These predefined handlers automatically streamline the collection of incident-specific diagnostic information from multiple sources, thus ensuring a more focused and relevant data accumulation process to avoid issues on either end of the information spectrum. Subsequently, the large language model (LLM) component of RCACopilot processes this diagnostic data, autonomously identifying the categories and providing explanations of incident root causes. The combination of bespoke handlers and the analytical capabilities of the LLM allows RCACopilot to significantly enhance adaptability and scalability in incident response. As a result, RCACopilot can effectively handle a diverse array of incident types while reducing the need for extensive human intervention.
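To make the handler idea concrete, the sketch below shows one way such a pipeline could be wired together. It is purely illustrative: the alert type, action names, prompt wording, and the stubbed LLM call are hypothetical placeholders, not RCACopilot's actual interfaces.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Incident:
    alert_type: str
    context: Dict[str, str] = field(default_factory=dict)  # diagnostic info collected so far

Action = Callable[[Incident], None]  # one reusable diagnostic step contributed by OCEs

@dataclass
class Handler:
    alert_type: str
    actions: List[Action]

    def run(self, incident: Incident) -> Dict[str, str]:
        for action in self.actions:  # automated multi-source data collection
            action(incident)
        return incident.context

# Hypothetical actions standing in for real log / metric / trace queries.
def collect_error_logs(inc: Incident) -> None:
    inc.context["logs"] = "repeated DNS resolution failures on a front-door machine"

def collect_port_metrics(inc: Incident) -> None:
    inc.context["metrics"] = "UDP hub port usage saturated on the same machine"

def diagnose(incident: Incident, handlers: Dict[str, Handler],
             llm: Callable[[str], str]) -> str:
    context = handlers[incident.alert_type].run(incident)   # match handler by alert type
    prompt = ("Given the diagnostic information below, predict the incident's "
              "root cause category and explain it:\n" +
              "\n".join(f"- {k}: {v}" for k, v in context.items()))
    return llm(prompt)  # the LLM returns a category plus an explanatory narrative

handlers = {"ConnectionFailure": Handler("ConnectionFailure",
                                         [collect_error_logs, collect_port_metrics])}
print(diagnose(Incident("ConnectionFailure"), handlers,
               llm=lambda prompt: "HubPortExhaustion (stubbed LLM reply)"))
```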
The diagnostic information collection component of RCACopilot has been in use at Microsoft for over four years. Recently, the root cause prediction component has been prototyped and tested by some incident teams at Microsoft before its final rolling in production.
**Summary.** This paper makes the following contributions:
* We propose RCACopilot, an automated tool for cloud incident RCA that enables on-call engineers to construct incident-specific automatic workflows for efficient data collection from multiple sources.
* We introduce the integration of a large language model within RCACopilot that autonomously analyzes the collected diagnostic data to predict incident root cause categories and generate explanations, demonstrating the potential of the large language model in enhancing RCA.
* We showcase the real-world applicability of RCACopilot by presenting its successful adoption within Microsoft. This illustrates its practical effectiveness in enhancing RCA efficiency, demonstrating the feasibility and benefits of our approach in real-world cloud computing scenarios.
## 2. Background and Motivation
In this section, we first introduce the concept and importance of incident root cause analysis. We then present real-world
examples of troubleshooting guides and illustrate their inherent limitations. Lastly, we discuss the potential advantages of integrating a large language model into the RCA process, which motivates our work.
### Incident Root Cause Analysis
In the realm of cloud services, an incident refers to any event that disrupts normal service operations or causes degradation in the quality of services. When such incidents occur, root cause analysis is performed to identify the underlying issue causing the disruption.
RCA in cloud services is a multi-faceted process:
* _Data Collection:_ Gathering relevant data from various sources such as logs, metrics, traces, or alerts is the first step in RCA.
* _Data Analysis:_ The collected data is then analyzed to identify patterns, anomalies, or correlations that can possibly provide clues about the root cause of the incident.
* _Hypothesis Verification:_ Based on the data analysis, hypotheses about the possible root cause are formulated and then verified by OCEs.
Given the complex and dynamic nature of cloud systems, along with the immense volume of data involved, conducting RCA is a challenging task that requires substantial expertise and time. Take the scale of our corporation's email service as an example, which delivers over 150 billion messages daily. Ensuring the smooth operation of such a large-scale service demands an efficient and effective RCA approach. This is pivotal in maintaining a reliable and high-performing communication infrastructure, particularly for organizations that rely heavily on Microsoft's email server for their email communication.
### The Opportunities and Challenges of Multi-Source Data in Incident Management
Managing incidents in the complex ecosystem of cloud services necessitates a comprehensive understanding of system states. This comprehension often stems from the consolidation of multi-source data, which includes traces, logs, and metrics. Traces represent tree-structured data detailing the flow of user requests, logs are semi-structured text recording hardware and software events, while metrics monitor service status or user-perceived metrics, forming time series data. While these individual data sources yield valuable insights, capitalizing on their potential has challenges. Traditional approaches such as TSGs, though useful, may fail to exploit the full wealth of multi-source data due to inherent limitations.
#### 2.2.1. Opportunities of Multi-Source Data
Different data sources provide different perspectives on the system state. For instance, logs can offer detailed event sequences, metrics can reflect system performance over time, and traces can reveal the propagation of requests across services. Integrating these data sources can provide a more comprehensive view of the system, enabling more accurate and efficient incident diagnosis and resolution. Furthermore, multi-source data can facilitate correlation and causality analysis, which is crucial for root cause analysis. By analyzing the relationships between different data sources, we can identify patterns and anomalies that may indicate the root cause of an incident.
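As a toy illustration of this kind of cross-source correlation, the snippet below intersects a metric anomaly window with time-stamped log events; the metric series, log lines, and the simple threshold rule are invented for the example and do not come from the paper.

```python
from datetime import datetime, timedelta

# Invented sample data: a latency metric series (minute, value in ms) and log events.
metrics = [(datetime(2023, 5, 1, 10, m), 50 + (300 if 14 <= m <= 18 else 0)) for m in range(30)]
logs = [
    (datetime(2023, 5, 1, 10, 15), "SMTP proxy connection refused"),
    (datetime(2023, 5, 1, 10, 16), "retrying delivery to front-door server"),
    (datetime(2023, 5, 1, 10, 25), "routine certificate rotation completed"),
]

threshold = 200  # ms; a simple rule standing in for a real anomaly detector
anomaly_times = [t for t, v in metrics if v > threshold]
window = (min(anomaly_times) - timedelta(minutes=2), max(anomaly_times) + timedelta(minutes=2))

# Keep only the log events that fall inside the anomalous window.
correlated = [(t, msg) for t, msg in logs if window[0] <= t <= window[1]]
print(correlated)  # the proxy-connection errors survive; the unrelated event does not
```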
#### 2.2.2. Challenges of Multi-Source Data
Despite its potential, effectively leveraging multi-source data in incident management is challenging. The sheer volume and complexity of data from various sources can be overwhelming, making it difficult to extract meaningful insights. Worse still, different data sources may provide inconsistent or conflicting information. Moreover, real-world data is often noisy, which can complicate analysis and lead to false conclusions.
#### 2.2.3. Limitations of TSGs
Traditional TSGs represent an early attempt to leverage multi-source data for incident management. They guide OCEs to gather and analyze data from various sources to diagnose and resolve incidents. However, TSGs face several inherent limitations:
* _Manual data integration:_ TSGs typically require OCEs to gather data from different sources manually. This process can be time-consuming and error-prone. Notwithstanding the existence of diverse troubleshooting guides and TSG recommendation techniques (Krishnan et al., 2017), dependence on TSGs still remains a significant source of stress and burnout for OCEs due to the inherent limitations of the manual process.
* _Outdated information:_ TSGs, as static documents, often struggle to stay up-to-date with the evolving system changes and new insights about incident root causes. This lag can lead OCEs to follow outdated or suboptimal troubleshooting steps. For example, a new feature ("Exception Table") to check Poison Message exceptions, mentioned as the second step in Figure 1, was not immediately incorporated into the TSG upon its release, causing potential inefficiencies in incident resolution.
* _Insufficient details and coverage:_ High-level instructions often appear in TSGs, lacking in detail and specific guidance, which forces OCEs into additional research and prolongs incident resolution. In the TSG example from Figure 1, the third step instructs to check the Poison Message Logs, leaving out crucial details and causing confusion for OCEs unfamiliar with this incident type. Additionally, TSGs may overlook common checks, such as disk space checks, leading to partial or inadequate incident resolutions.

Figure 1. A TSG for a poisoned message incident.
### The Promise of Large Language Models for Incident Management
The rapid advancements in natural language processing and machine learning have led to the development of powerful LLMs, which are reported to be effective at various downstream tasks with zero-shot and few-shot training (Deng et al., 2017; Chen et al., 2018). These models have shown exceptional performance in translation, summarization, and question-answering. Leveraging their potential for incident management in cloud computing systems could revolutionize the way OCEs identify and resolve incidents. By automating the interpretation aspect of incident management, LLMs can help alleviate the stress and cognitive load associated with complex on-call tasks for OCEs, which enables OCEs to focus more on higher-level jobs and decision-making.
### Our Motivation
The motivation for our work is rooted in the challenges faced when using manual TSGs to diagnose incidents and identify the underlying root causes. Recognizing the limitations of manual TSGs, our goal is to develop an automated diagnostic process that harnesses the capabilities of LLMs to address various cloud incidents more effectively.
Different from previous work (Zhu et al., 2018), which employs AI techniques to generate automated workflow from existing TSGs, our goal is to enable experienced OCEs to construct an automated pipeline for incident diagnosis. This approach allows OCEs to be directly assisted in identifying the root cause without the need to investigate intermediate diagnostic information, though they still have the option to do so.
We envision a future in which root cause analysis is predominantly automated, requiring minimal manual verification only when necessary. Our approach seeks to provide OCEs with timely, relevant, and accurate information for specific incidents, leading to more efficient RCA.
By leveraging LLMs, our research aims to alleviate the stress and cognitive load associated with incident management, ultimately enhancing the efficiency and effectiveness of OCEs in addressing incidents.
## 3. Insights from Incidents
We conducted a comprehensive study of one year of incidents from an email service at Microsoft, employing rigorous qualitative analysis methods. Specifically, each incident was carefully reviewed by our experienced OCEs and categorized based on the characteristics of the problem, the source of the issue, and the impact on the system. We paid particular attention to the root causes of the incidents, the effectiveness of the response, and the recurrence of similar issues. While our insights were indeed intuitively derived, they were firmly grounded in empirical data and analysis. Our study not only yielded valuable insights into incident patterns and challenges but also informed the development and refinement of our approach.
_Insight 1: determining the root cause based on a single data source can be challenging._ As an illustration, consider Incident 2 in Table 1, where a single server failed to perform DNS resolution for incoming packets due to the exhaustion of UDP hub ports on a front door machine. This example highlights the difficulties in relying solely on a single source (monitor alert) to diagnose complex issues.
When a mailbox server sends mail to external email recipients, it uses specific front-door servers (proxies). However, each front-door server has a limited number of available SMTP outbound proxy connections. If a mailbox server's proxy connection request fails, it will be unable to send messages to external recipients. In this incident, the monitor first raises an alert indicating detected failures when connecting to the front door server. However, this alert only signifies a connection issue between the mail server and the front door server, without even suggesting a DNS resolution problem. Consequently, the root cause remains unclear.
_Insight 2: incidents stemming from similar or identical root causes often recur within a short period._ We found that most recurring incidents (93.80%) tend to reappear within a brief span of 20 days, as shown in Figure 2. For instance, consider the category of Incident 9 from Table 1. This type of incident, triggered by invalid customer configuration,
Figure 2. Recurring incidents proportion vs. time interval.
led to an accumulation of unprocessed messages in the queue, thereby significantly undermining its availability. Intriguingly, incidents of this category recurred 11 times in a span of merely 15 days. Likewise, the DispatcherTaskCancelled incidents (No. 10 in Table 1) and the DeliveryHang incidents (No. 3) reappeared 22 times and 6 times within a week and a single month, respectively. These recurrences can be attributed to several factors. First, unresolved root causes from the initial response may lead to the same issue re-emerging, especially if the problem is complex or not fully understood. Second, systemic vulnerabilities, if not addressed, can be repeatedly exploited, causing similar incidents. Third, external dependencies, such as reliance on a service that frequently experiences outages, can also lead to recurring incidents. These patterns suggest that by leveraging insights from previous incidents, we could swiftly identify the root cause of new occurrences of the same issue.
_Insight 3: incidents with new root causes occur frequently and pose a greater challenge to analyze._ TSGs can help OCEs diagnose issues by providing clear investigation guidance. However, when incidents arise from new,
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline No. & Sev. & Scope & Category & Occur. & Symptom & Cause \\ \hline
1 & 1 & Forest & AuthCertlssue & 3 & Tokens for requesting services were not able to be created. Several services reported users experiencing outages. & A previous invalid certificate overrided the existing one due to misconfiguration. \\ \hline
2 & 2 & Machine & HubPortExhaustion & 27 & A single server failed to do DNS resolution for the incoming packages. & The UDP hub ports on the machine had been run out. \\ \hline
3 & 2 & Forest & DeliveryHang & 6 & Mailbox delivery service hang for a long time. & Number of messages queued for mailbox delivery exceeded the limit. \\ \hline
4 & 2 & Forest & CodeRegression & 15 & An SMTP authentication component’s availability dropped. & Bug in the code. \\ \hline
5 & 2 & Forest & CertForBogusTenants & 11 & The number of concurrent server connections exceeded a limit. & Spammers abused the system by creating a lot of bogus tenants with connectors using a certificate domain. \\ \hline
6 & 1 & Forest & MaliciousAttack & 2 & Forest-wide processes crashed over threshold. & Active exploit was launched in remote PowerShell by serializing malicious binary blob. \\ \hline
7 & 2 & Forest & UseRoutResolution & 9 & Poisoned messages sent to the forest made the system unhealthy. & A configuration service was unable to update the settings leading to the crash. \\ \hline
8 & 2 & Forest & FullDisk & 2 & Many processes crashed and threw IO exceptions. & A specific disk was full. \\ \hline
9 & 2 & Forest & InvalidJournaling & 11 & Messages stuck in submission queue for a long time. & The customer set an invalid value for the Transport config and caused TenantSettingsNotFoundException. \\ \hline
10 & 3 & Forest & DispatcherTaskCancelled & 22 & Normal priority messages across a forest had been queued in submission queues for a long time. & Network problem caused the authentication service to be unreachable. \\ \hline \hline \end{tabular}
\end{table}
Table 1. Examples of cloud incidents in different root cause categories.
Figure 3. Distribution of incident category frequency.
previously unencountered root causes, OCEs face a set of challenges. For such incidents, no TSG exists, and OCEs may struggle to identify the underlying issues. For instance, Incident 1 is a high-severity (severity 1) incident caused by misconfiguration, which blocked the authentication token generation to lead to severe outages. Similarly, Incident 6 is a malicious attack caused by an attacker launching an exploit with a malicious blob. This type of attack had never been encountered before, leaving OCEs without an existing TSG to reference. Lower severity level (severity 2) incidents, such as Incident 5, are also susceptible to this challenge when the spammer first abuses the system. As Figure 3 shows, incidents with a new root cause category account for 24.96% (163 among 653) of all incidents. If OCEs spend their time searching for nonexistent TSGs, the incident's impact could escalate further. Recognizing this challenge, it is necessary to propose a new approach that can effectively infer, categorize and explain the root causes for such unseen incidents, thereby reducing the time OCEs take to identify and address these unique incidents.
## 4. RCACopilot
RCACopilot has two stages: the diagnostic information collection stage and the root cause prediction stage as shown in Figure 4.
**Diagnostic information collection stage**: This is the initial stage, where the incident is parsed and matched to the pre-defined incident handler. Each handler is tailored to a specific alert type. Upon matching the incident with the appropriate handler, RCACopilot proceeds to collect relevant diagnostic data from a variety of sources.
**Root cause prediction stage**: Once the diagnostic information is collected, RCACopilot transitions into the root cause prediction stage. In this phase, RCACopilot applies its predictive module to determine the likely root cause category of the incident. This prediction is not a mere categorization, but it is also supplemented with an explanation detailing how RCACopilot arrived at the given prediction. Subsequently, the predicted category label is presented to experienced OCEs for review.
### Diagnostic Information Collection Stage
Driven by Insight-1 in Section 3, RCACopilot aims to collect multi-source data for RCA. Specifically, for each alert type, an incident handler is constructed, comprising a series of actions to collect diagnostic information. Alert types are used to categorize alerts based on specific monitors and thresholds. Incidents sharing the same alert type exhibit similar symptoms, though they may stem from different root causes.
The RCACopilot incident handler is a workflow that consists of a series of actions. Each action is a function that can be executed to collect specific diagnostic information from a target data source. OCEs can build and modify these handlers based on their expertise. The handler includes three distinct actions: _scope switching action, query action_, and _mitigation action_, which will be explained in Section 4.1.2. Each action generates an output, guiding the control flow of the incident handler. We use a RCACopilot handler that diagnoses Incident 7 in Table 1 as an example to illustrate the handler usage.
#### 4.1.1. Incident handler
The decision-making process that OCEs employ when handling an incident resembles a decision tree's control flow. The root node in the incident handler is the incident alert type, which is gathered from the system monitor. We distilled OCE operations into three actions when constructing the incident handler. As OCE operations can be similar across different incident types (e.g., conducting a common disk check or querying a database), we designed RCACopilot handler actions to be reusable across all handlers. We also maintain the versions of the handlers in the database, which can be used to track their historical changes.
RCACopilot's incident handlers can be updated and modified dynamically by OCEs, allowing them to stay abreast with the most recent system changes and newly discovered root causes. For instance, when a new metric is introduced into the system, OCEs only need to construct a new action to collect the relevant data and incorporate it into the corresponding incident handler, which can ensure timely adaptation.
#### 4.1.2. Handler action
RCACopilot leverages the synergy of multi-source data. The system uses predefined actions in the incident handler to automatically collect relevant diagnostic information from diverse sources. The automated integration of data not only saves time but also reduces the likelihood of human error. It also provides a more comprehensive view of the system state, facilitating efficient and accurate incident resolution. This significantly lightens the workload of OCEs, reducing stress and burnout, and enhancing the effectiveness of the incident resolution process. The action in the handler could be one of the following:
**Scope switching action**: This action facilitates precision in RCA by allowing adjustments to the data collection scope based on the specific needs of each incident. For instance, as depicted in Figure 5, if an alert originates at the 'forest' level, signifying an issue within a specific forest, and the problem type is identified as 'Busy Hub', the scope switching action can adjust the scope to the'machine' level. This modification allows for a more fine-grained investigation, specifically assessing if a singular hub server is overly taxed.
The implementation of this action ensures that we efficiently navigate the information spectrum. When the investigation requires a more targeted approach, this action can narrow the data collection scope. Conversely, if a more holistic view is necessary, it can widen the scope, say from a single machine to an entire forest. This flexibility contributes
to a more balanced and effective diagnostic data collection process.
**Query action**: Query action can query data from different sources and output the query result as a key-value pair table. This type of action can also be hooked to executing a specific script with pre-defined parameters. Usually, scripts are internal automatic investigation tools for a service, and only the service team has access to the tools.
For instance, in Figure 5, the "Known issue?" action node queries the database to see whether the current incident is a known one or not based on its alert messages. If it is a known issue, execution flow will enter the "True" branch to give mitigation actions directly. Otherwise, a query script that can aggregate threads with the same stack traces will be executed. It will obtain an instantaneous list of the stacks on all the managed threads in the target process and then group common stacks together in order to identify potential deadlocks/blocking code paths in the process.
The query action can also output an enum value to decide the next action node to execute, e.g., after getting the top error message on the exception stack traces, i.e., "Get top error msg" node, the next action node to be run depends on the exception type. Based on the error messages, a specific team will be reported and engaged, as shown in Figure 5.
**Mitigation action**: This action refers to the strategic steps suggested to alleviate an incident, such as "restart service" or "engage other teams", as depicted in Figure 5. It's important to note that handlers do not always provide exact mitigation strategies for every incident, due to handlers' pre-defined nature, which may not cover all possible situations. For instance, Incident 4 in Table 1, categorized under code regression, presents a case where identification and rectification of such code issues can be challenging. In cases where the incident handler is uncertain, it will only offer intermediate diagnostic information to the OCEs without mitigation.
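To make the handler structure concrete, the following minimal Python sketch models a handler as a tree of reusable action nodes whose outputs select the next branch, as described above. The class and function names (ActionNode, run_handler) and the toy actions are illustrative only and do not reflect RCACopilot's actual (C#) implementation.

```
from dataclasses import dataclass, field
from typing import Callable, Dict, Optional


@dataclass
class ActionNode:
    """One reusable step in an incident handler (scope switch, query, or mitigation)."""
    name: str
    kind: str                                   # "scope_switch", "query", or "mitigation"
    run: Callable[[dict], str]                  # executes the action and returns an outcome label
    children: Dict[str, "ActionNode"] = field(default_factory=dict)  # outcome -> next action


def run_handler(root: ActionNode, context: dict) -> dict:
    """Walk the handler like a decision tree, collecting the output of every executed action."""
    collected = {}
    node: Optional[ActionNode] = root
    while node is not None:
        outcome = node.run(context)             # e.g. query a database or narrow the scope
        collected[node.name] = outcome
        node = node.children.get(outcome)       # follow the branch chosen by the action output
    return collected


# Illustrative toy handler for a "messages stuck in delivery queue" alert.
known_issue = ActionNode("Known issue?", "query",
                         run=lambda ctx: "True" if ctx.get("known") else "False")
mitigate = ActionNode("Restart service", "mitigation", run=lambda ctx: "done")
aggregate = ActionNode("Aggregate thread stacks", "query", run=lambda ctx: "stacks collected")
known_issue.children = {"True": mitigate, "False": aggregate}

print(run_handler(known_issue, {"known": False}))
```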
#### 4.1.3. Multi-source diagnostic information
RCACopilot's diagnostic information collection stage serves as a valuable tool for OCEs by aggregating data from a myriad of sources. OCEs only need to customize the action in the handler to acquire the diagnostic information from a target source. For instance, as illustrated in Figure 6, RCACopilot can assimilate diverse data such as error logs, exception stack traces, and socket metrics related to a specific incident. The error log and exception stack trace alone do not provide sufficient insight to identify the root cause of the incident. However, when supplemented with the socket metrics, a more comprehensive picture emerges. In this example, it is clear that the UDP socket is exhausted, which is the root cause.
In the case of new incidents, RCACopilot can perform a range of common checks, such as evaluating the provisioning status or analyzing thread stacks. This assists OCEs in gaining a holistic understanding of the situation. Note that the information collected is pre-defined in the actions of the RCACopilot handler, ensuring that only relevant data is gathered, thus avoiding overwhelming information that is unnecessary. By providing this comprehensive diagnostic information, RCACopilot empowers OCE teams to troubleshoot issues efficiently. They can use the gathered information as guidance to address incidents more effectively.
### LLMs for Incident Explanation
Upon thorough investigation, each incident within our service is manually assigned a root cause category by our seasoned OCEs. OCEs will use the categories to classify the historical incidents and guide the new incoming incidents' RCA. However, reasoning the incidents and inferring their categories are time-consuming and potentially overwhelming for OCEs, who have a tight time budget. Given this, we have identified the categorization of incident root causes as our primary downstream task.
Recently, LLMs have demonstrated remarkable capabilities in understanding the context of downstream tasks and generating relevant information from demonstrations, making them a possible choice for incident RCA. However, reasoning the incident root cause is not a simple task, and LLMs may not be able to achieve the optimal results on long-tail or domain-specific tasks without any guidance (Beng et al., 2018; Chen et al., 2019). Chain-of-Thoughts (CoT) prompting is a gradient-free technique that
Figure 4. RCACopilot architecture.
elicits LLMs to generate intermediate reasoning steps that lead to the final answer. In few-shot CoT prompting, a few manual demonstrations are provided, each composed of a question and a reasoning chain that leads to its answer. Inspired by these ideas, the diagnostic information provided by RCACopilot handlers can be used as ingredients for reasoning about the incidents.
#### 4.2.1. Embedding model
Our observation is that the _semantics of incidents can be revealed from the context in which the diagnostic information is described._ A common approach to extracting such contextual semantics involves the use of embedding models. The objective is to map the diagnostic information into an embedding space (i.e., numeric vector space), where the distances between vectors represent the semantic similarity of incidents. Choosing a computationally efficient embedding model allows us to preserve accuracy while handling a large number of incidents.
We employ FastText as our embedding model, which is efficient, insensitive to text input length, and generates dense vectors, making it easy to calculate the Euclidean distance between incident vectors. Furthermore, since our downstream task is domain-specific to incident root cause reasoning, and the incident-related information is internal to our company, we opt to train a FastText model on our historical incidents rather than using a pre-trained large language model as our embedding model, which would be costly and inefficient. Additionally, we provide users with the flexibility to customize their embedding model if desired.
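As an illustration of how such a domain-specific embedding model could be trained and applied, here is a minimal sketch using the gensim FastText implementation. The corpus, the hyperparameters, and the mean-pooling of token vectors are assumptions for illustration rather than RCACopilot's actual configuration.

```
import numpy as np
from gensim.models import FastText

# Hypothetical corpus of diagnostic texts from historical incidents.
corpus = [
    "udp hub ports exhausted on front door machine dns resolution failed".split(),
    "messages stuck in submission queue tenant settings not found exception".split(),
    "disk full io exceptions thrown by transport processes".split(),
]

# Train a small FastText model; the hyperparameters here are illustrative only.
model = FastText(sentences=corpus, vector_size=64, window=5, min_count=1, epochs=20)

def embed(text: str) -> np.ndarray:
    """Embed an incident description as the mean of its token vectors."""
    tokens = text.lower().split()
    return np.mean([model.wv[t] for t in tokens], axis=0)

a = embed("udp ports exhausted on hub machine")
b = embed("submission queue backlog due to invalid tenant config")
print(np.linalg.norm(a - b))   # Euclidean distance, used by the similarity measure below
```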
#### 4.2.2. Diagnostic information summary
LLMs have shown potential for automatic summarization (Sutton et al., 2017). Nonetheless, the diagnostic information collected from RCACopilot handlers is often very lengthy. As shown in Figure 6, the diagnostic information of an incident can exceed 2000 tokens, with low readability of the log messages. The large number of tokens in the incident description can pose challenges for the LLM to process effectively and may introduce noise. Therefore, feeding the diagnostic information of an incident directly into the LLM to make a prediction may not be an ideal choice, let alone using information from multiple sources. In this regard, we add another layer that leverages the LLM's summarization ability to condense the diagnostic information before the diagnosis reasoning. We construct the prompt as shown in Figure 7. We ask the LLM to summarize the diagnostic information into 120-140 words without outputting any unrelated information. This summarization makes the diagnostic information more concise and informative, which forms the basis for the later CoT prompting. Figure 8 illustrates a more readable and concise text generated by
Figure 5. A RCACopilot handler for too many messages stuck in the delivery queue alert.
Figure 6. Diagnostic information for hub port exhaustion.
RCACopilot, which is a summary (113 tokens) of the previous diagnostic information example in Figure 6, highlighting key details such as the number of UDP ports used and the process using the most of them. Specifically, we employ the tiktoken (2017) tokenizer to count text tokens.
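A minimal sketch of this summarization step is shown below. The prompt wording follows the 120-140-word instruction described above, token counting uses tiktoken as mentioned, and call_llm is a placeholder for whichever chat-completion client is used; it is not an API defined by RCACopilot.

```
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    return len(ENC.encode(text))

def build_summary_prompt(diagnostic_info: str) -> str:
    # Mirrors the instruction described in the text: condense to 120-140 words,
    # output nothing unrelated.
    return (
        "Summarize the following diagnostic information of a cloud incident "
        "in 120 to 140 words. Output only the summary, nothing else.\n\n"
        f"{diagnostic_info}"
    )

def summarize(diagnostic_info: str, call_llm) -> str:
    """call_llm is a placeholder for the chat-completion client in use."""
    prompt = build_summary_prompt(diagnostic_info)
    print(f"prompt length: {count_tokens(prompt)} tokens")
    return call_llm(prompt)
```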
#### 4.2.3. Nearest neighbor search
Incidents are heterogeneous, making it impractical to combine all past incidents' information for sampling due to the prompt length limitations, even after summarization. To selectively choose past cases as samples in the prompt, we design a new similarity formula:
\[Distance(a,b)=||a-b||_{2}\]
\[Similarity(a,b)=\frac{1}{1+Distance(a,b)}*e^{-\alpha|T(a)-T(b)|}\]
to calculate the similarity between two incidents. It first computes the Euclidean distance for every pair of incident vectors. Importantly, it also takes into account the temporal distance between incidents, reflecting our Insight-2 in Section 3. Here, \(T(x)\) stands for the date of incident \(x\). This consideration of temporal distance is crucial as it influences the relevance of past incidents to the current ones. After calculating similarities, we select the top \(K\) incidents as demonstrations for the LLM. This approach ensures a diverse and representative set of incidents for effective LLM reasoning. The values of \(\alpha\) and \(K\) have been determined as 0.3 and 5, respectively, through empirical evaluation, as will be presented in Section 5.4.
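The neighbor selection can be sketched as follows; this is an illustrative implementation of the formula above, assuming incident dates are given as numeric day offsets, and using the reported values \(\alpha=0.3\) and \(K=5\).

```
import numpy as np

ALPHA, K = 0.3, 5   # values determined empirically in Section 5.4

def similarity(vec_a, vec_b, day_a, day_b, alpha=ALPHA):
    """Embedding similarity damped by the temporal distance between two incidents (in days)."""
    distance = np.linalg.norm(vec_a - vec_b)
    return 1.0 / (1.0 + distance) * np.exp(-alpha * abs(day_a - day_b))

def top_k_neighbors(query_vec, query_day, past_vecs, past_days, k=K):
    """Return indices of the k most similar past incidents to use as demonstrations."""
    scores = np.array([
        similarity(query_vec, v, query_day, d)
        for v, d in zip(past_vecs, past_days)
    ])
    return np.argsort(scores)[::-1][:k]
```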
#### 4.2.4. Prediction prompt construction
CoT prompting is a gradient-free technique that guides LLMs to produce intermediate reasoning steps leading to the final answer. In few-shot CoT prompting, several demonstrations, each comprising a question and a reasoning chain, direct the answer. Without hinging on hand-crafted demonstrations, AutoCoT (Song et al., 2019) has shown the power of automatically constructing the prompt to form the reasoning chains. Drawing inspiration from this concept, we can view the summarized diagnostic information and the labeled root cause categories as questions and reasoning, so that finding the nearest incident neighbors serves as automatic reasoning-chain construction, aligning well with the CoT prompting paradigm. We construct the prompt as in Figure 9 to ask the LLM to choose the most likely incident that shares the same root cause as the current incident, and we explicitly push the LLM to reason by including "give your explanation" instructions in the prompt.
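A sketch of how such a prompt could be assembled from the top-K neighbors is given below; the exact wording of the instructions is illustrative and is not the prompt shown in Figure 9.

```
def build_prediction_prompt(current_summary, neighbor_summaries, neighbor_categories):
    """Few-shot CoT-style prompt: past incidents act as demonstrations, the new one as the question."""
    parts = ["Below are summaries of past incidents and their root cause categories.\n"]
    for i, (summary, category) in enumerate(zip(neighbor_summaries, neighbor_categories), 1):
        parts.append(f"Incident {i}:\n{summary}\nRoot cause category: {category}\n")
    parts.append(
        "New incident:\n"
        f"{current_summary}\n"
        "Which past incident most likely shares the same root cause as the new incident? "
        "State the root cause category and give your explanation. "
        "If none match, propose a new category keyword."
    )
    return "\n".join(parts)
```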
### Implementation
We have developed and deployed RCACopilot using a combined total of 58,286 lines of code, consisting of 56,129 lines of C# and 2,157 lines of Python.
To facilitate the building of the RCACopilot incident handler, we have implemented RCACopilot's handler construction as a web application. To support a new type of alert in RCACopilot, OCEs only need to add a new handler in the handler construction GUI according to her expertise (see Appendix A). After the new handler has been constructed, it will be stored in the database, and OCEs can modify it by creating new action nodes or deleting old nodes.
## 5. Evaluation
We aim to answer the following questions in our evaluation:
1. How effective and efficient is RCACopilot as an on-call system when predicting root cause categories and assisting OCEs? RCACopilot achieves 0.766 and 0.533 for
Figure 8. The summarized diagnostic information.
Figure 7. Prompt to summarize diagnostic information.
Figure 9. The prompt to predict incident category.
Micro-F1 and Macro-F1, respectively, when predicting the root cause category of cloud incidents, outperforming all our baselines with a low running overhead (4.205 seconds). RCACopilot is also able to generate new root cause category labels for unseen incidents with explanations.
2. How do different components of RCACopilot facilitate its diagnosis and prediction? Our ablation study shows that the diagnostic information collection component, GPT summarization, and chain-of-thought prompting all contribute to RCACopilot's prediction effectiveness.
3. Is RCACopilot suitable for deployment in real production services, and are RCACopilot's results trustworthy? RCACopilot's diagnostic information collection module has been deployed across 30 teams within Microsoft for over four years. To evaluate the trustworthiness of RCACopilot, each experiment was conducted over three rounds, and RCACopilot can consistently achieve a high Micro-F1 score of over 0.70 and a Macro-F1 score exceeding 0.50.
All experiments are performed on the server with Intel(R) Core(TM) i7-9700 CPU @ 3.00GHz, 32.0 GB physical memory, and Intel UHD Graphics 630. The OS of the server is Windows 11 Enterprise.
### Target System and Dataset
We evaluate RCACopilot in a global email service system named Transport within Microsoft. The Transport team focuses on developing and maintaining the components responsible for mail flow, routing, and delivery. This system interacts with various other services to ensure seamless integration with a multitude of products and services, including serviceA, serviceB, and serviceC. Hence, it is representative of complex, real-world systems that interact with multiple components. With around 150 billion messages being delivered daily, Transport operates at a colossal scale and caters to customers worldwide, adding another layer of diversity and complexity. The system ensures the secure and effective transmission of emails between users, utilizing various protocols such as SMTP, IMAP, and POP3. Given its crucial role in communications infrastructure, it is essential to have effective and efficient incident management capabilities.
We collect a one-year dataset of 653 incidents from Microsoft's Transport service to investigate RCACopilot's efficacy in practice. It is important to note that each of these incidents represents complex issues in a large-scale, globally distributed system, and thus each provides valuable insights. The dataset is manually labeled with root cause categories by experienced OCEs, which serves as our ground truth. We divide the incident cases into training (75%) and testing sets (25%).
We conduct experiments on two large language models in RCACopilot, _i.e._, GPT-3.5-turbo, and GPT-4 (8K tokens), which are the latest models from OpenAI. We choose GPT-4 as the default model in RCACopilot because it has the best performance.
### Compared Approaches
We have selected XGBoost, FastText, and fine-tuned LLMs as our baselines to compare with RCACopilot. We have also made another two variants, i.e., GPT-4 Prompt and Embed. to evaluate the design of RCACopilot.
* **XGBoost** provides a parallel tree boosting that has been commonly used in the networking system diagnosis.
* **FastText** is a popular lightweight textual embedding approach, which has been adopted in testbed studies with fault injections for root cause diagnosis tasks.
* **Fine-tune GPT** is to fine-tune a pre-trained GPT-3.5 model with our training dataset and evaluate its performance on our testing dataset with the temperature parameter set to 0. Note that GPT-4 is currently not available for fine-tuning.
* **GPT-4 Prompt** is a variant of RCACopilot that directly predicts the category from RCACopilot's diagnostic information summaries.
* **GPT-4 Embed.** is a variant of RCACopilot that changes the embedding model from FastText to GPT embedding.
### Effectiveness and Efficiency
We evaluate RCACopilot's effectiveness by predicting the root cause category of an incident based on the summarized diagnostic information using micro and macro F1-score metrics. These metrics calculate the harmonic mean of the precision and recall. The micro F1-score aggregates the performance of all classes, taking into account the contribution of each sample, while the macro F1-score focuses on the performance of each individual class. RCACopilot achieves a micro F1-score of 0.766 and a macro F1-score of 0.533 on our testing dataset.
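For reference, the two metrics can be computed with scikit-learn as in the sketch below; the label lists are toy examples for illustration, not incidents from the dataset.

```
from sklearn.metrics import f1_score

# y_true / y_pred are root cause category labels for the testing incidents (toy examples).
y_true = ["HubPortExhaustion", "FullDisk", "DeliveryHang", "FullDisk"]
y_pred = ["HubPortExhaustion", "FullDisk", "FullDisk", "FullDisk"]

micro = f1_score(y_true, y_pred, average="micro")   # aggregates over all samples
macro = f1_score(y_true, y_pred, average="macro")   # unweighted mean over classes
print(micro, macro)
```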
As shown in Table 2, RCACopilot outperforms the other approaches, while incurring an acceptable, slightly higher runtime overhead. The performance of the baseline approaches is poor,
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**F1-score**} & \multicolumn{2}{c}{**Avg. Time (s)**} \\ \cline{2-5} & **Micro** & **Macro** & **Train.** & **Infer.** \\ \hline FastText [(45)] & 0.076 & 0.004 & 10.592 & 0.524 \\ XGBoost [(3)] & 0.022 & 0.009 & 11.581 & 1.211 \\ Fine-tune GPT [(1)] & 0.103 & 0.144 & 3192 & 4.262 \\ \hline GPT-4 Prompt & 0.026 & 0.004 & – & 3.251 \\ GPT-4 Embed. & 0.257 & 0.122 & 1925 & 3.522 \\ \hline RCACopilot (GPT-3.5) & 0.761 & 0.505 & 10.562 & 4.221 \\
**RCACopilot (GPT-4)** & **0.766** & **0.533** & 10.562 & 4.205 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Effectiveness of different methods.
since multiple root cause categories exhibit a long tail (imbalanced) distribution, as shown in Figure 3, and traditional machine learning models (FastText and XGBoost) and fine-tuning GPT model need a large amount of training data to produce accurate predictions. Directly employing GPT-4 prompt or GPT-4 embedding approach without our design lacks domain-specific knowledge for GPT-4 to make decisions. On the contrary, RCACopilot leverages the powerful LLM to learn the domain-specific knowledge from minimal cases, so that it can achieve the best performance. Results indicate that RCACopilot not only provides higher accuracy but also maintains a reasonable level of efficiency, making it a suitable choice for incident root cause analysis.
When facing incidents that RCACopilot has never seen before, RCACopilot is capable of generating a new category keyword to depict the new incident case. For example, Incident 8 in Table 1 is a new incident case that RCACopilot has never encountered. RCACopilot's prediction component is able to predict it as a new category "I/O Bottleneck". Although OCEs subsequently categorize it as "DiskFull" in post-investigation, the fundamental aspects of the problem identified by RCACopilot align closely with the human-derived label. The corresponding RCACopilot's explanation, illustrating how it arrived at the "I/O Bottleneck" categorization, is provided in Figure 10.
### Comparison Analysis
To understand how different components of RCACopilot facilitate root cause analysis, we conduct an ablation study on the different RCACopilot's components.
**Evaluation on diagnostic information.** First, we evaluate the impact of diagnostic information on effectiveness. In particular, we compare diagnostic information collected from the collection stage with other different incident-related information, namely, incident alert information and RCACopilot handler action output. AlertInfo includes the alert type and alert scope. Alert type is a pre-defined anomaly description from a monitor, which only reflects a symptom of the incident instead of the root cause, e.g., an exception type from external monitors. The alert scope is the scope of the incident, e.g., a single machine. ActionOutput is the output of a series of executed RCACopilot actions, which are hashed as key-value pairs. As shown in Table 3, using diagnostic information alone can outperform others in both Micro-F1 (0.689) and Macro-F1 scores (0.510). The interesting observation here is that mixing the diagnostic information with others will not enhance RCACopilot's predictive capabilities. This demonstrates that an excess of information can negatively impact the LLM's prediction performance.
**Evaluation on GPT summarization.** We evaluate the role of GPT summarization in enhancing RCACopilot's effectiveness. As depicted in Table 3, utilizing summarized diagnostic information leads to the highest Micro-F1 and Macro-F1 scores, marking improvements of 0.077 and 0.023, respectively, over the non-summarized diagnostic information. The results demonstrate that the summarization step effectively condenses the information, allowing for more efficient and accurate processing of incident data.
**Evaluation on few-shots CoT reasoning.** We assess how few-shots CoT reasoning contributes to improving effectiveness. GPT-4 Prompt approach in Table 2, which directly predicts the category without any sample, only achieves 0.026 and 0.004 for Micro-F1 and Macro-F1 respectively. As
Figure 11. Effectiveness of using different K and alpha.
Figure 10. RCACopilot’s explanation of an incident.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{3}{c}{**Data Source**} & \multicolumn{2}{c}{**F1-score**} \\ \hline AlertInfo & DiagnosticInfo & ActionOutput & Micro & Macro \\ \hline & raw & & 0.689 & 0.510 \\ & ✓ & & **0.766** & **0.533** \\ \hline ✓ & & & 0.379 & 0.245 \\ ✓ & ✓ & & 0.525 & 0.511 \\ ✓ & & ✓ & 0.431 & 0.247 \\ & ✓ & ✓ & 0.501 & 0.449 \\ ✓ & ✓ & ✓ & 0.440 & 0.349 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Effectiveness of different prompt contexts for RCACopilot. ✓: summarized diagnostic information; raw: non-summarized diagnostic information.
shown in Figure 11(a) and Figure 11(b), we compare the performance of RCACopilot with different numbers of samples in the chain-of-thought reasoning. Our analysis reveals that the best combination of the number of samples and the alpha value is 5 and 0.3, which achieves the highest F1 scores. Note that more samples in the CoT reasoning do not always yield an improvement for RCACopilot, and the value of alpha plays an important role in determining the effectiveness. When alpha is appropriate, it allows RCACopilot to better capture the temporal relationships between different incidents, leading to more accurate predictions.
### Deployment Status and Scale
We have successfully deployed RCACopilot's diagnostic information collection module across over 30 teams within Microsoft, where it has been in active use for over four years. The system is tailored to each team's specific requirements, with custom handlers built for each unique setting. Not all handlers are currently enabled in the production environment, as some are still under development and rigorous testing. We observe that the average running time for each incident ranges from 15 seconds to 841 seconds (see Appendix A). The highest running time is attributable to the team's large-scale and complex system infrastructure. As part of our commitment to continuous improvement and quality user experience, we have incorporated a feedback mechanism in emails to garner user perspectives from OCEs. According to our collected feedback, most OCEs expressed satisfaction with the diagnostic information provided by RCACopilot.
### Trustworthiness
While GPT has shown great potential and impressive results in various tasks, it is known to exhibit some instability in certain complex tasks such as question answering, as noted by Tan et al. (Tan et al., 2018). These instabilities could potentially lead to variable results. In order to ensure the trustworthiness and stability of GPT's predictive capabilities in RCACopilot, each experiment was conducted over three rounds. In each round, RCACopilot was able to maintain a high level of performance, with the Micro-F1 consistently above 0.70 and the Macro-F1 remaining above 0.50.
## 6. Discussion
RCACopilot's effectiveness depends on the ability of the LLM. Currently, RCACopilot is only integrated with OpenAI's GPT models, and we have not yet explored the potential effectiveness of other available LLMs. As such, the model's performance may vary depending on the strengths and weaknesses of the specific LLM employed.
We conducted our evaluation of RCACopilot's prediction module using the incident dataset from Transport. The dataset was prepared with the assistance of experts in Transport team, given their extensive experience and established practice of incident labeling. Note that the effectiveness of RCACopilot is also influenced by the quality of the root cause categories. Currently, all root cause categories are manually labeled by our experienced OCEs. RCACopilot's diagnosis information collection has been deployed in over 30 teams. Consequently, a valuable future work would be to evaluate RCACopilot across different services to gain a more comprehensive understanding of its generalizability and adaptability.
RCACopilot's handler is designed to respond based on alerts generated by monitors. This implies that for incidents that the monitor does not detect, RCACopilot will not be able to match a handler, thereby limiting its applicability.
We conducted three rounds of experiments to evaluate RCACopilot's effectiveness. However, the occasional instability of LLMs can influence their effectiveness, causing variations across different rounds. Another potential threat to internal validity lies in the implementation of our approach and those we compared against. To mitigate this risk, two authors have carefully checked the code. In particular, we implemented the baselines based on the matured frameworks.
## 7. Related Work
**Root cause analysis**. Root cause analysis in large cloud services has become a popular topic of research in the system and software engineering communities (Bahdan et al., 2017; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). It aims to identify the root causes of failures and performance issues based on various data sources, such as metrics, logs, and traces. Previous studies have proposed different approaches for root cause analysis using one of these data sources. For example, some methods rely on metrics to extract failure patterns (Chen et al., 2018; Chen et al., 2018) or to construct service dependency graphs (Chen et al., 2018; Chen et al., 2018). Others use logs to analyze a subset of log messages (Chen et al., 2018; Chen et al., 2018) or to examine the details within each log message (Chen et al., 2018; Chen et al., 2018). Moreover, some techniques utilize trace to locate the faulty service (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). Different from prior work, we build a system that can automatically integrate metrics, logs, and traces for root cause analysis with state-of-the-art large language models.
**Large Language Models**. In recent years, the rise of LLM has brought new opportunities to the field of software systems by enabling various tasks such as code generation, summarization, repair, testing, and root cause analysis (Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018; Chen et al., 2018). For example, Mastropaolo _et al._(Mastropaolo et al., 2018) studied the ability of fine-tuned T5 in the following tasks: automatic bug fixing, generation of assert statements, code summarization, and injection of code mutants. LANCE (Mastropaolo et al., 2018) uses fine-tuned T5 to automatically generate logging statements for Java methods. VulRepair (VulRepair, 2018) also fine-tune T5 on vulnerability repairs datasets to automatically propose vulnerability fixes. Zhang _et al._(Zhang et al., 2018) proposes to use prompting for LLM to improve code version control. Ahmed _et al._(Ahmed et al., 2017) fine-tune GPT-x
models to recommend root causes and mitigation steps to facilitate cloud incident management. In contrast to previous studies, RCACopilot employs advanced LLMs to summarize diagnosis data and leverages their chain-of-thought reasoning ability to predict and explain root causes.
## 8. Conclusion
RCACopilot represents a pioneering tool in the realm of cloud incident management, facilitating efficient root cause analysis for OCEs. It introduces a unique approach to multi-source data collection through its diagnostic information collection stage, utilizing predefined incident handlers. These handlers, constructed by OCEs, systematically gather multi-source diagnostic information, which sets the foundation for the subsequent analysis. Furthermore, RCACopilot integrates a large language model in its root cause prediction stage. This model autonomously processes the collected diagnostic data, predicting and explaining the root cause category. This integration of AI techniques into cloud incident management demonstrates the potential of RCACopilot in enhancing the efficiency and accuracy of root cause analysis.
|
2307.03763 | Accelerating global parameter estimation of gravitational waves from Galactic binaries using a genetic algorithm and GPUs | The Laser Interferometer Space Antenna (LISA) is a planned space-based gravitational wave telescope with the goal of measuring gravitational waves in the milli-Hertz frequency band, which is dominated by millions of Galactic binaries. While some of these binaries produce signals that are loud enough to stand out and be extracted, most of them blur into a confusion foreground. Current methods for analyzing the full frequency band recorded by LISA to extract as many Galactic binaries as possible and to obtain Bayesian posterior distributions for each of the signals are computationally expensive. We introduce a new approach to accelerate the extraction of the best fitting solutions for Galactic binaries across the entire frequency band from data with multiple overlapping signals. Furthermore, we use these best fitting solutions to omit the burn-in stage of a Markov chain Monte Carlo method and to take full advantage of GPU-accelerated signal simulation, allowing us to compute posterior distributions in 2 seconds per signal on a laptop-grade GPU. | Stefan H. Strub, Luigi Ferraioli, Cédric Schmelzbach, Simon C. Stähler, Domenico Giardini | 2023-07-07T14:35:09Z | http://arxiv.org/abs/2307.03763v1 |

Accelerating global parameter estimation of gravitational waves from Galactic binaries using a genetic algorithm and GPUs
###### Abstract
The Laser Interferometer Space Antenna (LISA) is a planned space-based gravitational wave telescope with the goal of measuring gravitational waves in the milli-Hertz frequency band, which is dominated by millions of Galactic binaries. While some of these binaries produce signals that are loud enough to stand out and be extracted, most of them blur into a confusion foreground. Current methods for analyzing the full frequency band recorded by LISA to extract as many Galactic binaries as possible and to obtain Bayesian posterior distributions for each of the signals are computationally expensive. We introduce a new approach to accelerate the extraction of the best fitting solutions for Galactic binaries across the entire frequency band from data with multiple overlapping signals. Furthermore, we use these best fitting solutions to omit the burn-in stage of a Markov chain Monte Carlo method and to take full advantage of GPU-accelerated signal simulation, allowing us to compute posterior distributions in 2 seconds per signal on a laptop-grade GPU.
Gravitational Waves, LISA, Galactic Binaries, LISA Data Challenge, GPU
## I Introduction
The detection of gravitational waves (GWs) by the LIGO detector in 2015 marked a significant breakthrough in astrophysics [1]. This achievement spurred the development of the Laser Interferometer Space Antenna (LISA), a space-based interferometric system capable of detecting low frequency GWs in the \([0.1,100]\,\mathrm{m}\mathrm{H}\mathrm{z}\) range, free from terrestrial seismic and anthropogenic noise sources [2]. LISA is an L-class mission of the European Space Agency (ESA) and is currently set for launch in 2037.
The primary sources in the LISA frequency band are tens of millions of Galactic binaries (GBs) emitting quasi-monochromatic gravitational waves. These sources are far from merging, allowing for their gravitational waves to be continuously measured during LISA's nominal 4 year operational time [2]. It is estimated that tens of thousands of these overlapping signals are resolvable by an experiment of LISA's arm length, resolution and measurement duration, while the rest blurs into a galactic foreground noise. Accurately estimating the parameters of GBs provides valuable information for studying the dynamical evolution of binaries [3; 4; 5; 6; 7].
Several methods have been proposed for extracting GB signals, including maximum likelihood estimate (MLE) [8; 9; 10] and Bayesian approaches. MLE methods are used to find the best matching simulated signal to the data, while Bayesian methods provide a posterior distribution that describes the uncertainty of the source parameters. The most successful Bayesian approaches are Markov chain Monte Carlo (MCMC) based methods, such as blocked annealed Metropolis-Hastings (BAM) [11; 12; 13], an MCMC algorithm with simulated annealing, or the reversible jump Markov chain Monte Carlo (RJMCMC) [14; 15] method, which allows for varying parameter dimensions and thus variable numbers of GBs to construct the posterior distribution.
In our previous work [16], we demonstrated that signal extraction can be divided into two parts for both isolated and overlapping signals in the frequency domain. The first part involves optimizing the GB parameters in order to achieve the best fit between the simulated signal and the available data. In the second part, Gaussian process regression [17] is used to model the log-likelihood function, which allows for the computation of the posterior distribution without the need to simulate the GW signal for each sample. In this paper, we extend the work to analyze the full galactic signal population from a simulated LISA data stream.
Furthermore, with recent advances in simulating a GB signal using GPUs, we replaced the Gaussian process regression model with a direct computation of the log-likelihood function on a GPU [18]. For sampling, we use a Metropolis-Hastings algorithm with a proposal distribution that is independent of the current state of the chain. Therefore, we can take full advantage of computing the log-likelihood for 10'000 signals in parallel and build the Markov chain in a subsequent step. This way, we are able to compute the posterior distribution of a single signal within only 1.8 seconds on a late 2018-released Quadro RTX 4000 Mobile GPU built into a laptop. We demonstrate the benefit of such a speed-up by solving for the GBs of the LISA Data Challenge (LDC) 1-4, part of LDC1, which is also called Radler [19]. This challenge encompasses a dataset containing instrument noise as well as 26 million GB signals. Additionally, the pipeline has also been tested on LDC2a, called Sangria, where the injected MBHBs are subtracted, resulting in a dataset comprising 30 million GBs along with instrument noise [19].
In Section II we introduce Bayesian parameter estimation, and Section III provides a detailed description of the new pipeline. In Section IV the performance of the
pipeline is showcased through its successful handling of the LISA Data Challenges LDC1-4 and LDC2a. Lastly, Section V discusses the performance of the pipeline and the potential for further pipeline development.
## II Bayesian formulation for signal extraction
Gravitational Waves are ripples in the fabric of space-time caused by the acceleration of massive objects, such as merging black holes, neutron stars and white dwarfs. LISA is a planned space-based mission designed to detect these elusive signals with unprecedented precision. However, the expected LISA data, denoted as \(d(t)\), will be contaminated by instrument noise and unresolved signals, making the extraction of the underlying gravitational wave signal, denoted as \(s(t,\theta)\), a challenging task. To tackle this, Bayesian inference and data analysis techniques provide a powerful framework. For convenience, we will omit the notation for dependence on \(t\) for the data \(d\) and the signals \(s(\theta)\) in the following.
In Bayesian inference, we aim to infer the probability distribution of the parameters \(\theta\) describing the gravitational wave signal \(s(\theta)\) given the observed data \(d\). This is done using Bayes' theorem, which relates the posterior distribution \(p(\theta|d)\), the prior distribution \(p(\theta)\), the likelihood \(p(d|\theta)\), and the model evidence \(p(d)\) as follows:
\[p\left(\theta|d\right)=\frac{p\left(d|\theta\right)p\left(\theta\right)}{p \left(d\right)} \tag{1}\]
The posterior distribution \(p(\theta|d)\) represents the updated probability distribution of the parameters \(\theta\) after taking into account the measured data \(d\). The prior distribution \(p(\theta)\) incorporates any prior knowledge or assumptions about the parameters. The model evidence \(p(d)\) is a normalization factor that ensures the posterior distribution integrates to unity, and it is independent of \(\theta\), hence does not affect the relative probabilities.
In GW data analysis, the likelihood \(p(d|\theta)\) quantifies the probability of measuring the data stream \(d\) given the parameters \(\theta\) of the gravitational wave signal. The log-likelihood is commonly used due to its mathematical convenience and is defined as:
\[\log p(d|\theta)=-\frac{1}{2}\langle d-s(\theta)|d-s(\theta)\rangle, \tag{2}\]
where \(\langle x(t)|y(t)\rangle\) is the scalar product between two time-domain signals \(x(t)\) and \(y(t)\), and it is defined as:
\[\langle x(t)|y(t)\rangle=4\mathcal{R}\left(\int_{0}^{\infty}\frac{\tilde{x}(f )\tilde{y}^{*}(f)}{S(f)}\,df\right), \tag{3}\]
Here, \(\tilde{x}(f)\) marks the Fourier transform of \(x(t)\), and \(S(f)\) is the one-sided power spectral density of the noise, which characterizes the noise properties of the LISA detector. The noise is estimated and constantly updated during the search. The noise estimate for GB analysis is discussed in Section III.3 and III.6.
To eliminate the laser noise in the LISA arms' laser measurements, time-delay-interferometry (TDI) will be employed, which combines the measurements into three observables: X, Y, and Z [20; 21; 22; 23; 24]. Consequently, the data \(d\) and the signal \(s(\theta)\) consist of TDI responses with multiple channels, and we write the inner product as the following sum
\[\langle d-s\left(\theta\right)|d-s\left(\theta\right)\rangle=\sum_{\alpha\in \mathcal{M}}\langle d_{\alpha}-s_{\alpha}\left(\theta\right)|d_{\alpha}-s_{ \alpha}\left(\theta\right)\rangle \tag{4}\]
Here, \(\mathcal{M}=X,Y,Z\) represents the default TDI setting, or \(\mathcal{M}=A,E,T\) where
\[A =\frac{1}{\sqrt{2}}\left(Z-X\right)\] \[E =\frac{1}{\sqrt{6}}\left(X-2Y+Z\right) \tag{5}\] \[T =\frac{1}{\sqrt{3}}\left(X+Y+Z\right)\]
are uncorrelated with respect to instrument noise [25]. In this work we utilize \(A\), \(E\), and \(T\). However, to save computational time, we consider only \(A\) and \(E\) for signals with frequencies \(f<f_{*}/2=1/(4\pi L)\approx 9.55\,\mathrm{mHz}\), as the contribution of the gravitational wave response for \(T\) is suppressed [14]. By setting the threshold at half the transfer frequency \(f_{*}\), we adopt a more conservative approach.
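For illustration, the scalar product (3) and its sum over the TDI channels (4) can be evaluated on a discrete frequency grid as in the following Python sketch. The function names, the dictionary-based channel layout, the frequency resolution df, and the numerical value of the transfer frequency are illustrative assumptions, not the pipeline's actual implementation.

```
import numpy as np

def inner_product(x_fd, y_fd, psd, df):
    """Discrete version of Eq. (3): 4 Re( sum x(f) y*(f) / S(f) ) df for one TDI channel."""
    return 4.0 * np.real(np.sum(x_fd * np.conj(y_fd) / psd)) * df

def inner_product_tdi(x, y, psd, df, f_signal, f_star=0.0191):
    """Eq. (4): sum over A, E, and (only for f >= f*/2) the T channel.
    f_star ~ 19.1 mHz is assumed here, consistent with f*/2 ~ 9.55 mHz quoted in the text."""
    channels = ("A", "E") if f_signal < f_star / 2.0 else ("A", "E", "T")
    return sum(inner_product(x[c], y[c], psd[c], df) for c in channels)
```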
## III Extracting Galactic Binary Signals in the Full LISA Frequency Band
The simulation of a GW from a GB system involves eight parameters denoted as \(\theta=\left\{\mathcal{A},\lambda,\beta,f,\dot{f},\iota,\phi_{0},\psi\right\}\)[26]. These parameters are utilized to model the GW signal, where \(\mathcal{A}\) represents the amplitude, and \(\lambda\) and \(\beta\) correspond to the sky coordinates in terms of ecliptic longitude and ecliptic latitude, respectively. The parameter \(f\) represents the frequency of the GW, \(\dot{f}\) denotes the first-order frequency derivative, \(\iota\) represents the inclination angle, \(\phi_{0}\) represents the initial phase, and \(\psi\) corresponds to the polarization angle. In this study, we consider only the first-order frequency derivative and neglect higher-order frequency derivatives.
To obtain the MLE we can maximize the signal-to-noise ratio (SNR) defined as
\[\rho=\frac{\langle d|s\left(\theta^{\prime}\right)\rangle}{\sqrt{\langle s \left(\theta^{\prime}\right)|s\left(\theta^{\prime}\right)\rangle}}=\frac{ \langle d|s\left(\theta\right)\rangle}{\sqrt{\langle s\left(\theta\right)|s \left(\theta\right)\rangle}}. \tag{6}\]
which is independent of \(\mathcal{A}\) with \(\theta^{\prime}=\theta\setminus\{\mathcal{A}\}\) and obtain
\[\mathcal{A}_{\max}=\frac{\langle d|s\left(\theta^{\prime}\right)\rangle}{\langle s \left(\theta^{\prime}\right)|s\left(\theta^{\prime}\right)\rangle} \tag{7}\]
analytically [16].
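Building on the sketch above, the SNR (6) and the analytically maximizing amplitude (7) follow directly. This is again an illustrative sketch, assuming a precomputed unit-amplitude template and reusing inner_product_tdi from the previous block.

```
import numpy as np

def snr_and_best_amplitude(d, s_unit, psd, df, f_signal):
    """SNR (Eq. 6) of a unit-amplitude template s(theta') against the data d, and the
    amplitude A_max (Eq. 7) that maximizes the likelihood analytically."""
    d_s = inner_product_tdi(d, s_unit, psd, df, f_signal)       # <d | s(theta')>
    s_s = inner_product_tdi(s_unit, s_unit, psd, df, f_signal)  # <s(theta') | s(theta')>
    return d_s / np.sqrt(s_s), d_s / s_s
```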
### Frequency segments
Because fitting \(>\)10'000 signals globally is currently an intractable problem, we split the data into small segments in the frequency domain. In order to have only a few signals in one segment while keeping the number of segments small for stability and efficiency, we set the segment size to double the width of the broadest signal expected for each frequency segment, \(B_{\text{segment}}(f)=2B_{\max}(f)\). The width of a signal in the frequency domain is influenced by various factors contributing to signal broadening. These factors include the frequency change of the source itself, LISA's orbital motion around the sun, and LISA's cartwheel motion.
To obtain the widest expected broadening we multiply the highest frequency derivative with the observation time \(B_{F}=\dot{f}_{\max}T_{\text{obs}}\). Where \(\dot{f}_{\max}\) is determined by [14]
\[\dot{f}=\frac{96}{5}\pi^{8/3}\mathcal{M}_{c}^{5/3}f^{11/3} \tag{8}\]
where \(\mathcal{M}_{c}=\frac{\left(m_{1}m_{2}\right)^{3/5}}{\left(m_{1}+m_{2}\right) ^{1/5}}\) is the chirp mass and \(f\) the frequency. For \(\dot{f}_{\max}\) the masses of the binary are set to the Chandrasekhar limit \(m_{1}=m_{2}=1.4\,\mathrm{M}_{\odot}\)[27].
LISA's orbit around the sun and cartwheel motion smear the signal by \(B_{\text{O}}=10^{-4}f\) and \(B_{\text{C}}=4\cdot\frac{1}{1yr}\) respectively due to Doppler shift [28]. Since the smearing can increase or decrease the frequency, the resulting bandwidth is \(2B_{\text{O}}\) and \(2B_{\text{C}}\) respectively. As a result, the broadest signal expected has a width of \(B_{\max}=B_{\text{F}}+2B_{\text{O}}+2B_{\text{C}}\) which is shown in Figure 1.
In Algorithm 1, we outline the procedure for generating the list of frequency segments \(B_{\text{search}}\) for a given global frequency interval. The lower bound of the frequency range, \(f_{\min}=0.3\,\mathrm{m}\mathrm{H}\mathrm{z}\), is chosen based on the absence of expected detectable GBs at frequencies lower than \(0.3\,\mathrm{m}\mathrm{H}\mathrm{z}\). The upper bound, \(f_{\max}=f_{\text{Nyquist}}\), is determined by the sampling frequency where the Nyquist criterion states that the sampling frequency should be at least twice the maximum frequency of interest in order to accurately capture the signal [29].
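The segment construction described above can be sketched as follows; a minimal Python implementation of the broadening terms and of the iterative splitting of the band into segments of width \(2B_{\max}(f)\). The solar-mass-in-seconds constant and the 15 s sampling cadence used to set the Nyquist frequency are illustrative assumptions, not values taken from the text.

```
import numpy as np

MSUN_S = 4.9255e-6          # solar mass in seconds (G*Msun/c^3), illustrative constant
YEAR_S = 3.15576e7          # one year in seconds

def fdot_max(f, m1=1.4, m2=1.4):
    """Largest expected frequency derivative (Eq. 8), masses at the Chandrasekhar limit."""
    mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2 * MSUN_S
    return 96.0 / 5.0 * np.pi ** (8.0 / 3.0) * mc ** (5.0 / 3.0) * f ** (11.0 / 3.0)

def segment_width(f, t_obs):
    """B_segment(f) = 2 * B_max(f), with B_max = B_F + 2 B_O + 2 B_C."""
    b_f = fdot_max(f) * t_obs          # intrinsic frequency evolution
    b_o = 1e-4 * f                     # Doppler smearing from the orbit around the Sun
    b_c = 4.0 / YEAR_S                 # smearing from LISA's cartwheel motion
    return 2.0 * (b_f + 2.0 * b_o + 2.0 * b_c)

def build_segments(f_min=0.3e-3, f_max=1.0 / (2 * 15.0), t_obs=2 * YEAR_S):
    """Split [f_min, f_max] into adjacent segments; f_max assumes a 15 s sampling cadence."""
    segments, f_low = [], f_min
    while f_low < f_max:
        f_high = min(f_low + segment_width(f_low, t_obs), f_max)
        segments.append((f_low, f_high))
        f_low = f_high
    return segments
```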
### Prior
In Table 1 we list the prior distribution \(\Theta\) for all parameters. The frequency boundary is the padded frequency segment of interest \(f_{\text{segment}}\in B_{\text{search}}\). The padding is half of the broadest signal expected, \(f_{\text{padding}}=\left(\max(f_{\text{segment}})-\min(f_{\text{segment}})\right)/4\), in case a signal lies at the boundary of two neighboring segments, as, for example, the yellow and grey signals at \(4.226\,\mathrm{m}\mathrm{H}\mathrm{z}\) in Figure 2. For the upper bound of \(\dot{f}\) we use \(\dot{f}_{\max}\) determined by (8). Since we search for detached and interacting binaries, the lower bound of \(\dot{f}\) is negative and is the same as in [14]. The amplitude boundary is determined by a lower and an upper bound on the SNR, which is related to the amplitude by [14]
\[\mathcal{A}\left(\rho\right)=2\rho\left(\frac{S\left(f\right)}{T_{obs}\,\sin^{ 2}\left(f/f_{*}\right)}\right)^{1/2}. \tag{9}\]
### Noise estimate within a frequency segment
For estimating the maximum likelihood of the GBs within a frequency segment the noise is estimated individually for each segment by calculating the periodogram [30; 31]
Figure 1: Frequency segment widths to analyze GBs for \(T_{obs}=2\,\mathrm{yr}\).
\[S_{A}(f)=\frac{2|A(f)|^{2}}{Nf_{\text{sample}}} \tag{10}\]
for each frequency window including the padding as determined for the prior listed in Table 1. \(N\) marks the number of bins within the padded window and \(f_{\text{sample}}\) represents the sampling frequency of the data \(d\). In order to reduce the influence of loud signals within the window itself, the median of \(S_{A}(f)\) is taken as the constant estimate for the full padded frequency segment. This provides a dynamic noise estimate during the search, which is updated after each found signal is subtracted from the data. The estimates for the other TDI variables \(E\) and \(T\) are analogous to that of \(A\).
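A minimal sketch of this per-segment estimate, assuming the Fourier coefficients of the A channel within the padded segment are available, is given below; the function name is illustrative.

```
import numpy as np

def segment_noise_estimate(a_fd, n_bins, f_sample):
    """Constant noise estimate for one padded frequency segment (Eq. 10):
    the median of the per-bin periodogram, suppressing the influence of loud signals."""
    periodogram = 2.0 * np.abs(a_fd) ** 2 / (n_bins * f_sample)
    return np.median(periodogram)
```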
### Galactic Binary search algorithm within a frequency segment
In Algorithm 2 we present the GB search algorithm for given data \(d_{\text{analyze}}\) to analyze on a given frequency segment \(f_{\text{segment}}\in f_{\text{search}}\), which outputs a list \(\tilde{\theta}_{\text{in}}=\{\theta_{\text{MLE},1},\theta_{\text{MLE},2},...\}\) of GB-parameters within the unpadded \(f_{\text{segment}}\). Furthermore, \(n_{\text{signals}}\) is the maximum number of signals per segment.
To save computational time, we limit the integral of the scalar product (3) to the padded frequency segment. To obtain the MLE we use the differential evolution (DE) [32] algorithm, and for the global optimization of all found signals within the unpadded region \(\tilde{\theta}_{\text{in}}\) we use the Sequential Least Squares Programming (SLSQP) method [33]. Both methods are part of the SciPy library [34]. The pipeline is set to search \(n_{\text{searches}}=3\) times for the same signal with varying initial parameters \(\theta^{\prime}_{\text{init}}\) in case the search algorithm gets stuck at a local optimum.
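The optimization step can be sketched with the corresponding SciPy routines as follows. The objective used here is a toy surrogate standing in for the negative SNR of (6), and the bounds are placeholders for the prior of Table 1; the sketch only illustrates how DE restarts and the subsequent SLSQP refinement could be wired together.

```
import numpy as np
from scipy.optimize import differential_evolution, minimize

def neg_snr(theta_prime, target):
    """Toy surrogate for -rho(theta'); the pipeline instead evaluates the simulated GB
    waveform and the noise-weighted inner products of Eq. (6)."""
    return float(np.sum((np.asarray(theta_prime) - target) ** 2))

bounds = [(-1.0, 1.0)] * 7          # 7 amplitude-free parameters (illustrative bounds)
target = np.linspace(-0.5, 0.5, 7)  # toy "data" for the surrogate objective

# MLE search within one segment: differential evolution, restarted n_searches = 3 times.
best = None
for _ in range(3):
    result = differential_evolution(neg_snr, bounds, args=(target,))
    if best is None or result.fun < best.fun:
        best = result

# Global refinement of the signals found in the unpadded region with SLSQP.
refined = minimize(neg_snr, best.x, args=(target,), method="SLSQP", bounds=bounds)
print(refined.x)
```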
Furthermore, we generalize the SNR to multiple signals \(\tilde{\theta}=\{\theta_{1},\theta_{2},...\}\)
\[\rho=\frac{\left\langle d|\underset{\theta\in\tilde{\theta}}{\sum}s\left( \theta\right)\right\rangle}{\sqrt{\left\langle\underset{\theta\in\tilde{ \theta}}{\sum}s\left(\theta\right)|\underset{\theta\in\tilde{\theta}}{\sum}s \left(\theta\right)\right\rangle}}. \tag{11}\]
```
Function local_GB_search(f_segment, n_signals, d_analyze)
    theta_found <- { }
    theta_in <- { }
    theta_out <- { }
    d_residual <- d_analyze
    for i in {1, 2, ..., n_signals} do
        theta'_MLE_candidates <- { }
        for j in {1, 2, ..., n_searches} do
            theta'_init <- randomly drawn from prior
            theta'_MLE <- argmax ...
```
**Algorithm 2** The GB search algorithm within a frequency segment \(f_{\text{segment}}\).
Following Table 2, we first analyze the even segments of \(B_{\text{search}}\) on the full data \(d\), searching for up to 3 signals per segment, which yields \(\tilde{\theta}_{\text{even}}\). Next, we proceed to analyze the odd segments in a similar manner, after subtracting the signals \(\tilde{\theta}_{\text{even}}\) from the data. The signals found in the odd segments, denoted as \(\tilde{\theta}_{\text{odd}}\), are then subtracted from the original data \(d\). Finally, we repeat the analysis of the even segments, now free from the influence of signals located in the neighboring odd segments. By subtracting the signals found in the odd segments and re-analyzing the even segments, we ensure that each segment of \(B_{\text{search}}\) is analyzed independently, without being affected by signals in neighboring segments.
```
Function global_GB_search(\(f_{\text{search}},n_{\text{signals}},d_{\text{analyze}}\))
    \(\tilde{\theta}\leftarrow\{\,\}\)
    for all \(f_{\text{segment}}\) in \(f_{\text{search}}\) do in parallel
        \(\tilde{\theta}\leftarrow\tilde{\theta}\cup\textit{local\_GB\_search}(f_{\text{segment}},n_{\text{signals}},d_{\text{analyze}})\)
    end for
    return \(\tilde{\theta}\)
```
**Algorithm 3** The search algorithm for multiple frequency segments \(f_{\text{search}}\).
The even segments where no signals were detected in neighboring segments and fewer than 3 signals were found are not analyzed a second time, since the subtraction of the signals in the odd segments did not influence these even segments and there is no need to repeat the search. For these segments, the signals found in the first analysis of the even segments are used directly for the catalog.
The LISA data will be a time-evolving data set with new data constantly being added. Therefore, the signals found in previous runs can be used to speed up the analysis: for \(j=1\) in Algorithm 2, \(\theta^{\prime}_{\text{init}}\) is set to a signal found within that frequency segment in the previous run. Especially for signals with \(f>10\,\text{mHz}\) and \(T_{\text{obs}}>1\,\text{yr}\), the success rate of _local_GB_search_ becomes small if \(\theta^{\prime}_{\text{init}}\) is randomly drawn from the prior. It is therefore advantageous to use the signals found in a previous analysis of a shorter data set, for example \(T_{\text{obs}}=6\,\text{months}\), as the initial value of the search algorithm.
The global solution is then \(\tilde{\theta}_{\text{recovered}}=\tilde{\theta}_{\text{even}}\cup\tilde{\theta}_{\text{odd}}\), where \(\tilde{\theta}_{\text{even}}\) is the solution of the third run. In Figure 2 we show the solution \(\tilde{\theta}_{\text{recovered}}\) for four neighboring segments in a region with multiple detectable and overlapping signals. The pipeline successfully recovers 25 out of 30 injected GBs. Among the 25 recovered signals, 24 correspond to individual injected signals, indicating a high level of accuracy in the recovery process. In addition, it is worth noting that the recovered signal at \(f=4.22\,\text{mHz}\) is a composite of two injected signals. The remaining unrecovered signals are characterized by low amplitudes \(\mathcal{A}\).
### Global noise estimate
For the global noise estimate, we subtract each recovered signal in \(\tilde{\theta}_{\text{recovered}}\) from the data
\[d_{\text{residual}}=d-\sum_{\theta\in\tilde{\theta}_{\text{recovered}}}s( \theta). \tag{12}\]
where \(s(\theta)\) represents the signal corresponding to each MLE \(\theta\). Furthermore, we proceed to estimate a smooth noise curve, denoted as \(S_{A,\text{welch}}(f)\), across the entire frequency domain. This estimation is performed by applying
\begin{table}
\begin{tabular}{c c c c c} run & \(f_{\text{search}}\) & \(n_{\text{signals}}\) & \(d_{\text{analyze}}\) & output \\ \hline
1 & \(B_{\text{even}}\) & 3 & \(d\) & \(\tilde{\theta}_{\text{even}}\) \\
2 & \(B_{\text{odd}}\) & 10 & \(d-\sum\limits_{\theta\in\tilde{\theta}_{\text{even}}}s(\theta)\) & \(\tilde{\theta}_{\text{odd}}\) \\
3 & \(B_{\text{even}}\) & 10 & \(d-\sum\limits_{\theta\in\tilde{\theta}_{\text{odd}}}s(\theta)\) & \(\tilde{\theta}_{\text{even}}\) \\ \end{tabular}
\end{table}
Table 2: Inputs and outputs of the search pipeline _global_GB_search_ across all frequency segments \(B\) for given data \(d\).
Figure 2: Displayed are the data, injected signals, and recovered signals of the Radler data challenge with \(T_{\text{obs}}=2\,\text{yr}\). The red lines mark the boundaries of four adjacent frequency segments. The first plot illustrates the absolute value of the A TDI channel, while the second plot depicts the amplitude \(\mathcal{A}\) across the frequency spectrum. The plot is extended to the left and right by the padding of the segments at the borders.
Welch's method, utilizing 500 windows and a Hann window function [35]. Next, we address the remaining outlier peaks, mainly from unresolved signals, with a smoothing procedure: we define a frequency window of 30 bins and set any values above the window's median to twice the median value. This process is repeated by shifting the window by 15 frequency bins until the entire power spectral density (PSD) is smoothed. The result is denoted as \(S_{A,\mathrm{median}}(f)\).
To further enhance the smoothing effect, we utilize the Savitzky-Golay filter [36]. The filter is configured with an order of 1, and we apply two different window lengths depending on the frequency range. For observations with \(T_{\mathrm{obs}}\) equal to either 1 or 2 years, frequencies below 0.8 mHz are smoothed using a window length of 10, while frequencies above 0.8 mHz are smoothed using a window length of 70. In the case of \(T_{\mathrm{obs}}=0.5\) yr, frequencies below 0.8 mHz employ a window length of 10, and frequencies above 0.8 mHz are smoothed using a window length of 50.
Finally, to obtain a PSD estimate for each desired frequency bin, we spline interpolate the smoothed PSD, resulting in our estimate of the residual noise curve denoted as \(S_{A,\mathrm{residual}}(f)\).
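A possible NumPy/SciPy sketch of this smoothing chain is given below. All names are illustrative, and the Savitzky-Golay window lengths are rounded to odd values, as required by `scipy.signal.savgol_filter`.

```python
import numpy as np
from scipy.signal import welch, savgol_filter
from scipy.interpolate import InterpolatedUnivariateSpline

def residual_noise_curve(a_residual_td, f_sample, f_target, t_obs_yr):
    """Welch PSD -> running-median clipping -> Savitzky-Golay -> spline interpolation."""
    # Welch estimate with roughly 500 windows and a Hann window
    f, S = welch(a_residual_td, fs=f_sample, window='hann',
                 nperseg=len(a_residual_td) // 500)

    # clip outlier peaks (mostly unresolved GBs): values above the window median
    # are set to twice the median; the window is shifted by 15 bins
    S_med = S.copy()
    for start in range(0, len(S_med) - 30, 15):
        win = S_med[start:start + 30]
        med = np.median(win)
        win[win > med] = 2.0 * med

    # frequency-dependent Savitzky-Golay smoothing (window lengths as in the text)
    low = f < 0.8e-3
    win_hi = 71 if t_obs_yr >= 1 else 51
    S_sg = np.empty_like(S_med)
    S_sg[low] = savgol_filter(S_med[low], window_length=11, polyorder=1)
    S_sg[~low] = savgol_filter(S_med[~low], window_length=win_hi, polyorder=1)

    # spline-interpolate onto the frequency bins used in the analysis
    return InterpolatedUnivariateSpline(f, S_sg)(f_target)
```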
The noise estimates, depicted in Figure 3, exhibit a strong agreement with the instrument noise \(S_{A,\mathrm{instrument}}(f)\), except for the frequency range between 0.2 mHz and 5 mHz. In this range, the unresolved GBs merge into the galactic foreground noise, leading to deviations in the noise estimate. The noise of the other TDI channels \(E\) and \(T\) is computed in the same way.
### GPU accelerated posterior distribution derivation
In order to derive the posterior distribution, we employ the Metropolis-Hastings Monte Carlo (MHMC) algorithm [37, 38]. This algorithm suggests new parameters \(\theta_{\mathrm{p}}\) based on a proposal distribution \(g(\theta_{\mathrm{p}}|\theta_{\mathrm{c}})\), which generally depends on the current state of the chain \(\theta_{\mathrm{c}}\). The proposed parameters are then accepted with probability
\[P(\theta_{\mathrm{p}},\theta_{\mathrm{c}})=\min\left(1,\left[\frac{p(d\mid \theta_{\mathrm{p}})}{p(d\mid\theta_{\mathrm{c}})}\frac{g(\theta_{\mathrm{c} }\mid\theta_{\mathrm{p}})}{g(\theta_{\mathrm{p}}\mid\theta_{\mathrm{c}})} \right]^{\frac{1}{\mathcal{T}}}\right) \tag{13}\]
where \(\mathcal{T}\) is the temperature for simulated annealing.
Previously, [16] demonstrated that the MLE can be effectively utilized to accelerate the computation of the posterior distribution. The posterior distribution tends to be concentrated within a relatively compact region of the parameter space. As a result, it becomes unnecessary to sample beyond specific parameter space boundaries when employing Markov chain Monte Carlo (MCMC) methods to estimate the posterior. By identifying the reduced parameter space \(\Theta_{\mathrm{reduced}}\) where the posterior is concentrated, we can skip the burn-in phase typically required in MCMC sampling. Moreover, this approach allows for a proposal distribution \(g(\theta_{\mathrm{p}})=g(\theta_{\mathrm{p}}|\theta_{\mathrm{c}})\) that is independent of the current state of the chain \(\theta_{\mathrm{c}}\). This is achieved by randomly drawing samples within \(\Theta_{\mathrm{reduced}}\). The independence from the chain's state enables the parallel computation of the log-likelihood for all samples in the first step, followed by the construction of the chain during the second step, where if the proposed sample \(\theta_{\mathrm{p}}\) is rejected the chain stays at the current sample \(\theta_{\mathrm{c}}\) as described in Algorithm 4. This approach leverages the computational power of a GPU to rapidly compute the log-likelihood of 10'000 samples in parallel, facilitating a more efficient and rapid estimation of the posterior distribution.
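The two-step construction can be sketched in plain NumPy as follows; the GPU batch evaluation of the likelihoods is abstracted into the precomputed array `log_like_batch`, and all names are illustrative.

```python
import numpy as np

def independent_mh(samples, log_like_batch, log_g, temperature=1.0, seed=None):
    """Metropolis-Hastings with a proposal that is independent of the chain state.

    samples        : proposed parameter vectors, shape (n_samples, n_dim)
    log_like_batch : log-likelihood of every sample, computed in one parallel pass
    log_g          : log proposal density of every sample
    """
    rng = np.random.default_rng(seed)
    chain = [samples[0]]
    ll_c, lg_c = log_like_batch[0], log_g[0]
    for i in range(1, len(samples)):
        # log of the acceptance probability of Eq. (13), with tempering
        log_alpha = ((log_like_batch[i] - ll_c) + (lg_c - log_g[i])) / temperature
        if np.log(rng.uniform()) < min(0.0, log_alpha):
            ll_c, lg_c = log_like_batch[i], log_g[i]
            chain.append(samples[i])
        else:
            chain.append(chain[-1])   # rejected: the chain repeats the current state
    return np.asarray(chain)
```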
To establish the reduced parameter space \(\Theta_{\mathrm{reduced}}\), we use the inverse of the Fisher Information Matrix (FIM)
\[F_{ij}=\langle\partial_{i}p(d\mid\theta_{\mathrm{MLE}})|\partial_{j}p(d\mid \theta_{\mathrm{MLE}})\rangle, \tag{14}\]
where \(\partial_{i}\) denotes the partial derivative with respect to the \(i^{\mathrm{th}}\) component of the parameter vector \(\theta\). To compute the derivatives of the FIM, the second-order forward finite difference method is employed with a step size of \(10^{-9}\) times the search space determined by the prior distribution \(\Theta\). The estimated uncertainty vector is \(\sigma=\sqrt{\mathrm{diag}(F^{-1})}\).
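As an illustration, the finite-difference Fisher matrix and the resulting uncertainty vector can be computed as sketched below; here `model(theta)` stands for the waveform template and `inner(a, b)` for the noise-weighted inner product, both of which are assumed to be provided.

```python
import numpy as np

def fisher_sigma(model, theta_mle, prior_width, inner, rel_step=1e-9):
    """Fisher-matrix uncertainty estimate using second-order forward differences."""
    n = len(theta_mle)
    h = rel_step * np.asarray(prior_width)      # step size relative to the prior range
    f0 = model(theta_mle)
    derivs = []
    for i in range(n):
        e = np.zeros(n); e[i] = 1.0
        # second-order accurate forward finite difference of the template
        d_i = (-3.0 * f0 + 4.0 * model(theta_mle + h[i] * e)
               - model(theta_mle + 2.0 * h[i] * e)) / (2.0 * h[i])
        derivs.append(d_i)
    F = np.array([[inner(derivs[i], derivs[j]) for j in range(n)] for i in range(n)])
    return np.sqrt(np.diag(np.linalg.inv(F)))   # sigma = sqrt(diag(F^-1))
```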
In our investigation, we set the volume of the parameter space to \(\Theta_{\mathrm{reduced}}=[\theta_{\mathrm{MLE}}-4\sigma,\theta_{\mathrm{MLE}}+4\sigma]\). It is sufficient to set the boundary for the frequency parameter to \(\Theta_{\mathrm{reduced}}^{f}=[\theta_{\mathrm{MLE}}^{f}-\sigma_{f},\theta_{\mathrm{MLE}}^{f}+\sigma_{f}]\) where \(\sigma_{f}\) denotes the estimated uncertainty of the frequency. The frequency derivative parameter space is not reduced and spans the full prior \(\Theta_{\mathrm{reduced}}^{\dot{f}}=\Theta^{\dot{f}}\). Due to degeneracy, we neglect the distribution of the polarization and
Figure 3: Noise estimates and power spectrum density (PSD) of the TDI A channel of the 1 yr Sangria data set. \(S_{A,\mathrm{instrument}}\) is the noise PSD used for creating the data. The difference between \(S_{A,\mathrm{residual}}\) and the true PSD between 0.2 mHz and 5 mHz is due to the unresolved GBs which can be seen as red crosses in Figure 7. It is expected that most GBs in that frequency range are unresolvable and therefore merge into the galactic foreground noise.
the initial phase and define a narrow search space for them. Hence, we set \(\Theta^{\psi}=[\theta_{\text{MLE}}^{\psi}-\frac{\pi}{1000},\theta_{\text{MLE}}^{ \psi}+\frac{\pi}{1000}]\) and \(\Theta^{\phi_{0}}=[\theta_{\text{MLE}}^{\phi_{0}}-\frac{2\pi}{1000},\theta_{ \text{MLE}}^{\phi_{0}}+\frac{2\pi}{1000}]\).
Simulated annealing is useful to further speed up the computation of the posterior. As a start, we use uniform sampling in the reduced parameter space \(\Theta_{\text{reduced}}\) as the proposal distribution with a high temperature. Next, we utilize the obtained posterior distribution as the new proposal distribution by employing multivariate kernel density estimation (KDE) techniques [39, 40]. To address the challenge of high-dimensional KDE computations, we group parameters into 2-dimensional parameter pairs. Specifically, we group \(\mathcal{A}-\iota\), \(\lambda-\beta\), and \(f-\dot{f}\) together and perform KDE on each pair. This allows us to overcome the computational limitations associated with kernel density estimation involving four or more parameters. By gradually lowering the temperature \(\mathcal{T}\) during the simulated annealing process, we can achieve more refined and accurate estimations of the posterior distribution while maintaining computational efficiency.
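A sketch of the pairwise KDE proposal using `scipy.stats.gaussian_kde` is shown below; the parameter ordering assumed in `PAIRS` is purely illustrative.

```python
import numpy as np
from scipy.stats import gaussian_kde

# assumed parameter order: [A, iota, lambda, beta, f, fdot]
PAIRS = [[0, 1], [2, 3], [4, 5]]

def build_pair_kdes(posterior_samples):
    """Fit one 2-D KDE per parameter pair on the chain of the previous temperature."""
    return [gaussian_kde(posterior_samples[:, pair].T) for pair in PAIRS]

def sample_proposal(kdes, n_samples):
    """Draw proposals pairwise and evaluate the factorized proposal density g."""
    draws = [kde.resample(n_samples) for kde in kdes]     # each has shape (2, n_samples)
    samples = np.vstack(draws).T                          # shape (n_samples, 6)
    log_g = sum(kde.logpdf(d) for kde, d in zip(kdes, draws))
    return samples, log_g
```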
The results presented in Section IV are created with six different temperatures \(\mathcal{T}=\{15,10,5,3,2,1\}\) and a constant \(n_{\text{samples}}=10\,000\). The number of samples \(n_{\text{samples}}\) could also be varied for each temperature.
```
Function posterior(\(\theta_{\text{MLE}},d_{\text{posterior}},\Theta_{\text{reduced}},\tilde{\mathcal{T}},n_{\text{samples}}\))
    \(\Theta_{\text{sample}}\leftarrow\Theta_{\text{reduced}}\)
    for \(\mathcal{T}\) in \(\tilde{\mathcal{T}}\) do
        \(\tilde{\theta}_{\text{posterior}}\leftarrow\{\,\}\)
        \(\tilde{L}\leftarrow\{\,\}\)
        \(\tilde{\theta}_{\text{samples}}\leftarrow n_{\text{samples}}\) randomly drawn from \(\Theta_{\text{sample}}\)
        for all \(\theta\) in \(\tilde{\theta}_{\text{samples}}\) do in parallel on GPU
            \(\tilde{L}\leftarrow\tilde{L}\cup\{p(d_{\text{posterior}}\mid\theta)\}\)
        end for
        \(\theta_{c}\leftarrow\tilde{\theta}_{1}\)
        \(L_{\text{current}}\leftarrow\tilde{L}_{1}\)
        for \(i\) in \(\{2,3,...,n_{\text{samples}}\}\) do
            \(\alpha\leftarrow\min\left(1,\left[\frac{L_{i}}{L_{\text{current}}}\frac{g(\theta_{c})}{g(\theta_{i})}\right]^{\frac{1}{\mathcal{T}}}\right)\)
            with probability \(\alpha\) do
                \(\theta_{c}=\tilde{\theta}_{i}\)
                \(L_{\text{current}}=\tilde{L}_{i}\)
            \(\tilde{\theta}_{\text{posterior}}\leftarrow\tilde{\theta}_{\text{posterior}}\cup\{\theta_{c}\}\)
        end for
        \(\Theta_{\text{sample}}\leftarrow\text{KDE}(\tilde{\theta}_{\text{posterior}})\)
    end for
    return \(\tilde{\theta}_{\text{posterior}}\)
```
**Algorithm 4** The GPU accelerated posterior distribution algorithm.
The algorithm to compute the posterior distribution for a single signal \(\theta_{\text{MLE}}\in\tilde{\theta}_{\text{recovered}}\) is presented in Algorithm 4. The computation of the likelihood \(p(d_{\text{posterior}}\mid\theta)\) on the GPU is based on the implementation of [41] described in [18]. The input data is
\[d_{\text{posterior}}=d-\sum_{\theta\in\tilde{\theta}_{\text{ recovered}}}s(\theta)+s(\theta_{\text{MLE}}). \tag{15}\]
The reduced parameter space \(\Theta_{\text{reduced}}\) is determined with \(\theta_{\text{MLE}}\) and \(d_{\text{posterior}}\) as described above.
The resulting posterior distribution is the posterior given the overlapping MLEs \(\tilde{\theta}_{\text{overlap}}\subset\tilde{\theta}_{\text{recovered}}\)
\[p(\theta_{\text{MLE}}|d_{\text{posterior}})=p(\theta|d,\tilde{\theta}_{\text{ overlap}}) \tag{16}\]
which has a narrower posterior distribution than the marginalized posterior
\[p(\theta_{\text{MLE}}|d)=\int p(\theta,\tilde{\theta}_{\text{overlap}}|d)p( \tilde{\theta}_{\text{overlap}})\,\mathrm{d}\tilde{\theta}_{\text{overlap}}. \tag{17}\]
Overlapping signals lead to a joint posterior distribution. To approximate the marginalized posterior for such cases, one approach is to increase the estimated noise by computing the noise of the partial residual \(S_{A,\text{partial}}(f)\). This is achieved by subtracting the found signals only partially from the original data, leaving some residual signal components in the data
\[d_{\text{partial}}=d-s_{\text{partial}}\sum_{\theta\in\tilde{\theta}_{ \text{recovered}}}s(\theta). \tag{18}\]
where \(s_{\text{partial}}\in[0,1]\) is a scaling factor which we set to \(s_{\text{partial}}=0.7\). By analyzing this partial residual, one can obtain an approximation of the marginalized posterior distribution that takes into account the presence of overlapping signals. In Figure 3 the difference between \(S_{A,\text{residual}}(f)\) and \(S_{A,\text{partial}}(f)\) is clearly visible for \(f\in[2\,\mathrm{mHz},10\,\mathrm{mHz}]\) where most signals are found.
### Pipeline
To conclude we present in Algorithm 5 the full pipeline to extract GBs within a given frequency range of \(f_{\text{min}}\) and \(f_{\text{max}}\). The output is the list of MLEs \(\tilde{\theta}_{\text{recovered}}\) and the list of MCMC chains \(\tilde{\theta}_{\text{posteriors}}\) which provide the posterior distribution.
## IV Results
The analysis of the Radler data set, LDC1-4, started with the first \(0.5\,\mathrm{yr}\) and continued with \(1\,\mathrm{yr}\) and \(2\,\mathrm{yr}\), where the signals found in the previous analysis are used as initial guesses for the DE algorithm. The Sangria data set, LDC2a, with the massive black hole binaries subtracted, is analyzed once for the full \(1\,\mathrm{yr}\) of data. The global frequency band is set to \(f_{\text{min}}=0.3\,\mathrm{mHz}\) and \(f_{\text{max}}=f_{\text{Nyquist}}\), where \(f_{\text{Nyquist}}=33.3\,\mathrm{mHz}\) for the Radler challenge and \(f_{\text{Nyquist}}=100\,\mathrm{mHz}\) for the Sangria challenge.
### Computation times
Each segment of \(f_{\text{search}}\) in _global_GB_search_ can be analyzed in parallel, as noted with "do in parallel" in Algorithm 3. Therefore, the shortest time to analyze the data set, \(T_{\text{parallel}}\), is given by the sum over the three sequential runs (\(B_{\text{even}}\), \(B_{\text{odd}}\), and \(B_{\text{even}}\), as listed in Table 2) of the longest single-segment analysis time in each run. The time to analyze a segment varies considerably: segments with no detectable signal are analyzed within \(2\,\text{min}\), while the longest computation time of a segment containing multiple detectable signals was \(126\,\text{min}\).
The data analysis to obtain the MLEs was run on a high-performance computer. In Table 3 we present the search times for finding the MLE solutions of the Radler data set. The pipeline demonstrates its efficiency by analyzing the longest observation time of \(T_{\text{obs}}=2\,\text{yr}\) in only \(6\,\text{h}\). In terms of computational cost, the analysis requires approximately \(3\,300\) CPU core hours. If commercial high-performance computing services such as those provided by Google are used, the estimated cost amounts to approximately \(100\,\text{USD}\) [42].
Furthermore, the computation of posterior distributions according to Section III.7 takes \(1.8\) seconds per signal on a Quadro RTX \(4000\) Mobile GPU. Therefore, for example for the \(8\,385\) recovered signals of the Sangria challenge it took \(4.2\,\text{h}\) on a single laptop to compute all posterior distributions.
### Matching recovered signals with injected signals
To evaluate the accuracy of the recovered signals \(\theta_{\text{rec}}\in\tilde{\theta}_{\text{recovered}}\), we match them with the injected signals \(\theta_{\text{inj}}\in\tilde{\theta}_{\text{injected}}\) at similar frequencies. To determine matches quantitatively, we use the scaled error
\[\delta(s(\theta_{\text{rec}}),s(\theta_{\text{inj}}))=\frac{\langle s(\theta_ {\text{rec}})-s(\theta_{\text{inj}}),s(\theta_{\text{rec}})-s(\theta_{\text{ inj}})\rangle}{\langle s(\theta_{\text{rec}}),s(\theta_{\text{rec}})\rangle} \tag{19}\]
which is dependent on the amplitude of the signals.
In other works the scaled correlation, also called overlap,
\[\mathcal{O}(s(\theta_{\text{rec}}),s(\theta_{\text{inj}}))=\frac{\langle s(\theta_{\text{rec}}),s(\theta_{\text{inj}})\rangle}{\sqrt{\langle s(\theta_{\text{rec}}),s(\theta_{\text{rec}})\rangle\langle s(\theta_{\text{inj}}),s(\theta_{\text{inj}})\rangle}} \tag{20}\]
of two signals \(s(\theta_{\text{rec}})\) and \(s(\theta_{\text{inj}})\) is used [43].
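Both metrics are straightforward to evaluate once a noise-weighted inner product is available; the sketch below assumes such an `inner(a, b)` callable.

```python
import numpy as np

def scaled_error(s_rec, s_inj, inner):
    """Scaled error delta of Eq. (19)."""
    diff = s_rec - s_inj
    return inner(diff, diff) / inner(s_rec, s_rec)

def overlap(s_rec, s_inj, inner):
    """Overlap O of Eq. (20)."""
    return inner(s_rec, s_inj) / np.sqrt(inner(s_rec, s_rec) * inner(s_inj, s_inj))
```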
Figure 4 shows the sky locations of all recovered signals in ecliptic coordinates. The recovered signals follow the geometry of the galaxy, with the signals of high \(\delta\) (yellow dots) lying slightly off the center of the galaxy. In Table 4 we present the number of recovered and matched signals for each analysis, where we also include the overlap \(\mathcal{O}\) as a match metric for comparison with other evaluations that used the overlap [14, 15, 9, 10]. The consistently high match rate of all analyses indicates good-quality recoveries. In Figure 5 we see only small changes in the cumulative distribution function across the analyses. Only the cumulative
Figure 4: Scatter plot of the scaled error \(\delta\) across the ecliptic longitude and ecliptic latitude of the \(2\,\text{yr}\) Radler data set. The range of the errorbar is clipped at \(10^{-3}\) to \(10^{0}\).
\begin{table}
\begin{tabular}{c c c c} Challenge & \(T_{\text{obs}}\) (yr) & CPU core time (h) & \(T_{\text{parallel}}\) (h) \\ \hline Radler & \(0.5\) & \(1\,607\) & \(3.2\) \\ Radler & \(1\) & \(2\,106\) & \(4.3\) \\ Radler & \(2\) & \(3\,269\) & \(5.5\) \\ \end{tabular}
\end{table}
Table 3: Computational times of the Radler LDC1-4 data with different \(T_{\text{obs}}\). The CPU time is the sum of the computational time of all analyzed frequency segments. \(T_{\text{parallel}}\) is the shortest computation time if the segments are analyzed in parallel on multiple CPU threads.
distribution function of \(\mathcal{O}\) for the Sangria data set has a higher count for smaller \(\mathcal{O}\).
Since the overlap between a low-amplitude signal and a loud signal can be high even when the scaled error indicates a poor match, we classify the recovered signals with \(\delta<0.3\) as "matched" signals. For each matched signal, we calculate the error using \(\Delta\beta=|\beta_{\text{rec}}-\beta_{\text{inj}}|\). In Figure 6, we present the error histograms for all parameters. Notably, there is a clear trend of decreasing errors with longer observation times, as expected. The error histograms for the 1 yr analyses of the Radler and Sangria data sets exhibit similar patterns, and the results of the 1 yr analysis are consistent with
\begin{table}
\begin{tabular}{c c c c c c c c} Challenge & \(T_{\text{obs [yr]}}\) & Injected (\(\rho>10\)) & Recovered & \(\delta<0.3\) & Match rate \({}_{\delta<0.3}\) & \(\mathcal{O}>0.9\) & Match rate \({}_{\mathcal{O}>0.9}\) \\ \hline Radler & 0.5 & 6 813 & 3 937 & 3 418 & 87\% & 3 407 & 87\% \\ Radler & 1 & 11 814 & 7 112 & 6 270 & 88\% & 6 251 & 88\% \\ Sangria & 1 & 11 814 & 8 385 & 7 173 & 86\% & 7 186 & 86\% \\ Radler & 2 & 18 332 & 11 952 & 10 369 & 87\% & 10 363 & 87\% \\ \end{tabular}
\end{table}
Table 4: The variables of interest include the count of detectable injected GB sources, the count of recovered sources, the count of matches with injected sources, and the match rate. The match rate is determined by dividing the number of matched signals by the total number of recovered signals. The overlap is included to get an evaluation comparable to other analyses [9; 10; 15].
Figure 5: Cumulative distribution function of \(\delta\) and \(\mathcal{O}\) in the top plots and the survival function in the bottom plots. The plots of the overlap \(\mathcal{O}\), on the right, are comparable with other analyses such as [14; 15].
Figure 6: Error histogram of all matched signals with \(\delta<0.3\).
our expectations. For the frequency and amplitude parameters, we display the relative errors. The relatively higher errors observed for \(\phi_{0}\) and \(\psi\) can be attributed to the inherent degeneracy between these two parameters. However, it is evident that the degeneracy diminishes with increasing \(T_{\rm obs}\).
### Galaxy
The recovered signals that meet the matching criteria are visualized as green dots in Figure 7. Additionally, it is evident from the plot that the recovered signals without a satisfactory match predominantly have lower amplitudes \(\mathcal{A}\). The ability of LISA to recover signals is contingent upon the sensitivity curve, which exhibits lower sensitivity at lower frequencies. Consequently, only signals with higher amplitudes are recoverable at low frequencies. The majority of the recovered signals are concentrated in the central region of the Milky Way, which is also the location of a significant portion of the sources.
For each matched signal with \(\dot{f}>0\), where we assume that the evolution of the GB is purely driven by the emission of GWs, we can estimate the luminosity distance [14]
\[D_{L}=\frac{5\dot{f}}{48\mathcal{A}\pi^{6/3}f_{0}^{5/3}} \tag{21}\]
which is a good estimate of the distance in Euclidean space for objects in the Milky Way. Therefore we are able to convert these GBs to the galactocentric coordinate system and present them in Figure 8. The upper plot illustrates the distribution of all injected signals with \(f>0.3\,\mathrm{mHz}\) and \(\dot{f}>0\), while the lower plot depicts the recovered GBs. It should be noted that the number of recovered GBs is lower than the injected ones due to the majority of injected GBs having a low SNR, rendering them unrecoverable. Notably, a significant number of recovered GBs are located in close proximity to the sun, which aligns with expectations as closer sources exhibit higher SNR. This trend is also evident in the galactocentric 3D plot shown in Figure 9.
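As an illustration of this coordinate conversion, the sketch below uses astropy (one possible tool, not necessarily the one used to produce the figures) to map the ecliptic sky location and the distance estimate of Eq. (21) to galactocentric coordinates.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric, BarycentricTrueEcliptic

def to_galactocentric(lam_rad, beta_rad, distance_kpc):
    """Ecliptic longitude/latitude (rad) and distance (kpc) -> galactocentric x, y, z."""
    c = SkyCoord(lon=lam_rad * u.rad, lat=beta_rad * u.rad,
                 distance=distance_kpc * u.kpc, frame=BarycentricTrueEcliptic)
    return c.transform_to(Galactocentric()).cartesian.xyz
```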
### Posterior
Assessing the posterior distribution of \(10\,000\) signals presents challenges, particularly in the absence of ground truth for comparison. However, leveraging statistical techniques allows us to evaluate the quality of the uncertainty estimates. Additionally, we can quantify the enhanced precision of the posterior distribution as \(T_{\rm obs}\) increases. In Figure 10, we observe the evolution of accuracy and precision for one signal's sky location as a function of \(T_{\rm obs}\). Notably, for \(T_{\rm obs}=0.5\,\mathrm{yr}\), the accuracy and precision are comparatively lower than those achieved with longer \(T_{\rm obs}\).
Since the posterior distribution of the sky location is approximately Gaussian, we can estimate the uncertainty by computing the standard deviation \(\sigma_{\beta}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(\beta_{i}-\mu_{\beta})^{2}}\) with mean \(\mu_{\beta}=\frac{1}{N}\sum_{i=1}^{N}\beta_{i}\), where \(N=n_{\rm samples}\) is the length of the MCMC chain and \(\beta_{i}\) is the \(i^{\rm th}\) sample of the chain. The uncertainty for the other parameters is computed analogously. In the next step, we can estimate the angular confidence area
\[\sigma_{\rm area}=\int_{\mu_{\beta}-\sigma_{\beta}}^{\mu_{\beta}+\sigma_{\beta }}\int_{\mu_{\lambda}-\sigma_{\lambda}}^{\mu_{\lambda}+\sigma_{\lambda}}\sin \beta\,d\lambda\,d\beta \tag{22}\]
of the sky location for each signal. Figure 11 displays the histogram of all analyses, revealing a notable trend. As \(T_{\rm obs}\) increases, the number of posteriors with small confidence areas also increases. This observation aligns with the findings depicted in Figure 10, showing how the posterior of a signal becomes narrower with longer \(T_{\rm obs}\).
Figure 7: Scatter plot of recovered GBs and injected GBs of the \(2\,\mathrm{yr}\) Radler data set. The upper plot is across the GB amplitude \(\mathcal{A}\) and frequency \(f\) and the lower plot is across the ecliptic sky locations. The green dots are the recovered GBs with \(\delta<0.3\) which are categorized as matched signals. The blue circles on the plot represent the recovered signals that did not have a close match with any of the injected signals. The red crosses represent the injected signals that did not have a good match with any of the recovered signals. These signals were not effectively captured or identified during the recovery process.
However, it is important to note that the total number of extracted signals from the data also rises as the SNR of signals improves with longer observation times. Consequently, the number of signals with wider confidence areas also increases.
To assess the quality of the posterior estimate, we can examine whether the true parameters lie within the confidence interval as expected. If the accuracy and precision of the posterior are correct, we would expect to find the true parameters approximately 68% of the time within the interval of \(1\sigma\) standard deviation.
If the number of parameters within the confidence interval is higher than expected, it suggests that the precision is worse, meaning the posterior distribution is too wide. On the other hand, if the number of parameters within the confidence interval is lower than expected, it indicates potential inaccuracies in the posterior estimate. This could mean that the posterior distribution is not located at the true parameters, and/or it is excessively precise, where the posterior distribution is too narrow.
The results for individual parameters are presented in
Figure 8: The GB distribution of the Milky Way galaxy seen perpendicular to the galactic plane according to the simulated Radler data set. The red dot marks the sun. The top plot shows the distribution of the injected GBs and the bottom plot the distribution of the recovered GBs.
Figure 10: Posterior distribution of the sky location for the 3 analyses of the Radler data set for the signal with injected frequency \(f_{\rm inj}=4.169906\,{\rm mHz}\). The dashed black lines mark the true values of the matched injected signal \(\theta_{\rm inj}\).
Figure 9: Recovered GBs plotted as blue dots in the galactocentric coordinate system. The red dot marks the sun. The density of GBs is represented by 2D contour lines on the planes.
Table 5, where we compute the standard deviation and check if the true parameter falls within the 68% confidence interval. Due to degeneracy, the evaluation of \(\phi_{0}\) and \(\psi\) is omitted as it would not yield a proper assessment. We observe that the uncertainty estimate for the sky locations, with observation times of 1 yr or more, is close to the expected rate. However, the other parameters exhibit a lower rate than expected. This can be attributed to multiple reasons. Firstly, it could be due to inaccurate estimation, where the true value does not align with the posterior distribution. Secondly, the posterior distribution might be too narrow. Lastly, the assumption of a Gaussian distribution for the other parameters, as used in computing the standard deviation, may not hold true. For multi-messenger astronomy, the good agreement between the estimated and true uncertainty in sky location is of the highest relevance.
## V Conclusion
The extension of the previous pipeline, outlined by [16], now allows obtaining the MLEs of GBs across the full frequency range, where most of the GBs overlap with each other. As detailed in Section IV.1, the extraction of 18 000 signals from a data set with \(T_{\mathrm{obs}}=2\) yr can be accomplished in a mere 6 hours. This acceleration also reduces the computational cost significantly, to only about 100 USD with today's hardware [42], bringing the extraction of GBs from the full frequency band towards negligible cost.
Additionally, we have leveraged the power of parallel computation, utilizing GPUs, to compute the posterior distribution for identified MLEs within a remarkable time frame of 2 seconds per signal. The computation of all posterior distributions can be completed in approximately 9 hours on a single laptop-grade GPU of the year 2018. These advancements not only enable efficient analysis of a large number of signals but also allow for rapid estimation of the posterior distributions.
The next crucial step involves integrating the presented pipeline into a comprehensive global analysis of data encompassing various astrophysical sources and phenomena, such as GBs, MBHBs, extreme mass ratio inspirals, glitches, and data gaps. The incorporation of this pipeline into the development of a global analysis framework offers substantial acceleration in the GB analysis process. This acceleration leads to a notable reduction in the associated costs for future pipeline developments.
## VI Acknowledgements
We thank the LDC working group [19] for the creation and support of the LDC1-4, LDC2a. Furthermore, we acknowledge the GPU implementation of FASTLISARESPONSE[41]. The calculations of Algorithm 3 were run on the CPUs of the Euler cluster of ETH Zurich and are gratefully acknowledged. This project is supported by the Swiss National Science Foundation (SNF 200021_185051). The full pipeline and evaluation tools are available at [44].
|
2301.02903 | Transferring Pre-trained Multimodal Representations with Cross-modal
Similarity Matching | Despite surprising performance on zero-shot transfer, pre-training a
large-scale multimodal model is often prohibitive as it requires a huge amount
of data and computing resources. In this paper, we propose a method (BeamCLIP)
that can effectively transfer the representations of a large pre-trained
multimodal model (CLIP-ViT) into a small target model (e.g., ResNet-18). For
unsupervised transfer, we introduce cross-modal similarity matching (CSM) that
enables a student model to learn the representations of a teacher model by
matching the relative similarity distribution across text prompt embeddings. To
better encode the text prompts, we design context-based prompt augmentation
(CPA) that can alleviate the lexical ambiguity of input text prompts. Our
experiments show that unsupervised representation transfer of a pre-trained
vision-language model enables a small ResNet-18 to achieve a better ImageNet-1K
top-1 linear probe accuracy (66.2%) than vision-only self-supervised learning
(SSL) methods (e.g., SimCLR: 51.8%, SwAV: 63.7%), while closing the gap with
supervised learning (69.8%). | Byoungjip Kim, Sungik Choi, Dasol Hwang, Moontae Lee, Honglak Lee | 2023-01-07T17:24:11Z | http://arxiv.org/abs/2301.02903v1 | # Transferring Pre-trained Multimodal Representations with Cross-modal Similarity Matching
###### Abstract
Despite surprising performance on zero-shot transfer, pre-training a large-scale multimodal model is often prohibitive as it requires a huge amount of data and computing resources. In this paper, we propose a method (**BeamCLIP**) that can effectively transfer the representations of a large pre-trained multimodal model (CLIP-ViT) into a small target model (e.g., ResNet-18). For unsupervised transfer, we introduce _cross-modal similarity matching_ (CSM) that enables a student model to learn the representations of a teacher model by matching the relative similarity distribution across text prompt embeddings. To better encode the text prompts, we design _context-based prompt augmentation_ (CPA) that can alleviate the lexical ambiguity of input text prompts. Our experiments show that unsupervised representation transfer of a pre-trained vision-language model enables a small ResNet-18 to achieve a better ImageNet-1K top-1 linear probe accuracy (66.2%) than vision-only self-supervised learning (SSL) methods (e.g., SimCLR: 51.8%, SwAV: 63.7%), while closing the gap with supervised learning (69.8%).
## 1 Introduction
Learning transferable representations is crucial for successful downstream tasks. Contrastive learning methods such as SimCLR [4] and MoCo-v2 [6] have shown notable success by forcing features of individual classes to be clustered and sufficiently scattered [41], but their linear probe performance is still far behind supervised learning, as shown in Figure 1. Recently, large-scale vision and language pre-trained (VLP) models have provided highly transferable visual representations via language supervision. However, learning VLP models from scratch is prohibitive as it requires large amounts of training data and computing resources. For example, training CLIP [32] requires 400M paired image-text samples and several hundred GPUs. ALIGN [22] further scales up by leveraging alternative texts that describe web images. While these models are often based on large Transformers [40], small ConvNets such as ResNet-50 [17] and MobileNet [20] are still widely used in practice [1] and even more crucial for low-resource
Figure 1: **ImageNet-1K top-1 linear probe accuracy on ResNet-18 representations. By transferring CLIP-ViT [32] vision-language representations to ResNet-18, the BeamCLIP can learn better visual representations than vision-only self-supervised learning (SSL) methods in terms of the linear probe accuracy.**
environments. We reformulate representation learning in terms of knowledge transfer from a large pre-trained model to a small practical model.
Large-scale vision-language pre-trained models exhibit strong alignments between different modalities. CLIP [32] learns visual concepts from natural language supervision, mapping image and text into the same vector space. As their training data is not only huge but inaccessible, however, conventional knowledge distillation [19] based on the source training data is no longer a viable option. Instead, we propose _cross-modal similarity matching_ (CSM). Imagine your goal is to learn a high quality representation for an input dog image in CIFAR-10 [23] as in Figure 2. Since CLIP was trained on numerous image-caption pairs, angular distances from the dog image embedding to the caption embeddings of other anchor prompt texts such as "A photo of cat" or "A photo of horse" must comprehensively preserve their visual differences. By training the student to preserve angular relations witnessed from the teacher, our model achieves near benchmark performance without accessing the original data for training CLIP.
To better encode the text prompts, we design _context-based prompt augmentation_ (CPA) that can alleviate the lexical ambiguity of the input text prompts. We find that lexical ambiguity in prompt texts can lead to semantically incorrect text embeddings. This may result in unexpected discrepancies of image-text alignment in the teacher's embedding space. Also, it is known that the zero-shot performance of CLIP can be improved by designing task-specific prompt texts. Inspired by this, we design CPA that extends the basic prompt of CLIP to better encode prototypical anchor representations.
Our experimental results show that the **BeamCLIP** ("beam" means to transmit) achieves the strongest and near benchmark performance on ImageNet-1K [10] top-1 linear probe accuracy when using most popular ResNet-18 and ResNet-50 as the student network. We also compare the effectiveness of the BeamCLIP against zero-shot transfer learning. Further, we provide ablation study results to show how much each component contributes to the performance.
Contributions of this paper can be summarized as follows:
* We propose a method (**BeamCLIP**) that can effectively transfer the representations of a large pre-trained multimodal model (e.g., CLIP-ViT) into a small target model (e.g., ResNet-18 or ResNet-50). To achieve this, we introduce _cross-modal similarity matching_ (CSM) and _context-based prompt augmentation_ (CPA). (Figure 2).
* We empirically show that BeamCLIP enables a small target model (e.g., ResNet-18) to achieve a better ImageNet-1K linear probe accuracy than vision-only self-supervised learning (SSL) methods, by effectively transferring CLIP-ViT representations. (Figure 1, Table 2, and Table 3).
* We also explore the zero-shot capability of the BeamCLIP (Table 5) and analyze the effectiveness of the BeamCLIP on various target datasets (Table 6 and Table 7).
## 2 Related Work
Vision and language pre-training.Vision and language pre-training (VLP) aims to jointly learn vision and language representations that can be transferred to downstream tasks such as visual question answering (VQA), image captioning, and vision and language navigation (VLN). There are BERT-based vision and language models such as VLBERT [35], ViLBERT [26], and UNITER [7]. Also, there are contrastive learning-based models such as CLIP [32] and ALIGN [22]. These models use a contrastive loss [30] to learn aligned vision and language representations by performing the task of matching large-scale image and text pairs. The BeamCLIP aims to transfer the rich representations of large-scale vision and language pre-trained models such as CLIP and ALIGN to a small target model.
Self-supervised learning.Self-supervised learning (SSL) aims to learn highly transferable representations by using unlabeled data. In computer vision, at the early stage, task-specific self-supervised methods were introduced. These include Context Prediction [11], Rotation Prediction [14], and Colorization [43]. More recently, contrastive learning-based methods were introduced as a task-agnostic approach. These include SimCLR [4] and MoCo-v2 [6]. However, since contrastive self-supervised methods require a large batch size, non-contrastive methods have been introduced. These include SwAV [3], BYOL [16], and SimSiam [5]. In this paper, we empirically show that the BeamCLIP can
provide better visual representations than the state-of-the-art SSL methods by leveraging a large-scale pre-trained multimodal model.
Knowledge distillation.Knowledge distillation (KD) [19] aims to transfer rich knowledge from a strong teacher model to a target student model. In a conventional setting, it encourages the student model to mimic the task-specific prediction of the teacher model. As the student model is trained to predict the same probability distribution over pre-defined classes as the teacher model's, using Kullback-Leibler (KL) divergence is a natural metric to measure the error between the two models. For a classification task, the loss function can be formulated as follows:
\[\mathcal{L}_{\text{KD}}=\sum_{i}H(p_{i},q_{i}^{S})+\sum_{i}KL(p_{i}^{T}||p_{i} ^{S}). \tag{1}\]
The first term indicates the supervised loss, where \(p_{i}\) denotes the one-hot labels and \(H(p,q)\) denotes cross-entropy. The second term is the distillation loss, where \(p_{i}^{T}\) and \(p_{i}^{S}\) are the softmax predictions of the teacher and student models, respectively.
Similarity-based knowledge distillation.Recently, similarity-based knowledge distillation such as SEED [13], OSS [8], and ISD [37] was introduced in the context of self-supervised learning (SSL). SEED [13] showed that the linear probe accuracy of a small student (ResNet-18) can be improved by transferring the representations of a larger teacher (ResNet-50) pre-trained by SSL methods such as MoCo-v2 [6]. Unlike this, OSS [8] aims to transfer representations of an evolving teacher (ResNet-50) into a smaller student (ResNet-18) on the fly. Unlike SEED and OSS, ISD [37] considered the same-size student and teacher network (ResNet-18), and showed a student can learn visual representations by iteratively distilling the similarity of the teacher's representations. These works are closely related to our work. Unlike these works, the BeamCLIP aims to transfer rich vision and language representations of large-scale pre-trained models such as CLIP-ViT/16 [32] into a smaller network such as ResNet-18.
Prompt engineering.Recently, researchers showed that prompt engineering [2] is surprisingly effective at improving the performance of large-scale language models (LLMs) on downstream tasks without fine-tuning. Prompts are input texts of language models that usually consist of a task description or several examples. To further simplify prompt engineering, prompt tuning [24] proposed to add \(k\) learnable tokens to the input texts, while having language models frozen. Similar to GPT-3, it is known that the zero-shot performance of CLIP [32] can be improved by designing the prompt texts to each task. For example, on satellite image classification datasets, "A satellite photo of a {label}" provides better performance than the default "A photo of a {label}". Inspired by this, we propose context-based prompt augmentation that extends the basic prompt of CLIP to better encode prototypical text anchor representations by alleviating the lexical ambiguity of class label texts.
## 3 Method
Problem formulation.Formally, our problem is to transfer aligned cross-modal representations of a strong teacher model \(f_{\phi}^{T}(\cdot)\) into a target student model \(f_{\theta}^{S}(\cdot)\) with unlabeled data \(\mathcal{D}_{u}=\{x_{i}\}_{i=1}^{N}\). Given each unlabeled image \(x_{i}\), we formulate representation transfer as a regression task that matches teacher representations \(f_{\phi}^{T}(x_{i})\) to a student's \(f_{\theta}^{S}(x_{i})\). As the student network is parameterized by \(\theta\), the learning objective is
\[\arg\min_{\theta}\sum_{i}^{N}\|f_{\theta}^{S}(x_{i})-f_{\phi}^{T}(x_{i})\|_{2 }^{2}. \tag{2}\]
Normalizing the representations via \(l_{2}\)-normalization (_i.e.,_\(q_{i}=f_{\theta}^{S}(x_{i})/\|f_{\theta}^{S}(x_{i})\|_{2}\) and \(k_{i}=f_{\phi}^{T}(x_{i})/\|f_{\phi}^{T}(x_{i})\|_{2}\)) leads to the following simplification:
\[\arg\min_{\theta}\sum_{i}^{N}\|q_{i}-k_{i}\|_{2}^{2}=\arg\min_{\theta}\sum_{i} ^{N}(2-2q_{i}\cdot k_{i}). \tag{3}\]
The problem now involves maximizing the cosine similarity between \(l_{2}\)-normalized representations from teacher and student models.
Method overview.The overview of **BeamCLIP** is shown in Figure 2. The teacher model of the BeamCLIP consists of an image encoder \(f_{\phi}^{T}(\cdot)\) and a text encoder \(g_{\psi}^{T}(\cdot)\). These encoders are pre-trained under a simple task of matching images to texts with large-scale corpora. Image representations \(f_{\phi}^{T}(x_{i})\) and text representations \(g_{\psi}^{T}(t_{i})\) are thus well-aligned within a cross-modal embedding space. We provide the details of the BeamCLIP in the following sections. More specifically, we describe how to extend the basic problem setting by leveraging the unique features of CLIP where vision and language representations are precisely aligned. Throughout the paper, we use the notation CLIP-ViT/16 to denote the CLIP [32] model that uses Vision Transformer (ViT) [12] with the patch size of 16x16 as the image encoder. Similar to this, CLIP-RN50 denotes the CLIP model with ResNet-50 [17] as the image encoder.
### Similarity-based cross-modal representation transfer
To effectively distill cross-modal representations, we use similarity-based matching as described above. Our similarity-based representation transfer utilizes two carefully designed loss functions: (1) instance similarity matching (ISM) loss and (2) cross-modal similarity matching (CSM) loss.
Instance similarity matching.This objective is directly derived from Eq. 3. Given a query image \(x_{i}\), it encourages the student image encoder \(f_{\theta}^{S}(\cdot)\) to regress the representation of the teacher image encoder \(f_{\phi}^{T}(\cdot)\). We apply conventional image augmentations (see Appendix B.1) on a query image \(x_{i}\), and the same augmented image \(\hat{x}_{i}\) is fed to both the teacher and student image encoders. Given unlabeled query images \(\mathcal{D}_{u}=\{x_{i}\}_{i=1}^{N}\), it is formulated as follows:
\[\mathcal{L}_{\text{ISM}}=-\sum_{i=1}^{N}(\frac{f_{\theta}^{S}(\hat{x}_{i})}{ \|f_{\theta}^{S}(\hat{x}_{i})\|_{2}}\cdot\frac{f_{\phi}^{T}(\hat{x}_{i})}{\|f _{\phi}^{T}(\hat{x}_{i})\|_{2}})=-\sum_{i=1}^{N}(q_{i}\cdot k_{i}). \tag{4}\]
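A minimal PyTorch sketch of this loss (averaged over the batch instead of summed; all names are illustrative) could look as follows.

```python
import torch.nn.functional as F

def ism_loss(student_feats, teacher_feats):
    """Instance similarity matching (Eq. 4): negative cosine similarity between the
    l2-normalized student and teacher embeddings of the same augmented image."""
    q = F.normalize(student_feats, dim=-1)
    k = F.normalize(teacher_feats.detach(), dim=-1)   # the teacher is kept frozen
    return -(q * k).sum(dim=-1).mean()
```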
However, the similarity signal from a single instance is not enough to constrain the student representations. For example, topological ambiguity may occur in image encoding, since two symmetric representations have the same cosine similarity with respect to a single teacher representation (see Appendix B.2). We conjecture that this can be mitigated by comparing the query points to multiple anchor points. Based on this idea, we introduce the cross-modal similarity matching loss.
Cross-modal similarity matching.To better align a student representation \(q_{i}\) with the teacher representation \(k_{i}\), we introduce cross-modal similarity matching (CSM) loss. We use multiple anchor points to cope with the ambiguity problem mentioned above. Further, we use text representations as anchor points, since we can easily generate prototypical anchor points by using text prompts and class
Figure 2: **Overview of the BeamCLIP. Representation transfer can be viewed as a task in which, given a query input, a student model learns to regress a vector representation of a teacher model. The BeamCLIP first measures the normalized cross-modal similarity of the query image compared to anchor text representations in the teacher’s embedding space. Then, it encourages the student to mimic the same cross-modal similarity in the student’s embedding space. To better align image representations, our method uses self-supervised pre-training of the student model. Finally, to avoid text ambiguity, we use context-based prompt augmentation.**
labels. Since image and text representations are precisely aligned in CLIP, we can effectively apply this approach. More specifically, the BeamCLIP first measures the normalized image-text similarity of the query image compared to prototypical text points in the teacher's embedding space. Then, it encourages the student to mimic the same image-text similarity in the student's embedding space.
More formally, we generate multiple anchor representations \(A=\{a_{j}\}_{j=1}^{M}\) by encoding class texts \(C=\{c_{j}\}_{j=1}^{M}\) with the teacher text encoder \(g_{\psi}^{T}(\cdot)\) (in other words, \(a_{j}=g_{\psi}^{T}(c_{j})\)). To measure the similarity regarding to multiple anchor representations \(A\), we define the normalized cross-modal similarity as follows:
\[s_{j}(k_{i},A)=\frac{\exp{((k_{i}\cdot a_{j})/\tau)}}{\sum_{m=1}^{M}\exp{((k_{ i}\cdot a_{m})/\tau)}} \tag{5}\]
where \(\tau\) is a temperature hyperparameter that is set to 0.01 in our experiments.
Then, we evaluate the cross-modal similarity distribution by using a set of normalized cross-modal similarities:
\[P(k_{i}|A)=[s_{1}(k_{i},A),...,s_{M}(k_{i},A)]. \tag{6}\]
Then, the student model is optimized to mimic the normalized cross-modal similarity of the teacher's embedding space by minimizing the cross entropy, _i.e.,_\(H(P(k_{i}|A),P(q_{i}|A))\).
We further minimize the entropy of normalized cross-modal similarities in the student embedding space _i.e.,_\(H(P(q_{i}|A))\). This minimization helps the student provide query representations \(q_{i}\) that are more attracted to anchor representations \(A=\{a_{j}\}_{j=1}^{M}\). This entropy minimization is also known to be effective in other domains such as semi-supervised learning [15; 29].
Altogether, the CSM loss is formulated as follows:
\[\mathcal{L}_{\text{CSM}}=\sum_{i=1}^{N}H(P(k_{i}|A),P(q_{i}|A))+\sum_{i=1}^{N} H(P(q_{i}|A)). \tag{7}\]
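A PyTorch sketch of Eqs. (5)-(7) is given below; it assumes the student features have already been projected to the dimensionality of the teacher's joint embedding space, and it averages over the batch instead of summing.

```python
import torch.nn.functional as F

def csm_loss(student_feats, teacher_feats, anchor_feats, tau=0.01):
    """Cross-modal similarity matching: the student's image-to-text similarity
    distribution is matched to the teacher's, plus entropy minimization."""
    q = F.normalize(student_feats, dim=-1)               # (B, D) student image embeddings
    k = F.normalize(teacher_feats.detach(), dim=-1)      # (B, D) teacher image embeddings
    a = F.normalize(anchor_feats.detach(), dim=-1)       # (M, D) text anchor embeddings
    p_teacher = F.softmax(k @ a.t() / tau, dim=-1)       # P(k_i | A), Eqs. (5)-(6)
    log_p_student = F.log_softmax(q @ a.t() / tau, dim=-1)
    cross_entropy = -(p_teacher * log_p_student).sum(-1).mean()
    entropy = -(log_p_student.exp() * log_p_student).sum(-1).mean()
    return cross_entropy + entropy                        # Eq. (7)
```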
**Final Loss.** The final loss of the BeamCLIP is formulated as follows:
\[\mathcal{L}_{\text{BeamCLIP}}\,=\mathcal{L}_{\text{CSM}}+\lambda_{\text{ISM}} \mathcal{L}_{\text{ISM}} \tag{8}\]
where \(\lambda_{\text{ISM}}\) is the scale hyperparameter that is set to 10 in our experiments.
### Context-based prompt augmentation
We found that lexical ambiguity in prompt texts can lead to semantically incorrect text embeddings. This may result in an unexpected discrepancy of image-text alignment in the teacher's embedding space. For example, the Flowers102 [28] dataset has some classes with unusual and ambiguous flower names, such as "snapdragon", "bird of paradise", and "colt's foot".1 Therefore, incorrect prototypical anchor points might be compared with a query image. To address this issue of semantic ambiguities in the text, we introduce context-based prompt augmentation (CPA), a data-driven approach that augments basic prompts with contextual text such as Wikipedia descriptions or hierarchical labels.
\begin{table}
\begin{tabular}{c c c} \hline \hline Label Index & Label Name & Text Prompt \\ \hline
7 & bird of paradise & “A photo of \{bird of paradise\}. \{Strelitzia is a genus of five species of perennial plants, native to South Africa. It belongs to the plant family Strelitziaceae\}.” \\ \hline
10 & snapdragon & “A photo of \{snapdragon\}. \{Antirrhinum is a genus of plants commonly known as dragon flowers or snapdragons because of the flowers’ fancied resemblance to the face of a dragon that opens and closes its mouth when laterally squeezed\}.” \\ \hline \hline \end{tabular}
\end{table}
Table 1: Examples of context-based prompt augmentation for ambiguous class labels on Flowers102.
For prompt tuning with Wikipedia descriptions, we use the template "A photo of a {label}. {Wikipedia description}". We use this template for Flowers102 and Pets37 in our experiments. We provide some examples from the Flowers102 dataset in Table 1. For prompt tuning with hierarchical labels, we use the template "A photo of a {fine label}, categorized as {coarse label}". We use this template for CIFAR100 and ImageNet in our experiments. Analogous examples from CIFAR100 can be found in Table 11 in Appendix B.3.
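The two templates can be wrapped in a small helper; the function below is an illustrative sketch rather than the released implementation.

```python
def augment_prompt(label, wiki_description=None, coarse_label=None):
    """Context-based prompt augmentation for a class label."""
    if wiki_description is not None:        # e.g. Flowers102, Pets37
        return f"A photo of a {label}. {wiki_description}"
    if coarse_label is not None:            # e.g. CIFAR100, ImageNet
        return f"A photo of a {label}, categorized as {coarse_label}."
    return f"A photo of a {label}."
```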
### Other details
Self-supervised pre-training of student.To help the student mimic the teacher's cross-modal embedding space better, we pre-train the student image encoder with a self-supervised method. Since self-supervised pre-training such as SimCLR [4], MoCo-v2 [6], and SwAV [3] provides a weakly clustered embedding space based on similarities, it can be used as a better initial state for the student to mimic the teacher's embedding space. The details can be found in B.4. We show the effect of SSL pre-training of the student in the experiment section (see Table 4 and 7).
Optimization.For optimization we use SGD with cosine annealing schedule (SGDR) [25]. To stabilize training, we use a momentum encoder that updates its weights via exponential moving average (EMA) [18; 16]. The momentum encoder of a student \(\theta_{\hat{S}}\) is updated using the following rule:
\[\theta_{\hat{S}}\gets m\theta_{\hat{S}}+(1-m)\theta_{S} \tag{9}\]
where \(\theta_{S}\) is the image encoder of a student model and \(m\) is a momentum hyperparameter that is set to 0.99 in our experiments. The model hyperparameters are summarized in Table 12 in Appendix B.5.
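Eq. (9) is the usual parameter-wise exponential moving average; a minimal PyTorch sketch is given below.

```python
import torch

@torch.no_grad()
def ema_update(momentum_encoder, student_encoder, m=0.99):
    """Update the momentum encoder towards the student encoder (Eq. 9)."""
    for p_m, p_s in zip(momentum_encoder.parameters(), student_encoder.parameters()):
        p_m.mul_(m).add_(p_s, alpha=1.0 - m)
```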
## 4 Experiments
Downstream datasets.We evaluate the BeamCLIP on six standard benchmark datasets: CIFAR10 [23], CIFAR100 [23], STL10 [9], Flowers102 [28], Pets37 [31], and ImageNet-1K [10]. Following convention, we split the datasets into train, validation, and test sets. Then, we use train set for transfer, and test set for evaluation. For ImageNet, we use the validation set as a test set, since its test set does not provide labels. More details on the datasets are summarized in Table 8 in Appendix A.
### Representation transfer with unlabeled target data
Setting.We compare the BeamCLIP with various self-supervised methods in terms of linear probe accuracy on ImageNet-1K. Following the conventional protocol, we use ResNet-18 and ResNet-50 [17] as the base encoder and evaluate the learned representations by using logistic regression. We use LBFGS algorithm [44] for logistic regression. Its hyperparameter \(C\) is determined through coarse-grained hyperparameter search on the validation split. And, the accuracy is evaluated in the test split. We found that it provides the best linear probe accuracy when \(C\) is set to 30. We perform our experiments on 8 NVIDIA A100 GPUs and it takes about 30 hours for 200 epoch training.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{Epochs} \\ \cline{4-7} Method & Teacher & Student & Batch & 200 & 400 & 800 \\ \hline Supervised & ✗ & RN50 & 256 & & 76.2 & \\ \hline SimCLR [4] & ✗ & RN50 & 512 & 65.6 & 66.7 & 67.4 \\ MoCo-v2 [6] & ✗ & RN50 & 256 & 67.5 & 70.1 & 71.1 \\ BYOL-GA [16] & ✗ & RN50 & 4096 & 70.6 & n/a & n/a \\ SwAV [3] & ✗ & RN50 & 256 & 72.0 & 74.3 & n/a \\ \hline
**BeamCLIP** (ours) & CLIP ViT-B/16 & RN50 & 512 & **74.8** & **75.1** & **75.0** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **ImageNet-1K top-1 linear probe accuracy on ResNet-50**. We compare the BeamCLIP with vision-only self-supervised methods in terms of linear probe accuracy on ImageNet-1K. The BeamCLIP representations provide higher linear probe accuracy than self-supervised methods. This means better transferability. The values are quoted from the original paper, and n/a means ”not available” from the paper.
**Transfer to ResNet-50.** Table 2 shows the comparison of the BeamCLIP with vision-only self-supervised methods such as SimCLR [4], MoCo-v2 [6], SwAV [3], BYOL [16], and SimSiam [5]. The BeamCLIP provides better visual representation by achieving 74.8% top-1 linear probe accuracy on ImageNet-1K [10]. While self-supervised methods take long training epochs to achieve comparable accuracy, BeamCLIP-RN50 achieves better accuracy with less training epochs. Also, note that BeamCLIP-RN50's representations provide better accuracy than CLIP-RN50's representations.
**Transfer to ResNet-18.** To check if the BeamCLIP can transfer CLIP representations into smaller models than ResNet-50 (24M), we also measure ImageNet-1K top-1 linear probe accuracy on ResNet-18 (11M). ResNet-18 is trained from scratch (not self-supervised pre-trained with SimCLR), while transferring CLIP ViT-B/16 representations. As shown in Table 3, BeamCLIP learns better representations than SSL methods such as SimCLR [4], MoCo-v2 [6], BYOL [16], and SwAV [3]. More importantly, the BeamCLIP provides better performance than OSS [8] that simultaneously learns and transfers representations from ResNet-50. The learning curve is presented in Figure 8 in Appendix C.1.
**Effect of self-supervised pre-training.** Table 4 shows ImageNet-1K top-1 linear probe accuracy on BeamCLIP-RN50 representations by using different SSL pre-training. With the better SSL method (SwAV [3] > SimCLR [4]), the BeamCLIP can learn better representations with an increased linear probe accuracy.
### Representation transfer with unlabeled non-target data
To check if the BeamCLIP also inherits the powerful zero-shot capability of CLIP, we compare zero-shot accuracy of CLIP variants on ImageNet-1K. For zero-shot measure, we use CC-3M [34] and ImageNet-21K (12M samples) [33] that do not have overlap with ImageNet-1K. Table 5 shows the comparison of zero-shot accuracy. The BeamCLIP-RN50 achieves about 57.5% zero-shot accuracy that is highly comparable with CLIP RN-50 (59.6%). The learning curve is presented in Figure 9 of Appendix C.1.
As baselines, we choose two representative distillation methods: (1) conventional knowledge distillation (KD) [19] and (2) contrastive representation distillation (CRD) [38]. Since conventional KD, unlike the BeamCLIP, aims to mimic the task-specific predictions of the teacher model, we apply the KL divergence to match the cross-modal similarity distributions (_i.e.,_\(P(q_{i}|A)\) and \(P(k_{i}|A)\)) instead of the cross-entropy (CE). CRD proposes a variant of the InfoNCE loss for representation distillation, which we apply to the normalized representations (_i.e.,_\(q_{i}\) and \(k_{i}\)). The details of each method can be found in the related work Section 2.
**Results.** Table 6 shows a comparison of teacher and student accuracy on various datasets. We empirically demonstrate that the BeamCLIP can effectively transfer vision and language representations of a large teacher model (CLIP ViT-B/16) into a small student model (ResNet-50). We find that the KL divergence used in conventional knowledge distillation (KD) is not effective in transferring CLIP-ViT representations. Also, the contrastive learning-based approach is not effective. Unlike this, the BeamCLIP can effectively transfer CLIP ViT-B/16 representations into ResNet-50, achieving very high accuracy that is comparable or better than the teacher accuracy. KD simply minimizes the error between single instances. We conjecture that the cross-modal similarity to multiple anchor points introduced in the BeamCLIP helps the student preserve the topology of the teacher's embedding space.
Also, note that context-based prompt augmentation helps achieve better accuracy after representation transfer. Since Flowers102 has many ambiguous labels, our experiment shows that text prompt augmentation significantly increases the student's accuracy compared to the teacher's accuracy.
**Ablation study.** Table 7 shows the ablation study results of the BeamCLIP. Our empirical findings are as follows: (1) Instance similarity matching (ISM) is not enough by itself to preserve the topology of the teacher's embedding space. (2) Cross-modal similarity matching (CSM) with respect to multiple anchor points helps the student mimic the teacher's embedding space. (3) Self-supervised pre-training of the student (SSL PT) helps the student mimic the teacher's embedding space. (4) Entropy minimization (EntMin) helps to improve the accuracy. (5) Context-based prompt augmentation (CPA) helps measure the similarity more precisely. As shown in the table, the Flowers102 dataset is sensitive to self-supervised pre-training of the student. We conjecture that since Flowers102 has only 1,020 training samples for its 102 classes, it is not enough to probe the teacher's representation space.

**Table 5: Comparison of zero-shot accuracy on ImageNet-1K.** On ImageNet-1K, the BeamCLIP-RN50 achieves about 57.5% zero-shot accuracy, which is highly comparable with CLIP-RN50 (59.6%). To achieve such a high zero-shot accuracy, CLIP uses very large image-text pair data (WIT-400M). Instead, the BeamCLIP can achieve comparable zero-shot accuracy by effectively transferring the teacher's representations, while using only 3% of the data (ImageNet-21K, 12M samples). Note that OpenCLIP provides only about 36.5% zero-shot accuracy with a similar amount of data (CC-12M).

| Method | Student | Transfer data | Teacher | Eval | Batch | Epochs | Zero-shot top-1 (%) |
|---|---|---|---|---|---|---|---|
| **BeamCLIP (ours)** | RN50 | CC-3M | CLIP ViT-B/16 | IN-1K | 64* | 100 | 49.5 |
| **BeamCLIP (ours)** | RN50 | IN-21K (12M) | CLIP ViT-B/16 | IN-1K | 64* | 50 | 53.6 |
| **BeamCLIP (ours)** | RN50 | IN-21K (12M) | CLIP ViT-B/16 | IN-1K | 64* | 200 | 57.5 |
**Qualitative result.** To see the quality of the transferred representations, we analysed text-image retrieval results on the Flowers102 dataset. Figure 3 compares the top-5 text-image retrieval results between CLIP-RN50 and BeamCLIP-RN50. A red rectangle denotes an incorrect result. Compared to CLIP-RN50, BeamCLIP-RN50 provides much improved results, since its representations are transferred from CLIP-ViT/16 with higher zero-shot accuracy. More interestingly, BeamCLIP-RN50 provides surprisingly good text-image retrieval results, even though unseen text prompts such as "a photo of {pink rose}" or "a photo of {yellow rose}" are given.
### Effect of random text prompts
We measured how effective the BeamCLIP is in cases where the class names of the target dataset are not perfectly given. Figure 4 shows the effect of randomly sampled text prompts on CIFAR100. We can see that the BeamCLIP is still effective, even when (a) a subset of the 100 class names of CIFAR100 is given as the text prompts, or (b) the text prompts are randomly sampled from a non-target dataset (ImageNet-1K). The exact values in Figure 4 are presented in Table 15 and Table 16 in Appendix C.2. Also, the additional results on CIFAR10 are provided in Appendix C.2.
## 5 Limitations and Conclusion
**Limitations.** With the help of rich representations of pre-trained CLIP, the BeamCLIP can learn better representations than SSL methods. However, since SSL methods can increase the performance at longer training epochs, the performance margin may be decreased in such a setting. Another shortcoming is that context-based prompt augmentations may require additional engineering efforts.
**Conclusion.** In this paper, we provide the BeamCLIP that can effectively transfer large pre-trained vision-language model (e.g., CLIP-ViT) into a small target model (e.g., ResNet-18) with cross-modal similarity matching (CSM) and context-based prompt augmentation (CPA). We empirically show that the BeamCLIP can learn better visual representations than vision-only self-supervised learning (SSL) methods, by leveraging a pre-trained vision-language model (CLIP). The BeamCLIP is not intended to be another CLIP, but an effective CLIP student.
## Broader impact
This research aims to provide a simple and effective way to leverage CLIP for representation learning. With the help of CLIP, the BeamCLIP can learn better representations than self-supervised learning (SSL) methods. Since training CLIP requires very large data and hundreds of GPUs, it is important to provide a way to effectively reuse the pre-trained CLIP rather than training from scratch on a target model. We believe that the BeamCLIP can help to save cost and time.
## Acknowledgements
We thank anonymous reviewers for their valuable comments. This work was fully supported by LG AI Research.
Figure 4: **The effect of random text prompts on CIFAR100. (a) The text prompts are randomly sampled from the set of 100 class names of CIFAR100. The red dotted line denotes the teacher’s accuracy as an upper bound. It is more efficient as it is closer to this line. As shown in the blue line, the BeamCLIP (CE+EntMin) can effectively transfer the CLIP representations, even when the class names of the target dataset are partially given. (b) The text prompts are randomly sampled from the 1000 class names of ImageNet. The BeamCLIP (CE+EntMin) is still effective, even though the class names are randomly sampled from a non-target dataset (ImageNet-1K).** |
2307.00242 | Existence and Instability of Standing Wave for the Two-wave Model with Quadratic Interaction | Zaihui Gan, Yue Wang | 2023-07-01T06:06:00Z | http://arxiv.org/abs/2307.00242v1

# Existence and Instability of Standing Wave for the Two-wave Model with Quadratic Interaction
###### Abstract
In this paper, we establish the existence and instability of standing wave for a system of nonlinear Schrodinger equations arising in the two-wave model with quadratic interaction in higher space dimensions under mass resonance conditions. Here, we eliminate the limitation on the relationship between the complex constants \(a_{1}\) and \(a_{2}\) given in [7], and consider arbitrary real positive constants \(a_{1}\) and \(a_{2}\). First of all, according to the conservation identities for mass and energy, using the so-called virial type estimate, we obtain that the solution for the Cauchy problem under consideration blows up in finite time in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\) with space dimension \(N\geq 4\). Next, for space dimension \(N\) with \(4<N<6\), we establish the existence of the ground state solution for the elliptic equations corresponding to the nonlinear Schrodinger equations under the frequency and mass resonance by adopting a variational method, and further achieve the exponential decay at infinity for the ground state. This implies the existence of standing waves for the nonlinear Schrodinger equations under consideration. Finally, by defining another constrained minimizing problem for a pair of complex-valued functions, a suitable manifold, referring to the characterization of the standing wave, making appropriate scaling and adopting a virial type estimate, we attain the instability of the standing wave for the equations under frequency and mass resonance in space dimension \(N\) with \(4<N<6\) by virtue of the conservations of mass and energy. Here, we adopt the equivalence of two constrained minimizing problems defined for pairs of complex-valued and real-valued functions \((u,v)\), respectively, when \((u,v)\) is a pair of real-valued functions.
**Keywords:** Schrodinger equations; Two-wave model; Quadratic interaction; Standing wave; Ground state; Instability.
**MSC(2020):** Primary 35B44; 35Q55; Secondary 35J11.
**Statements and Declarations:** No conflict of interest exists in the submission of this manuscript. No data was used for the research described in this manuscript.
## 1 Introduction
We consider in this paper the nonlinear Schrodinger equations in the two-wave model with quadratic interaction

\[\begin{cases}i\partial_{t}\phi+\dfrac{1}{2m_{1}}\Delta\phi=a_{1}\psi\overline{\phi},\\ i\partial_{t}\psi+\dfrac{1}{2m_{2}}\Delta\psi=a_{2}\phi^{2},\end{cases}\qquad t\geq 0,\ x\in\mathbb{R}^{N}, \tag{1.1}\]

where \(\phi=\phi(t,x)\) and \(\psi=\psi(t,x)\) are complex-valued functions, \(m_{1},m_{2}>0\) play the role of masses, and \(a_{1},a_{2}\) are real coupling constants satisfying (1.10) below; we refer to the first and the second equation of (1.1) as (1.1a) and (1.1b), respectively. We are interested in standing waves of (1.1), namely solutions of the form \((\phi(t,x),\psi(t,x))=\left(e^{i\omega_{1}t}u(x),e^{i\omega_{2}t}v(x)\right)\). Inserting this ansatz into (1.1), under the frequency and mass resonance conditions \(\omega_{2}=2\omega_{1}>0\) and \(m_{2}=2m_{1}\), the profile \((u,v)\) satisfies the elliptic system

\[\begin{cases}\dfrac{1}{2m_{1}}\Delta u-\omega_{1}u-a_{1}uv=0,\\ \dfrac{1}{2m_{2}}\Delta v-\omega_{2}v-a_{2}u^{2}=0,\end{cases}\qquad x\in\mathbb{R}^{N}. \tag{1.5}\]

Conversely, assume that \(\omega_{2}=2\omega_{1}>0\), \(m_{2}=2m_{1}\), that \((u(x),v(x))\) is a solution of (1.5),
and \((u(x),v(x))\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\backslash\{(0,0)\}\). Thus, \((\phi(t,x),\psi(t,x))=(e^{i\omega_{1}t}u(x),e^{i\omega_{2}t}v(x))\) is a standing wave solution of the evolution equations (1.1). On the other hand, from the physical viewpoint, the ground state solution of (1.5) plays a key role. Recalling the definition of the ground state for a single elliptic equation given in Cazenave [3], we can define the ground state for the elliptic system (1.5). A nontrivial solution \((u,v)\) of (1.5) is called a ground state if it has minimal action among all solutions of (1.5). Specifically, \((u,v)\) satisfies
\[S(u,v)\leq S(\omega,\xi)\]
for any solution \((\omega,\xi)\) of (1.5), where the action \(\mathcal{S}(u,v)\) for a pair of real functions \((u,v)\) is defined by
\[\begin{array}{l} S(u,v)=\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{ N}}|\nabla u|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx \\ \\ +a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|u|^{2}dx+\frac{a_{1}}{2} \omega_{2}\int_{\mathbb{R}^{N}}|v|^{2}dx+a_{1}a_{2}\int_{\mathbb{R}^{N}}vu^{2 }dx.\end{array} \tag{1.6}\]
In the present paper, we consider three aspects of the nonlinear Schrodinger equations (1.1): one is blowup in finite time for space dimension \(4\leq N<6\); the others are the existence and instability of the standing wave for space dimension \(4<N<6\). To show that the solution to the Cauchy problem for (1.1) blows up in finite time in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\), we need to establish the so-called virial type estimates and to utilize the modified conservation identities for mass and energy. On the other hand, the key to attaining the existence of standing waves for equations (1.1) with space dimension \(4<N<6\) is to obtain the existence of the ground state solution for equations (1.5), where the ground state solution \((\xi,-\eta)\) depends on \(|x|\) alone and has an exponential decay at infinity. Furthermore, to explore the instability of the standing wave for (1.1) with space dimension \(4<N<6\), we must be concerned with the characterization of the standing waves for (1.1) with minimal action \(S(\xi,-\eta)\), and refer to the exponential decay at infinity of the ground state \((\xi,-\eta)\) of (1.5). Here, \((\xi,-\eta)\) will be determined in Section 4.
Throughout this paper, we will adopt a type of variational method initially proposed in [1, 2]; the main thing for the variational method is to define suitable functionals, manifolds and constrained minimizing problem. Here we define the functional \(Q(u,v)\) for a pair of real-valued functions \((u,v)\) by
\[Q(u,v)=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{ 2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+\frac{N}{2}a_{1}a_{2}\int_{ \mathbb{R}^{N}}vu^{2}dx, \tag{1.7}\]
the manifold \(\mathcal{M}\) as
\[\mathcal{M}:=\left\{(u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N} )\backslash\{(0,0)\},Q(u,v)=0\right\}, \tag{1.8}\]
and the constrained minimizing problem as
\[K=\inf_{(u,v)\in\mathcal{M}}S(u,v). \tag{1.9}\]
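It is useful to note already here that \(Q\) is the derivative of the action along the \(L^{2}\)-norm preserving scaling \(u_{\lambda}(x)=\lambda^{\frac{N}{2}}u(\lambda x)\), \(v_{\lambda}(x)=\lambda^{\frac{N}{2}}v(\lambda x)\): as verified in the proof of Lemma 4.7 below,

\[\frac{d}{d\lambda}S(u_{\lambda},v_{\lambda})=\lambda^{-1}Q(u_{\lambda},v_{\lambda}),\]

so that \(\mathcal{M}\) consists precisely of those nontrivial pairs for which \(\lambda=1\) is a critical point of \(\lambda\mapsto S(u_{\lambda},v_{\lambda})\).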
Throughout this paper, we make the following assumptions on these real coefficients \(m_{1},\ m_{2},\ a_{1},\ a_{2}\), in equations (1.1):
\[m_{1}>0,m_{2}>0,a_{1}>0,a_{2}>0. \tag{1.10}\]
This paper is organized as follows. Section 2 is devoted to establishing some basic estimates and to collecting basic lemmas which will be used in the subsequent sections. In Section 3, we establish finite time blowup for solutions of (1.1) in terms of the initial data \((\phi_{0},\psi_{0})\) and the initial energy. In Section 4, we show the existence of the ground state solution and its exponential decay at infinity. In Section 5, we justify the instability of the standing waves in view of the conclusions given in Sections 2, 3 and 4.
## 2 Preliminaries
We impose the initial data on (1.1) as follows:
\[\phi(0,x)=\phi_{0}(x),\quad\psi(0,x)=\psi_{0}(x),\quad x\in\mathbb{R}^{N}, \tag{2.1}\]
where \((\phi_{0}(x),\psi_{0}(x))\) are given functions in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\). From the results of Ginibre and Velo [5], Cazenave [3], as well as Hayashi, Ozawa and Tanaka [7], (1.1)-(2.1) is locally well-posed in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\): for any \((\phi_{0},\psi_{0})\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\), there exists a unique solution \((\phi,\psi)\) to the Cauchy problem (1.1)-(2.1) in \(C\left([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\right)\) defined on a maximal time interval \([0,T)\) with \(T=T_{max}(\phi_{0},\psi_{0})\); either \(T=+\infty\), or \(T<+\infty\) and
\[\lim_{t\to T_{max}^{*}(\phi_{0},\psi_{0})}\left(\|\phi\|_{H^{1}(\mathbb{R}^{N })}+\|\psi\|_{H^{1}(\mathbb{R}^{N})}\right)=+\infty.\]
In addition, the Cauchy problem (1.1)-(2.1) admits the following conservation laws in the energy space \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\).
**Lemma 2.1**.: _Suppose that \((\phi,\psi)\in C([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N}))\) is a solution of the Cauchy problem (1.1)-(2.1). Then the total mass, total energy and the total momentum are conserved for all \(t\geq 0\): \(L^{2}\)_**-norm (Mass):**__
\[\int_{\mathbb{R}^{N}}\left(a_{2}|\phi(t,x)|^{2}+a_{1}|\psi(t,x)|^{2}\right)dx =\int_{\mathbb{R}^{N}}\left(a_{2}|\phi_{0}(x)|^{2}+a_{1}|\psi_{0}(x)|^{2} \right)dx; \tag{2.2}\]
**Energy:**__
\[E(\phi(t,x),\psi(t,x))=E(\phi_{0}(x),\psi_{0}(x)), \tag{2.3}\]
_where \(E\left(\phi(t,x),\psi(t,x)\right)\) is defined as_
\[\begin{split} E(\phi(t,x),\psi(t,x))&=\frac{a_{2}} {2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\phi(t,x)|^{2}dx+\frac{a_{1}}{4m_{2}}\int _{\mathbb{R}^{N}}|\nabla\psi(t,x)|^{2}dx\\ &\quad+a_{1}a_{2}Re\int_{\mathbb{R}^{N}}\psi(t,x)\overline{\phi} (t,x)^{2}dx;\end{split} \tag{2.3a}\]
**Momentum:**__
\[\begin{split}& a_{2}Im\left(\int_{\mathbb{R}^{N}}\nabla\phi(t,x)\overline{\phi}(t,x)dx\right)+\frac{1}{2}a_{1}Im\left(\int_{\mathbb{R}^{N}}\nabla\psi(t,x)\overline{\psi}(t,x)dx\right)\\ &=a_{2}Im\left(\int_{\mathbb{R}^{N}}\nabla\phi_{0}(x)\overline{\phi}_{0}(x)dx\right)+\frac{1}{2}a_{1}Im\left(\int_{\mathbb{R}^{N}}\nabla\psi_{0}(x)\overline{\psi}_{0}(x)dx\right).\end{split} \tag{2.4}\]
Proof.: Multiplying (1.1a) by \(a_{2}\overline{\phi}\) and (1.1b) by \(a_{1}\overline{\psi}\), integrating over \(\mathbb{R}^{N}\) and taking the imaginary part for the resulting equations, we obtain formally
\[\begin{split}&\frac{d}{dt}\left(\frac{1}{2}\int_{\mathbb{R}^{N}} \left(a_{2}|\phi(t,x)|^{2}+a_{1}|\psi(t,x)|^{2}\right)dx\right)\\ &=Im\left(a_{1}a_{2}\int_{\mathbb{R}^{N}}\psi(t,x)\overline{\phi} ^{2}(t,x)dx\right)+Im\left(a_{1}a_{2}\int_{\mathbb{R}^{N}}\overline{\psi}(t,x )\phi^{2}(t,x)dx\right)\\ &=0,\end{split}\]
which implies the conservation of mass
\[\int_{\mathbb{R}^{N}}\left(a_{2}|\phi(t,x)|^{2}+a_{1}|\psi(t,x)|^{2}\right)dx= \int_{\mathbb{R}^{N}}\left(a_{2}|\phi_{0}(x)|^{2}+a_{1}|\psi_{0}(x)|^{2}\right)dx.\]
Next, multiplying \((1.1a)\) by \(2a_{2}\overline{\phi}_{t}\) and \((1.1b)\) by \(a_{1}\overline{\psi}_{t}\), integrating over \(\mathbb{R}^{N}\) and taking the real part, we obtain
\[\frac{d}{dt}\left[\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\phi(t,x)|^{2 }dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla\psi(t,x)|^{2}dx+a_{1}a_{2 }Re\int_{\mathbb{R}^{N}}\psi(t,x)\overline{\phi}(t,x)^{2}dx\right]=0,\]
which yields formally the conservation of energy
\[E\left(\phi(t,x),\psi(t,x)\right)=E\left(\phi_{0}(x),\psi_{0}(x)\right),\]
where the energy \(E\) is defined by \((2.3a)\). Finally, multiplying \((1.1a)\) by \(2a_{2}\nabla\overline{\phi}\) and \((1.1b)\) by \(a_{1}\nabla\overline{\psi}\), integrating over \(\mathbb{R}^{N}\) and taking the real part, we obtain
\[Re\int_{\mathbb{R}^{N}}2a_{2}i\phi_{t}\nabla\overline{\phi}dx+ Re\int_{\mathbb{R}^{N}}a_{1}i\psi_{t}\nabla\overline{\psi}dx\] \[=-Re\int_{\mathbb{R}^{N}}\frac{a_{2}}{m_{1}}\nabla\overline{\phi }\Delta\phi dx-Re\int_{\mathbb{R}^{N}}\frac{a_{1}}{2m_{2}}\nabla\overline{ \psi}\Delta\psi dx\] \[\quad+Re\int_{\mathbb{R}^{N}}2a_{1}a_{2}\psi\overline{\phi} \nabla\overline{\phi}dx+Re\int_{\mathbb{R}^{N}}a_{1}a_{2}\phi^{2}\nabla \overline{\psi}dx.\]
Direct calculation gives
\[\frac{d}{dt}\left[a_{2}Im\left(\int_{\mathbb{R}^{N}}\nabla\phi(t,x)\overline{ \phi}(t,x)dx\right)+\frac{1}{2}a_{1}Im\left(\int_{\mathbb{R}^{N}}\nabla\psi( t,x)\overline{\psi}(t,x)dx\right)\right]=0,\]
which yields formally the conservation of momentum
\[a_{2}Im\left(\int_{\mathbb{R}^{N}}\nabla\phi\overline{\phi}dx \right)+\frac{1}{2}a_{1}Im\left(\int_{\mathbb{R}^{N}}\nabla\psi\overline{\psi }dx\right)\] \[=a_{2}Im\left(\int_{\mathbb{R}^{N}}\nabla\phi_{0}\overline{\phi} _{0}dx\right)+\frac{1}{2}a_{1}Im\left(\int_{\mathbb{R}^{N}}\nabla\psi_{0} \overline{\psi}_{0}dx\right).\]
Let
\[\Sigma:=\left\{u:\ \ u\in H^{1}(\mathbb{R}^{N}),\ \ xu\in L^{2}(\mathbb{R}^{N}) \right\}. \tag{2.5}\]
We then establish a kind of virial type estimate for the Cauchy problem (1.1)-(2.1), which is helpful for exploring the instability of standing waves.
**Lemma 2.2**.: _Assume that \((\phi_{0},\psi_{0})\in\Sigma\times\Sigma\). Let \((\phi,\psi)\in C([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N}))\) be the corresponding solution of the Cauchy problem (1.1)-(2.1) on \([0,T)\)._
_Put_
\[G(t)=\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi|^{2}+a_{1}|\psi|^{2}\right)dx, \tag{2.6}\]
_then it follows that_
\[\left(|x|\phi(t,.),|x|\psi(t,.)\right)\in C\left((-T_{min},T_{max}),L^{2}( \mathbb{R}^{N})\right)\times C\left((-T_{min},T_{max}),L^{2}(\mathbb{R}^{N}) \right).\]
_Moreover, there hold_
\[\frac{d}{dt}G(t)=4Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{2m_{1}}x\overline{ \phi}\nabla\phi+\frac{a_{1}}{2m_{2}}x\overline{\psi}\nabla\psi\right)dx, \tag{2.7}\]
_and_
\[\begin{split}\frac{d^{2}}{dt^{2}}G(t)&=\frac{2a_{2}} {m_{1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{2a_{1}}{m_{2}^{2}} \int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx\\ &\quad+\frac{2a_{1}a_{2}N}{m_{2}}Re\int_{\mathbb{R}^{N}}\overline {\psi}\phi^{2}dx+2a_{1}a_{2}\left(\frac{2}{m_{2}}-\frac{1}{m_{1}}\right)Re\int _{\mathbb{R}^{N}}x\phi^{2}\nabla\overline{\psi}dx\\ &=\frac{2a_{2}}{m_{1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx +\frac{2a_{1}}{m_{2}^{2}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx\\ &\quad+2a_{1}a_{2}N\left(\frac{1}{m_{1}}-\frac{1}{m_{2}}\right) Re\int_{\mathbb{R}^{N}}\overline{\psi}\phi^{2}dx+2a_{1}a_{2}\left(\frac{1}{m_{1}}- \frac{2}{m_{2}}\right)Re\int_{\mathbb{R}^{N}}x\nabla\phi^{2}\overline{\psi}dx.\end{split} \tag{2.8}\]
Proof.: Since \((\phi,\psi)\in C\left([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\right)\) is a solution of the Cauchy problem (1.1)-(2.1), applying the results of Ginibre and Velo [5], from \((|x|\phi_{0},|x|\psi_{0})\in L^{2}(\mathbb{R}^{N})\times L^{2}(\mathbb{R}^{N})\), it follows that \((|x|\phi,|x|\psi)\in L^{2}(\mathbb{R}^{N})\times L^{2}(\mathbb{R}^{N})\). This implies that \(G(t)\) given by (2.6) is well defined on \([0,T)\).
Differentiating (2.6) with respect to \(t\) yields
\[\begin{split} G^{\prime}(t)&=\int_{\mathbb{R}^{N}}| x|^{2}\left[a_{2}\left(\phi\overline{\phi}_{t}+\phi_{t}\overline{\phi}\right)+a_{1} \left(\psi\overline{\psi}_{t}+\psi_{t}\overline{\psi}\right)\right]dx\\ &=2Re\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}\phi\overline{\phi}_{ t}+a_{1}\psi\overline{\psi}_{t}\right)dx\\ &=-4Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{2m_{1}}x\phi \nabla\overline{\phi}+\frac{a_{1}}{2m_{2}}x\psi\nabla\overline{\psi}\right)dx.\end{split}\] (2.8 - 1)
Note that
\[\left\{\begin{array}{l}\phi_{t}=\frac{i}{2m_{1}}\Delta\phi-ia_{1}\psi \overline{\phi},\quad\overline{\phi}_{t}=-\frac{i}{2m_{1}}\Delta\overline{ \phi}+ia_{1}\overline{\psi}\phi,\\ \psi_{t}=\frac{i}{2m_{2}}\Delta\psi-ia_{2}\phi^{2},\quad\overline{ \psi}_{t}=-\frac{i}{2m_{2}}\Delta\overline{\psi}+ia_{2}\overline{\phi}^{2}, \\ Re\int_{\mathbb{R}^{N}}x\Delta\overline{\phi}\nabla\phi dx=\frac{N-2}{2} \int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx,\quad Re\int_{\mathbb{R}^{N}}x\Delta \overline{\psi}\nabla\psi dx=\frac{N-2}{2}\int_{\mathbb{R}^{N}}|\nabla\psi|^ {2}dx,\\ Re\int_{\mathbb{R}^{N}}x\phi\nabla\Delta\overline{\phi}dx=\frac{N+2}{2} \int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx,\quad Re\int_{\mathbb{R}^{N}}x\psi \nabla\Delta\overline{\psi}dx=\frac{N+2}{2}\int_{\mathbb{R}^{N}}|\nabla\psi|^ {2}dx,\\ Re\int_{\mathbb{R}^{N}}\left(-x\psi\overline{\phi}\nabla\overline{\phi} \right)dx+Re\int_{\mathbb{R}^{N}}x\left(\phi^{2}\nabla\overline{\psi}+\phi \overline{\psi}\nabla\phi\right)dx=Re\int_{\mathbb{R}^{N}}x\phi^{2}\nabla \overline{\psi}dx,\end{array}\right.\] (2.8 - 2)
differentiating (2.8-1) with respect to \(t\) again yields
\[\begin{split} G^{\prime\prime}(t)&=-4Im\int_{\mathbb{R}^{N} }\left[\frac{a_{2}}{2m_{1}}x\left(\phi\nabla\overline{\phi}\right)_{t}+\frac{ a_{1}}{2m_{2}}x\left(\psi\nabla\overline{\psi}\right)_{t}\right]dx\\ &=-2Im\int_{\mathbb{R}^{N}}\left[\frac{a_{2}}{m_{1}}x\left(\phi_{t} \nabla\overline{\phi}+\phi\nabla\overline{\phi}_{t}\right)+\frac{a_{1}}{m_{2}} x\left(\psi_{t}\nabla\overline{\psi}+\psi\nabla\overline{\psi}_{t}\right) \right]dx.\end{split}\] (2.8 - 3)
Simple calculations give
\[\begin{split}& Im\int_{\mathbb{R}^{N}}\left(x\phi_{t}\nabla\overline{ \phi}+x\phi\nabla\overline{\phi}_{t}\right)dx\\ &=Im\int_{\mathbb{R}^{N}}x\left(\frac{i}{2m_{1}}\Delta\phi-ia_{1} \psi\overline{\phi}\right)\nabla\overline{\phi}dx+Im\int_{\mathbb{R}^{N}}x \phi\left(-\frac{i}{2m_{1}}\nabla\Delta\overline{\phi}+ia_{1}\nabla( \overline{\psi}\phi)\right)dx\\ &=Re\int_{\mathbb{R}^{N}}\left(\frac{x}{2m_{1}}\Delta\phi\nabla \overline{\phi}-a_{1}x\psi\overline{\phi}\nabla\overline{\phi}\right)dx+Re \int_{\mathbb{R}^{N}}\left(-\frac{1}{2m_{1}}x\phi\nabla\Delta\overline{\phi}+ a_{1}x\phi\nabla\left(\overline{\psi}\phi\right)\right)dx,\end{split}\] (2.8 - 4)
\[\begin{split}& Im\int_{\mathbb{R}^{N}}\left(x\psi_{t}\nabla \overline{\psi}+x\psi(\nabla\overline{\psi})_{t}\right)dx\\ &=Im\int_{\mathbb{R}^{N}}x\left(\frac{i}{2m_{2}}\Delta\psi-ia_{2 }\phi^{2}\right)\nabla\overline{\psi}dx+Im\int_{\mathbb{R}^{N}}x\psi\left(- \frac{i}{2m_{2}}\nabla\Delta\overline{\psi}+ia_{2}\nabla\overline{\phi}^{2} \right)dx\\ &=\int_{\mathbb{R}^{N}}\left(\frac{x}{2m_{2}}\Delta\psi\nabla \overline{\psi}-a_{2}x\phi^{2}\nabla\overline{\psi}\right)dx+Re\int_{\mathbb{ R}^{N}}\left(-\frac{x}{2m_{2}}\psi\nabla\Delta\overline{\psi}+a_{2}x\psi \nabla\overline{\phi}^{2}\right)dx.\end{split}\] (2.8 - 5)
Combining (2.8-2), (2.8-3), (2.8-4) and (2.8-5) together yields
\[\begin{split} G^{\prime\prime}(t)&=\frac{2a_{2}}{m_ {1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{2a_{1}}{m_{2}^{2}}\int_{ \mathbb{R}^{N}}|\nabla\psi|^{2}dx\\ &\quad+2a_{1}a_{2}N\left(\frac{1}{m_{1}}-\frac{1}{m_{2}}\right) Re\int_{\mathbb{R}^{N}}\overline{\psi}\phi^{2}dx+2a_{1}a_{2}\left(\frac{1}{m_{1}}- \frac{2}{m_{2}}\right)Re\int_{\mathbb{R}^{N}}x\nabla\phi^{2}\overline{\psi}dx.\end{split}\]
This completes the proof of Lemma 2.2.
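We record for later use the special form of (2.8) under the mass resonance condition \(m_{2}=2m_{1}\) assumed in Theorem 3.1 below: the coefficient \(\frac{1}{m_{1}}-\frac{2}{m_{2}}\) then vanishes, so that (2.8) reduces to

\[\frac{d^{2}}{dt^{2}}G(t)=\frac{2a_{2}}{m_{1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{2a_{1}}{m_{2}^{2}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx+2a_{1}a_{2}N\left(\frac{1}{m_{1}}-\frac{1}{m_{2}}\right)Re\int_{\mathbb{R}^{N}}\overline{\psi}\phi^{2}dx,\]

which is exactly the identity used in (3.3).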
We first list a useful lemma concerning the uniform decay at infinity of certain radial functions, due to Strauss [11].
**Lemma 2.3**.: _[_11_]__( Radial lemma) Let \(N\geq 2\). If \(u\in H^{1}(\mathbb{R}^{N})\) is a radially symmetric function, then_
\[\sup_{x\in\mathbb{R}^{N}}|x|^{\frac{N-1}{2}}|u(x)|\leq c\|u\|_{L^{2}(\mathbb{R }^{N})}^{\frac{1}{2}}\|\nabla u\|_{L^{2}(\mathbb{R}^{N})}^{\frac{1}{2}}\leq c \|u\|_{H^{1}(\mathbb{R}^{N})}.\]
_If, in addition, \(u(x)\) is a non-increasing function of \(|x|\), then_
\[\sup_{x\in\mathbb{R}^{N}}|x|^{\frac{N}{2}}|u(x)|\leq c\|u\|_{L^{2}(\mathbb{R }^{N})}.\]
The following inequality is frequently used throughout this paper.
**Lemma 2.4**.: _(Gagliardo-Nirenberg inequality) Let \(1\leq p,q,r\leq\infty\), and let \(j,m\) be two integers with \(0\leq j<m\). If_
\[j-\frac{N}{p}=(1-\theta)\left(-\frac{N}{q}\right)+\theta\left(m-\frac{N}{r} \right),\]
_for some \(\theta\in[0,1]\), then there exists \(c=c\left(N,m,j,\theta,q,r\right)\) such that_
\[\left\|D^{j}u\right\|_{L^{p}(\mathbb{R}^{N})}\leq c\left\|u\right\|_{L^{q}( \mathbb{R}^{N})}^{1-\theta}\left\|D^{m}u\right\|_{L^{r}(\mathbb{R}^{N})}^{\theta}\]
_for every \(u\in\mathcal{D}(\mathbb{R}^{N})\)._
We recall below some well-known inequalities and Sobolev embedding results [3].
**Lemma 2.5**.: _(Poincare's inequality) Assume \(|\Omega|<\infty\) (or \(\Omega\) is bounded in one direction) and \(1\leq p\leq\infty\). Then there exists a constant \(c\) such that_
\[\|u\|_{L^{p}(\Omega)}\leq c\,\|\nabla u\|_{L^{p}(\Omega)}\]
_for every \(u\in W^{1,p}_{0}(\Omega)\). In particular, \(\|\nabla u\|_{L^{p}(\Omega)}\) is an equivalent norm to \(\|u\|W^{1,p}(\Omega)\) on \(W^{1,p}_{0}(\Omega)\)._
**Lemma 2.6**.: _(Sobolev's embedding theorem) If \(\Omega\) has a Lipschitz continuous boundary, then the following properties hold: (i) If \(p>N\), then \(W^{1,p}(\Omega)\hookrightarrow L^{\infty}(\Omega)\). If \(\Omega\) has a uniformly Lipschitz continuous boundary, then (ii) If \(p>N\), then \(W^{1,p}(\Omega)\hookrightarrow C^{0,\alpha}(\overline{\Omega})\), where \(\alpha=\dfrac{p-N}{p}\)._
**Lemma 2.7**.: _(Rellich's compactness theorem) If \(\Omega\) is bounded and has a Lipschitz continuous boundary, then the following properties hold: (i) If \(p>N\), then the embedding \(W^{1,p}(\Omega)\hookrightarrow L^{\infty}(\Omega)\) is compact. Let in addition \(\Omega\) have uniformly Lipschitz continuous boundary. (ii) If \(p>N\), then the embedding \(W^{1,p}(\Omega)\hookrightarrow C^{0,\lambda}(\overline{\Omega})\) is compact for every \(\lambda\in\left(0,\dfrac{p-N}{p}\right)\). Furthermore, suppose that \(\Omega\) satisfies the strong local Lipschitz condition. If \(mp>N>(m-1)p\), then_
\[W^{j+m,p}(\Omega)\hookrightarrow C^{j,\lambda}(\overline{\Omega})\]
_for \(0<\lambda\leq m-\dfrac{N}{p}\)._
**Lemma 2.8**.: _(Radial compact lemma) The injection \(H^{1}_{r}(\mathbb{R}^{N})\hookrightarrow L^{p}_{r}(\mathbb{R}^{N})\) is compact for \(2<p<\dfrac{2N}{N-2}\), where \(H^{1}_{r}(\mathbb{R}^{N})\) is the space of all radial functions on \(H^{1}(\mathbb{R}^{N})\) and \(L^{p}_{r}(\mathbb{R}^{N})\) the space of all radial functions on \(L^{p}(\mathbb{R}^{N})\)._
## 3 Finite Time Blowup
In this section, we show that, under suitable assumptions on the initial data, the solution of the nonlinear Schrodinger equations (1.1) with such initial data blows up in finite time. We adopt here essentially a convexity analysis method (see Glassey [6]), which is based on the calculation of the variance
\[\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi(t,x)|^{2}+a_{1}|\psi(t,x)|^{2} \right)dx.\]
As mentioned at the beginning of Section 2, the initial-value problem (1.1)-(2.1) is locally well-posed in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\) and conserves mass, energy and momentum. Furthermore, the blowup result relies on a suitable virial-type identity.
We then claim:
**Theorem 3.1**.: _Let \(4\leq N<6\); \(m_{2}=2m_{1}>0,a_{1}>0,a_{2}>0\); and let_
\[(\phi,\psi)\in C\left([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\right)\]
_be the solution to the Cauchy problem (1.1)-(2.1). If in addition, \((|x|\phi_{0},|x|\psi_{0})\in L^{2}(\mathbb{R}^{N})\times L^{2}(\mathbb{R}^{N})\) and either_
_(c-1)_ \(E(\phi_{0},\psi_{0})<0\)_;_
_or_
_(c-2)_ \(E(\phi_{0},\psi_{0})=0\) _and_ \( Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{m_{1}}x\phi_{0}\nabla\overline{\phi} _{0}+\frac{a_{1}}{m_{2}}x\psi_{0}\nabla\overline{\psi}_{0}\right)dx>0\)_;_
_or_
_(c-3)_ \(E(\phi_{0},\psi_{0})>0\) _and_
\[Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{m_{1}}x\phi_{0}\nabla\overline{\phi }_{0}+\frac{a_{1}}{m_{2}}x\psi_{0}\nabla\overline{\psi}_{0}\right)dx\geq\left[ \frac{N}{2m_{1}}E(\phi_{0},\psi_{0})\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}| \phi_{0}|^{2}+a_{1}|\psi_{0}|^{2}\right)dx\right]^{\frac{1}{2}};\]
_then there holds \(T<+\infty\) and_
\[\lim_{t\to T}\left(a_{2}^{\frac{1}{2}}\|\phi\|_{H^{1}(\mathbb{R}^{N})}+a_{1}^ {\frac{1}{2}}\|\psi\|_{H^{1}(\mathbb{R}^{N})}\right)=+\infty.\]
Proof.: We prove it by contradiction. Suppose that the maximal existence time \(T\) of the solution \((\phi,\psi)\) to the Cauchy problem (1.1)-(2.1) is infinity. Let
\[G(t)=\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi|^{2}+a_{1}|\psi|^{2}\right)dx. \tag{3.1}\]
It follows from (2.2), (2.3),(2.7) and (2.8) that
\[G^{\prime}(t)=-4Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{2m_{1}}x\phi\nabla \overline{\phi}+\frac{a_{1}}{2m_{2}}x\psi\nabla\overline{\psi}\right)dx, \tag{3.2}\]
and
\[\begin{split} G^{\prime\prime}(t)&=\frac{2a_{2}}{m_ {1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{2a_{1}}{m_{2}^{2}}\int_{ \mathbb{R}^{N}}|\nabla\psi|^{2}dx+2a_{1}a_{2}N\left(\frac{1}{m_{1}}-\frac{1}{ m_{2}}\right)Re\int_{\mathbb{R}^{N}}\overline{\psi}\phi^{2}dx\\ &=2N\left(\frac{1}{m_{1}}-\frac{1}{m_{2}}\right)E(\phi_{0},\psi_{ 0})\\ &\quad-2N\left(\frac{1}{m_{1}}-\frac{1}{m_{2}}\right)\left(\frac{ a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{ \mathbb{R}^{N}}|\nabla\psi|^{2}dx\right)\\ &\quad+\frac{2a_{2}}{m_{1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^ {2}dx+\frac{2a_{1}}{m_{2}^{2}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx\\ &=\frac{N}{m_{1}}E(\phi_{0},\psi_{0})+\frac{a_{2}}{m_{1}^{2}} \left(2-\frac{N}{2}\right)\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{a_{1}} {4m_{1}^{2}}\left(2-\frac{N}{2}\right)\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx \\ &\leq\frac{N}{m_{1}}E(\phi_{0},\psi_{0}),\end{split} \tag{3.3}\]
where the condition \(m_{2}=2m_{1}>0\) ensures that the second-to-last equality holds, while the last inequality follows from the condition \(N\geq 4\). On the other hand, a classical argument then yields
\[G(t)=G(0)+G^{\prime}(0)t+\int_{0}^{t}(t-s)G^{\prime\prime}(s)dx,\quad 0\leq t<+\infty. \tag{3.4}\]
From (3.3) it follows that
\[G(t)\leq G(0)+G^{\prime}(0)t+\frac{N}{2m_{1}}E(\phi_{0},\psi_{0})t^{2},\quad 0 \leq t<\infty. \tag{3.5}\]
Noting that \(G(t)\) is a nonnegative function and
\[G(0)=\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi_{0}|^{2}+a_{1}|\psi_{0}|^{2} \right)dx\geq 0, \tag{3.6}\]
(3.2) thus yields
\[G^{\prime}(0)=-2Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{m_{1}}x\phi_{0}\nabla\overline{\phi_{0}}+\frac{a_{1}}{m_{2}}x\psi_{0}\nabla\overline{\psi_{0}}\right)dx. \tag{3.7}\]
Hence, under the assumptions (c-1) or (c-2) or (c-3), (3.5) implies that there exists \(T^{*}<+\infty\) such that
\[\lim_{t\to T^{*}}G(t)=\lim_{t\to T^{*}}\int_{\mathbb{R}^{N}}|x|^{2} \left(a_{2}|\phi|^{2}+a_{1}|\psi|^{2}\right)dx=0. \tag{3.8}\]
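Let us indicate why this holds in the case (c-3), which is the least immediate one: there, \(G^{\prime}(0)=-2\,Im\int_{\mathbb{R}^{N}}\left(\frac{a_{2}}{m_{1}}x\phi_{0}\nabla\overline{\phi}_{0}+\frac{a_{1}}{m_{2}}x\psi_{0}\nabla\overline{\psi}_{0}\right)dx\leq-2\left[\frac{N}{2m_{1}}E(\phi_{0},\psi_{0})G(0)\right]^{\frac{1}{2}}<0\), so the quadratic polynomial in \(t\) on the right-hand side of (3.5) has nonnegative discriminant

\[\left(G^{\prime}(0)\right)^{2}-4\cdot\frac{N}{2m_{1}}E(\phi_{0},\psi_{0})\,G(0)\geq 0,\]

and, since \(G^{\prime}(0)<0\) while its leading coefficient \(\frac{N}{2m_{1}}E(\phi_{0},\psi_{0})\) is positive, it attains a nonpositive value at some finite \(t>0\). Under (c-1) or (c-2) the right-hand side of (3.5) is eventually negative, which is even simpler. Since \(G(t)\geq 0\), the bound (3.5) then forces (3.8).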
By Holder's inequality and Schwarz inequality, we have
\[\begin{split} a_{2}\int_{\mathbb{R}^{N}}|\phi|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi|^{2}dx&=-\frac{2}{N}Re\int_{\mathbb{R}^{N}}a_{2}x\phi\nabla\overline{\phi}dx-\frac{2}{N}Re\int_{\mathbb{R}^{N}}a_{1}x\psi\nabla\overline{\psi}dx\\ &\leq\frac{2}{N}\left\|a_{2}^{\frac{1}{2}}x\phi\right\|_{L^{2}(\mathbb{R}^{N})}\left\|a_{2}^{\frac{1}{2}}\nabla\phi\right\|_{L^{2}(\mathbb{R}^{N})}+\frac{2}{N}\left\|a_{1}^{\frac{1}{2}}x\psi\right\|_{L^{2}(\mathbb{R}^{N})}\left\|a_{1}^{\frac{1}{2}}\nabla\psi\right\|_{L^{2}(\mathbb{R}^{N})}\\ &\leq\frac{4}{N}\left(\int_{\mathbb{R}^{N}}a_{2}|x|^{2}|\phi|^{2}dx+\int_{\mathbb{R}^{N}}a_{1}|x|^{2}|\psi|^{2}dx\right)^{\frac{1}{2}}\left(\int_{\mathbb{R}^{N}}a_{2}|\nabla\phi|^{2}dx+\int_{\mathbb{R}^{N}}a_{1}|\nabla\psi|^{2}dx\right)^{\frac{1}{2}}.\end{split} \tag{3.9}\]
Therefore, as \(t\to T^{*}\), (3.9) together with (3.8) implies that
\[a_{2}\int_{\mathbb{R}^{N}}|\phi|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi|^{2}dx \leq 0,\]
which is a contradiction from the conservation of mass (2.2).
This finishes the proof of Theorem 3.1. \(\Box\)
## 4 The Existence of Standing Waves associated with ground state
Under the assumption \(\omega_{2}=2\omega_{1}>0\), \(m_{2}=2m_{1}\) and \(4<N<6\), if \((u(x),v(x))\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\backslash\{(0,0)\}\) is the ground state solution to (1.5), then \((\phi(t,x),\psi(t,x))\equiv\left(e^{i\omega_{1}t}u(x),e^{i\omega_{2}t}v(x)\right)\) is a
standing wave solution of (1.1). Hence, it is sufficient to explore the existence of the ground state solution of (1.5).
Rewriting equations (1.5) as the following:
\[\begin{cases}-\dfrac{1}{2m_{1}}\Delta u(x)=-\omega_{1}u(x)-a_{1}u(x)v(x)\triangleq g _{1}(u(x),v(x)),&(4.1-a)\\ -\dfrac{1}{2m_{2}}\Delta v(x)=-\omega_{2}v(x)-a_{2}u^{2}(x)\triangleq g _{2}(u(x),v(x)),&(4.1-b)\end{cases} \tag{4.1}\]
concerning the existence and exponential decay at infinity of the ground state solution for equations (4.1), we then claim
**Theorem 4.1**.: _Let \(\omega_{2}=2\omega_{1}>0\), \(m_{2}=2m_{1}\), \(4<N<6\) and (1.10) hold true. There exists \((\xi(x),-\eta(x))\in\mathcal{M}\) with \(\xi(x)>0,\ \ \eta(x)>0\) such that_
1. \(\mathcal{S}\left(\xi(x),-\eta(x)\right)=K=\inf\limits_{u,v\in\mathcal{M}} \mathcal{S}(u,v)\)_;_
2. \((\xi(x),-\eta(x))\) _is a ground state solution of (_4.1_);_
3. \(\xi(x)\) _and_ \(\eta(x)\) _are functions of_ \(|x|\) _alone and have exponential decays at infinity._
_Here, \(\mathcal{S}\) and \(\mathcal{M}\) are defined by (1.6) and (1.8), respectively._
Before proving Theorem 4.1, we first give a key conclusion.
**Lemma 4.2**.: _Assuming that (1.10) holds true, let \(\omega_{2}=2\omega_{1}>0\), \(m_{2}=2m_{1}>0\), \(4<N<6\), and let \((u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\) be the solution of (4.1). Then the functions_
\[g_{1}(u,v)=-\omega_{1}u(x)-a_{1}u(x)v(x)\ \ \text{and}\ \ g_{2}(u,v)=-\omega_{2}v(x)-a_{ 2}u^{2}(x)\]
_satisfy the following conditions._
_For \(L=\dfrac{N+2}{N-2}\), there hold_
\[1-L=\dfrac{-4}{N-2},\ \ 2<L<3,\ \ 0<L-2<1,\] ( \[S-1\] )
\[-\infty<\liminf_{(u,v)\to(0^{+},0^{-})}\dfrac{g_{1}(u,v)}{u}\leq\limsup_{(u,v )\to(0^{+},0^{-})}\dfrac{g_{1}(u,v)}{u}=-\omega_{1}<0,\] ( \[S-2\] )
\[-\infty<\liminf_{(u,v)\to(0^{+},0^{-})}\dfrac{g_{2}(u,v)}{v}\leq\limsup_{(u,v )\to(0^{+},0^{-})}\dfrac{g_{2}(u,v)}{v}=-\omega_{2}<0,\] ( \[S-3\] )
\[-\infty\leq\limsup_{(u,v)\to(+\infty,-\infty)}\dfrac{g_{1}(u,v)}{u^{L}}\leq 0,\] ( \[S-4\] )
\[-\infty\leq\limsup_{(u,v)\to(+\infty,-\infty)}\dfrac{g_{2}(u,v)}{v^{L}}\leq 0.\] ( \[S-5\] )
_In particular, \(g_{1}(u,v)\) and \(g_{2}(u,v)\) satisfy two stronger conditions for \(L=\dfrac{N+2}{N-2}\):_
\[\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|g_{1}(u,v)|}{|u|^{L}}=0,\tag{S-4*}\]

\[\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|g_{2}(u,v)|}{|v|^{L}}=0.\tag{S-5*}\]
_Furthermore, there exist \(u^{*}>0\) and \(v^{*}<0\) such that_
\[G(u^{*},v^{*})=-a_{1}a_{2}{u^{*}}^{2}v^{*}-a_{2}{\omega_{1}}{u^{*}}^{2}-\frac{a _{1}{\omega_{2}}{v^{*}}^{2}}{2}>0,\]
_where_
\[\frac{\partial G(u^{*},v^{*})}{\partial u^{*}}=-2a_{2}{\omega_{1}}u^{*}-2a_{1 }a_{2}u^{*}v^{*},\]
_and_
\[\frac{\partial G(u^{*},v^{*})}{\partial v^{*}}=-a_{1}{\omega_{2}}v^{*}-a_{1}a _{2}{u^{*}}^{2}.\]
Proof.: Since \(4<N<6\), for \(L=\frac{N+2}{N-2}\), direct calculation leads to (S-1). Noticing that
\[g_{1}(u,v)=-{\omega_{1}}u-a_{1}uv,\quad g_{2}(u,v)=-{\omega_{2}}v-a_{2}u^{2},\]
by (4.1), and using L'Hospital's rule, we have
\[\begin{split}-\infty&<\liminf_{(u,v)\to(0^{+},0^ {-})}\frac{g_{1}(u,v)}{u}\leq\limsup_{(u,v)\to(0^{+},0^{-})}\frac{g_{1}(u,v)}{ u}\\ &=\limsup_{(u,v)\to(0^{+},0^{-})}\frac{-{\omega_{1}}u-a_{1}uv}{ u}=\limsup_{(u,v)\to(0^{+},0^{-})}(-{\omega_{1}}-a_{1}v)=-{\omega_{1}}<0,\end{split}\]
\[\begin{split}-\infty&<\liminf_{(u,v)\to(0^{+},0^ {-})}\frac{g_{2}(u,v)}{v}\leq\limsup_{(u,v)\to(0^{+},0^{-})}\frac{g_{2}(u,v)}{ v}\\ &=\limsup_{(u,v)\to(0^{+},0^{-})}\frac{-{\omega_{2}}v-a_{2}u^{2} }{v}=\limsup_{(u,v)\to(0^{+},0^{-})}\left(-{\omega_{2}}-a_{2}\frac{u^{2}}{v}\right) \\ &=-{\omega_{2}}-a_{2}\limsup_{(u,v)\to(0^{+},0^{-})}\frac{u^{2}}{ v}=-{\omega_{2}}<0,\end{split}\]
\[\begin{split}-\infty&\leq\liminf_{(u,v)\to(+\infty, -\infty)}\frac{g_{1}(u,v)}{u^{L}}\leq\limsup_{(u,v)\to(+\infty,-\infty)}\frac{ g_{1}(u,v)}{u^{L}}\\ &=\limsup_{(u,v)\to(+\infty,-\infty)}\frac{-{\omega_{1}}u-a_{1} uv}{u^{L}}=\limsup_{(u,v)\to(+\infty,-\infty)}\left(\frac{-{\omega_{1}}}{u^{L-1}}-a_{1} \frac{v}{u^{L-1}}\right)\\ &=\limsup_{(u,v)\to(+\infty,-\infty)}\left(-a_{1}\frac{v}{u^{L- 1}}\right)=\limsup_{(u,v)\to(+\infty,-\infty)}\left(\frac{-a_{1}}{(L-1)u^{L-2 }}\right)=0,\end{split}\]
\[\begin{split}-\infty&\leq\liminf_{(u,v)\to(+ \infty,-\infty)}\frac{g_{2}(u,v)}{v^{L}}\leq\limsup_{(u,v)\to(+\infty,-\infty)} \frac{g_{2}(u,v)}{v^{L}}\\ &=\limsup_{(u,v)\to(+\infty,-\infty)}\left(\frac{-{\omega_{2}}} {v^{L-1}}-a_{2}\frac{u^{2}}{v^{L}}\right)=\limsup_{(u,v)\to(+\infty,-\infty)} \left(\frac{-a_{2}}{L(L-1)v^{L-2}}\right)=0,\end{split}\]
\[\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|g_{1}(u,v)|}{|u|^{L}} =\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|-\omega_{1}u-a_{1}uv|} {|u|^{L}}\] ( \[S-13\] ) \[\leq\lim_{(u,v)\to(\pm\infty,\pm\infty)}\left(\frac{\omega_{1}|u |}{|u|^{L}}+\frac{a_{1}|u|v|}{|u|^{L}}\right)=0,\]
\[\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|g_{2}(u,v)|}{|v|^{L}} =\lim_{(u,v)\to(\pm\infty,\pm\infty)}\frac{|-\omega_{2}v-a_{2}u^{ 2}|}{|v|^{L}}\] ( \[S-14\] ) \[\leq\lim_{(u,v)\to(\pm\infty,\pm\infty)}\left(\frac{\omega_{2}|v |}{|v|^{L}}+\frac{a_{2}|u|^{2}}{|v|^{L}}\right)=0.\]
Combining \((S-9)\) with \((S-10),\ (S-11),\ (S-12),\ (S-13)\) and \((S-14)\) yields \((S-1),\ (S-2),\ (S-3),\ (S-4),\ (S-5),\ (S-4)^{*}\) and \((S-5)^{*}\).
We then verify estimate \((S-6)\). Using \((S-7)\), one obtains
\[\begin{split} G(u^{*},v^{*})&=\int_{0}^{u^{*}}\left( -2a_{2}\omega_{1}s-2a_{1}a_{2}sv^{*}\right)ds+\Phi(v^{*})\\ &=-a_{2}\omega_{1}{u^{*}}^{2}-a_{1}a_{2}{u^{*}}^{2}v^{*}+\Phi(v^{ *}).\end{split}\] ( \[S-15\] )
Since \((u^{*},v^{*})\) satisfies \((S-8)\), differentiating \((S-15)\) with respect to \(v^{*}\) yields
\[-a_{1}a_{2}{u^{*}}^{2}+\frac{d\Phi(v^{*})}{dv^{*}}=-a_{1}\omega_{2}v^{*}-a_{1 }a_{2}{u^{*}}^{2},\]
that is,
\[\frac{d\Phi(v^{*})}{dv^{*}}=-a_{1}\omega_{2}v^{*},\]
then
\[\Phi(v^{*})=-\frac{a_{1}}{2}\omega_{2}{v^{*}}^{2}.\] ( \[S-16\] )
Substituting \((S-16)\) into \((S-15)\) leads to
\[G(u^{*},v^{*})=-a_{2}\omega_{1}{u^{*}}^{2}-\frac{a_{1}}{2}\omega_{2}{v^{*}}^{ 2}-a_{1}a_{2}{u^{*}}^{2}v^{*}.\] ( \[S-17\] )
Since \(\{(u,v):G(u,v)=0\}\) is a one-dimensional set and
\[G(u,-u)=-a_{2}\omega_{1}u^{2}-\frac{a_{1}}{2}\omega_{2}u^{2}+a_{1}a_{2}u^{3},\]
we have \(G(u,-u)>0\) if \(u\geq u_{0}\) for \(u_{0}\) large enough. Thus we can find \(u^{*}>0\) and \(v^{*}<0\) such that
\[G(u^{*},v^{*})=-a_{2}\omega_{1}{u^{*}}^{2}-\frac{a_{1}}{2}\omega_{2}{v^{*}}^{ 2}-a_{1}a_{2}{u^{*}}^{2}v^{*}>0.\]
This completes the proof of Lemma 4.2.
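For instance, in the last step of the above proof one may take \(u_{0}=\dfrac{a_{2}\omega_{1}+\frac{a_{1}}{2}\omega_{2}}{a_{1}a_{2}}\), since

\[G(u,-u)=u^{2}\left(a_{1}a_{2}u-a_{2}\omega_{1}-\frac{a_{1}}{2}\omega_{2}\right)>0\qquad\text{for }u>u_{0}.\]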
We now begin to show Theorem 4.1, which will be divided into three subsections.
Proof.: 4.1. Existence of solutions to the minimizing problem (1.9) \(\big{(}\) proof of (1) in Theorem 4.1 \(\big{)}\).
4.2. Existence of ground state solution of (4.1) \(\big{(}\) proof of (2) in Theorem 4.1 \(\big{)}\).
4.3. Exponential decay of the ground state solution of (4.1) \(\big{(}\) proof of (3) in Theorem 4.1 \(\big{)}\).
### Existence of solutions to the minimizing problem (1.9)
--proof of (1) in Theorem 4.1
In order to prove (1) in Theorem 4.1, it is sufficient to show the following result which provides a variational characterization of the ground states to (4.1).
**Proposition 4.3**.: _Let \(\omega_{2}=2\omega_{1}\) and \(4<N<6\), assuming that (1.10) holds true. Then any solution \((u,v)\) of (4.1) belongs to \(\mathcal{M}\). In addition, there exists \((\xi,-\eta)\in\mathcal{M}\) with \(\xi>0,\ \ \eta>0\) such that_
\[\mathcal{S}(\xi,-\eta)=\min_{(u,v)\in\mathcal{M}}\mathcal{S}(u,v). \tag{4.2}\]
_Here, \(\mathcal{S}\) and \(\mathcal{M}\) are defined by (1.6) and (1.8), respectively._
**Remark 4.3.1**.: _Theorem 4.1 indicates that any ground state solution of (4.1) is a solution of the minimization problem (4.2)._
Before proving Proposition 4.3, we recall here the basic properties of Schwarz symmetrization. We first mention the definition of the Schwarz spherical rearrangement (or symmetrization) of a function (see Berestycki-Lions[1]).
**Definition 4.4**.: _[_1_]__(Schwarz symmetrization ) Let \(f\in L^{1}(\mathbb{R}^{N})\) be a nonnegative function, then \(f^{*}\), the Schwarz symmetrized function of \(f\), is the unique spherically symmetric, non-increasing (in \(r=|x|\)), measurable function such that for all \(\alpha>0\), \(m\left\{x\in\mathbb{R}^{N}:f^{*}\geq\alpha\right\}=m\left\{x\in\mathbb{R}^{N}: |f|\geq\alpha\right\}\), where \(m\) is the Lebesgue measure._
We next refer to Berestycki and Lions [1], Appendix AIII for the main properties of the Schwarz symmetrization.
**Lemma 4.5**.: _[_1_]__(Basic properties of Schwarz symmetrization) Let \(f^{*}\) and \(g^{*}\) be the Schwarz symmetrization of functions \(|f|\) and \(|g|\), respectively. Then there hold:_
_(1) For every continuous function_ \(F\) _such that_ \(F(f)\) _is integrable, then_
\[\int_{\mathbb{R}^{N}}F(f)dx=\int_{\mathbb{R}^{N}}F(f^{*})dx.\]
_(2) Riesz inequality: Let_ \(f^{*}\) _and_ \(g^{*}\) _be in_ \(L^{2}(\mathbb{R}^{N})\)_, then_
\[\int_{\mathbb{R}^{N}}f(x)g(x)dx\leq\int_{\mathbb{R}^{N}}f^{*}(x)g^{*}(x)dx.\]
_(3) Let_ \(f\) _,_ \(g\) _be in_ \(L^{2}(\mathbb{R}^{N})\)_, then_ \(\|f^{*}-g^{*}\|_{L^{2}(\mathbb{R}^{N})}\leq\|f-g\|_{L^{2}(\mathbb{R}^{N})}\)_._
_(4) \(\int_{\mathbb{R}^{N}}|f^{*}|^{p}dx=\int_{\mathbb{R}^{N}}|f|^{p}dx\) for all \(1\leq p<\infty\) such that \(f\in L^{p}(\mathbb{R}^{N})\)._
_(5) Let_ \(f\) _be in_ \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\) _if_ \(N\geq 3\ \left(\text{respectively, in }H^{1}(\mathbb{R}^{N})\text{ for any }N\right)\)_, then_ \(f^{*}\) _belongs to_ \(\mathcal{D}^{1,2}(\mathbb{R}^{N})\ \left(\text{respectively, to }H^{1}(\mathbb{R}^{N})\right)\)_, and there holds_ \(\int_{\mathbb{R}^{N}}|\nabla f^{*}|^{2}dx\leq\int_{\mathbb{R}^{N}}|\nabla f |^{2}dx\)_._
_(6) Let_ \(f_{\lambda}(x)=\lambda^{\frac{N}{2}}f(\lambda x)\)_, then_ \((f_{\lambda})^{*}=(f^{*})_{\lambda}\)_._
From the definitions of \(\mathcal{S}(u,v)\), \(\mathcal{Q}(u,v)\) and \(\mathcal{M}\) formulated by (1.6), (1.7) and (1.8), respectively, we claim
**Lemma 4.6**.: _Let \(\omega_{2}=2\omega_{1}>0\), \(4<N<6\) and (1.10) hold true. Then \(\mathcal{S}(u,v)\) is bounded below on \(\mathcal{M}\)._
Proof.: If \((u,v)\in\mathcal{M}\), then from (1.6), (1.7) and (1.8) it follows that
\[\begin{split}\mathcal{S}(u,v)&=\frac{N-4}{2N} \left(\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{2 m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx\right)\\ &\quad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|u|^{2}dx+\frac{a_{1}} {2}\omega_{2}\int_{\mathbb{R}^{N}}|v|^{2}dx.\end{split} \tag{4.3}\]
Note that \(\omega_{2}=2\omega_{1}>0\) and \(N>4\), (4.3) yields that \(\mathcal{S}(u,v)>0\) for any \((u,v)\in\mathcal{M}\).
We then formulate a technical lemma.
**Lemma 4.7**.: _Let \(\omega_{2}=2\omega_{1}>0\), and \(4<N<6\), assuming that (1.10) holds true. For_
\[(u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\setminus\{(0,0) \}\quad\text{and}\quad\lambda>0,\]
_let_
\[u_{\lambda}(x)=\lambda^{\frac{N}{2}}u(\lambda x),\quad v_{\lambda}(x)=\lambda ^{\frac{N}{2}}v(\lambda x). \tag{4.4}\]
_Then there exists a unique \(\beta>0\) (depending on \((u,v)\)) such that \(Q(u_{\beta},v_{\beta})=0\), and_
\[\begin{cases}Q(u_{\lambda},v_{\lambda})>0&for\quad\lambda\in(0,\beta),\\ Q(u_{\lambda},v_{\lambda})<0&for\quad\lambda\in(\beta,\infty),\\ \mathcal{S}(u_{\beta},v_{\beta})\geq\mathcal{S}(u_{\lambda},v_{\lambda})&for \quad any\quad\lambda>0.\end{cases} \tag{4.5}\]
Proof.: From (1.6) and (1.7), direct computation gives
\[\begin{split}\mathcal{S}(u_{\lambda},v_{\lambda})&=\frac{ a_{2}}{2m_{1}}\lambda^{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{4m_{2}} \lambda^{2}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx\\ &\quad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|u|^{2}dx+\frac{a_{1}} {2}\omega_{2}\int_{\mathbb{R}^{N}}|v|^{2}dx+\lambda^{\frac{N}{2}}a_{1}a_{2} \int_{\mathbb{R}^{N}}vu^{2}dx,\end{split}\]
\[\begin{split} Q(u_{\lambda},v_{\lambda})&=\frac{a_{2}}{m_{1}}\lambda^{2}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{2m_{2}}\lambda^{2}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+\frac{N}{2}\lambda^{\frac{N}{2}}a_{1}a_{2}\int_{\mathbb{R}^{N}}vu^{2}dx\\ &=\lambda^{\frac{N}{2}}\left[\lambda^{2-\frac{N}{2}}\left(\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx\right)+\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^{N}}vu^{2}dx\right].\end{split}\]
Recalling \((S-6)\) in Lemma 4.2, there exists \(\beta>0\) such that \(Q(u_{\beta},v_{\beta})=0\), where \(\beta\) depends on \((u,v)\) with \(v<0\). In addition, there holds
\[Q(u_{\lambda},v_{\lambda})>0\quad for\quad\lambda\in(0,\beta),\quad Q(u_{ \lambda},v_{\lambda})<0\quad for\quad\lambda\in(\beta,+\infty).\]
Note that
\[\frac{d}{d\lambda}\mathcal{S}(u_{\lambda},v_{\lambda})=\lambda^{-1}Q(u_{ \lambda},v_{\lambda}),\]
and
\[Q(u_{\beta},v_{\beta})=0,\]
one knows for any \(\lambda>0\), \(\mathcal{S}(u_{\beta},v_{\beta})\geq\mathcal{S}(u_{\lambda},v_{\lambda})\).
This completes the proof of Lemma 4.7.
We further give a crucial conclusion.
**Lemma 4.8**.: _Solutions of (4.1) belong to \(\mathcal{M}\)._
Proof.: Let \((u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\setminus\{(0,0)\}\) be a solution of (4.1). Firstly, multiplying (4.1-a) by \(2a_{2}u\) and (4.1-b) by \(a_{1}v\), then integrating over \(\mathbb{R}^{N}\) with respect to \(x\) yields
\[\begin{split}\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^ {2}dx+&\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+ 2a_{2}\omega_{1}\int_{\mathbb{R}^{N}}u^{2}dx\\ +& a_{1}\omega_{2}\int_{\mathbb{R}^{N}}v^{2}dx+3a_{1 }a_{2}\int_{\mathbb{R}^{N}}vu^{2}dx=0.\end{split}\] (B-1)
On the other hand, multiplying (4.1-a) by \(2a_{2}x\cdot\nabla u\) and (4.1-b) by \(a_{1}x\cdot\nabla v\), then integrating over \(\mathbb{R}^{N}\), we obtain
\[\begin{split}\frac{N-2}{Nm_{1}}a_{2}\int_{\mathbb{R}^{N}}|\nabla u |^{2}dx+&\frac{N-2}{2Nm_{2}}a_{1}\int_{\mathbb{R}^{N}}|\nabla v| ^{2}dx+2a_{2}\omega_{1}\int_{\mathbb{R}^{N}}u^{2}dx\\ +& a_{1}\omega_{2}\int_{\mathbb{R}^{N}}v^{2}dx+2a_{1 }a_{2}\int_{\mathbb{R}^{N}}vu^{2}dx=0.\end{split}\] (B-2)
Now subtracting (B-2) from (B-1) leads to
\[\frac{2a_{2}}{Nm_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{Nm_{2 }}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+a_{1}a_{2}\int_{\mathbb{R}^{N}}vu^{2} dx=0,\]
which is equivalent to
\[\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{2m_{2}} \int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^{ N}}vu^{2}dx=0,\]
that is, \(Q(u,v)=0\).
This completes the proof of Lemma 4.8.
We are now in the position to show Proposition 4.3.
**Proof of Proposition 4.3**.
Proof.: Let \(\{(u_{n},v_{n}),n\in\mathbb{N}\}\subset\mathcal{M}\) be a minimizing sequence for (1.9), that is, \((u_{n},v_{n})\neq(0,0)\), and as \(n\to+\infty\),
\[\mathcal{S}(u_{n},v_{n})\to\inf_{(u,v)\in\mathcal{M}}\mathcal{S}(u,v), \tag{4.6}\]
as well as
\[Q(u_{n},v_{n})=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{2}dx+ \frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{2}dx+\frac{N}{2}a_{1 }a_{2}\int_{\mathbb{R}^{N}}v_{n}u_{n}^{2}dx=0, \tag{4.6}\]
which implies that \(v_{n}\) needs to satisfy \(v_{n}<0\). Thus, we can rewrite (4.6)\({}^{*}\) as
\[Q(u_{n},v_{n})=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{2}dx+ \frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{2}dx-\frac{N}{2}a_{1 }a_{2}\int_{\mathbb{R}^{N}}(-v_{n})u_{n}^{2}dx=0. \tag{4.6}\]
According to Definition 4.4 and Lemma 4.5, let \(u_{n}^{*}\) and \(v_{n}^{*}\) be the Schwarz spherical rearrangements of the functions \(|u_{n}|\) and \(|v_{n}|=-v_{n}\) (with \(v_{n}<0\)), respectively. Then \(Q(u_{n}^{*},-v_{n}^{*})\) can be written as
\[Q(u_{n}^{*},-v_{n}^{*})=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}^{*}|^{2}dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v_{n}^{*}|^{2}dx-\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^{N}}v_{n}^{*}{u_{n}^{*}}^{2}dx. \tag{4.7}\]
Referring to Definition 4.4 and Lemma 4.5 again, we obtain
\[\begin{split} Q(u_{n}^{*},-v_{n}^{*})&=\frac{a_{2}}{m_{ 1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}^{*}|^{2}dx+\frac{a_{1}}{2m_{2}}\int_{ \mathbb{R}^{N}}|\nabla v_{n}^{*}|^{2}dx-\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^ {N}}v_{n}^{*}{u_{n}^{*}}^{2}dx\\ &\leq\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{2}dx+ \frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{2}dx-\frac{N}{2}a_{1}a _{2}\int_{\mathbb{R}^{N}}(-v_{n}u_{n}^{2})dx\\ &\leq\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u_{n}|^{2}dx +\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v_{n}|^{2}dx+\frac{N}{2}a_{1 }a_{2}\int_{\mathbb{R}^{N}}v_{n}u_{n}^{2}dx\\ &=Q(u_{n},v_{n})=0.\end{split} \tag{4.7}\]
As given in Lemma 4.7, put
\[(u_{n})_{\lambda}=\lambda^{\frac{N}{2}}u_{n}(\lambda x),\quad(v_{n})_{\lambda }=\lambda^{\frac{N}{2}}v_{n}(\lambda x). \tag{4.7}\]
Then for the minimizing sequence \(\{(u_{n},v_{n}),n\in\mathbb{N}\}\subset\mathcal{M}\), we let
\[\xi_{n}=(u_{n}^{*})_{\beta_{n}},\quad\eta_{n}=(v_{n}^{*})_{\beta_{n}},\]
where \(0<\beta_{n}\leq 1\) is uniquely determined by
\[Q(\xi_{n},-\eta_{n})=Q\left((u_{n}^{*})_{\beta_{n}},-(v_{n}^{*})_{\beta_{n}} \right)=0. \tag{4.8}\]
On the other hand, by (6) of Lemma 4.5, there holds
\[\xi_{n}=(u_{n}^{*})_{\beta_{n}}=\left[(u_{n})_{\beta_{n}}\right]^{*},\quad\eta _{n}=(v_{n}^{*})_{\beta_{n}}=\left[(-v_{n})_{\beta_{n}}\right]^{*}.\]
Hence, note that (4.7)\({}^{b}\), \(Q(\xi_{n},-\eta_{n})=Q\left((u_{n}^{*})_{\beta_{n}},-(v_{n}^{*})_{\beta_{n}} \right)=Q\left(\left[(u_{n})_{\beta_{n}}\right]^{*},-\left[(-v_{n})_{\beta_{n} }\right]^{*}\right)\) can be formulated as
\[\begin{split} Q(\xi_{n},-\eta_{n})&=Q\left(\left[ (u_{n})_{\beta_{n}}\right]^{*},-\left[(-v_{n})_{\beta_{n}}\right]^{*}\right)\\ &=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2}dx+ \frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta_{n}|^{2}dx-\frac{N}{2}a_ {1}a_{2}\int_{\mathbb{R}^{N}}\eta_{n}{\xi_{n}}^{2}dx\\ &=0.\end{split} \tag{4.8}\]
Noting (1.10) and \(4<N<6\), by (4) and (5) of Lemma 4.5, as well as (4.3), in view of \(Q(u_{n},v_{n})=0\) and Lemma 4.7 for \(\beta_{n}\leq 1\), we obtain
\[\mathcal{S}(\xi_{n},-\eta_{n})\leq\mathcal{S}\left((u_{n})_{\beta_{n}},-(-v_{ n})_{\beta_{n}}\right)\leq\mathcal{S}(u_{n},v_{n}). \tag{4.9}\]
Thus (4.8) and (4.9) yield that
\[(\xi_{n},-\eta_{n})\in\mathcal{M}\quad\text{and}\quad\mathcal{S}(\xi_{n},- \eta_{n})\leq\mathcal{S}(u_{n},v_{n}). \tag{4.10}\]
This implies that \(\{(\xi_{n},-\eta_{n}),n\in\mathbb{N}\}\) itself is a minimizing sequence for (1.9).
Now for the minimizing sequence \(\{(\xi_{n},-\eta_{n}),n\in\mathbb{N}\}\) of (1.9), by Lemma 4.6 and (4.6), one gets that \(\|\xi_{n}\|_{H^{1}(\mathbb{R}^{N})}\) and \(\|\eta_{n}\|_{H^{1}(\mathbb{R}^{N})}\) are both bounded for all \(n\in\mathbb{N}\). Recalling that \((\xi_{n},\eta_{n})\) are sequences of spherically symmetric non-increasing functions, there exists a subsequence \(\{\xi_{n_{k}},k\in\mathbb{N}\}\subset\{\xi_{n},n\in\mathbb{N}\}\) such that as \(k\to+\infty\),
\[\begin{cases}\xi_{n_{k}}\to\xi_{\infty}\quad\text{weakly}\quad\text{in}\quad H^ {1}(\mathbb{R}^{N}),\\ \xi_{n_{k}}\to\xi_{\infty}\quad\text{a.e.}\quad\text{in}\quad\mathbb{R}^{N}. \end{cases} \tag{4.11}\]
On the other hand, for \(\left\{\eta_{n_{k}},k\in\mathbb{N}\right\}\subset\left\{\eta_{n},n\in\mathbb{N}\right\}\), there also exists a subsequence \(\left\{\eta_{n_{k_{m}}},m\in\mathbb{N}\right\}\subset\left\{\eta_{n_{k}},k\in \mathbb{N}\right\}\) such that as \(m\to+\infty\)
\[\begin{cases}\eta_{n_{k_{m}}}\rightharpoonup\eta_{\infty}&weakly\quad\text{ in}\quad H^{1}(\mathbb{R}^{N}),\\ \eta_{n_{k_{m}}}\to\eta_{\infty}&a.e.\quad\text{in}\quad\mathbb{R}^{N}.\end{cases} \tag{4.12}\]
(4.11) also yields that as \(m\to+\infty\),
\[\begin{cases}\xi_{n_{k_{m}}}\rightharpoonup\xi_{\infty}&weakly\quad\text{ in}\quad H^{1}(\mathbb{R}^{N}),\\ \xi_{n_{k_{m}}}\to\xi_{\infty}&a.e.\quad\text{in}\quad\mathbb{R}^{N}.\end{cases} \tag{4.13}\]
Thus, we can extract a subsequence \(\left\{\left(\xi_{n_{k_{m}}},\eta_{n_{k_{m}}}\right):\ m\in\mathbb{N}\right\}\) from \(\left\{\left(\xi_{n},\eta_{n}\right):n\in\mathbb{N}\right\}\) such that (4.12) and (4.13) hold. Without any confusion, we still label \(\left\{\left(\xi_{n_{k_{m}}},\eta_{n_{k_{m}}}\right):\ m\in\mathbb{N}\right\}\) with \(\left\{\left(\xi_{n},\eta_{n}\right):n\in\mathbb{N}\right\}\).
Noting that, for \(2<p<\dfrac{2N}{N-2}\), the embedding \(H_{r}^{1}(\mathbb{R}^{N})\hookrightarrow L_{r}^{p}(\mathbb{R}^{N})\) is compact, it follows from (4.12) and (4.13) that
\[\begin{cases}\xi_{n}\to\xi_{\infty}&strongly\quad\text{in}\quad L^{2q}( \mathbb{R}^{N}),\\ \eta_{n}\to\eta_{\infty}&strongly\quad\text{in}\quad L^{p}(\mathbb{R}^{N}), \end{cases} \tag{4.14}\]
where
\[\frac{1}{p}+\frac{1}{q}=1,\ 4<N<6,\ 2+\frac{4}{N}<p<\frac{2N}{N-2},\ \frac{2N}{N+2}<q< \frac{2N+4}{N+4}. \tag{4.14}\]
We then claim:
\[\frac{a_{2}}{m_{1}}\|\nabla\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}+\frac{a_{1}} {2m_{2}}\|\nabla\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}\ \text{is bounded away from}\ 0.\]
Indeed, recalling (4.7)\({}^{p}\) and (4.8)\({}^{*}\), \(Q(\xi_{n},-\eta_{n})=0\) implies that
\[\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2}dx+\frac{a_{1}}{2m_ {2}}\int_{\mathbb{R}^{N}}|\nabla\eta_{n}|^{2}dx-\frac{N}{2}a_{1}a_{2}\int_{ \mathbb{R}^{N}}\eta_{n}\xi_{n}^{2}dx=0. \tag{4.15}\]
Note that assumption (1.10), and \(4<N<6\), by Young's inequality and the Gagliardo-Nirenberg inequality, for \(\dfrac{1}{p}+\dfrac{1}{q}=1\), we have
\[\begin{split}\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^{N}}\eta_{n}\xi_{n}^{2}dx&\leq C\left(\int_{\mathbb{R}^{N}}\frac{|\eta_{n}|^{p}}{p}dx+\int_{\mathbb{R}^{N}}\frac{|\xi_{n}|^{2q}}{q}dx\right)\\ &\leq C\left(\|\eta_{n}\|_{L^{p}(\mathbb{R}^{N})}^{p}+\|\xi_{n}\|_{L^{2q}(\mathbb{R}^{N})}^{2q}\right)\\ &\leq C\left\|\xi_{n}\right\|_{L^{2}(\mathbb{R}^{N})}^{2q-\frac{N}{2}(2q-2)}\|\nabla\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}^{\frac{N}{2}(2q-2)}\\ &\quad+C\left\|\eta_{n}\right\|_{L^{2}(\mathbb{R}^{N})}^{p-\frac{N}{2}(p-2)}\|\nabla\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}^{\frac{N}{2}(p-2)}.\end{split} \tag{4.16}\]
Recalling (4.14)\({}^{*}\), we obtain
\[\begin{cases}\dfrac{N}{4}\left(p-2\right)>1,\quad\dfrac{4N}{N+2}<2q<\dfrac{4N +8}{N+4},\\ \dfrac{N}{4}\left(2q-2\right)>\dfrac{N}{4}\left(\dfrac{4N}{N+2}-2 \right)=\dfrac{N}{2}\dfrac{N-2}{N+2}>1.\end{cases} \tag{4.17}\]
Let \(\theta=max\left\{\frac{N}{4}(p-2),\frac{N}{4}(2q-2)\right\}>1\), recalling that \(\|\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}\leq C\), \(\|\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}\leq C\), (4.16) and (4.17) yield that
\[\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2}dx+\frac{a_{1}}{2m_{ 2}}\int_{\mathbb{R}^{N}}|\nabla\eta_{n}|^{2}dx\leq c\left(\frac{a_{2}}{m_{1}} \int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2}dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R }^{N}}|\nabla\eta_{n}|^{2}dx\right)^{\theta},\]
which implies that
\[\left(\frac{a_{2}}{m_{1}}\|\nabla\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}+\frac{a _{1}}{2m_{2}}\|\nabla\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}\right)^{\theta-1} \geq C>0,\]
that is,
\[\frac{a_{2}}{m_{1}}\|\nabla\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}+\frac{a_{1}} {2m_{2}}\|\nabla\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}\geq C>0. \tag{4.18}\]
This gives that \(\frac{a_{2}}{m_{1}}\|\nabla\xi_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}+\frac{a_{1}} {2m_{2}}\|\nabla\eta_{n}\|_{L^{2}(\mathbb{R}^{N})}^{2}\) is bounded away from \(0\). Thus, we claim:
\[(\xi_{\infty},\eta_{\infty})\neq(0,0). \tag{4.18}\]
Indeed, by contradiction, if \((\xi_{\infty},\eta_{\infty})\equiv(0,0)\), then for
\[\frac{1}{p}+\frac{1}{q}=1,\quad 2+\frac{4}{N}<p<\frac{2N}{N-2},\quad\frac{2N}{N+2}<q<\frac{2N+4}{N+4},\quad 4<N<6,\]
we obtain
\[\begin{cases}\xi_{n}\to 0\quad strongly\quad\text{in}\quad L^{2q}(\mathbb{R}^{N}), \\ \eta_{n}\to 0\quad strongly\quad\text{in}\quad L^{p}(\mathbb{R}^{N}).\end{cases}\]
Since by (4.15), as \(n\to+\infty\),
\[0<c \leq\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2} dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta_{n}|^{2}dx\] \[=\frac{N}{2}a_{1}a_{2}\int_{\mathbb{R}^{N}}\eta_{n}\xi_{n}^{2}dx\] \[\leq c\int_{\mathbb{R}^{N}}|\xi_{n}|^{2q}dx+c\int_{\mathbb{R}^{N} }|\eta_{n}|^{p}dx\to 0,\]
which implies
\[\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi_{n}|^{2}dx+\frac{a_{1}}{2m _{2}}\int_{\mathbb{R}^{N}}|\nabla\eta_{n}|^{2}dx\to 0\quad as\quad n\to+\infty.\]
This contradicts (4.18), and hence (4.18)\({}^{*}\) holds true: \((\xi_{\infty},\eta_{\infty})\neq(0,0)\).
Next, we let
\[\xi=(\xi_{\infty})_{\beta},\quad\eta=(\eta_{\infty})_{\beta}, \tag{4.19}\]
with \(\beta>0\) uniquely determined by the condition
\[Q(\xi,-\eta)=Q((\xi_{\infty})_{\beta},-(\eta_{\infty})_{\beta})=0.\]
Here, as (4.15), \(Q(\xi,-\eta)=Q((\xi_{\infty})_{\beta},-(\eta_{\infty})_{\beta})\) can be expressed by
\[Q(\xi,-\eta)=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a _{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx-\frac{N}{2}a_{1}a_{2}\int _{\mathbb{R}^{N}}\eta\xi^{2}dx. \tag{4.19}\]
Similarly, \(\mathcal{S}(\xi,-\eta)=\mathcal{S}((\xi_{\infty})_{\beta},-(\eta_{\infty})_{\beta})\) can be formulated as
\[\begin{split}\mathcal{S}(\xi,-\eta)&=\frac{a_{2}}{2m_ {1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R }^{N}}|\nabla\eta|^{2}dx\\ &\quad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|\xi|^{2}dx+\frac{a_{1 }}{2}\omega_{2}\int_{\mathbb{R}^{N}}|\eta|^{2}dx-a_{1}a_{2}\int_{\mathbb{R}^{N }}\eta\xi^{2}dx.\end{split} \tag{4.19}\]
Therefore we have
\[\begin{cases}(\xi_{n})_{\beta}\to\xi&strongly\quad in\quad L^{2q}( \mathbb{R}^{N}),\\ (\eta_{n})_{\beta}\to\eta&strongly\quad in\quad L^{p}( \mathbb{R}^{N}),\\ (\xi_{n})_{\beta}\to\xi,&(\eta_{n})_{\beta}\to\eta& weakly\quad in\quad H^{1}(\mathbb{R}^{N}),\\ (\xi_{n})_{\beta}\to\xi,&(\eta_{n})_{\beta}\to\eta&a.e. \quad in\quad\mathbb{R}^{N},\end{cases} \tag{4.20}\]
where \(p,q\) are determined by (4.14) and (4.14)\({}^{*}\).
Since by (4.8), \(\mathcal{Q}(\xi_{n},-\eta_{n})=0\), Lemma 4.7 then yields that
\[\mathcal{S}\left((\xi_{n})_{\beta},-(\eta_{n})_{\beta}\right)\leq\mathcal{S} \left(\xi_{n},-\eta_{n}\right) \tag{4.21}\]
(4.22) and (4.21) then imply that
\[\begin{split}\mathcal{S}(\xi,-\eta)&\leq\liminf_{n \to+\infty}\mathcal{S}\left[(\xi_{n})_{\beta},-(\eta_{n})_{\beta}\right]\\ &\leq\lim_{n\to+\infty}\mathcal{S}\left(\xi_{n},-\eta_{n}\right)= \inf_{(u,v)\in\mathcal{M}}\mathcal{S}(u,v).\end{split} \tag{4.23}\]
Note that \((\xi,\eta)\neq(0,0)\) and \(\mathcal{Q}(\xi,-\eta)=0\), there holds \((\xi,-\eta)\in\mathcal{M}\). Therefore, (4.22) yields that \((\xi,-\eta)\) solves the minimization problem:
\[\mathcal{S}(\xi,-\eta)=\min_{(u,v)\in\mathcal{M}}\mathcal{S}(u,v). \tag{4.24}\]
This completes the proof of Proposition 4.3.
So far, the proof of (1) in Theorem 4.1 is finished.
### Existence of ground state solution of (4.1)
--proof of (2) in Theorem 4.1
In this subsection, we prove (2) of Theorem 4.1.
Proof.: Since \((\xi,-\eta)\) is a solution of the minimization problem (4.24), there exists a Lagrange multiplier \(\Lambda\) such that
\[\delta_{\xi}\left[\mathcal{S}(\xi,-\eta)+\Lambda\mathcal{Q}(\xi,-\eta)\right] =0,\quad\delta_{-\eta}\left[\mathcal{S}(\xi,-\eta)+\Lambda\mathcal{Q}(\xi,- \eta)\right]=0, \tag{4.25}\]
where \(\delta_{u}T\) denotes the variation of \(T(u,v)\) with respect to \(u\). Noting the formula
\[\delta_{u}T(u,v)=\frac{\partial}{\partial\zeta}T\left(u+\zeta\,\delta u,v\right)\Big{|}_{\zeta=0},\]
one has
\[\begin{cases}\delta_{\xi}\left[\mathcal{S}(\xi,-\eta)+\Lambda\mathcal{Q}(\xi,-\eta) \right]=\left\langle\mathcal{A}(\xi,-\eta),\delta\xi\right\rangle,\\ \\ \delta_{-\eta}\left[\mathcal{S}(\xi,-\eta)+\Lambda\mathcal{Q}(\xi,-\eta) \right]=\left\langle\mathcal{B}(\xi,-\eta),\delta(-\eta)\right\rangle,\end{cases} \tag{4.25}\]
where \(\delta u\) denotes the variation of \(u\), \(\left\langle f,g\right\rangle=\int_{\mathbb{R}^{N}}fgdx\),
\[\begin{cases}\mathcal{A}(\xi,-\eta)=2(1+2\Lambda)\frac{a_{2}}{2m_{1}}(-\Delta \xi)+2a_{2}\omega_{1}\xi-2\left(1+\frac{N}{2}\Lambda\right)a_{1}a_{2}\xi\eta, \\ \\ \mathcal{B}(\xi,-\eta)=2(1+2\Lambda)\frac{a_{1}}{4m_{2}}\Delta\eta-a_{1} \omega_{2}\eta+2\left(1+\frac{N}{2}\Lambda\right)a_{1}a_{2}\xi^{2}.\end{cases} \tag{4.26}\]
Combining (4.24) with (4.25) and (4.26) yields
\[(1+2\Lambda)\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+a_{2} \omega_{1}\int_{\mathbb{R}^{N}}\xi^{2}dx-\left(1+\frac{N}{2}\Lambda\right)a_{ 1}a_{2}\int_{\mathbb{R}^{N}}\xi^{2}\eta dx=0, \tag{4.27}\]
\[(1+2\Lambda)\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx+\frac {a_{1}}{2}\omega_{2}\int_{\mathbb{R}^{N}}\eta^{2}dx-\frac{1}{2}\left(1+\frac{ N}{2}\Lambda\right)a_{1}a_{2}\int_{\mathbb{R}^{N}}\xi^{2}\eta dx=0. \tag{4.28}\]
On the other hand \(Q(\xi,-\eta)=0\) gives
\[2\left(\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1} }{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx\right)-\frac{N}{2}a_{1}a_{2} \int_{\mathbb{R}^{N}}\eta\xi^{2}dx=0, \tag{4.29}\]
which is equivalent to
\[\frac{6}{N}\left(\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+ \frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx\right)-\frac{3}{2} a_{1}a_{2}\int_{\mathbb{R}^{N}}\eta\xi^{2}dx=0. \tag{4.30}\]
Combining (4.27), (4.28), (4.29) and (4.30) together yields
\[\begin{split}&\left(1-\frac{6}{N}\right)\left(\frac{a_{2}}{2m_{1}} \int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{ N}}|\nabla\eta|^{2}dx\right)\\ &\qquad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}\xi^{2}dx+\frac{a_{1} }{2}\omega_{2}\int_{\mathbb{R}^{N}}\eta^{2}dx-\frac{N}{4}\Lambda a_{1}a_{2} \int_{\mathbb{R}^{N}}\eta\xi^{2}dx=0.\end{split} \tag{4.31}\]
Let
\[\xi^{\tau}(x)=\frac{1}{\tau^{2}}\xi\left(\frac{x}{\tau}\right),\quad\eta^{ \tau}(x)=\frac{1}{\tau^{2}}\eta\left(\frac{x}{\tau}\right),\quad\tau>0, \tag{4.32}\]
then
\[\begin{split} Q(\xi^{\tau},-\eta^{\tau})&=\tau^{N-6}\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\tau^{N-6}\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx-\frac{N}{2}\tau^{N-6}a_{1}a_{2}\int_{\mathbb{R}^{N}}\eta\xi^{2}dx\\ &=\tau^{N-6}\mathcal{Q}(\xi,-\eta).\end{split} \tag{4.33}\]
Since \((\xi,-\eta)\in\mathcal{M}\), (1.7) and (4.33) imply that
\[\forall\tau>0,\quad(\xi^{\tau},-\eta^{\tau})\in\mathcal{M}. \tag{4.34}\]
By Lemma 4.7, it follows that the function \(\tau\to\mathcal{S}\left(\xi^{\tau},-\eta^{\tau}\right)\) attains a minimum at \(\tau=1\), which yields that
\[\frac{d}{d\tau}\mathcal{S}\left(\xi^{\tau},-\eta^{\tau}\right)\Big{|}_{\tau=1}=0. \tag{4.34}\]
From (4.3) and \(\left(\xi^{\tau},-\eta^{\tau}\right)\in\mathcal{M}\), \(\mathcal{S}\left(\xi^{\tau},-\eta^{\tau}\right)\) has the following expression
\[\begin{split}\mathcal{S}\left(\xi^{\tau},-\eta^{\tau}\right)& =\left(1-\frac{4}{N}\right)\tau^{N-6}\left(\frac{a_{2}}{2m_{1}} \int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N }}|\nabla\eta|^{2}dx\right)\\ &\quad+\tau^{N-4}\left(a_{2}\omega_{1}\int_{\mathbb{R}^{N}}\xi^{ 2}dx+\frac{a_{1}}{2}\omega_{2}\int_{\mathbb{R}^{N}}\eta^{2}dx\right).\end{split} \tag{4.35}\]
(4.34) and (4.35) then yield
\[\begin{split}&\left(1-\frac{4}{N}\right)\left(N-6\right)\left( \frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1}}{4m_{2 }}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx\right)\\ &\quad+\left(N-4\right)\left(a_{2}\omega_{1}\int_{\mathbb{R}^{N}} \xi^{2}dx+\frac{a_{1}}{2}\omega_{2}\int_{\mathbb{R}^{N}}\eta^{2}dx\right)=0, \end{split}\]
that is,
\[\begin{split}\frac{N-6}{N}\left(\frac{a_{2}}{2m_{1}}\int_{ \mathbb{R}^{N}}|\nabla\xi|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}| \nabla\eta|^{2}dx\right)\\ +a_{2}\omega_{1}\int_{\mathbb{R}^{N}}\xi^{2}dx+\frac{a_{1}}{2} \omega_{2}\int_{\mathbb{R}^{N}}\eta^{2}dx=0.\end{split} \tag{4.36}\]
Recalling (4.31), (4.36) yields
\[\frac{N}{4}\Lambda a_{1}a_{2}\int_{\mathbb{R}^{N}}\eta\xi^{2}dx=0. \tag{4.37}\]
On the other hand, noting that \(\mathcal{Q}(\xi,-\eta)=0\) and \((\xi,\eta)\neq(0,0)\), (4.37) and (1.7) imply
\[\frac{1}{2}\Lambda\left(\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\xi|^{ 2}dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\eta|^{2}dx\right)=0, \tag{4.38}\]
which claims that \(\Lambda=0\). Hence, (4.25) and (4.26) then give
\[\begin{cases}-\frac{a_{2}}{2m_{1}}\Delta\xi+a_{2}\omega_{1}\xi=a_{1}a_{2}\xi\eta,\\ -\frac{a_{1}}{2m_{2}}\Delta\eta+a_{1}\omega_{2}\eta=a_{1}a_{2}\xi^{2},\end{cases}\]
which together with \(a_{1}>0,a_{2}>0\) is equivalent to
\[\begin{cases}-\frac{1}{2m_{1}}\Delta\xi+\omega_{1}\xi=-a_{1}\xi(-\eta),\\ -\frac{1}{2m_{2}}\Delta(-\eta)+\omega_{2}(-\eta)=-a_{2}\xi^{2}. \end{cases} \tag{4.38}\]
This implies that \((\xi,-\eta)\) is a solution of (4.1), as (4.1) is the Euler-Lagrange system of the functional \(S\) (see (1.6)). Recalling Lemma 4.8, there holds \((\xi,-\eta)\in\mathcal{M}\). Therefore, \((\xi,-\eta)\) is then a ground state solution of (4.1).
This completes the proof of (2) in Theorem 4.1.
### Exponential decay of the ground state solution of (4.1)
--proof of (3) in Theorem 4.1
In this subsection, we will show the exponential decay at infinity of \((\xi,-\eta)\), where \((\xi,-\eta)\) is a spherically symmetric solution of (4.1), under the conditions in Lemma 4.2. We first claim:
**Proposition 4.9**.: _Let Lemma 4.2 hold true, and let \((\xi,-\eta)\) be the solution of (4.1) obtained in (1) and (2) of Theorem 4.1. Then \((\xi,-\eta)\) satisfies the following exponential decay estimates:_
\[\left|D^{\alpha}\xi(x)\right|\leq ce^{-\delta|x|},\quad\left|D^{\beta}(-\eta( x))\right|\leq ce^{-\kappa|x|},\quad x\in\mathbb{R}^{N} \tag{4.39}\]
_for some \(c>0,\ \delta>0,\ \kappa>0,\ |\alpha|\leq 2\) and \(|\beta|\leq 2\)._
Before showing Proposition 4.9, we first investigate the regularity of the ground state solution \((\xi,-\eta)\) of the problem (4.1) by using \((S-4)^{*}\) and \((S-5)^{*}\) in Lemma 4.2. We then claim:
**Lemma 4.10**.: _Assuming that \(\omega_{2}=2\omega_{1},\ 4<N<6\), and (1.10) holds true, let \((\xi,-\eta)\) be the solution of (4.1) obtained in (1) and (2) of Theorem 4.1. Then \((\xi,-\eta)\in C^{2}(\mathbb{R}^{N})\times C^{2}(\mathbb{R}^{N})\)._
Proof.: Since \((\xi,\eta)\) is a spherically symmetric solution of (4.1), \((\xi,-\eta)\) satisfies for \(\omega_{2}=2\omega_{1}\),
\[\begin{cases}-\dfrac{1}{2m_{1}}\Delta\xi(x)=-\omega_{1}\xi(x)-a_{1}\xi(x)(- \eta(x))=g_{1}(\xi,-\eta),\\ -\dfrac{1}{2m_{2}}\Delta(-\eta(x))=-\omega_{2}(-\eta(x))-a_{2}\xi(x)^{2}=g_{2} (\xi,-\eta).\end{cases} \tag{4.40}\]
Note that
\[\left\{\begin{array}{l} g_{1}(\xi,-\eta)=\left(-\omega_{1}-a_{1}(- \eta(x))\right)\xi(x)\triangleq\dfrac{g_{1}(\xi,-\eta)}{\xi(x)}\cdot\xi(x), \\ \\ g_{2}(\xi,-\eta)=\left(-\omega_{2}-a_{2}\dfrac{\xi^{2}(x)}{-\eta(x)}\right) (-\eta(x))\triangleq\dfrac{g_{2}(\xi,-\eta)}{-\eta(x)}\cdot(-\eta(x)),\end{array}\right. \tag{4.41}\]
by \((S-4)^{*}\) and \((S-5)^{*}\) of Lemma 4.2 with \(-\eta\) playing the role of \(v\) in Lemma 4.2, one obtains for \(L=\dfrac{N+2}{N-2}\),
\[\left\{\begin{array}{l}\left|\dfrac{g_{1}(\xi,-\eta)}{\xi}\right|\leq c+|\xi|^{L-1}=c+|\xi|^{\frac{4}{N-2}},\qquad(4.42\text{-}a)\\ \\ \left|\dfrac{g_{2}(\xi,-\eta)}{-\eta}\right|\leq c+|\eta|^{L-1}=c+|\eta|^{\frac{4}{N-2}}.\qquad(4.42\text{-}b)\end{array}\right.\]
Since \((\xi,-\eta)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\), we have \((\xi,-\eta)\in L^{2^{*}}(\mathbb{R}^{N})\times L^{2^{*}}(\mathbb{R}^{N})\) for \(2^{*}=\dfrac{2N}{N-2}\). Noting that \(2^{*}=\dfrac{4}{N-2}\cdot\dfrac{N}{2}\), (4.42-a) and (4.42-b) yield that
\[\left(\dfrac{g_{1}(\xi,-\eta)}{\xi},\dfrac{g_{2}(\xi,-\eta)}{-\eta}\right)\in L ^{\frac{N}{2}}(\mathbb{R}^{N})\times L^{\frac{N}{2}}(\mathbb{R}^{N}).\]
Hence it is easy to obtain
\[\left(\Delta\xi(x),\Delta(-\eta(x))\right)\in\left(L^{\frac{N}{N-2}}(\mathbb{R }^{N}),L^{\frac{N}{N-2}}(\mathbb{R}^{N})\right).\]
Recalling that \((\xi,-\eta)\) is a spherically symmetric solution of (4.1), applying the Sobolev embedding theorem, we have \((\xi,-\eta)\in L^{p_{1}}_{loc}(\mathbb{R}^{N})\times L^{p_{2}}_{loc}(\mathbb{R}^{N})\) for \(1\leq p_{1},p_{2}<+\infty\). By (4.40), we also have \((\Delta\xi,\Delta(-\eta))\in L^{p_{1}}_{loc}(\mathbb{R}^{N})\times L^{p_{2}}_{loc}(\mathbb{R}^{N})\) for \(1\leq p_{1},p_{2}<+\infty\). In addition, a classical bootstrap argument (on balls \(B_{R}\)) then shows that \((\xi,-\eta)\in L^{\infty}_{loc}(\mathbb{R}^{N})\times L^{p_{2}}_{loc}(\mathbb{R}^{N})\) (see Lemma 2.6). Thus, by the \(L^{p}\)-estimate ([4]; [2, Theorem 9.11]), one obtains that \((\xi,-\eta)\in W^{2,p_{1}}_{loc}(\mathbb{R}^{N})\times W^{2,p_{2}}_{loc}(\mathbb{R}^{N})\), for \(1<p_{1},p_{2}<\infty\). Hence Lemma 2.7 (Rellich's compactness theorem) yields that
\[(\xi,-\eta)\in C^{1,\alpha_{1}}(\mathbb{R}^{N})\times C^{1,\alpha_{2}}( \mathbb{R}^{N})\]
for \(\alpha_{1}\in(0,1)\) and \(\alpha_{2}\in(0,1)\), with \(0<\alpha_{1}\leq 1-\dfrac{1}{p_{1}},0<\alpha_{2}\leq 1-\dfrac{1}{p_{2}}\).
Noting that \((\xi,-\eta)\) is a spherically symmetric solution of (4.1), by (4.40) one knows that for \(\omega_{2}=2\omega_{1}\), \((\xi,-\eta)\) satisfies the equations as follows:
\[\begin{cases}-\dfrac{1}{2m_{1}}\xi_{rr}-\dfrac{1}{2m_{1}}\dfrac{N-1}{r}\xi_{ r}=-\omega_{1}\xi+a_{1}\xi\eta,\\ -\dfrac{1}{2m_{2}}(-\eta_{rr})-\dfrac{1}{2m_{2}}\dfrac{N-1}{r}(-\eta_{r})=- \omega_{2}(-\eta)-a_{2}\xi^{2},\end{cases} \tag{4.42}\]
which is equivalent to that \((\xi,-\eta)\) satisfies
\[\begin{cases}-\dfrac{1}{2m_{1}}\xi_{rr}-\dfrac{1}{2m_{1}}\dfrac{N-1}{r}\xi_{ r}=-\omega_{1}\xi+a_{1}\xi\eta\triangleq g_{1}^{*}(\xi,\eta),\\ -\dfrac{1}{2m_{2}}\eta_{rr}-\dfrac{1}{2m_{2}}\dfrac{N-1}{r}\eta_{r}=- \omega_{2}\eta+a_{2}\xi^{2}\triangleq g_{2}^{*}(\xi,\eta),\end{cases} \tag{4.42}\]
where we denote
\[g_{1}^{*}(\xi,\eta)=-\omega_{1}\xi+a_{1}\xi\eta,\quad g_{2}^{*}(\xi,\eta)=- \omega_{2}\eta+a_{2}\xi^{2}. \tag{4.42}\]
Hence \(\xi_{rr}(r)\) and \(\eta_{rr}(r)\) are continuous except possibly at \(r=0\). We then claim: **\(\xi_{rr}(r)\) and \(\eta_{rr}(r)\) are also continuous at \(r=0\)**.
**Lemma 4.11**.: \(\xi_{rr}(r)\) _and \(\eta_{rr}(r)\) are continuous at \(r=0\)._
Proof.: Let
\[Q_{1}(r)=g_{1}^{*}(\xi(r),\eta(r))=-\omega_{1}\xi(r)+a_{1}\xi(r)\eta(r),\]
\[Q_{2}(r)=g_{2}^{*}(\xi(r),\eta(r))=-\omega_{2}\eta(r)+a_{2}\xi^{2}(r),\]
where \(Q_{1}(r)\) and \(Q_{2}(r)\) are continuous on \([0,\infty)\). Rewriting (4.42) as
\[\begin{cases}&-\dfrac{1}{2m_{1}}\dfrac{d}{dr}\left(r^{N-1}\xi_{r}\right)=r^{N-1}Q_{1}(r),\\ &\\ &-\dfrac{1}{2m_{2}}\dfrac{d}{dr}\left(r^{N-1}\eta_{r}\right)=r^{N-1}Q_{2}(r),\end{cases} \tag{4.43}\]
and then integrating from \(0\) to \(r\) yield
\[\begin{cases}&r^{N-1}\xi_{r}=-2m_{1}\int_{0}^{r}s^{N-1}Q_{1}(s)ds,\\ &\\ &r^{N-1}\eta_{r}=-2m_{2}\int_{0}^{r}s^{N-1}Q_{2}(s)ds,\end{cases} \tag{4.44}\]
letting \(s=rt\), (4.44) becomes
\[\left\{\begin{array}{c}\xi_{r}=-2m_{1}r\int_{0}^{1}t^{N-1}Q_{1}(rt)dt,\\ \\ \eta_{r}=-2m_{2}r\int_{0}^{1}t^{N-1}Q_{2}(rt)dt.\end{array}\right. \tag{4.45}\]
Note that
\[\left\{\begin{array}{c}\lim_{r\to 0}\int_{0}^{1}t^{N-1}Q_{1}(rt)dt= \frac{Q_{1}(0)}{N},\\ \\ \lim_{r\to 0}\int_{0}^{1}t^{N-1}Q_{2}(rt)dt= \frac{Q_{2}(0)}{N},\end{array}\right. \tag{4.45}\]
we obtain that \(\xi_{rr}(0)\) and \(\eta_{rr}(0)\) exist such that
\[\xi_{rr}(0)=-\frac{2m_{1}Q_{1}(0)}{N},\quad\eta_{rr}(0)=-\frac{2m_{2}Q_{2}(0)} {N}. \tag{4.46}\]
**Indeed, direct calculation gives**
\[\xi_{rr}(0) =\lim_{r\to 0}\frac{\xi_{r}(r)-\xi_{r}(0)}{r}=\lim_{r\to 0}\frac{-2m_{1}r\int_{0}^{1}t^{N-1}Q_{1}(rt)dt}{r}\] \[=\lim_{r\to 0}(-2m_{1})\int_{0}^{1}t^{N-1}Q_{1}(rt)dt=-\frac{2m_{1}Q_{1}(0)}{N},\]
**and**
\[\eta_{rr}(0) =\lim_{r\to 0}\frac{\eta_{r}(r)-\eta_{r}(0)}{r}=\lim_{r\to 0}\frac{-2m_{2}r\int_{0}^{1}t^{N-1}Q_{2}(rt)dt}{r}\] \[=\lim_{r\to 0}(-2m_{2})\int_{0}^{1}t^{N-1}Q_{2}(rt)dt=-\frac{2m_{2}Q_{2}(0)}{N}.\]
On the other hand, from (4.42), (4.45) and (4.45)\({}^{*}\) it follows that
\[\left\{\begin{array}{c}\lim_{r\to 0}\xi_{rr}(r)=\lim_{r\to 0} \left(-\frac{N-1}{r}\xi_{r}(r)-2m_{1}Q_{1}(r)\right)\\ \\ =-2m_{1}Q_{1}(0)+2m_{1}\frac{N-1}{N}Q_{1}(0)=-\frac{2m_{1}}{N}Q_{1}(0),\\ \\ \lim_{r\to 0}\eta_{rr}(r)=\lim_{r\to 0}\left(-\frac{N-1}{r}\eta_{r}(r)-2m_{2}Q _{2}(r)\right)=-\frac{2m_{2}}{N}Q_{2}(0).\end{array}\right. \tag{4.47}\]
Combining (4.46) with (4.47) implies that \(\xi_{rr}\) and \(\eta_{rr}\) are continuous at \(r=0\).
The proof of Lemma 4.11 is then completed.
So far, we have obtained that \(\xi_{rr}\) and \(\eta_{rr}\) are continuous for any \(r\geq 0\), and hence Lemma 4.10 holds.
We next claim \(\xi>0\) and \(\eta>0\) on \(\mathbb{R}^{N}\).
Indeed, since \(\xi\) is a decreasing function of \(r\), we have \(\frac{d\xi}{dr}<0\) for any \(r>0\). Note that
\(\xi(x)\in H^{1}(\mathbb{R}^{N})\), there holds that \(\lim\limits_{r\to\infty}\xi(r)=0\). Hence by the maximum principle, one obtains that \(\xi>0\) on \(\mathbb{R}^{N}\). On the other hand, in view of the proof of (1) and (2) in Theorem 4.1, we obtain \(\eta\) is a decreasing function of \(r\), and hence \(\frac{d\eta}{dr}<0\) for any \(r>0\). From \(\eta(x)\in H^{1}(\mathbb{R}^{N})\), it follows that \(\lim\limits_{r\to\infty}\eta(r)=0\). Therefore, by the maximum principle, we get that \(\eta>0\) on \(\mathbb{R}^{N}\). All in all, there hold \(\xi(r)>0\) and \(\eta(r)>0\) on \(\mathbb{R}^{N}\).
We are now in the position to **prove Proposition 4.9**, which will be divided into three steps.
step1) Proof of the exponential decay of \((\xi,\eta)\);
step2) Verification of the exponential decay of \((\xi_{r},\eta_{r})\);
step3) Justification of the exponential decay of \((\xi_{rr},\eta_{rr})\), and thus of \((|{\mathcal{D}}^{\alpha}\xi(x)|,|{\mathcal{D}}^{\alpha}\eta(x)|)\) for \(|\alpha|\leq 2\).
Proof.: **step1) Proof of the exponential decay of \((\xi,\eta)\)**.
By Lemma 4.10, \((\xi,\eta)\) is of class \(C^{2}(\mathbb{R}^{N})\times C^{2}(\mathbb{R}^{N})\); accordingly it satisfies (4.42). Set
\[\xi^{*}=r^{\frac{N-1}{2}}\xi,\quad\eta^{*}=r^{\frac{N-1}{2}}\eta. \tag{4.48}\]
Direct calculation gives
\[\left\{\begin{array}{l}\xi=r^{-\frac{N-1}{2}}\xi^{*},\quad\eta=r^{-\frac{N-1}{2}}\eta^{*},\\ \\ \xi_{r}=\frac{1-N}{2}r^{-\frac{N+1}{2}}\xi^{*}+r^{\frac{1-N}{2}}\xi^{*}_{r},\\ \\ \xi_{rr}=r^{\frac{1-N}{2}}\xi^{*}_{rr}+(1-N)r^{-\frac{N+1}{2}}\xi^{*}_{r}+\frac{N^{2}-1}{4}r^{\frac{-N-3}{2}}\xi^{*},\\ \\ \eta_{r}=\frac{1-N}{2}r^{-\frac{N+1}{2}}\eta^{*}+r^{\frac{1-N}{2}}\eta^{*}_{r},\\ \\ \eta_{rr}=r^{\frac{1-N}{2}}\eta^{*}_{rr}+(1-N)r^{-\frac{N+1}{2}}\eta^{*}_{r}+\frac{N^{2}-1}{4}r^{\frac{-N-3}{2}}\eta^{*},\end{array}\right. \tag{4.49}\]
which together with (4.42) implies that \((\xi^{*},\eta^{*})\) satisfies
\[\left\{\begin{array}{l}\xi^{*}_{rr}=\left[f_{1}(r)+\frac{a}{r^{2}}\right] \xi^{*},\\ \\ \eta^{*}_{rr}=\left[f_{2}(r)+\frac{a}{r^{2}}\right]\eta^{*},\end{array}\right. \tag{4.50}\]
where
\[f_{1}(r)=\frac{-2m_{1}g_{1}^{*}(\xi(r),\eta(r))}{\xi(r)},\quad f_{2}(r)=\frac {-2m_{2}g_{2}^{*}(\xi(r),\eta(r))}{\eta(r)},\quad a=\frac{(N-1)(N-3)}{4}. \tag{4.50}\]
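Indeed, substituting (4.49) into (4.42) and multiplying the resulting identity by \(r^{\frac{N-1}{2}}\), the first-order derivative terms cancel and, for the \(\xi\)-equation,

\[\xi^{*}_{rr}-\frac{(N-1)(N-3)}{4}\,\frac{\xi^{*}}{r^{2}}=-2m_{1}\,g_{1}^{*}(\xi(r),\eta(r))\,r^{\frac{N-1}{2}}=f_{1}(r)\,\xi^{*},\]

which is the first equation of (4.50); the equation for \(\eta^{*}\) is obtained in the same way.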
Recalling the radial Lemma 2.3, \(\xi(r)\to 0\), \(\eta(r)\to 0\) as \(r\to\infty\). Noting that (4.40), (4.42) and (4.42)\({}^{a}\), there holds
\[g_{1}^{*}\left(\xi(r),\eta(r)\right)=g_{1}\left(\xi(r),-\eta(r)\right),\quad g _{2}^{*}\left(\xi(r),\eta(r)\right)=-g_{2}\left(\xi(r),-\eta(r)\right), \tag{4.50}\]
this yields for \(\xi>0\) and \(\eta>0\) that
\[\frac{g_{2}^{*}\left(\xi(r),\eta(r)\right)}{\eta(r)}=-\frac{g_{2}\left(\xi(r),- \eta(r)\right)}{\eta(r)}=\frac{g_{2}\left(\xi(r),-\eta(r)\right)}{-\eta(r)}.\]
Therefore from \((S-2)\) and \((S-3)\) of Lemma 4.2 and \((4.50)^{*}\), it follows that for \(r\geq r_{0}\) large enough
\[\left\{\begin{aligned} f_{1}(r)+\frac{a}{r^{2}}\geq m_{1}\omega_{1},\\ &\\ f_{2}(r)+\frac{a}{r^{2}}\geq m_{2}\omega_{2}.\end{aligned}\right.\]
Furthermore, let
\[U={\xi^{*}}^{2},\ \ V={\eta^{*}}^{2},\]
then \((U,V)\) verifies
\[\left\{\begin{aligned} &\frac{1}{2}U_{rr}={\xi_{r}^{*}}^{2}+\left[f_{ 1}(r)+\frac{a}{r^{2}}\right]U,\\ &\frac{1}{2}V_{rr}={\eta_{r}^{*}}^{2}+\left[f_{2}(r)+\frac{a}{r^{ 2}}\right]V,\end{aligned}\right.\]
where we have used the fact that
\[{\xi_{r}^{*}}^{2}=\frac{1}{4}\frac{U_{r}^{2}}{U}\quad\text{and}\quad{\eta_{r} ^{*}}^{2}=\frac{1}{4}\frac{V_{r}^{2}}{V}.\]
Thus for \(r\geq r_{0}\) one has
\[\left\{\begin{aligned} & U\geq 0,\ V\geq 0,\\ & U_{rr}\geq 2m_{1}\omega_{1}U,\\ & V_{rr}\geq 2m_{2}\omega_{2}V.\end{aligned}\right. \tag{4.53}\]
Now let
\[\left\{\begin{aligned} & U^{*}=e^{-\sqrt{2m_{1}\omega_{1}}r}\left(U_{r}+\sqrt{2m_{1}\omega_{1}}U\right),\\ & V^{*}=e^{-\sqrt{2m_{2}\omega_{2}}r}\left(V_{r}+\sqrt{2m_{2}\omega_{2}}V\right),\end{aligned}\right. \tag{4.54}\]
we have by (4.53)
\[\left\{\begin{aligned} & U_{r}^{*}=e^{-\sqrt{2m_{1}\omega_{1}}r}\left(U_{ rr}-2m_{1}\omega_{1}U\right)\geq 0,\\ & V_{r}^{*}=e^{-\sqrt{2m_{2}\omega_{2}}r}\left(V_{rr}-2m_{2} \omega_{2}V\right)\geq 0.\end{aligned}\right.\]
Therefore, \(U^{*}\) and \(V^{*}\) are nondecreasing on \((r_{0},+\infty)\).
Furthermore, we claim:
**Conclusion 4.12**.: _There holds_
\[U^{*}(r)\leq 0,\quad V^{*}(r)\leq 0\quad for\quad r\geq r_{1}>r_{0}. \tag{4.56}\]
**Proof.** Indeed, if there exists \(r_{1}>r_{0}\) such that \(U^{*}(r_{1})>0\), then \(U^{*}(r)\geq U^{*}(r_{1})>0\) for all \(r\geq r_{1}\). This together with (4.54) implies that
\[U_{r}+\sqrt{2m_{1}\omega_{1}}U=U^{*}(r)e^{\sqrt{2m_{1}\omega_{1}}r}\geq U^{*}( r_{1})e^{\sqrt{2m_{1}\omega_{1}}r},\]
whence \(U_{r}+\sqrt{2m_{1}\omega_{1}}U\) is not integrable on \((r_{1},+\infty)\). Recalling (4.49), \({\xi^{*}}^{2}\) and \({\xi^{*}}{\xi^{*}_{r}}\) are integrable near \(\infty\) since \(\xi\in H^{1}({\mathbb{R}}^{N})\); this together with \(U={\xi^{*}}^{2}\) and \(U_{r}=2{\xi^{*}}{\xi^{*}_{r}}\) (see (4.51)\({}^{a}\)) yields that \(U_{r}+U\) is integrable, which is a contradiction. Hence \(U^{*}(r)\leq 0\) for \(r\geq r_{1}\). A similar argument yields that \(V^{*}(r)\leq 0\) for \(r\geq r_{1}\).
Conclusion 4.12 then holds true.
Thus for \(r\geq r_{1}\), (4.56) and (4.54) lead to
\[\left\{\begin{array}{rl}&\frac{d\left(e^{\sqrt{2m_{1}\omega_{1}}r}U\right)}{dr}=e^{2\sqrt{2m_{1}\omega_{1}}r}U^{*}\leq 0,\\ &\frac{d\left(e^{\sqrt{2m_{2}\omega_{2}}r}V\right)}{dr}=e^{2\sqrt{2m_{2}\omega_{2}}r}V^{*}\leq 0.\end{array}\right. \tag{4.57}\]
This implies that
\[\left\{\begin{array}{rl}&e^{\sqrt{2m_{1}\omega_{1}}r}U(r)\leq e^{\sqrt{2m_{ 1}\omega_{1}}r_{1}}U(r_{1}),\\ &\\ &e^{\sqrt{2m_{2}\omega_{2}}r}V(r)\leq e^{\sqrt{2m_{2}\omega_{2}}r_{1}}V(r_{1}),\end{array}\right.\]
that is,
\[U(r)\leq c_{1}(r_{1})e^{-\sqrt{2m_{1}\omega_{1}}r}\quad\text{and}\quad V(r) \leq c_{2}(r_{1})e^{-\sqrt{2m_{2}\omega_{2}}r},\]
where \(c_{1}(r_{1})\triangleq e^{\sqrt{2m_{1}\omega_{1}}r_{1}}U(r_{1})\) and \(c_{2}(r_{1})\triangleq e^{\sqrt{2m_{2}\omega_{2}}r_{1}}V(r_{1})\).
Recalling that
\[U(r)={\xi^{*}}^{2}(r)=\left(r^{\frac{N-1}{2}}\xi\right)^{2}=r^{N-1}\xi^{2}, \quad V(r)={\eta^{*}}^{2}(r)=\left(r^{\frac{N-1}{2}}\eta\right)^{2}=r^{N-1} \eta^{2},\]
we then obtain for \(r\geq r_{1}\) and for two positive constants \(c_{1}^{*}(r_{1})\geq\sqrt{c_{1}(r_{1})}\) and \(c_{2}^{*}(r_{1})\geq\sqrt{c_{2}(r_{1})}\):
\[\left\{\begin{array}{rl}&|\xi(r)|\leq c_{1}^{*}(r_{1})\,r^{-\frac{N-1}{2}}e^{-\frac{\sqrt{2m_{1}\omega_{1}}}{2}r},\qquad(4.58\text{-}1)\\ &\\ &|\eta(r)|\leq c_{2}^{*}(r_{1})\,r^{-\frac{N-1}{2}}e^{-\frac{\sqrt{2m_{2}\omega_{2}}}{2}r}.\qquad(4.58\text{-}2)\end{array}\right.\]

In particular, \(\xi\) and \(\eta\) decay exponentially at infinity.

**step2) Verification of the exponential decay of \((\xi_{r},\eta_{r})\)**.

In view of (4.43), with \(Q_{1}(r)=g_{1}^{*}(\xi(r),\eta(r))\) and \(Q_{2}(r)=g_{2}^{*}(\xi(r),\eta(r))\), the pair \((\xi_{r},\eta_{r})\) satisfies

\[\left\{\begin{array}{rl}&\dfrac{d}{dr}\left(r^{N-1}\xi_{r}\right)=-2m_{1}r^{N-1}g_{1}^{*}(\xi(r),\eta(r)),\qquad(4.59\text{-}1)\\ &\\ &\dfrac{d}{dr}\left(r^{N-1}\eta_{r}\right)=-2m_{2}r^{N-1}g_{2}^{*}(\xi(r),\eta(r)).\qquad(4.59\text{-}2)\end{array}\right.\]
Integrating \((4.59-1)\) on \((r,R)\), applying (4.58) and letting \((r,R)\rightarrow(+\infty,+\infty)\) shows that: \(r^{N-1}\xi_{r}\) has a limit as \(r\rightarrow+\infty\); this limit can only be zero from \((4.58-1)\) and (4.60).
Indeed, own to \((\xi,\eta)\in H^{1}(\mathbb{R}^{n})\times H^{1}(\mathbb{R}^{n})\), by \((4.59-1)\) and (4.60) one has
\[\lim_{r\rightarrow+\infty}r^{N-1}\xi_{r} =\lim_{r\rightarrow+\infty}\int_{r}^{+\infty}2s^{N-1}m_{1}g_{1}^ {*}(\xi(s),\eta(s))ds\] \[\leq\lim_{r\rightarrow+\infty}\int_{r}^{+\infty}2m_{1}s^{N-1}d_ {1}|\xi(s)|ds\] \[\leq\lim_{r\rightarrow+\infty}\int_{r}^{+\infty}2m_{1}s^{N-1}d_ {1}c_{1}(r_{1})s^{-\frac{N-1}{2}}e^{-\sqrt{2m_{1}\omega_{1}}s}ds\] \[=0.\]
Furthermore, integrating \((4.59-1)\) on \((r,+\infty)\) gives
\[-r^{N-1}\xi_{r}=-2\int_{r}^{+\infty}s^{N-1}m_{1}g_{1}^{*}(\xi(s),\eta(s))ds,\]
that is,
\[r^{N-1}\xi_{r} =\int_{r}^{+\infty}2s^{N-1}m_{1}g_{1}^{*}(\xi(s),\eta(s))ds\] \[\leq\int_{r}^{+\infty}2m_{1}s^{N-1}d_{1}|\xi(s)|ds\] \[\leq\int_{r}^{+\infty}2m_{1}d_{1}c_{1}^{*}(r_{1})s^{N-1}s^{-\frac{N-1}{2}}e^{-\frac{\sqrt{2m_{1}\omega_{1}}s}{2}}ds.\]
This implies that
\[|\xi_{r}(r)|\leq cr^{-\theta_{1}}e^{-\frac{\sqrt{2m_{1}\omega_{1}}}{2}r}\text { \emph{for some $\theta_{1}>0$,}}\]
and hence \(\xi_{r}(r)\) has an exponential decay at infinity. Similarly, combining \((4.59-2)\) with (4.60) yields
\[|\eta_{r}(r)|\leq cr^{-\theta_{2}}e^{-\frac{\sqrt{2m_{2}\omega_{2}}}{2}r}\text { \emph{for some $\theta_{2}>0$,}}\]
and thus \(\eta_{r}\) has also an exponential decay at infinity.
**step3) Justification of the exponential decay of \(\xi_{rr}\) and \(\eta_{rr}\), and thus of \((|D^{\alpha}\xi(x)|,|D^{\alpha}\eta(x)|)\) for \(|\alpha|\leq 2\)**.
Applying the established results that \(\xi,\xi_{r},\eta\) and \(\eta_{r}\) all have exponential decays, by (4.60) and by the equivalent form of (4.42):
\[\left\{\begin{array}{c}\xi_{rr}+\frac{N-1}{r}\xi_{r}=-2m_{1}g_{1 }^{*}(\xi,\eta),\\ \\ \eta_{rr}+\frac{N-1}{r}\eta_{r}=-2m_{2}g_{2}^{*}(\xi,\eta),\end{array}\right. \tag{4.61}\]
we can easily obtain the exponential decay of \(\xi_{rr}\) and \(\eta_{rr}\).
So far, we have established the exponential decay of \(\xi,\eta,\xi_{r},\eta_{r},\xi_{rr}\) and \(\eta_{rr}\), which implies the exponential decays of \(D^{\alpha}\xi(x)\) and \(D^{\alpha}\eta(x)\) for \(|\alpha|\leq 2\).
Step 1), Step 2) and Step 3) then complete the proof of Proposition 4.9.
Finally, combining these results in **Subsection 4.1, 4.2** and **4.3** together finishes the proof of Theorem 4.1.
## 5 Instability of Standing Waves under Mass Resonance
In Section 4 we have established the existence of the ground state solution \((\xi,-\eta)\) of (1.5) (or (4.1)). On the other hand, the pair
\[\left(\phi(t,x),\psi(t,x)\right)=\left(e^{i\omega_{1}t}\xi(x),e^{i\omega_{2}t}(- \eta(x))\right)\quad with\quad\omega_{2}=2\omega_{1},\]
is a standing wave solution of (1.1) whose profile is the ground state solution \((\xi(x),-\eta(x))\) of (1.5) (or (4.1)). Hence Theorem 4.1 indeed gives the existence of standing waves of (1.1) associated with the ground state of (1.5) (or (4.1)). In this section, we are concerned with the characterization of the standing waves of (1.1) with minimal action \(\mathcal{S}(\xi,-\eta)\) and establish their instability.
We then claim:
**Theorem 5.1**.: _Let \(\omega_{2}=2\omega_{1}\), \(m_{2}=2m_{1}\), \(4<N<6\) and (1.10) hold true. If \((\xi,-\eta)\in\mathcal{M}\) is given in Theorem 4.1, then for arbitrary \(\varepsilon>0\), there exists \((\phi_{0},\psi_{0})\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\) with_
\[\|\phi_{0}-\xi\|_{H^{1}(\mathbb{R}^{N})}<\varepsilon,\quad\|\psi_{0}-(-\eta) \|_{H^{1}(\mathbb{R}^{N})}<\varepsilon\]
_such that the solution \((\phi,\psi)\) of (1.1) with this initial data \((\phi_{0},\psi_{0})\) has the following property: For some finite time \(T<+\infty\), \((\phi,\psi)\) exists on \([0,T)\),_
\[(\phi,\psi)\in C\left([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N}) \right),\]
_and_
\[\lim_{t\to T}\left(\|\phi\|_{H^{1}(\mathbb{R}^{N})}+\|\psi\|_{H^{1}( \mathbb{R}^{N})}\right)=+\infty.\]
For the local well-posedness of solutions to the Cauchy problem (1.1)-(2.1), using a similar argument to that proposed in Cazenave [3] and Ginibre-Velo [5], we can obtain:
**Proposition 5.2**.: _For any \((\phi_{0},\psi_{0})\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\), there exists a unique solution \((\phi,\psi)\) of (1.1)-(2.1) defined on a maximal time interval \([0,T)\), where \(T=T_{max}(\phi_{0},\psi_{0})\) and_
\[(\phi,\psi)\in C\left([0,T);H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\right)\]
_and either \(T=+\infty\) or \(T<+\infty\) and_
\[\lim_{t\to T}\left(\|\phi(t,\cdot)\|_{H^{1}(\mathbb{R}^{N})}^{2}+\|\psi(t, \cdot)\|_{H^{1}(\mathbb{R}^{N})}^{2}\right)=+\infty.\]
**Proposition 5.3**.: _Let \(\omega_{2}=2\omega_{1}\), \(m_{2}=2m_{1}\), \(4<N<6\) and (1.10) hold true. Then for all \(t\in[0,T)\), the solution \((\phi(t),\psi(t))=(\phi(t,\cdot),\psi(t,\cdot))\) of the Cauchy problem (1.1) with the initial data \((\phi_{0},\psi_{0})\) admits the following two conservation laws:_
\[a_{2}\int_{\mathbb{R}^{N}}|\phi|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi|^{2}dx =a_{2}\int_{\mathbb{R}^{N}}|\phi_{0}|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi_{ 0}|^{2}dx, \tag{5.1}\]
\[Re\mathcal{S}(\bar{\phi}(t),\psi(t))=Re\mathcal{S}(\bar{\phi}_{0},\psi_{0}). \tag{5.2}\]
_Put_
\[G(t)=\int_{\mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi|^{2}+a_{1}|\psi|^{2}\right)dx, \tag{5.3}\]
_then_
\[\frac{d^{2}}{dt^{2}}G(t)=\frac{2}{m_{1}}ReQ(\bar{\phi},\psi), \tag{5.4}\]
_where \(\mathcal{S}(\phi,\psi)\) and \(Q(\phi,\psi)\) are defined by (1.6) and (1.7), respectively._
Proof.: (5.1) is just the mass conservation equality (2.2). We then verify (5.2). From (1.6) it follows that
\[\begin{split} Re\mathcal{S}\left(\bar{\phi}(t),\psi(t)\right)& =\frac{a_{2}}{2m_{1}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+ \frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx\\ &\quad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|\phi|^{2}dx+\frac{a_{ 1}}{2}\omega_{2}\int_{\mathbb{R}^{N}}|\psi|^{2}dx\\ &\quad+a_{1}a_{2}Re\int_{\mathbb{R}^{N}}\psi\overline{\phi}^{2}dx.\end{split} \tag{5.5}\]
Recalling the energy conservation (2.3) and (2.3a), for \(\omega_{2}=2\omega_{1}\), (5.5) gives
\[\begin{split} Re\mathcal{S}\left(\bar{\phi}(t),\psi(t)\right)& =E(\phi(t),\psi(t))+\omega_{1}\left[a_{2}\int_{\mathbb{R}^{N}}| \phi|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi|^{2}dx\right]\\ &=E(\phi_{0},\psi_{0})+\omega_{1}\left[a_{2}\int_{\mathbb{R}^{N}} |\phi_{0}|^{2}dx+a_{1}\int_{\mathbb{R}^{N}}|\psi_{0}|^{2}dx\right]\\ &=Re\mathcal{S}(\bar{\phi}_{0},\psi_{0}).\end{split}\]
This proves that (5.2) holds true. In addition, by Lemma 2.2, (1.7) and (5.3), one obtains that for \(m_{2}=2m_{1}\)
\[\begin{split}\frac{d^{2}}{dt^{2}}G(t)&=\frac{2a_{ 2}}{m_{1}^{2}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{a_{1}}{2m_{1}^{2}} \int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx+\frac{a_{1}a_{2}N}{m_{1}}Re\int_{ \mathbb{R}^{N}}\psi\overline{\phi}^{2}dx\\ &=\frac{2}{m_{1}}\left[\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}| \nabla\phi|^{2}dx+\frac{a_{1}}{4m_{1}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx+ \frac{N}{2}a_{1}a_{2}Re\int_{\mathbb{R}^{N}}\psi\overline{\phi}^{2}dx\right] \\ &=\frac{2}{m_{1}}Re\mathcal{Q}(\bar{\phi},\psi).\end{split}\]
This completes the proof of Proposition 5.3.
**Remark 5.4\({}^{*}\)**. If \((u,v)\) is a pair of real-valued functions, the conclusions of Lemma 4.4 still hold if we replace \(\mathcal{S}(u,v)\) and \(Q(u,v)\) by \(Re\mathcal{S}(\bar{u},v)\) and \(Re\mathcal{Q}(\bar{u},v)\), respectively. In addition, there holds \(\int_{\mathbb{R}^{N}}vu^{2}dx=\int_{\mathbb{R}^{N}}v\overline{u}^{2}dx\).
Before showing Theorem 5.1 we first claim the following:
**Lemma 5.4**.: _Let \(\omega_{2}=2\omega_{1}\), \(m_{2}=2m_{1}\), \(4<N<6\) and (1.10) hold true. Putting_
\[\phi_{\mu}=\mu^{\frac{N}{2}}\phi(\mu x),\quad\psi_{\mu}=\mu^{\frac{N}{2}}\psi (\mu x), \tag{5.6}\]
_there exists \((\phi,\psi)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\setminus\{(0, 0)\}\) and \(\mu>0\) such that_
\[Re\mathcal{Q}\left(\bar{\phi}_{\mu},\psi_{\mu}\right)=0.\]
_Suppose that \(\mu<1\), then_
\[Re\mathcal{S}\left(\bar{\phi},\psi\right)-Re\mathcal{S}\left(\bar{\phi}_{\mu},\psi_{\mu}\right)\geq\frac{1}{2}Re\mathcal{Q}(\bar{\phi},\psi). \tag{5.7}\]
Proof.: Referring to the proof of Lemma 4.7, there holds
\[\left\{\begin{aligned} & Re\mathcal{Q}\left(\bar{\phi}_{\lambda}, \psi_{\lambda}\right)=A\lambda^{2}-\frac{N}{2}B\lambda^{\frac{N}{2}},\\ & Re\mathcal{S}\left(\bar{\phi}_{\lambda},\psi_{\lambda}\right)= A\frac{\lambda^{2}}{2}-B\lambda^{\frac{N}{2}}+C,\end{aligned}\right. \tag{5.8}\]
where
\[\left\{\begin{aligned} & A=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla\phi|^{2}dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla\psi|^{2}dx,\\ & B=-a_{1}a_{2}Re\int_{\mathbb{R}^{N}}\psi\overline{\phi}^{2}dx,\\ & C=a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|\phi|^{2}dx+\frac{a_{1}}{2}\omega_{2}\int_{\mathbb{R}^{N}}|\psi|^{2}dx.\end{aligned}\right.\]
Since \(m_{2}=2m_{1}>0\), \(Re\mathcal{Q}\left(\bar{\phi}_{\mu},\psi_{\mu}\right)=0\) implies that
\[Re\int_{\mathbb{R}^{N}}\psi\overline{\phi}^{2}dx<0\quad\text{and}\quad A\mu^{2}=\frac{N}{2}B\mu^{\frac{N}{2}}. \tag{5.9}\]
Note that \((\phi,\psi)=(\phi_{1},\psi_{1})\) and \(Re\mathcal{Q}\left(\bar{\phi},\psi\right)=A-\frac{N}{2}B\), using (5.9) yields that for \(0<\mu<1\) and \(4<N<6\),
\[\begin{aligned} & Re\mathcal{S}\left(\bar{\phi},\psi\right)- Re\mathcal{S}\left(\bar{\phi}_{\mu},\psi_{\mu}\right)\\ &\quad=\frac{1}{2}A-B+C-\left(\frac{A}{2}\mu^{2}-B\mu^{\frac{N}{2} }+C\right)\\ &\quad=\frac{1}{2}A-B-\frac{A}{2}\mu^{2}+B\mu^{\frac{N}{2}}\\ &\quad=\frac{1}{2}A-\frac{1}{2}\cdot\frac{N}{2}B\mu^{\frac{N}{2} }-B+B\mu^{\frac{N}{2}}\\ &\quad=\frac{1}{2}\left(A-\frac{N}{2}B\right)+\frac{N}{4}B-\frac{ N}{4}B\mu^{\frac{N}{2}}-B+B\mu^{\frac{N}{2}}\\ &\quad=\frac{1}{2}\left(A-\frac{N}{2}B\right)+\left(\frac{N}{4}-1 \right)B-\left(\frac{N}{4}-1\right)B\mu^{\frac{N}{2}}\\ &\quad=\frac{1}{2}\left(A-\frac{N}{2}B\right)+\left(\frac{N}{4}-1 \right)B\left(1-\mu^{\frac{N}{2}}\right)\\ &\quad\geq\frac{1}{2}\left(A-\frac{N}{2}B\right)\geq\frac{1}{2} Re\mathcal{Q}\left(\bar{\phi},\psi\right).\end{aligned}\]
We further establish a stronger instability result than Theorem 5.1.
For \((u,v)\) a pair of complex-valued functions, we define a manifold \(M^{*}\) as
\[M^{*}=\left\{(u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\setminus \{(0,0)\},\,\,\,Re\mathcal{Q}\left(\bar{u},v\right)=0\right\}, \tag{5.10}\]
and a constrained minimizing problem
\[M_{0}=\inf_{(u,v)\in M^{*}}Re\mathcal{S}\left(\bar{u},v\right), \tag{5.11}\]
where \(\mathcal{S}\left(\bar{u},v\right)\) and \(\mathcal{Q}\left(\bar{u},v\right)\) are defined by (1.6) and (1.7) for the complex-valued pair of functions \((u,v)\), and \(\bar{u}\) denotes the complex conjugate of \(u\), that is,
\[\begin{split}\mathcal{S}(\bar{u},v)&=\frac{a_{2}}{2 m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx+\frac{a_{1}}{4m_{2}}\int_{\mathbb{R}^{N}}| \nabla v|^{2}dx\\ &\quad+a_{2}\omega_{1}\int_{\mathbb{R}^{N}}|u|^{2}dx+\frac{a_{1} }{2}\omega_{2}\int_{\mathbb{R}^{N}}|v|^{2}dx+a_{1}a_{2}\int_{\mathbb{R}^{N}}v \bar{u}^{2}dx,\end{split} \tag{5.11}\]
\[\mathcal{Q}(\bar{u},v)=\frac{a_{2}}{m_{1}}\int_{\mathbb{R}^{N}}|\nabla u|^{2} dx+\frac{a_{1}}{2m_{2}}\int_{\mathbb{R}^{N}}|\nabla v|^{2}dx+\frac{N}{2}a_{1}a_{2} \int_{\mathbb{R}^{N}}v\bar{u}^{2}dx, \tag{5.11}\]
By a similar argument to that used for the real-valued pair of functions \((u,v)\), we can obtain that there exists a pair of complex-valued functions \((\xi^{*},\eta^{*})\) with \(Re\int_{\mathbb{R}^{N}}\eta^{*}\left(\bar{\xi}^{*}\right)^{2}dx<0\) such that
\[Re\mathcal{S}\left(\bar{\xi}^{*},\eta^{*}\right)=M_{0}=\min_{(u,v)\in M^{*}}Re \mathcal{S}\left(\bar{u},v\right)>0. \tag{5.12}\]
In addition, \((\xi^{*},\eta^{*})\) satisfies the following equations:
\[\left\{\begin{split}&-\frac{1}{2m_{1}}\Delta u(x)+\omega_{1}u(x)=-a_{1}v(x)\bar{u}(x),\\ &-\frac{1}{2m_{2}}\Delta v(x)+\omega_{2}v(x)=-a_{2}u^{2}(x),\end{split}\right.\] (5.12\({}^{*}\))
for a pair of complex-valued functions \((u,v)\) and \(\omega_{2}=2\omega_{1}\).

**Remark 5.5\({}^{*}\)**. Note that for a pair of real-valued functions \((u,v)\), there hold
\[\left\{\begin{split}& Re\mathcal{S}\left(\bar{u},v\right)=\mathcal{S} \left(u,v\right),\quad Re\mathcal{Q}\left(\bar{u},v\right)=\mathcal{Q}(u,v),\\ & M^{*}\ \ \text{is}\ \ \text{identical}\ \ \text{with}\ \text{M}.\end{split}\right. \tag{5.13}\]
From (1.9), (5.11) and Theorem 4.1, for \((u,v)\) a pair of real-valued functions, it follows that
\[M_{0}=K=\mathcal{S}\left(\xi,-\eta\right). \tag{5.14}\]
We then claim:
**Proposition 5.5**.: _Let \(\omega_{2}=2\omega_{1}\), \(m_{2}=2m_{1}\), \(4<N<6\) and (1.10) hold true. Let also \((\phi,\psi)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})\) be a solution of the Cauchy problem (1.1)-(2.1) on \([0,T)\). Put_
\[\mathcal{K}_{1}=\left\{(u,v)\in H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R} ^{N}),ReQ\left(\bar{u},v\right)<0,\ \ Re\mathcal{S}\left(\bar{u},v\right)<M_{0}\right\}. \tag{5.15}\]
_Then for any initial data \((\phi_{0},\psi_{0})\in\mathcal{K}_{1}\), there holds \((\phi(t),\psi(t))\in\mathcal{K}_{1},\forall t\in[0,T)\). That is, \(\mathcal{K}_{1}\) is invariant under the flow generated by the Cauchy problem (1.1)-(2.1). Furthermore, if_
\[(|x|\phi_{0},|x|\psi_{0})\in L^{2}(\mathbb{R}^{N})\times L^{2}(\mathbb{R}^{N}),\]
_then \(T\) is finite and_
\[\lim_{t\to T}\left(\|\phi(t)\|_{H^{1}(\mathbb{R}^{N})}+\|\psi(t)\|_{H^{1}( \mathbb{R}^{N})}\right)=+\infty.\]
Proof.: Let \((\phi_{0},\psi_{0})\in\mathcal{K}_{1}\), since by (5.2),
\[Re\mathcal{S}\left(\bar{\phi}(t),\psi(t)\right)=Re\mathcal{S}\left(\bar{\phi}_{0},\psi_{0}\right)<M_{0},\quad for\quad 0\leq t<T, \tag{5.16}\]
we claim:
\[Re\mathcal{Q}\left(\bar{\phi}(t),\psi(t)\right)<0\quad for\quad 0\leq t<T. \tag{5.17}\]
Otherwise, by continuity there would exist a \(t^{*}>0\) such that
\[Re\mathcal{Q}(\bar{\phi}(t^{*}),\psi(t^{*}))=0,\quad that\quad is,\quad(\phi(t^{* }),\psi(t^{*}))\in M^{*},\]
where \(M^{*}\) is defined by (5.10). This is impossible for
\[Re\mathcal{S}(\bar{\phi}(t^{*}),\psi(t^{*}))<M_{0}\quad and\quad M_{0}=\min_{(u,v)\in M^{*}}Re\mathcal{S}(\bar{u},v).\]
So (5.17) holds true. Thus, \(\mathcal{K}_{1}\) is invariant under the flow generated by the Cauchy problem (1.1)-(2.1).
Now for fixed \(t\in[0,T)\), let \(\mu\) be defined by
\[Re\mathcal{Q}\left((\bar{\phi}(t))_{\mu},(\psi(t))_{\mu}\right)=0.\]
Note that \((\phi_{0},\psi_{0})\in\mathcal{K}_{1}\) and \(\mathcal{K}_{1}\) is an invariant manifold, so we have \(Re\mathcal{Q}\left(\bar{\phi}(t),\psi(t)\right)<0\), which by Lemma 5.4 yields that \(\mu<1\). According to \(Re\mathcal{S}\left((\bar{\phi}(t))_{\mu},(\psi(t))_{\mu}\right)\geq M_{0}\) and \(Re\mathcal{S}\left(\bar{\phi}(t),\psi(t)\right)=Re\mathcal{S}\left(\bar{\phi}_{0},\psi_{0}\right)\), by (5.7) one has
\[\begin{split} Re\mathcal{Q}(\bar{\phi}(t),\psi(t))& \leq 2\left(Re\mathcal{S}\left(\bar{\phi}(t),\psi(t)\right)-Re\mathcal{ S}\left((\bar{\phi}(t))_{\mu},(\psi(t))_{\mu}\right)\right)\\ &\leq 2Re\mathcal{S}\left(\bar{\phi}_{0},\psi_{0}\right)-2M_{0}<0, \end{split} \tag{5.18}\]
where the last inequality uses the fact that \((\phi_{0},\psi_{0})\in\mathcal{K}_{1}\). Let \(\theta_{0}=2M_{0}-2Re\mathcal{S}\left(\bar{\phi}_{0},\psi_{0}\right)>0\), which is a fixed positive constant. Combining (5.3), (5.4) and (5.18) then yields
\[\begin{split}\frac{d^{2}}{dt^{2}}&\left[\int_{ \mathbb{R}^{N}}|x|^{2}\left(a_{2}|\phi|^{2}+a_{1}|\psi|^{2}\right)dx\right]\\ &=\frac{2}{m_{1}}Re\mathcal{Q}(\bar{\phi},\psi)\leq\frac{-2 \theta_{0}}{m_{1}}<0.\end{split} \tag{5.19}\]
Using the same argument as that used in the proof of Theorem 3.1, (5.19) implies that \(T\) must be finite and
\[\lim_{t\to T}\left(\|\phi(t)\|_{H^{1}(\mathbb{R}^{N})}+\|\psi(t)\|_{H^{1}( \mathbb{R}^{N})}\right)=+\infty.\]
This completes the proof of Proposition 5.5.
Finally, we return to the proof of Theorem 5.1.
**Proof of Theorem 5.1.**
Put
\[\phi_{0}(x)=\lambda^{\frac{N}{2}}\xi(\lambda x),\quad\psi_{0}(x)=\lambda^{ \frac{N}{2}}(-\eta)(\lambda x)\quad for\quad\lambda>1, \tag{5.20}\]
where \((\xi,\eta)\) is a pair of real-valued functions. Note that \((\xi,-\eta)\in M\) and
\[ReQ(\bar{\xi},-\eta)=Q(\xi,-\eta)=0, \tag{5.21}\]
by (5.12), (5.13), (5.14), (5.21) and Remark 5.5\({}^{*}\), making the similar argument to that adopted in Lemma 4.7, the functions \((\phi_{0}(x),\psi_{0}(x))\) satisfy for any \(\lambda>1\),
\[ReQ(\bar{\phi}_{0},\psi_{0})=Q(\phi_{0},\psi_{0})<0,\ \ \ Re{\cal S}(\bar{\phi}_{0}, \psi_{0})={\cal S}(\phi_{0},\psi_{0})<{\cal S}(\xi,-\eta)=K=M_{0}, \tag{5.22}\]
where \(M_{0}\) is defined by (5.11). Hence, (5.15) and (5.22) yield that \((\phi_{0},\psi_{0})\in{\cal K}_{1}\). On the other hand, since \({\cal K}_{1}\) is invariant under the flow generated by the Cauchy problem (1.1)-(2.1) by Proposition 5.5, there holds \((\phi(t),\psi(t))\in{\cal K}_{1}\), where \((\phi(t),\psi(t))\) is a solution of (1.1) with the initial data \((\phi_{0}(x),\psi_{0}(x))\).
Recalling (3) of Theorem 4.1, \(\xi(x)\) and \(\eta(x)\) have the exponential decays at infinity. By (5.20), we have
\[(|x|\phi_{0},|x|\psi_{0})\in L^{2}(\mathbb{R}^{N})\times L^{2}(\mathbb{R}^{N}). \tag{5.23}\]
Furthermore, as \(\lambda\to 1\),
\[\|\phi_{0}-\xi\|_{H^{1}(\mathbb{R}^{N})}\ \ \ \ and\ \ \ \ \|\psi_{0}-(-\eta)\|_{H^{1}(\mathbb{R}^{N})}\]
can be chosen arbitrarily small. Thus, using Proposition 5.5, we obtain that the solution \((\phi,\psi)\) of (1.1) with the initial data \((\phi_{0},\psi_{0})\) blows up (in \(H^{1}(\mathbb{R}^{N})\times H^{1}(\mathbb{R}^{N})-\)norm) in finite time.
This finishes the proof of Theorem 5.1. \(\Box\)
## Acknowledgments
Zaihui Gan is partially supported by the National Science Foundation of China under grant (No. 11571254) and the Natural Science Foundation of Tianjin under grant (No. 20JCYBJC01410).
|
2308.01901 | Exploring the structure and kinematics of the Milky Way through A stars | Despite their relatively high intrinsic brightness and the fact that they are
more numerous than younger OB stars and kinematically colder than older red
giants, A-type stars have rarely been used as Galactic tracers. They may, in
fact, be used to fill the age gap between these two tracers, thereby allowing
us to study the transition between them.
We analyse Galactic disc structure and kinematic perturbations up to 6 kpc
from the Sun based on observations of A-type stars.
This work presents a catalogue of A-type stars selected using the IGAPS
photometric survey. It covers the Galactic disc within $30^{o}\leq
l\leq215^{o}$ and $|b|\leq5^{o}$ up to a magnitude of $r\leq19$ mag with about
3.5 million sources. We used Gaia Data Release 3 parallaxes and proper motions,
as well as the line-of-sight velocities, to analyse the large-scale features of
the Galactic disc. We carried out a study of the completeness of the detected
density distributions, along with a comparison between the $b<0^{o}$ and
$b>0^{o}$ regions. Possible biases caused by interstellar extinction or by the
usage of some kinematic approximations were examined as well.
We find stellar overdensities associated with the Local and the Perseus
spiral arms, as well as with the Cygnus region. A-type stars also provide
kinematic indications of the Galactic warp towards the anticentre, which
displays a median vertical motion of ~6-7 km/s at a Galactocentric radius of
R=14 kpc. It starts at R=12 kpc, which supports the scenario where the warp
begins at larger radii for younger tracers when compared with other samples in
the literature. We also detect a region with downward mean motion extending
beyond 2 kpc from the Sun towards $60^{o}<l<75^{o}$ that may be associated with
a compression breathing mode. Furthermore, A-type stars reveal very clumpy
inhomogeneities and asymmetries in the $V_Z$-$V_{\phi}$ velocity space plane. | J. Ardèvol, M. Monguió, F. Figueras, M. Romero-Gómez, J. M. Carrasco | 2023-08-03T17:56:43Z | http://arxiv.org/abs/2308.01901v1 | # Exploring the structure and kinematics of the Milky Way through A stars+
###### Abstract
Context:Despite their relatively high intrinsic brightness and the fact that they are more numerous than younger OB stars and kinematically colder than older red giants, A-type stars have rarely been used as Galactic tracers. They may, in fact, be used to fill the age gap between these two tracers, thereby allowing us to evaluate the evolutionary and dynamic processes underlying the transition between them.
Aims:We analyse Galactic disc structure and kinematic perturbations up to 6 kpc from the Sun based on observations of A-type stars.
Methods:This work presents a catalogue of A-type stars selected using the IGAPS photometric survey. It covers the Galactic disc within \(30\degr\leq l\leq 215\degr\) and \(|b|\leq 5\degr\) up to a magnitude of \(r\leq 19\) mag with about 3.5 million sources. We used _Gaia_ Data Release 3 parallaxes and proper motions, as well as the line-of-sight velocities, to analyse the large-scale features of the Galactic disc. We carried out a study of the completeness of the detected density distributions, along with a comparison between the \(b<0\degr\) and \(b>0\degr\) regions. Possible biases caused by interstellar extinction or by the usage of some kinematic approximations were examined as well.
Results:We find stellar overdensities associated with the Local and the Perseus spiral arms, as well as with the Cygnus region. We find that A-type stars also provide kinematic indications of the Galactic warp towards the anticentre, which displays a median vertical motion of \(\sim 6\)-7 km s\({}^{-1}\) at a Galactocentric radius of \(R=14\) kpc. It starts at \(R\approx 12\) kpc, which supports the scenario where the warp begins at larger radii for younger tracers when compared with other samples in the literature. We also detect a region with downward mean motion extending beyond 2 kpc from the Sun towards \(60\degr\leq l\leq 75\degr\) that may be associated with a compression breathing mode. Furthermore, A-type stars reveal very clumpy inhomogeneities and asymmetries in the \(V_{2}\)-\(V_{\phi}\) velocity space plane.
Conclusions:
## 1 Introduction
Our knowledge of the Milky Way has been vastly improved in recent decades, but there are still many unknowns that remain to be deciphered. Our location within the Galaxy makes it more difficult to detect its structure with respect to external galaxies, although our advantageous position has allowed us to analyse our own Galaxy in much greater detail. In turn, this has enabled studies of the density and kinematic peculiar patterns produced by non-axisymmetric features (e.g. spiral arms, the warp, bar resonances, etc.) or by external interactions (as the Magellanic Clouds or the Sagittarius dwarf galaxy).
The _Gaia_ mission (Gaia Collaboration et al., 2016) is helping us learn more about the previously observed properties of the Milky Way, as well as to discover other non-equilibrium structures (e.g. Antoja et al., 2018; Belokurov et al., 2018; Helmi et al., 2018, and many others). As an example, Gaia Collaboration et al. (2021) and McMillan et al. (2022) have demonstrated that _Gaia_ measurements are able to highlight large non-equilibrium features of the Milky Way. The huge amount of the high-quality data it has already produced makes it ideal for analysing the structure of our Galaxy. Nevertheless, on-ground photometric and spectroscopic surveys are also extremely valuable as a complement to the _Gaia_ data. Some examples of these additional observations that are relevant to the present study are the photometry of IGAPS (Monguio et al., 2020), LAMOST spectra (Cui et al., 2012), and spectra obtained by WEAVE (Dalton, 2016; Jin et al., 2023) in the future.
A multi-tracer study of the Milky Way allows us to disentangle its structure and kinematics within a wide range of ages and then to examine their evolution. Some of the more widely used Galactic tracers are interstellar gas and dust, open clusters, young OB stars, and old red giants, among others. Each of them displays its own individual properties, requires different observational methods, exhibits its own biases and, in general, presents a different view of the Milky Way. As a non-exhaustive list of tracers used to analyse the Galactic structure, neutral hydrogen (HI) detected with radio wavelengths (that are barely affected by dust extinction) traces several gaseous spiral arms up to more than 15 kpc from the Sun (e.g. Oort et al., 1958; Levine et al., 2006; Nakanishi and Sofue, 2016; Koo et al., 2017). Sakai et al. (2019) and Reid et al. (2019) used masers associated with zero-age O-type stars to study the distribution and the kinematics of the youngest components of the spiral structure. Poggio et al. (2021) used young tracers (upper main sequence stars, cepheids, and open clusters) to find high-density structures associated with spiral arms. Pantaleoni Gonzalez et al. (2021) used massive OB stars and stressed the presence of a coherent feature that they refer as Cepheus Spur. In comparison, Cantat-Gaudin et al. (2020)
constructed their own catalogue of open clusters, concluding that their distribution is highly dependent on their age, with the oldest ones presenting signatures of both Galactic warp and flare. The Galactic warp has been examined using several tracers such as HI gas (Levine et al., 2006; Koo et al., 2017), infrared emission from dust and cold giants (Freudenreich et al., 1994), and cepheids (Chen et al., 2019; Skowron et al., 2019) as well as young OB and old Red Giant Branch (RGB) stars (Romero-Gomez et al., 2019).
Using tracers with different stellar ages is key to understand the evolution of the Milky Way (Amores et al., 2017). Since intermediately young A-type stars are relatively intrinsically bright and numerous, they are suitable for tracing density structures in the Galactic disc. They have typical masses of about \(3M_{\odot}\)(Griv et al., 2020) and ages of the order of 0.3-1.0 Gyr (Grosbol & Carraro, 2018). They are young enough to have small velocity dispersion (around \(20\,\mathrm{km\,s^{-1}}\) according to Aumer & Binney, 2009; Harris et al., 2018, 2019), while at the same time being old enough to have orbited the Galaxy up to four times and so, to have interacted gravitationally with the potential of different Galactic structures. Therefore, they are also appropriate for kinematic studies. In fact, these tracers open a new window to understand the Milky Way disc by offering the best of the two most frequently used types of tracers; namely, the brightness of giants and upper main sequence stars, the low velocity dispersion of young stars and the relatively high abundance of cooler stars. A few examples of research already performed with these stars are the following ones. Drew et al. (2008) analysed the spatial distribution of A-type stars in the OB association Cyg OB2, while Sale et al. (2010) studied the Galactic disc stellar density profile in a region of \(40\,^{\circ}\) in Galactic longitude around the anticentre (AC). Both papers showed the potential of selecting A-type stars using the \((r-H\alpha)\) vs \((r-i)\) colour-colour diagram. Monguio et al. (2015) detected the stellar overdensity associated with the Perseus arm for the first time using B4-A1 stars and located it at \(1.6\pm 0.2\) kpc from the Sun. Grosbol & Carraro (2018) compared B- and A-type stars kinematics near the Galactic centre (GC) with simulations and concluded that the Milky Way potential has two major arms. Harris et al. (2018, 2019) analysed the rotation curve of the Milky Way using the 6D configuration space (positions and velocities) of A- and F-type stars, but only in two specific lines of sight.
This paper is structured as follows. Section 2 defines the reference systems used throughout the work, while Sect. 3 describes the data and the different samples used. The stellar distribution in the XY plane and in the vertical direction is studied in Sect. 4. Then, Sect. 5 presents an extensive kinematic analysis of the samples. Section 6 presents a discussion of our results and a comparison with the literature. Finally, Sect. 7 summarises the findings and conclusions of the work.
## 2 Coordinate systems
Several reference systems are used across this paper. In this section, we define them, along with the notation and the reference values used.
The spherical Galactic system is centred at the Sun. It is described by (\(l\), \(b\), \(d\)), where the first two variables correspond to Galactic longitude and Galactic latitude, respectively, and \(d\) stands for the distance between a given star and the Sun. Proper motions in these directions are referred to as (\(\mu_{l}\), \(\mu_{b}\)), while \(v_{\mathrm{los}}\) are the line-of-sight velocities. Projection effects are already considered in the sense that \(\mu_{l}\) contains the cos(\(b\)) term. Velocities in these Galactic directions, once corrected for the solar motion, are called (\(v_{l}^{\mathrm{corr}}\), \(v_{b}^{\mathrm{corr}}\)); before this correction, they are computed as:
\[v_{\gamma}^{\mathrm{uncorr}}=4.7404705\,d\,\mu_{\gamma}\ \ \mathrm{with}\ \gamma=l,b, \tag{1}\]
where \(v_{\gamma}^{\mathrm{uncorr}}\) is in \(\mathrm{km\,s^{-1}}\) provided that \(d\) is in kpc and \(\mu_{\gamma}\) is in mas yr\({}^{-1}\). Both velocities given by Eq. (1) can be corrected for solar motion according to:
\[\begin{split} v_{l}^{\mathrm{corr}}&=v_{l}^{\mathrm{uncorr}}-U_{\odot}\sin(l)+V_{\odot}\cos(l),\\ v_{b}^{\mathrm{corr}}&=v_{b}^{\mathrm{uncorr}}-U_{\odot}\cos(l)\sin(b)-V_{\odot}\sin(l)\sin(b)+W_{\odot}\cos(b),\end{split} \tag{2}\]
where (\(U_{\odot}\), \(V_{\odot}\), \(W_{\odot}\)) are the components of the solar motion: \(V_{\odot}\) includes both the circular velocity of the local standard of rest and the peculiar azimuthal velocity of the Sun with respect to it (Abedi, 2015; Harris et al., 2019; Gaia Collaboration et al., 2021). We use the solar motion from Drimmel & Poggio (2018) [\(U_{\odot}\)] and Reid & Brunthaler (2020) [\(V_{\odot}\) and \(W_{\odot}\)], being (\(U_{\odot}\), \(V_{\odot}\), \(W_{\odot}\)) = (9.5, 250.7, 8.56) \(\mathrm{km\,s^{-1}}\) once scaled to the solar radius (\(R_{\odot}\)).
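As a minimal illustration of Eqs. (1)-(2), the sketch below computes \(v_{l}^{\mathrm{corr}}\) and \(v_{b}^{\mathrm{corr}}\) from distances and proper motions using the adopted solar motion; the function and variable names, and the vectorised NumPy interface, are our own illustrative choices rather than part of the original pipeline.

```python
import numpy as np

K = 4.7404705                      # km/s per (kpc * mas/yr), as in Eq. (1)
U_SUN, V_SUN, W_SUN = 9.5, 250.7, 8.56   # adopted solar motion, km/s

def corrected_velocities(l_deg, b_deg, d_kpc, mu_l, mu_b):
    """Return (v_l^corr, v_b^corr) in km/s.

    l_deg, b_deg : Galactic longitude and latitude in degrees
    d_kpc        : heliocentric distance in kpc
    mu_l, mu_b   : proper motions in mas/yr (mu_l already includes the cos(b) term)
    """
    l = np.radians(l_deg)
    b = np.radians(b_deg)
    # Eq. (1): velocities before the solar-motion correction
    v_l = K * d_kpc * mu_l
    v_b = K * d_kpc * mu_b
    # Eq. (2): correction for the solar motion
    v_l_corr = v_l - U_SUN * np.sin(l) + V_SUN * np.cos(l)
    v_b_corr = (v_b - U_SUN * np.cos(l) * np.sin(b)
                - V_SUN * np.sin(l) * np.sin(b) + W_SUN * np.cos(b))
    return v_l_corr, v_b_corr
```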
Heliocentric Cartesian coordinates are named as \(X\), \(Y\), and \(Z\), whereas Galactocentric ones are distinguished with a subscript: \(X_{\mathrm{Gal}}\), \(Y_{\mathrm{Gal}}\), and \(Z_{\mathrm{Gal}}\). They transform as \(X=X_{\mathrm{Gal}}+R_{\odot}\) and \(Z=Z_{\mathrm{Gal}}+Z_{\odot}\), while the Y component of both reference systems coincide. The used distance between the Sun and the GC (i.e. the solar radius) is equal to \(R_{\odot}=8.249\pm 0.009\) kpc (Gravity Collaboration et al., 2020) while the solar vertical coordinate \(Z_{\odot}\) is assumed to be zero as a simplifying approximation (thus, \(Z=Z_{\mathrm{Gal}}\)). X increases towards the GC in the heliocentric case or away from the Sun in the Galactocentric one. In either case, Y grows in the direction that rotation has at \(X_{\mathrm{Gal}}<0\), and \(Z\) towards the north Galactic pole (NGP).
The cylindrical Galactocentric radial, azimuthal, and vertical coordinates are referred to as \(R\), \(\phi\), and \(Z_{\mathrm{Gal}}\); while their respective velocities are \(V_{R}\), \(V_{\phi}\), and \(V_{Z}\). Also, (\(R\), \(\phi\), \(Z_{\mathrm{Gal}}\)) define a left-handed system, in which \(R\) increases away from the GC and \(\phi\) originates at the Sun-AC direction, increasing clockwise as seen from the NGP. In the case of the three velocities: \(V_{R}\) is positive outwards; \(V_{\phi}\), in the direction of rotation (clockwise from the NGP); and \(V_{Z}\), towards the NGP.
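The conversions between the reference systems defined above can be written compactly. The following sketch assumes the conventions stated in this section (\(Z_{\odot}=0\), left-handed cylindrical system with \(\phi\) measured from the Sun-AC direction and increasing clockwise as seen from the NGP); the function names are illustrative only.

```python
import numpy as np

R_SUN = 8.249  # kpc, Gravity Collaboration et al. (2020)

def galactic_to_cartesian(l_deg, b_deg, d_kpc):
    """(l, b, d) -> heliocentric (X, Y, Z) and Galactocentric X_Gal, all in kpc."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = d_kpc * np.cos(b) * np.cos(l)   # towards the GC
    y = d_kpc * np.cos(b) * np.sin(l)   # towards the direction of rotation at X_Gal < 0
    z = d_kpc * np.sin(b)               # towards the NGP (Z = Z_Gal since Z_sun = 0)
    x_gal = x - R_SUN                   # from X = X_Gal + R_sun
    return x, y, z, x_gal

def cartesian_to_cylindrical(x_gal, y_gal, z_gal):
    """Left-handed cylindrical (R, phi, Z_Gal); phi = 0 along the Sun-AC direction."""
    R = np.hypot(x_gal, y_gal)
    phi = np.degrees(np.arctan2(y_gal, -x_gal))  # increases clockwise as seen from the NGP
    return R, phi, z_gal
```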
## 3 Data
The sample of A-type stars was selected using IGAPS photometric bands \(i\), H\(\alpha\), and \(r\) from IPHAS (Drew et al., 2005), which are centred at \(774.3\), \(656.8\), and \(624.0\) nm, respectively. It covers the northern Galactic plane within \(l\in[30\,,\,215]\,^{\circ}\) and \(|b|\leq 5\,^{\circ}\). We used the \((r-H\alpha)\) vs \((r-i)\) colour-colour diagram and a similar procedure to that described in Drew et al. (2008). The first step to obtain the working sample was to remove noise-like sources forcing the variable Class defined in Monguio et al. (2020) to be different from 0. White dwarfs (WD) and supergiants were avoided by imposing \(-0.1\) mag\(<(r-i)<2.2\) mag. Then, following Monguio et al. (2020), A-type stars were selected using the A0-A5 sequence line according to:
\[(r-H\alpha)-(\delta r-\delta H\alpha)-\left[0.0032+0.3735(r-i)-0.0608(r-i)^{2}+0.0041(r-i)^{3}\right]<0, \tag{3}\]
with the \((\delta r-\delta H\alpha)\) term accounting for the uncertainties in the observed \((r-H\alpha)\) colour. Photometric errors increase sample contamination especially for faint stars. For this reason, we limited the previous selection to \(r\leq 19\) mag, which led to a sample with 3 532 751 stars. This sample is available through CDS as a table with the columns described in Appendix A.
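For reference, the photometric selection described above (Eq. 3 together with the \((r-i)\), Class, and \(r\) cuts) can be expressed as a simple boolean mask; the column names below are placeholders for the corresponding IGAPS catalogue fields.

```python
import numpy as np

def select_a_stars(r, i, ha, dr_dha, cls):
    """Boolean mask reproducing the A0-A5 photometric selection.

    r, i, ha : IGAPS r, i and Halpha magnitudes
    dr_dha   : the (delta r - delta Halpha) uncertainty term of Eq. (3)
    cls      : IGAPS Class flag (noise-like sources have Class == 0)
    """
    ri = r - i
    rha = r - ha
    # A0-A5 sequence line used in Eq. (3)
    seq = 0.0032 + 0.3735 * ri - 0.0608 * ri**2 + 0.0041 * ri**3
    below_sequence = (rha - dr_dha - seq) < 0
    colour_range = (ri > -0.1) & (ri < 2.2)   # removes WDs and supergiants
    return (cls != 0) & colour_range & below_sequence & (r <= 19.0)
```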
Using a testing sample in the AC of nearly 2.05\(\cdot\)10\({}^{5}\) stars of any spectral type that have information both in IGAPS and in the LAMOST DR8\({}^{1}\) catalogues, we found that 23% of stars selected as A0-A5 by the IGAPS photometric selection defined above are classified as other spectral types according to LAMOST and only 10% are not classified as types A0-A9. However, around half of that 23% have at least one LAMOST signal-to-noise ratio lower than 2. All this confirms that the contamination in our selected sample is small, well below 20%. This estimation agrees with Sale et al. (2010) and Harris et al. (2018), who found about 10-20% of contamination in their selections, which were made using similar methodologies.
Footnote 1: [http://www.lamost.org/dr8/v2.0/catalogue](http://www.lamost.org/dr8/v2.0/catalogue).
We also compared our selection with the golden sample of OBA stars from Gaia Collaboration et al. (2023a), restricted to a common sky region. Despite not including OB stars, our sample contains many more targets, and reaches both deeper distances and limiting magnitudes (with 90th percentiles of the _Gaia_ \(G\) passband being 16.5 mag for the golden sample and 18.8 mag for our A-type stars). On the other hand, the _Gaia_ golden sample includes a few closer and apparently brighter stars that are not included in our sample due to saturation in the IGAPS photometric catalogue.
Once the sample was selected, parallaxes (\(\varpi\)) and proper motions were then obtained from _Gaia_ Data Release 3 (DR3, Gaia Collaboration et al. 2023c). To do this, the sample was cross-matched with this catalogue utilising a 1" radius. This procedure gave 3 512 224 counterparts (99.4% of our initial sample), out of which 3 490 765 (98.8%) have the three aforementioned quantities. The number of sources having \(\mathrm{RUWE}<1.4\) (Lindegren 2020) is 3 300 599 (94.5% of stars with astrometry). We are also interested in the line-of-sight velocities. There are 31 934 sources (0.9% of those with astrometry) with _Gaia_ DR3 \(v_{\mathrm{los}}\) available. It is also worth noting that they have a limiting magnitude of \(G_{\mathrm{RVS}}=14\) mag. These line-of-sight velocities were corrected according to Katz et al. (2023), which provided a small correction always below 0.4 km s\({}^{-1}\) for our sample. The correction described by Blomme et al. (2023) was not applied since most of the stars did not match the requirements for it.
Heliocentric distances were computed with the exponentially decreasing space density (EDSD) prior (Bailer-Jones 2015), which relies on a very small number of assumptions and just a single parameter, namely, the length scale. After checking different values for this length scale, a value of 3 kpc proved to be the most appropriate for a sample of A-type stars (in agreement with the length scale of \(3020\pm 300\) pc derived by Sale et al. 2010). We also studied other distance estimators, such as those provided by Bailer-Jones et al. (2021). First, the geometric distances, whose prior relies on several parameters varying across different directions in the sky; and second, their photogeometric distances, which also incorporate photometry in the prior. The priors used to compute these distances are usually based on colder - more numerous - stars and are not so well-behaved for our relatively blue stars lying near the Galactic plane (where extinction effects can be relevant).
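For completeness, the mode of the EDSD distance posterior can be obtained as a root of the cubic equation of Bailer-Jones (2015). The snippet below is a per-star sketch with the adopted length scale of 3 kpc; the root-selection rule in the comment is a conservative simplification rather than a full treatment of the pathological cases discussed in that paper.

```python
import numpy as np

def edsd_mode_distance(parallax_mas, sigma_mas, L_kpc=3.0):
    """Mode of the EDSD distance posterior (kpc) for one star.

    With the parallax in mas and the distance in kpc, parallax = 1/r holds
    numerically, so the mode solves r^3/L - 2 r^2 + (w/s^2) r - 1/s^2 = 0.
    """
    w, s2 = parallax_mas, sigma_mas**2
    coeffs = [1.0 / L_kpc, -2.0, w / s2, -1.0 / s2]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    positive = real[real > 0]
    # for well-measured positive parallaxes there is a single positive real root;
    # otherwise we conservatively take the smallest positive one
    return positive.min() if positive.size else np.nan
```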
Nevertheless, it should be taken into account that any distance estimator includes biases for parallaxes with large errors. In particular, the selected one tends to accumulate them at a distance equal to twice the length scale (i.e. at 6 kpc from the Sun in our case). In consequence, the presence of any buildup at this particular distance should be regarded with suspicion. To mitigate the effects of large parallax uncertainties (\(\sigma_{\varpi}\)), a cut in relative parallax error (\(\sigma_{\varpi}/\varpi\)) can be applied, accepting that this results in a biased sample, as stated by Luri et al. (2018). From now on, the reduced sample verifying \(0<\sigma_{\varpi}/\varpi\leq 0.3\) is referred to as the pP30 sample. It contains 1 394 075 sources (39.9% of stars with astrometry). The less restrictive \(0<\sigma_{\varpi}/\varpi\leq 0.5\) quality cut was used to construct the pP50 sample, which has 2 064 351 stars (59.1% of stars with astrometry). Lastly, an analogous sample (hereinafter, named pP30-RV) was created applying the same \(0<\sigma_{\varpi}/\varpi\leq 0.3\) cut for those stars having \(v_{\mathrm{los}}\) from _Gaia_ DR3. That cut reduced our sample with _Gaia_ DR3 line-of-sight velocities to 30 185 stars (94.5% of them). This sample reaches shorter distances than the previous one (up to \(d\approx 3\) kpc rather than \(d\approx 5\)-6 kpc) because of the \(G_{\mathrm{RVS}}=14\) mag limiting magnitude imposed on _Gaia_ DR3 \(v_{\mathrm{los}}\). Table 1 summarises the sizes of the relevant subsamples described in this section.
Figure 1 demonstrates again that the level of contamination of our selection method is low by showing the extinction-corrected _Gaia_ colour-magnitude diagram (CMD) for the pP30 sample together with three isochrones from PARSEC v1.2S (Bressan et al. 2012; Chen et al. 2015; Marigo et al. 2017)2. We used reddening values from the Green et al. (2019) dustmap converted to extinctions in our bands of interest (i.e. \(G\) and \(G_{\mathrm{BP}}-G_{\mathrm{RP}}\)) using Eqs. B.1 and B.2 defined in Appendix B together with the usual relations:
Footnote 2: [http://stev.oapd.inaf.it/cgi-bin/cmd](http://stev.oapd.inaf.it/cgi-bin/cmd).
Footnote 3: See also [https://dustmaps.readthedocs.io/en/latest/modules.html#module-dustmaps.bayestar](https://dustmaps.readthedocs.io/en/latest/modules.html#module-dustmaps.bayestar).
\[M_{G}=G-5\log_{10}(d)+5-A_{G}, \tag{4}\]
\[(G_{\mathrm{BP}}-G_{\mathrm{RP}})_{0}=(G_{\mathrm{BP}}-G_{\mathrm{RP}})-E(G_{\mathrm{BP}}-G_{\mathrm{RP}}), \tag{5}\]
where \(A_{G}\) is the extinction in the G band; \((G_{\mathrm{BP}}-G_{\mathrm{RP}})\) and \((G_{\mathrm{BP}}-G_{\mathrm{RP}})_{0}\) are the observed and the dereddened _Gaia_ colours, respectively; \(G\) and \(M_{G}\) are the apparent and the absolute magnitude in the G band, respectively; and \(d\) must be in parsecs. We obtained \(E(B-V)\) reddening values from Green et al. (2019) using the dustmaps python package (Green 2018) and we transformed them to extinction in the V band according to \(A_{V}=2.742E(B-V)\)(Schlafly & Finkbeiner 2011; Green et al. 2019)3. The groups of WD and giants - shown in Fig. 1 at coordinates (0, 12) and (1.25, 0.50), respectively - represent such a small fraction of the sample (less than 1% in total) that they barely affect our results. In addition, detected WDs are located very near the Sun (almost all of them are closer than 0.5 kpc), so they lie in the region where there are not enough statistics to ascertain the reliability of these results.
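The dereddening chain of Eqs. (4)-(5) can be sketched as follows. The conversion coefficients from \(A_{V}\) to the _Gaia_ bands are placeholders here (the actual relations of Appendix B are colour-dependent and are not reproduced), so the numbers `K_G` and `K_BPRP` are assumptions for illustration only.

```python
import numpy as np

# Placeholder (assumed) conversion factors from A_V to the Gaia bands; the real
# relations used in the paper (Appendix B, Eqs. B.1-B.2) depend on colour.
K_G = 0.86
K_BPRP = 0.42

def dereddened_cmd(G, bp_rp, d_pc, ebv):
    """Return (M_G, (G_BP - G_RP)_0) following Eqs. (4)-(5)."""
    A_V = 2.742 * ebv            # Schlafly & Finkbeiner (2011); Green et al. (2019)
    A_G = K_G * A_V              # assumed coefficient
    e_bp_rp = K_BPRP * A_V       # assumed coefficient
    M_G = G - 5.0 * np.log10(d_pc) + 5.0 - A_G
    return M_G, bp_rp - e_bp_rp
```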
Footnote 4: All \(M_{G}\) numerical values in this section are derived from Table 15.7 of Cox (2002) transformed into _Gaia_ bands according to [https://gea.esac.esa.int/archive/documentation/GDR3/](https://gea.esac.esa.int/archive/documentation/GDR3/)
In turn, Fig. 2 shows the histograms of \(G\) and \(M_{G}\) for the pP30 (top) and the pP30-RV (bottom) samples and demonstrates that in both cases the mode is located between the absolute magnitudes expected for A0 and A5 stars (i.e. between 0.6 mag and 1.9 mag)4. The fraction of stars having \(M_{G}\) larger than the value expected for F0 stars (i.e. with \(M_{G}\geq 2.6\) mag) is 25.8% for the
\begin{table}
\begin{tabular}{l|l|l}
Sample & Description & Stars \\ \hline
All & A stars with \(r\leq 19\) mag & 3 532 751 \\
All-DR3 & All + _Gaia_ DR3 astrometry & 3 490 765 \\
pP50 & All-DR3 + \(0<\sigma_{\varpi}/\varpi\leq 0.5\) & 2 064 351 \\
pP30 & All-DR3 + \(0<\sigma_{\varpi}/\varpi\leq 0.3\) & 1 394 075 \\
All-RV & All-DR3 + _Gaia_ DR3 \(v_{\mathrm{los}}\) & 31 934 \\
pP30-RV & pP30 + _Gaia_ DR3 \(v_{\mathrm{los}}\) & 30 185 \\
\end{tabular}
\end{table}
Table 1: Number of stars of the samples given in Sect. 3.
pP30 sample and 15.3% for the pP30-RV one, which again confirms the quality of the selection. One must take into account that this estimation of the contamination depends on the actual contamination of the selection, but also on the parallax errors and the distance estimator used, as well as on both the applied dustmap and the assumed extinction law. We note that the bottom panel (using the pP30-RV sample) shows that the _Gaia_ selection function for a sample of A-type stars having \(v_{\rm los}\) has a clear and artificial bimodality caused by the different methodology applied in the \(G_{\rm RVS}\leq 12\) mag and the \(G_{\rm RVS}>12\) mag regimes (Katz et al., 2023).
## 4 Structure
This section describes two procedures that shed light on the Milky Way structure through the analysis of stellar densities across the XY Galactic plane. Then, we study the vertical distribution of stars by comparing the sample above and below the plane defined by \(b=0\degr\).
### Distribution across the XY plane
We used two different approaches to locate Galactic structures: stellar surface densities and stellar 2D local overdensities. The first one is based on the method described in Sect. 4.1 from Monguio et al. (2015), where the authors define the surface density at each point by extrapolating the observed density in a limited \(Z\) range to the whole \(Z\) range by assuming a \({\rm sech}^{2}(Z/h_{z})\) vertical density distribution. For our study, we assumed a scale height of \(h_{z}=200\) pc (Monguio et al., 2015) and ignored the Galactic warp. We also generalised their method by splitting the sample into Galactic longitude bins (of 2 \(\degr\)) in addition to distance bins (of 100 pc). The second method involves computing local overdensities with the bivariate kernel density estimation described in Eq. 1 and Appendix B.1 from Poggio et al. (2021). We utilised Epanechnikov kernel functions with a local density bandwidth \(h_{\rm local}=0.3\) kpc and a mean density bandwidth \(h_{\rm mean}=1.5\) kpc applied at intervals of 100 pc in both \(X_{\rm Gal}\) and \(Y_{\rm Gal}\) coordinates. Although this alternative cannot provide the absolute scale of the densities, it is better at highlighting local overdensities. In fact, they are enhanced by this second method at larger distances (where absolute densities are lower), while remaining partially hidden in the Poisson noise when using surface densities.
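A minimal, brute-force sketch of the second method (a local-minus-mean density contrast built from Epanechnikov kernels, in the spirit of the procedure described above) is given below; it is not the optimised code used to produce the maps, and the overdensity definition is only one common convention.

```python
import numpy as np

def epanechnikov_density(x0, y0, x, y, h):
    """2D kernel density at (x0, y0) from points (x, y), bandwidth h in kpc."""
    u2 = ((x - x0)**2 + (y - y0)**2) / h**2
    w = np.where(u2 < 1.0, 1.0 - u2, 0.0)
    return w.sum() / (0.5 * np.pi * h**2)   # normalisation of the 2D Epanechnikov kernel

def local_overdensity(x0, y0, x, y, h_local=0.3, h_mean=1.5):
    """Density contrast (rho_local - rho_mean) / rho_mean at a grid point."""
    rho_local = epanechnikov_density(x0, y0, x, y, h_local)
    rho_mean = epanechnikov_density(x0, y0, x, y, h_mean)
    return (rho_local - rho_mean) / rho_mean if rho_mean > 0 else np.nan
```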
The stellar surface density map in the XY Galactic plane resulting from applying the first methodology to the pP30 sample is shown in the left panel of Fig. 3. Densities are very uncertain for \(d\leq 0.5\) kpc as they rely on a very low number of stars owing to the small volume covered by the \(b\)-limited sample (thus, they are not shown). The right panel in Fig. 3 shows stellar local overdensities computed with the second method. The map built from the uncut sample is dominated by radial artefacts, so this panel also employs the pP30 sample to avoid them. In the same way as in the left panel, results from the nearest region are not trustworthy because of the small volume covered by the sample; the furthest values are also unreliable because of parallax uncertainties. This method has some edge effects as well, mainly originating at both \(l\) limits. They were partially corrected by taking into account the real sampled area.
Radial structures with respect to the Sun in Fig. 3 can have two different causes. On the one hand, they can be cones of low density caused by foreground extinction, as the light yellow (blue) triangular region starting around \((X_{\rm Gal},Y_{\rm Gal})\approx(-7.75,\,2.50)\) kpc in the left (right) panel due to extinction in Cygnus. On the other hand, parallax uncertainties translate into uncertain stellar distances, causing blurring in the radial direction. This second effect elongates density structures, which appear distorted even if they are real. Dealing with these two artefacts is not straightforward. A 3D extinction map is required to estimate the consequences of the former (see Appendix C),
Figure 1: Extinction-corrected _Gaia_ CMD of the pP30 sample. Three lines indicate the 0.2, 1.0 and 2.0 Gyr isochrones. The colour shows the density in a logarithmic scale where brighter means more stars. Only bins with ten or more stars are plotted.
Figure 2: Histograms of apparent magnitude (\(G\), left column) and of extinction-corrected absolute magnitude (\(M_{G}\), right column) for the pP30 (top row) and the pP30-RV (bottom row) samples. Vertical lines show absolute \(M_{G}\) magnitudes for B8 (-0.3 mag), A0 (0.6 mag), A5 (1.9 mag), and F0 (2.6 mag) stars from left to right. In all cases, \(\mu\) stands for the mean value of the distribution, and \(\sigma\) for its standard deviation.
whereas the latter is already mitigated by restricting the sample to high-quality parallaxes.
The resemblance between the two panels of Fig. 3 allows us to verify the results from the \(0\leq\sigma_{\varpi}/\varpi\leq 0.3\) cut using two different techniques. Their most evident features are (a) the high density spot visible at \((X_{\rm Gal},Y_{\rm Gal})\approx(-10.00,\,1.75)\) kpc and (b) the 'V' shape starting at \((X_{\rm Gal},Y_{\rm Gal})\approx(-8,\,1)\) kpc, which are associated with the Perseus arm and the Cygnus region, respectively. The former may also be associated with a minor peak centred at \((X_{\rm Gal},Y_{\rm Gal})\approx(-10.25,\,2.50)\) kpc, which is blended with the primary one for the local overdensities. They also agree in showing (c) a short overdense band at \((X_{\rm Gal},Y_{\rm Gal})\approx(-9.00,\,0.75)\) kpc, very near to the Sun (\(d\approx 1\) kpc) in the second quadrant; (d) a low-density elongated area centred at \((X_{\rm Gal},Y_{\rm Gal})\approx(-10.0,\,0.5)\) kpc; and (e) three small peaks relatively close to the AC. In particular, two of them are on both sides of the \(Y=0\) line, at \(X_{\rm Gal}\approx-9.75\) kpc (only barely visible in the right panel) and at \(X_{\rm Gal}\approx-10.25\) kpc, respectively; the third one is located at \((X_{\rm Gal},Y_{\rm Gal})\approx(-9.50,\,-0.75)\) kpc. In addition, the surface density map shows (f) the beginning of a high-density section at the lower-\(l\) end of the sample (i.e. at the right radial cut of this XY map) that extends up to \(d\approx 3\) kpc. This (f) structure, together with the third peak in (e), is so close to the sample edge that, in the right panel, it is not possible to guarantee whether they are real structures or not. All these (a)-(f) features are also visible when using a cleaner sample with \(0\leq\sigma_{\varpi}/\varpi\leq 0.15\).
### Vertical distribution
This section is devoted to studying the vertical stellar distribution. We compared the southern (\(b<0\,^{\circ}\)) and the northern (\(b>0\,^{\circ}\)) parts of the Galactic disc by splitting the pP30 sample into these two bins. Then, we computed their stellar surface densities as explained in Sect. 4.1. The resulting surface densities are named \(\Sigma_{b<0}\) and \(\Sigma_{b>0}\), respectively.
In Fig. 4, their difference is normalised in relation to the uncertainty of the surface density of the full pP30 sample, that is: with respect to \(\sigma[\Sigma]\), where \(\Sigma\) denotes the stellar surface density (left panel of Fig. 3). In turn, this uncertainty was estimated by propagating a Poisson error for the number of stars inside each bin (see Eq. 3 from Monguio et al. 2015). The use of this relative deviation allows us to highlight variations that are statistically significant over those that are smaller than, or of the same order as, the uncertainty fluctuations. At the same time, \(\sigma[\Sigma]\) introduces only small deviations to the general trend of \(\Sigma_{b>0}-\Sigma_{b<0}\).
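The normalised north-south difference used in Fig. 4 can be sketched bin by bin as below; here the Poisson uncertainty is approximated as \(\Sigma/\sqrt{N}\), which is a simplification of the error propagation cited above, and the variable names are ours.

```python
import numpy as np

def ns_asymmetry(sigma_north, sigma_south, n_total, sigma_total):
    """(Sigma_{b>0} - Sigma_{b<0}) / sigma[Sigma] per (l, d) bin.

    sigma_north, sigma_south : surface densities of the two half-samples
    n_total                  : star counts of the full sample per bin
    sigma_total              : surface density of the full sample per bin
    """
    # approximate Poisson error on Sigma: relative error 1/sqrt(N) on the counts
    err = np.where(n_total > 0, sigma_total / np.sqrt(n_total), np.nan)
    return (sigma_north - sigma_south) / err
```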
Figure 4 reveals structures with discrepancies well beyond the 5\(\sigma\) level that cannot be explained by simple statistical fluctuations of the stellar density along the vertical direction. So, they originate from actual differences in the observed stellar distribution. However, this may not necessarily correspond to differences in the real Galactic stellar distribution. Once again, extinction may affect the results. For this reason, we include an estimation of the distance (\(d_{\rm lim}\)) that can be reached with each subsample, restricted to \(r\leq 19\) mag. These rough estimations of completeness limits, which take extinction into account, are computed as described in Appendix C.
Despite the fact that they can only be treated qualitatively, a more in-depth analysis of the \(d_{\rm lim}\) estimations of completeness limits provides two important outcomes. Firstly, it enables the detection of differences in extinction between the northern and the southern disc. Secondly - and this aspect is highly correlated with the previous one - it provides a diagnostic to decide whether a feature in Fig. 4 is more probable to be a real asymmetry, or an artefact caused by detection limits due to differences in absorption between both subsamples. The following detailed, step-by-step reasoning clarifies this statement.
Consider for instance the positive (green) patch at \(l\approx 70\,^{\circ}\) beyond \(d\approx 4\) kpc in Fig. 4. Towards this direction, \(d_{\rm lim}(b>0)>d_{\rm lim}(b<0)\), which implies that the line of sight can reach deeper distances for \(b>0\,^{\circ}\). Therefore, the extinction is higher for \(b<0\,^{\circ}\) on average and thus, the number of stars included in a magnitude-limited catalogue would be smaller for \(b<0\,^{\circ}\) (where stars are more faded) than for \(b>0\,^{\circ}\). This directly results in the observed \(\Sigma_{b>0}\) being larger than \(\Sigma_{b<0}\) even if the real distribution of stars was completely homogeneous in this location. So, a positive value of the quantity \((\Sigma_{b>0}-\Sigma_{b<0})/\sigma[\Sigma]\) in
Figure 3: Galactocentric XY maps of the stellar surface density (left) and local overdensities (right) for the pP30 sample. The solar position is indicated with a black star at \(X_{\rm Gal}=-R_{\odot}\) and the GC is at the origin, beyond the right-hand edge of each plot.
this context may be caused by extinction biases, at least partially, rather than arising entirely from a real vertical peculiarity in the Galactic disc densities. An analogous argument proves that the opposite stands for \(d_{\rm lim}(b>0)\,<\,d_{\rm lim}(b<0)\), as around \(l\approx 100\,^{\circ}\) or \(l\approx 120\,^{\circ}\).
For the same reason, the sign of \((\Sigma_{b>0}-\Sigma_{b<0})/\sigma[\Sigma]\) may indicate a real density difference between both subsamples when \(d_{\rm lim}(b>0)\approx d_{\rm lim}(b<0)\). For instance, this is the case of the positive (green) spot at \((l,d)\approx(80\,^{\circ},\,1.75\) kpc) or the negative (pink) one at \((l,d)\approx(135\,^{\circ},\,2.5\) kpc). The former indicates that the Cygnus region is predominantly in the northern disc (as seen in the upper panel of Fig. 1 from Quintana & Wright 2021). In contrast, the second example might suggest that the Perseus arm bends southwards or that there is some other kind of stellar overdensity below the Galactic plane in this direction. A detailed analysis of the completeness of each half of the sample is needed to determine up to which distance these densities are truly reliable at each Galactic longitude.
In Fig. 5, we plot the median values of the \(Z_{\rm Gal}\) coordinate for the pP30 sample. The sample is binned in \(l\) and \(d\) employing the same grid as before (i.e. 2\(\,{}^{\circ}\) wide in \(l\) and 100 pc long in \(d\)). We note that the range in \(Z_{\rm Gal}\) of the closest bins to the Sun is extremely limited owing to the sample restriction in \(b\) (reaching only \(|Z_{\rm Gal}|=87\) pc at \(d=1\) kpc). Therefore, their medians will always be systematically near zero. This figure has features equivalent to those in Fig. 4, though they tend to start about 1-2 kpc farther away for median \(Z_{\rm Gal}\) than for \((\Sigma_{b>0}-\Sigma_{b<0})/\sigma[\Sigma]\). The main examples of this are the positive (green) values at \(60\,^{\circ}\leq l\leq 80\,^{\circ}\) and at \(l\geq 130\,^{\circ}\), or the negative (pink) region between them. This demonstrates that both approaches are highly correlated.
We note that these two figures are dependent on the value assumed for \(Z_{\odot}\). However, with a value of \(Z_{\odot}=20.8\) pc (Bennett & Bovy 2019), the change in Fig. 5 is a negligible shift. The \(\Sigma_{b>0}-\Sigma_{b<0}\) seen in Fig. 4 has small variations (smaller than 10 times \(\sigma[\Sigma]\) for 90% of the bins) that do not modify the main structures.
The Galactic warp is a large-scale structure bending the disc, shifting the density upwards in the northern Galactic plane. As seen through HI gas, it has an amplitude (i.e. height at the direction of maximum deviation, \(\phi\approx 90\,^{\circ}\)) between 1.3 kpc (Levine et al. 2006b) and 1.7 kpc (Koo et al. 2017) at \(R\approx 16\) kpc. Cepheids show an amplitude of 1.0 kpc at \(R\approx 14\) kpc (Skowron et al. 2019), while Romero-Gomez et al. (2019) find 0.2 kpc amplitude for OB stars and 1.0 kpc for RGB stars at the same Galactocentric radius. If there were no extinction, we would expect that median \(Z_{\rm Gal}\) values grow according to the previous characterisation. As shown in Fig. 5, our sample is highly affected by differences in the extinction above and below the plane. Therefore, no clear conclusions about the warp using only stellar counts can be made at this stage. In the next section, we study the warp through kinematics, which are not so heavily affected by incompleteness.
## 5 Kinematics
This section focuses on the large-scale kinematics of our A-type stars, in particular, on the (\(V_{R}\), \(V_{\phi}\), \(V_{Z}\)) velocity maps (Sect. 5.1), on the \(v_{b}^{\rm corr}\) distribution and the Galactic warp (Sect. 5.2), and on the inhomogeneities of the \(V_{Z}\)-\(V_{\phi}\) plane (Sect. 5.3).
### 6D sample
The Galactocentric XY maps for the pP30-RV sample in the three components of cylindrical Galactocentric velocities \(V_{R}\), \(V_{\phi}\), and \(V_{Z}\) are shown in Fig. 6. We note that due to the very strict constraint imposed by the \(v_{\rm los}\) requirement, the sample barely changes when applying the parallax quality cut (last two rows of Table 1). The main properties of these maps are described below. We only
Figure 4: Heliocentric distance versus Galactic longitude map of the difference between the surface densities for \(b>0\,^{\circ}\) and \(b<0\,^{\circ}\) relative to the uncertainty of the density of the full pP30 sample. Positive values mean larger densities in the northern bin. Estimations of completeness limits for the \(b<0\,^{\circ}\) (\(b>0\,^{\circ}\)) subsample are shown as orange circles (blue squares) connected by a dotted (dashed) line that guides the eye.
Figure 5: Heliocentric distance versus Galactic longitude map of the distribution of median values of \(Z_{\rm Gal}\) for the pP30 sample. Positive values mean that stars are mostly at positive \(Z_{\rm Gal}\). Estimations of completeness limits for the \(b<0\,^{\circ}\) (\(b>0\,^{\circ}\)) subsample are shown as orange circles (blue squares) connected by a dotted (dashed) line that guides the eye.
discuss those features extending far from the edges and over a large number of bins.
In the top panel, median Galactocentric radial velocities are positive (outwards) at \(l\approx 45\)-\(90\,\degr\) and in the second quadrant, whereas the map presents two negative clumps close to the lower borders of the sample. In the middle panel, the azimuthal velocity map has important variations with considerably low values (\(V_{\phi}\la 230\,\mathrm{km\,s^{-1}}\)) towards the AC and values above \(240\,\mathrm{km\,s^{-1}}\) in the whole first quadrant. When looking at the AC direction, we see a rapidly decreasing trend having a \(\sim 15\,\mathrm{km\,s^{-1}}\) drop within a range of \(\sim 2\) kpc in \(R\). This could be related to the asymmetric drift that causes median \(V_{\phi}\) to be lower than circular velocities (chapter 4.8.2a from Binney & Tremaine 2008). However, due to the small radial velocity dispersion of our relatively young targets (around \(15\,\mathrm{km\,s^{-1}}\) according to Aumer & Binney 2009), this effect is almost independent of \(R\) and smaller than \(5\,\mathrm{km\,s^{-1}}\) for our A-type stars (see right panel of Fig. 4 from Robin et al. 2017). Thus, this effect cannot explain the large peculiarities we find. In the bottom panel, the most striking feature is a region with downward motion (\(V_{Z}<0\,\mathrm{km\,s^{-1}}\)) at \(l\approx 60\)-\(75\,\degr\), around (\(X_{\mathrm{Gal}},Y_{\mathrm{Gal}}\)) \(\approx(-7.0,\,2.5)\) kpc. It is surrounded by two elongated regions with upward motion, one at larger \(l\) and the other at lower \(l\) with respect to it. All this indicates the presence of a kinematic perturbation in the aforementioned region. Considering that at \(l\approx 60\)-\(75\,\degr\) nearby extinction is more concentrated below \(b=0\,\degr\) than above (see the difference between both completeness distance estimations \(d_{\mathrm{lim}}\) in Fig. 4, the explanation in Sect. 4.2, Figs. 3-4 from Sale et al. 2014 or also Fig. 2 from Green et al. 2019), this perturbation may be the signature of a compression breathing mode. Galactic warp effects are expected to start too far (beyond \(R\approx 10\)-\(12\) kpc according to Romero-Gomez et al. 2019; Gaia Collaboration et al. 2023b) to be observed with this reduced sample.
We expect that some of these detected features are related to perturbations caused by spiral arms or by interactions with the Milky Way satellites (such as the Sagittarius dwarf galaxy). The short distance covered and the lack of part of the first and the third quadrants, as well as the full fourth quadrant, make it difficult to reach any further conclusion in this regard. Acquisition of a larger number of line-of-sight velocities for our A-type star sample will allow us to clarify the detected kinematic patterns and to fully understand them.
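For reference, converting the 6D observables into the (\(V_{R}\), \(V_{\phi}\), \(V_{Z}\)) components shown in Fig. 6 can be done as sketched below. The sign conventions follow Sect. 2 (left-handed system, \(V_{\phi}\) positive in the sense of rotation, \(Z_{\odot}=0\)), and the function is an illustrative re-derivation rather than the exact pipeline used for the maps.

```python
import numpy as np

K = 4.7404705
R_SUN = 8.249
U_SUN, V_SUN, W_SUN = 9.5, 250.7, 8.56

def cylindrical_velocities(l_deg, b_deg, d_kpc, mu_l, mu_b, v_los):
    """Return (R, V_R, V_phi, V_Z) with velocities in km/s and R in kpc."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    # velocity components along the line of sight and the (l, b) directions
    v_l = K * d_kpc * mu_l
    v_b = K * d_kpc * mu_b
    # heliocentric Cartesian velocities (X towards the GC, Y towards rotation, Z towards the NGP)
    vx = v_los * np.cos(b) * np.cos(l) - v_l * np.sin(l) - v_b * np.sin(b) * np.cos(l)
    vy = v_los * np.cos(b) * np.sin(l) + v_l * np.cos(l) - v_b * np.sin(b) * np.sin(l)
    vz = v_los * np.sin(b) + v_b * np.cos(b)
    # add the solar motion to obtain velocities in the Galactic rest frame
    vx, vy, vz = vx + U_SUN, vy + V_SUN, vz + W_SUN
    # Galactocentric positions (Z_sun = 0)
    x_gal = d_kpc * np.cos(b) * np.cos(l) - R_SUN
    y_gal = d_kpc * np.cos(b) * np.sin(l)
    R = np.hypot(x_gal, y_gal)
    # left-handed cylindrical projection: V_R positive outwards, V_phi along rotation
    V_R = (x_gal * vx + y_gal * vy) / R
    V_phi = (y_gal * vx - x_gal * vy) / R
    return R, V_R, V_phi, vz
```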
### Without the line-of-sight velocities
Our IGAPS selection and the recent _Gaia_ DR3 make these A-type star kinematic maps possible up to \(r=19\) mag for the first time. In this section we improve the statistics of the previous 3D kinematic analysis and we study \(v_{b}^{\mathrm{corr}}\) (without \(v_{\mathrm{los}}\)) for the pP50 sample.
Appendix D explains the effects of using \(v_{b}^{\mathrm{corr}}\) as an estimator for \(V_{Z}\) and the differences between these two quantities. In particular, it is worth noticing that as the sample is limited to \(30\,\degr\)\(\la l\la 215\,\degr\), the main expected discrepancy introduced by this approximation is a compression breathing mode dependent on \(l\) that is present in the \(v_{b}^{\mathrm{corr}}\) distribution, but not in the \(V_{Z}\) one.
The map of median \(v_{b}^{\mathrm{corr}}\) across the Galactic XY plane is shown in Fig. 7. It has two prominent features. The first one is a very negative \(v_{b}^{\mathrm{corr}}\) region at (\(l,\,d\)) \(\approx\) (60-\(75\,\degr\), 6-\(7\) kpc) - or (equivalently) around (\(X_{\mathrm{Gal}},Y_{\mathrm{Gal}}\)) \(\approx\) (\(-6\), 6) kpc -, which coincides with the prolongation of the \(V_{Z}<0\,\mathrm{km\,s^{-1}}\) peculiarity highlighted in the previous section with the 6D sample (bottom panel of Fig. 6). This feature might be artificially enhanced in Fig. 7 by the use of \(v_{b}^{\mathrm{corr}}\) as a proxy of \(V_{Z}\) (see Appendix D). The second feature is a positive \(v_{b}^{\mathrm{corr}}\) band towards \(160\,\degr\)\(\la l\la 210\,\degr\) beyond \(R\approx 12\) kpc (left-most edge of the sample). It demonstrates the existence of the Galactic warp signature as a large-scale structure located approximately around the AC at large Galactocentric radii (\(R\ga 12\) kpc) that is moving upwards. No important differences between \(v_{b}^{\mathrm{corr}}\) and \(V_{Z}\) are expected in this region (see Fig. D.2b).
A Galactic warp causes stellar orbits to have a specific vertical oscillation. In particular, for a symmetric warp formed by tilted flat circular orbits (see Fig. 5.1 from Abedi 2015) its maximum
Figure 6: Velocity maps of the pP30-RV sample in the Galactocentric XY plane. They show medians of radial (\(V_{R}\)), azimuthal (\(V_{\phi}\)), and vertical (\(V_{Z}\)) velocities from top to bottom. The bins have \(0.2\) kpc per side and only those containing ten or more stars are shown. Black lines show the Galactic longitude limits of the sample. Grey lines indicate Galactocentric radii from 5 to 12 kpc and azimuths from \(-15\,\degr\) to \(30\,\degr\) with \(5\,\degr\) steps.
and minimum vertical velocities are reached towards the line-of-nodes (this would not necessarily be the same for different warp geometries, see for instance Romero-Gomez et al., 2019). Furthermore, the amplitude of this oscillating motion is expected to grow with Galactocentric radius \(R\), as does the maximum vertical coordinate reached. Figure 8 confirms these predictions by showing median \(v_{b}^{\rm corr}\) as a function of \(\phi\) for different bins in Galactocentric radius, reaching up to \(v_{b}^{\rm corr}\approx 6\)-\(7\,\)km\(\,\)s\({}^{-1}\) at \(R\approx 14\) kpc.
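A compact way to build the trend of Fig. 8 is a set of binned medians with the standard error quoted in the caption; the sketch below uses illustrative bin edges and a minimum-count threshold of ten stars, as adopted for the maps in this work.

```python
import numpy as np

def warp_trend(R, phi, v_b_corr, R_edges, phi_edges):
    """Median v_b^corr and its error on a (R, phi) grid, as in Fig. 8."""
    med = np.full((len(R_edges) - 1, len(phi_edges) - 1), np.nan)
    err = np.full_like(med, np.nan)
    iR = np.digitize(R, R_edges) - 1
    iphi = np.digitize(phi, phi_edges) - 1
    for i in range(med.shape[0]):
        for j in range(med.shape[1]):
            sel = (iR == i) & (iphi == j)
            if sel.sum() >= 10:   # same minimum-count policy as the maps
                med[i, j] = np.median(v_b_corr[sel])
                err[i, j] = v_b_corr[sel].std() / np.sqrt(sel.sum())
    return med, err
```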
The smallest \(v_{b}^{\rm corr}\) values seen in Fig. 8 beyond \(\phi\approx 50\,^{\circ}\) correspond to the negative \(v_{b}^{\rm corr}\) feature found in Fig. 7 around (\(X_{\rm Gal}\), \(Y_{\rm Gal}\)) \(\approx\) (\(-6\), 6) kpc. To revisit this region and analyse it in depth, two slices in \(l\) were selected from the pP50 sample (as it was done for the Appendix D model): namely \(60\,^{\circ}\leq l\leq 75\,^{\circ}\) and \(172.5\,^{\circ}\leq l\leq 187.5\,^{\circ}\) (the second one as a reference to compare a direction where there are almost no differences between \(v_{b}^{\rm corr}\) and \(V_{Z}\)). Figure 9 shows how \(v_{b}^{\rm corr}\) is distributed in the \(Z_{\rm Gal}\)-\(d\) plane for each of them. Figure 9a presents a noticeable compression breathing mode that is very similar in amplitude to the one modelled in Fig. D.2a (note that these two panels have almost the same colour range). A relevant difference between the model and the data is that the former displays a straight division between positive and negative velocities (thanks to its high degree of symmetry), while the band satisfying \(v_{b}^{\rm corr}\approx 0\,\)km\(\,\)s\({}^{-1}\) widens and seems to bend towards negative \(Z_{\rm Gal}\) at \(d\gtrsim 3.5\) kpc for the pP50 sample. By contrast, Fig. 9b exhibits globally positive medians of \(v_{b}^{\rm corr}\) caused by the Galactic warp. Furthermore, it also shows a compression breathing mode, although much weaker than the previous one. In this case, neither the warp signature nor the observed breathing mode can be attributed to the differences between \(v_{b}^{\rm corr}\) and \(V_{Z}\).
### Velocity space asymmetries
This section further exploits the strengths of A-type stars as Galactic-scale tracers by analysing the velocity space asymmetries. We study the distribution in the \(V_{Z}\)-\(V_{\phi}\) plane, which displays several overdensities whose shape and size (and even their number in some cases) depend on \(R\) as well as on \(Z_{\rm Gal}\). Similar plots are available in Fig. 13 of Gaia Collaboration et al. (2021) for sources without any selection on stellar types (see also McMillan et al., 2022, where these perturbations are analysed based on a sample of intrinsically blue stars).
Since the current study requires the use of narrow Galactocentric radial bins, the pP50 sample is used again to improve the statistics. We use the same approach as in Gaia Collaboration et al. (2021), with the approximations \(V_{Z}\equiv v_{b}^{\rm corr}\) and \(V_{\phi}\equiv-v_{l}^{\rm corr}\) for a sample of sources restricted to \(170\,^{\circ}\leq l\leq 190\,^{\circ}\). The resulting distribution in the \(v_{b}^{\rm corr}\) vs \(-v_{l}^{\rm corr}\) plane for several bins in Galactocentric radius is shown in Fig. 10, where the two rows compare stars having \(Z_{\rm Gal}>0\) (upper row) with those having \(Z_{\rm Gal}<0\) (bottom row).
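The velocity-space maps of Fig. 10 amount to 2D histograms of (\(v_{b}^{\rm corr}\), \(-v_{l}^{\rm corr}\)) split by Galactocentric radial bin and by the sign of \(Z_{\rm Gal}\), normalised to the densest bin of each panel; a minimal sketch (with illustrative argument names and bin numbers) is:

```python
import numpy as np

def velocity_plane_density(v_b_corr, v_l_corr, R, z_gal,
                           R_min, R_max, north=True, bins=(40, 40)):
    """Normalised 2D histogram in the (V_Z ~ v_b^corr, V_phi ~ -v_l^corr) plane."""
    sel = (R >= R_min) & (R < R_max) & ((z_gal > 0) if north else (z_gal < 0))
    H, xe, ye = np.histogram2d(v_b_corr[sel], -v_l_corr[sel], bins=bins)
    if H.max() > 0:
        H = H / H.max()   # normalise to the densest bin of the panel
    return H, xe, ye
```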
The most compact peaks seen in the upper left panel of this figure at \((v_{b}^{\rm corr},-v_{l}^{\rm corr})\approx(-0.3,\,210)\,\)km\(\,\)s\({}^{-1}\) and \((v_{b}^{\rm corr},-v_{l}^{\rm corr})\approx(1.9,\,225)\,\)km\(\,\)s\({}^{-1}\) are associated with overdensities in sky coordinates;
Figure 8: Median velocity in \(b\) direction, corrected for the solar motion, versus the Galactocentric azimuthal coordinate. The pP50 sample sources are binned in \(R\) every 2 kpc. Error bars show standard deviations divided by the square root of the number of stars in each bin.
Figure 7: Galactocentric XY map of the corrected velocities in the \(b\) direction derived from _Gaia_ DR3 proper motions for the pP50 sample. Bins have sides equal to 0.2 kpc and are colour-coded by median \(v_{b}^{\rm corr}\). Only those bins with ten or more stars are shown. Black diagonal lines show the Galactic longitude limits of the sample. Grey lines indicate Galactocentric radii from 4 to 16 kpc and azimuths from \(-20\,^{\circ}\) to \(70\,^{\circ}\) with \(10\,^{\circ}\) steps.
Figure 9: Vertical Galactic coordinate as function of heliocentric distance, colour-coded by the median \(v_{b}^{\rm corr}\) for the pP50 sample. Panel (a) displays the slice between \(60\,^{\circ}\leq l\leq 75\,^{\circ}\), whereas panel (b) uses stars with \(172.5\,^{\circ}\leq l\leq 187.5\,^{\circ}\). In both cases, the size of the bins is 0.2 kpc in \(d\) and 20 pc in \(Z_{\rm Gal}\). Bins with fewer than ten stars are not shown.
actually, they are the signature of open clusters NGC 2099 and NGC 1912, respectively. On the other hand, Fig. 10 displays clear velocity space inhomogeneities, as well as asymmetries between the regions above and below the Galactic plane. By visual inspection of panels beyond \(R=11\) kpc, two main peaks are seen for \(Z_{\rm Gal}>0\), whereas stars at \(Z_{\rm Gal}<0\) are grouped in three major overdensities. All of them move towards smaller rotational velocities and slightly smaller vertical velocities when increasing \(R\). The origin of this substructure is still controversial, although it could be related to the passage of a satellite such as the Sagittarius dwarf galaxy (McMillan et al. 2022).
Since \(v_{\rm los}\) information is very relevant for disentangling the origin of these inhomogeneities, we tried to reproduce these results using \(V_{\phi}\) and \(V_{Z}\) velocities for stars with _Gaia_ DR3 line-of-sight velocities. Nevertheless, they are so few that no kinematic substructure can be recovered. To recover, at least partially, the large statistics of the original A-type star sample and complete this analysis using the 3D velocity space, \(v_{\rm los}\) measurements from WEAVE will be crucial. In contrast with _Gaia_, whose line-of-sight velocity methods (measuring the near infrared calcium triplet region) are more suitable for colder stars, WEAVE will have a survey specifically dedicated to A-type stars (Jin et al. 2023), which will definitely help to fill this gap.
## 6 Discussion
This section is devoted to discussing the major findings of the work, to contrasting the density and kinematics of A-type stars with those of different tracers, and to comparing our results with the literature. It follows the same order as previous sections, starting with density structures and then discussing kinematics.
Overdense structures highlighted in Fig. 3 are likely associated with: (a) the Perseus arm; (b), (c), and (e) different parts of the Local arm; and (f) the largest-\(l\) component of the Sagittarius-Carina arm. This is in agreement with features found by Poggio et al. (2021) in their Fig. 1. Alternatively, our structure (c) can also be related to the Cepheus Spur defined by OB stars in the left panel of Fig. 5 from Pantaleoni Gonzalez et al. (2021). We have checked that the spiral arm models of Xu et al. (2018) and Reid et al. (2019) overlap with the overdensities found in Perseus, Local and Sagittarius-Carina arms. However, the observed distribution favours a scenario with clumpy structures over continuous spiral arms. The low density region (d) arises naturally from the inter-arm region between the Perseus and the Local arms based on the Poggio et al. (2021) approach. In contrast, Cantat-Gaudin et al. (2020) highlight, in the same coordinates, a lack of open clusters that splits their more tightly wound model of the Perseus arm (see the right panel of their Fig. 11). Regardless of which pitch angle is the correct one, there is an 'empty' volume at this position. At least, it is depleted of masers (Reid et al. 2019), upper main sequence stars (Poggio et al. 2021), A-type stars (this work) and open clusters (Cantat-Gaudin et al. 2020 and references in their Sect. 5.1, Hunt & Reffert 2023). Furthermore, it is also recovered by Gaia Collaboration et al. (2023b) both with OB stars and young open clusters (see their Figs. 12-14).
We go on to consider the pP30 vertical distribution mapped in Fig. 5. Despite the difference in coordinate projection, it agrees well with both panels of Fig. 5 from Romero-Gomez et al. (2019), which use a sample of OB stars and another one of RGB stars. A very recognisable feature is the positive region around \(l\approx 60\)-\(75\,^{\circ}\) that is shown in their two samples as well as in our pP30 sample. The largest positive region at \(l\gtrsim 130\,^{\circ}\) that starts after \(d\approx 4\) kpc (i.e. covering a wide field-of-view around the AC beyond \(R\approx 12\) kpc) for our A-type stars also seems to agree with a combination of both OB and RGB samples from Romero-Gomez et al. (2019): it samples a larger area than the former, while it seems to be shifted about \(\sim 10\,^{\circ}\) in \(l\) with respect to the latter in the sense that the feature in the RGB sample is centred at lower Galactic longitudes. The negative clump at \(80\,^{\circ}\lesssim l\lesssim 130\,^{\circ}\) corresponds better with the OB sample of the aforementioned authors than with their RGB sample, although they coincide for those Galactic longitudes nearer to \(l=90\,^{\circ}\). Consequently, intermediate A-type stars seem to show some features detected both with younger and older stars, even if the exact distribution is highly dependent on the age of the tracer.
The kinematic signature of the Galactic warp seen in our Fig. 7, which starts being noticeable at \(R\gtrsim 12\) kpc around the AC and moves at \(v_{b}^{\rm corr}\approx 6\)-\(7\,{\rm km\,s^{-1}}\) for \(R\approx 14\) kpc, is also found by Romero-Gomez et al. (2019) with their two samples. They find that the warp starts at \(R\approx 10\)-\(11\) kpc for their RGB sample and at \(R\approx 12\)-\(13\) kpc for their OB sample which, together with our intermediate values, confirms an age dependency of its starting radius. It is also in agreement with the 5-\(6\,{\rm km\,s^{-1}}\) motion found at \(R\approx 14\) kpc by Poggio et al. (2018) for a sample of upper main sequence stars (mainly O, B and A) selected with 2MASS (Skrutskie et al. 2006) and _Gaia_ DR2 data (Gaia Collaboration et al. 2018). It also agrees with Gaia Collaboration et al. (2023b), which uses \(V_{Z}\) thanks to the new _Gaia_ DR3 line-of-sight velocities and finds a vertical motion of \(V_{Z}(R\approx 14\) kpc) \(\approx 7\,{\rm km\,s^{-1}}\) for a RGB sample (their OB sample has an extremely limited extension and does not reach the required distances).
We analysed a region with downward mean \(v_{\rm b}^{\rm corr}\) found at (\(X_{\rm Gal}\), \(Y_{\rm Gal})\approx(-6,\,6)\) kpc in Fig. 7. Its extension to larger heliocentric distances is also detected with OB stars in Romero-Gomez et al. (2019), whose proper motions in \(b\) are referred to the local standard of rest instead of to the Sun. The map of vertical velocity \(V_{Z}\) of our pP30-RV sample (so, including the full 3D velocities) also presents a similar feature in the same direction beyond \(d\approx 2\) kpc (bottom panel of Fig. 6). This fact proves that this kinematically coherent structure is real, even though it can be enhanced in Fig. 7 due to the inherent discrepancy between \(v_{\rm b}^{\rm corr}\) and \(V_{Z}\) combined with extinction effects. The \(V_{Z}\) maps from Gaia Collaboration et al. (2023b) do not show any peculiarity towards these coordinates for RGB stars, while their OB sample - which is restricted to \(d<2\) kpc - has almost exactly the same behaviour as our A-type stars. This implies that this kinematic perturbation may be associated with young tracers, either because their bluer colours are more affected by extinction, or because it originated in the kinematics of the gas. Nevertheless, the upper-left panel of Fig. 23 from the same authors displays two regions with compression breathing modes using their RGB sample. One of them is nearly symmetric around the AC starting at \(R\approx 11\) kpc (as we detected in Fig. 9b), whereas the other one is just on the controversial location of our interest. Thus, even if the detected signatures may be artificially amplified by some observational limitations, we conclude that the Milky Way disc is being partially compressed in some regions.
The global gradient found in the pP30-RV \(V_{\phi}\) map in Fig. 6 is qualitatively similar to that of the _Gaia_ DR3 OB sample in Gaia Collaboration et al. (2023b) (middle left panel of their Fig. 21), but it has a noticeably different pattern from the \(V_{\phi}\) map shown by their sample of RGB stars (obtained from the data of the middle left panel of their Fig. 16 with an adapted colour axis range). The sudden decrease of \(\sim 15\,{\rm km\,s^{-1}}\) within \(\sim 2\) kpc in the AC direction is also found with the youngest tracers of Gaia Collaboration et al. (2021) (see their Fig. 10), and it is partially seen in the _Gaia_ DR3 OB sample - although our sample reaches
\(\sim 1\) kpc further. It can also be detected using the open clusters from Hunt & Reffert (2023). Interestingly, zero-age O-type stars with masers at this location do not show any peculiar variation in circular rotation velocity (Reid et al. 2019; Sakai et al. 2019).
On the other hand, both inward-velocity features of our pP30-RV \(V_{R}\) map (upper panel in Fig. 6) coincide with the pattern seen by OB stars (Gaia Collaboration et al. 2023b) and masers (Reid et al. 2019). However, our sample also shows a region with outward \(V_{R}\) at \(R>9\) kpc and \(5\degr\lesssim\phi\lesssim 15\degr\) - namely, around \((X_{\rm Gal},Y_{\rm Gal})=(-9.5,1.5)\) kpc - that is not shown by any of the tracers mentioned here or above.
Globally, both the OB and the A-type stars show concordant kinematic perturbations that deviate from axisymmetry, mainly in the vertical and the Galactocentric radial velocities. They may originate from spiral arms, interactions with satellites, or bar resonances, among others. The Galactic latitude limit of the pP30-RV sample greatly reduces the statistics very close to the Sun. By contrast, our sample of A-type stars reaches slightly deeper than that of OB stars. Complementing current samples with the southern Galactic disc from VPHAS+ data (Drew et al. 2013) and with WEAVE line-of-sight velocities will greatly improve these velocity maps. In turn, this will allow us to understand the dynamics of the Milky Way disc through the connection between these two kinds of tracers.
The velocity space substructure found towards the AC using velocities in the Galactic coordinate directions (Fig. 10) agrees well with Fig. 13 from Gaia Collaboration et al. (2021) and with the results at the AC shown in Fig. 7 from McMillan et al. (2022). However, even though the detected inhomogeneities correspond in most cases, our sample (without mixing different stellar populations) shows them as better-defined clumpy regions and with a higher degree of detail. In consequence, our A-type star selection reveals some substructure that had previously gone unseen, such as the presence of a bimodality for \(Z_{\rm Gal}>0\) and a trimodality for \(Z_{\rm Gal}<0\) between \(R=11\) kpc and \(R=15\) kpc.
## 7 Conclusions
We constructed an extended sample of A-type stars to unveil Milky Way disc density structures and kinematic perturbations. It was selected using a colour-colour diagram from IGAPS photometric bands and includes _Gaia_ DR3 astrometry and line-of-sight velocities. Our main findings are as follows.
* We built a catalogue of 3 532 751 A-type star candidates down to \(r=19\) mag from IGAPS photometry. Its contamination - mostly from slightly colder populations - is estimated to be 10%. When dereddening the subsample with relative error in parallax smaller than 30%, our IGAPS selection criterion retains less than 1% of white dwarfs or giants in the sample, and the fraction of stars with derived absolute magnitudes fainter than that of F0-type stars is about 25% (two-thirds of them being compatible with F-type stars). The catalogue has 31 934 sources with line-of-sight velocities from _Gaia_ DR3.
* We detect stellar density enhancements associated with the Perseus arm at Galactocentric coordinates \((X_{\rm Gal},Y_{\rm Gal})\approx(-10.00,\,1.75)\) kpc, with the Cygnus region extending from \((X_{\rm Gal},Y_{\rm Gal})\approx(-8.0,\,1.0)\) kpc to \((-7.5,\,2.0)\) kpc and with the Cepheus Spur at \((X_{\rm Gal},Y_{\rm Gal})\approx(-9.00,\,0.75)\) kpc. We also find a low-density region at \((X_{\rm Gal},Y_{\rm Gal})\approx(-10.0,\,0.5)\) kpc already observed using open clusters, upper main sequence stars, and masers.
* The analysis of the vertical distribution of stellar densities proves that many of the prominent differences between \(b>0\degr\) and \(b<0\degr\) are caused by extinction and might not entirely correspond with real asymmetric distributions. The imprint of the Galactic warp is not clear considering just the density distribution of the sample.
* The cylindrical component of the velocity \(V_{\phi}\) presents large variations with values above 240 km s\({}^{-1}\) in the first quadrant and a rapidly decreasing trend in the anticentre (AC) direction with values ranging from 240 km s\({}^{-1}\) at the Sun position to 225 km s\({}^{-1}\) at \(R\approx 10\) kpc.
* The Galactocentric radial velocity \(V_{R}\) shows patterns that deviate up to 10-20 km s\({}^{-1}\) from circular orbits. The main features are two regions with mean \(V_{R}\) towards the Galactic centre (GC) that are also seen with other tracers and an outward-velocity region around \((X_{\rm Gal},Y_{\rm Gal})=(-9.5,1.5)\) kpc that is not detected in other studies.
* We established a simple model (Appendix D) and determined that the velocity in the Galactic latitude direction corrected for the solar motion (\(v_{b}^{\rm corr}\)) is practically equivalent to the Galactocentric vertical velocity (\(V_{Z}\)) along the GC-AC
Figure 10: Density in the \(V_{Z}\equiv v_{b}^{\rm corr}\) vs \(V_{\phi}\equiv-v_{l}^{\rm corr}\) velocity map for several Galactocentric radial bins for the pP50 sample restricted around the AC (170 \(\degr\leq l\leq 190\degr\)). The two rows distinguish \(Z_{\rm Gal}>0\) (above) from \(Z_{\rm Gal}<0\) (below). All panels share both axes. The colour indicates stellar counts normalised to the densest bin of each panel in the sense that darker means denser. Bin sizes have been adapted to the number of stars of each panel (numbers shown at bottom right corners).
direction as long as \(V_{R}\approx 0\,\mathrm{km\,s^{-1}}\). On the other hand, \(v_{b}^{\mathrm{corr}}\) and \(V_{Z}\) are extremely discrepant towards other Galactic longitudes, where their difference has a strong dependence on \(Z_{\mathrm{Gal}}\) that creates artificial breathing modes.
* Using both \(v_{b}^{\mathrm{corr}}\) on the sample with proper motions and also \(V_{Z}\) on the reduced 6D sample, we detected a kinematically peculiar structure towards the \(60\,^{\circ}\leq l\leq 75\,^{\circ}\) direction that moves downwards. It reaches \(V_{Z}\approx-4\,\mathrm{km\,s^{-1}}\) for our A-type stars. This behaviour is also shown by OB stars, but is more elusive for the older RGB stars. This confirms it as a young kinematic structure.
* We find that the kinematic signature of the Galactic warp begins at \(R\approx 12\) kpc (\(d\approx 4\) kpc) and that it has an amplitude of \(v_{b}^{\mathrm{corr}}\approx 6\)-\(7\,\mathrm{km\,s^{-1}}\) at \(R\approx 14\) kpc. Our results favour the scenario where the starting radius of the warp changes with the age of the tracer in the sense that it begins farther out for younger samples.
* We also show that A-type stars have a very inhomogeneous and asymmetric \(V_{Z}\equiv v_{b}^{\mathrm{corr}}\) vs \(V_{\phi}\equiv-v_{l}^{\mathrm{corr}}\) velocity space, proving that these peculiarities are shared among different tracers. In particular, our sample allows us to trace two (three) major overdensities at \(Z_{\mathrm{Gal}}>0\) (\(Z_{\mathrm{Gal}}<0\)) from \(R=11\) kpc up to \(R=15\) kpc that were not shown in previous studies.
Thanks to _Gaia_ and large ongoing photometric surveys, this work establishes A-type stars as a new population for large-scale Galactic disc structure and kinematics studies. We demonstrate that A-type stars are very powerful tracers that, as an intermediate-age population, can be used in addition to those historically employed in order to complete and expand our knowledge of the properties and the evolution of the Milky Way.
###### Acknowledgements.
We would like to thank the anonymous referee for the comments and suggestions, which greatly improved this work. We would also like to thank Dr. Luis Aguilar and Dr.resa Antola for our scientific discussions and comments, as well as David Altamirano for the exchange of ideas and programming code. This work made use of TOPCAT5 and STILTS6 Virtual Observatory tools. This research made use of Astropy,7 a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This work is part of the PRE2021-100596 grant funded by the Spanish MCIN/AEI/10.13039/501100011033 and by ESF+. It was also (partially) funded by "ERDF A way of making Europe" by the "European Union" through grants RTI2018-095076-B-C21 and PTI2021-1228420B-C21, and the Institute of Cosmos Sciences of the University of Barcelona (ICCUB, Unidad de Excelencia "Maria de Maeztu") through grant CEX2019-000918-M.
Footnote 7: [http://www.star.bris.ac.uk/](http://www.star.bris.ac.uk/)\(\sim\)mbt/topcat/
|
2305.09379 | Optimal Control of McKean-Vlasov equations with controlled stochasticity | In this article, we analyse the existence of an optimal feedback controller
of stochastic optimal control problems governed by SDEs which have the control
in the diffusion part. To this end, we consider the underlying Fokker-Planck
equation to transform the stochastic optimal control problem into a
deterministic problem with open-loop controller. | Luca Di Persio, Peter Kuchling | 2023-05-16T12:05:09Z | http://arxiv.org/abs/2305.09379v2 | # Optimal Control of McKean-Vlasov equations with controlled stochasticity
###### Abstract
In this article, we analyse the existence of an optimal feedback controller for stochastic optimal control problems governed by SDEs which have the control in the diffusion part. To this end, we consider the underlying Fokker-Planck equation to transform the stochastic optimal control problem into a deterministic problem with an open-loop controller.
## 1 Introduction
The present paper aims at considering the optimal control problem
\[\text{minimize }\mathbb{E}\Big{[}\int_{0}^{T}g(X(t))+h(u(t,X(t)))dt\Big{]}+ \mathbb{E}g_{0}(X(T)) \tag{1.1}\]
for some functions \(g,h\) and \(g_{0}\), subject to either
\[\begin{split} dX&=f(X)dt+\sqrt{u}\sigma(X)dW\\ X(0)&=x_{0}\end{split} \tag{1.2}\]
where \((2\sigma)^{\frac{1}{2}}=\|a_{ij}(x)\|_{i,j}^{d}\), or by the McKean-Vlasov SDEs
\[\begin{split} dX&=b\Big{(}\frac{d\mathcal{L}_{X}}{ d\lambda}\Big{)}D(X)dt+\frac{1}{2}\Big{(}\frac{u(X)\beta(\mathcal{L}_{X})}{ \mathcal{L}_{X}}\Big{)}^{\frac{1}{2}}dW\\ X(0)&=X_{0}.\end{split} \tag{1.3}\]
In both situations, the controller \(u\) appears in the stochasticity part and is taken from a set
\[\mathcal{U}=\{u\in L^{\infty}((0,T)\times\mathbb{R}^{d})\colon\gamma_{1}\leq u \leq\gamma_{2},\sum_{i,j=1}^{d}D_{ij}^{2}(a_{ij}(x)u(t,x))\leq\gamma_{3}\}.\]
where \((a_{ij})\) is an elliptic matrix for all \(x\in\mathbb{R}^{d}\) in the case (1.2), while it is represented by \(a_{ij}=\delta_{ij}\) for (1.3). The assumptions on all the other coefficients will be later specified. Functions \(u\in\mathcal{U}\) are also called quasi-concave.
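To make the constraints defining \(\mathcal{U}\) concrete, the following minimal sketch (not part of the original analysis) checks numerically, on a one-dimensional grid with \(a_{11}\equiv 1\) and a time-independent candidate controller, whether the pointwise bounds and a discrete analogue of the second-derivative constraint hold; the thresholds \(\gamma_1,\gamma_2,\gamma_3\), the grid, and the candidate \(u\) are illustrative assumptions.

```python
import numpy as np

# Illustrative check (not from the paper) that a time-independent candidate
# controller satisfies the constraints defining U on a 1D grid with a_11 = 1:
# gamma_1 <= u <= gamma_2 and a discrete analogue of D^2(a*u) <= gamma_3.
# The thresholds, grid and candidate u below are assumptions for this example.

gamma1, gamma2, gamma3 = 0.5, 2.0, 1.0
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
u = 1.0 + 0.4 * np.exp(-x**2)                          # candidate controller

second_diff = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # discrete D^2(a*u) with a = 1
print("bounds ok:   ", bool(np.all((gamma1 <= u) & (u <= gamma2))))
print("curvature ok:", bool(np.all(second_diff <= gamma3)))
```

For this particular candidate both checks return True; a controller violating either bound would simply fall outside the admissible set and would not be considered in the minimization below.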
For the sake of completeness, let us recall that various works have analysed the above equations in the context of mean-field games [17, 22], while other works, see, e.g., [12, 18, 20, 5, 21], rely on the classical approach, i.e. using the Hamilton-Jacobi-Bellman (HJB) equation, see also [14, 15], which poses certain technical issues. Indeed, in these cases the solution of the HJB equation turns out to be only
a viscosity solution, which makes it difficult to analyse the dynamic programming equation to obtain the optimal controller. Furthermore, the dynamic programming approach needs the technical notion of differentiability with respect to measures using the Wasserstein distance.
During recent years a different path has been considered, mainly based on transferring the stochastic optimal control problem with feedback inputs to a deterministic problem, which yields an open-loop controller serving also as the solution to Problem (1.1). Some successful attempts have been made by employing the corresponding Kolmogorov (backward) equation in [2, 6, 9]. Also, in [16], a direct approach using BSDEs via the Kolmogorov approach yields the existence of an optimal controller in the drift term.
A similar but alternative solution is proposed in this work. In particular, we consider the Fokker-Planck equation (or forward Kolmogorov equation) associated to the above SDEs. This approach has been used in [1] in case that the controller appears in the drift of the SDE. It has also been mentioned in [18] as a possible alternative to the Hamilton-Jacobi-Bellman approach.
Since we wish to model a controller in the diffusive term, the approach used in [1] must be slightly modified. Indeed, since one generally wants to avoid imposing overly strong regularity assumptions on the controller, we consider the Fokker-Planck equations corresponding to (1.2) or (1.3) in a variational setting using the triplet \(L^{2}\subset H^{-1}\subset(L^{2})^{*}\). In both cases, the first goal is to show the existence of a solution \(\rho^{u}\) to the corresponding Fokker-Planck equation for any controller \(u\in\mathcal{U}\). This is done in Proposition 2.2 and Theorem 3.2, respectively. Next, we show that the sequence of controllers minimizing the cost functional \(I\) has a limit \(u^{*}\in\mathcal{U}\) corresponding to its solution \(\rho^{u^{*}}\), which is in turn our optimal controller. This is proven in Theorem 2.5 and Theorem 3.3, respectively. A question that will be answered in a future work is the representation of the controller via a maximum principle.
The rest of the article is structured as follows. In Section 2, we show the existence of an optimal controller for the linear problem. Section 3 is devoted to the analogous results in the case of a nonlinear McKean-Vlasov equation. For the sake of completeness, we include, in the Appendix, some classical results due to Lions regarding the existence of solutions to our PDEs.
### Notation
We denote by \(L^{p}(\mathbb{R}^{d})\), \(1\leq p\leq\infty\), the Banach spaces of \(p\)-Lebesgue integrable functions on \(\mathbb{R}^{d}\). Furthermore, \(H^{1}(\mathbb{R}^{d})\) denotes the Sobolev space \(H^{1}(\mathbb{R}^{d})=\{u\in L^{2}(\mathbb{R}^{d})\colon\nabla u\in L^{2}( \mathbb{R}^{d})\}\), and \(H^{-1}(\mathbb{R}^{d})\) its dual. Denote by \(L^{2}_{\rm loc}(\mathbb{R}^{d})\), etc., the corresponding local spaces. Similarly, we write \(W^{1,\infty}(\mathbb{R}^{d})=\{u\in L^{\infty}(\mathbb{R}^{d})\colon\nabla u \in L^{\infty}(\mathbb{R}^{d})\}\).
By \(\mathcal{D}(\mathbb{R}^{d})=C^{\infty}_{c}(\mathbb{R}^{d})\) we denote all infinitely differentiable functions with compact support. The symbols \(C(\mathbb{R}^{d})\) and \(C_{b}(\mathbb{R}^{d})\) denote the space of continuous functions and the space of bounded continuous functions, respectively. We denote by \(C^{1}(\mathbb{R}^{d})\) the space of all continuously differentiable functions. Partial derivatives in the \(i\)-th variable are denoted by \(D_{i}\).
For a real Banach space \(\mathcal{X}\) and \(0<T<\infty\), we denote by \(L^{p}(0,T;\mathcal{X})\) the space of Bochner \(p\)-integrable functions \(u\colon(0,T)\to\mathcal{X}\) and by \(C([0,T];\mathcal{X})\) the space of \(\mathcal{X}\)-valued continuous functions on \([0,T]\).
## 2 Stochastic Optimal Control: The SDEs (1.2)
### Setup
We consider the following optimal control problem:
\[\text{minimize }\mathbb{E}\Big{[}\int_{0}^{T}g(X(t))+h(u(t,X(t)))dt\Big{]}+ \mathbb{E}g_{0}(X(T)) \tag{1.1}\]
subject to SDEs
\[dX =f(X)dt+\sqrt{u}\sigma(X)dW\] \[X(0) =x_{0}\]
and \(u\in\mathcal{U}\), where \(\mathcal{U}\subset L^{\infty}((0,T)\times\mathbb{R}^{d})\) is a closed convex set to be made precise below. Furthermore, we assume that \(f\in L^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\), while \(h\geq 0\) is convex and continuous. We assume also that
\[g\in C_{b}(\mathbb{R}^{d})\cap L^{2}(\mathbb{R}^{d})\text{ and }g_{0}\in C( \mathbb{R}^{d})\cap L^{\infty}(\mathbb{R}^{d}). \tag{2.1}\]
If we set \(\rho:=\rho(t,x)\) to be the density of \(X\), i.e. for \(A\in\mathcal{B}(\mathbb{R}^{d})\),
\[\mathbb{P}(X(t)\in A)=\int_{A}\rho(t,x)dx,\ \forall\ t\geq 0,\]
then \(\rho\) solves (in the sense of distributions) the Fokker-Planck equation (see [7, 8, 24])
\[\begin{cases}\frac{d}{dt}\rho(t,x)-\frac{1}{2}\sum_{i,j=1}^{d}D_{ ij}^{2}\left[u(t,x)a_{ij}(x)\rho(t,x)\right]+\text{div}(f(x)\rho(t,x))=0\\ \rho(0,x)=\rho_{0}(x)\end{cases} \tag{2.2}\]
where \(x\in\mathbb{R}^{d}\), \(a_{ij}=\sum_{k=1}^{d}\sigma_{ik}\sigma_{kj}\). Assume that \(a_{ij}\in C(\mathbb{R}^{d})\) and there exists \(\gamma_{0}>0\) such that
\[\sum_{i,j=1}^{d}a_{ij}\xi_{i}\xi_{j}\geq\gamma_{0}|\xi|^{2}\ \forall\xi\in \mathbb{R}^{d}. \tag{2.3}\]
Denote the solution to (2.2) by \(\rho^{u}\). Then, we may rewrite the optimal control problem as the following minimization problem:
\[\text{minimize }I(u)=\left[\int_{0}^{T}\int_{\mathbb{R}^{d}}[g(x)+h(u(s,x))] \rho^{u}(s,x)dsdx+\int_{\mathbb{R}^{d}}g_{0}(x)\rho^{u}(T,x)dx\right] \tag{2.4}\]
subject to (2.2) and \(u\in\mathcal{U}\), where
\[\mathcal{U}=\{u\in L^{\infty}((0,T)\times\mathbb{R}^{d})\colon \gamma_{1}\leq u\leq\gamma_{2},\sum_{i,j=1}^{d}D_{ij}^{2}(a_{ij}u(t,x))\leq \gamma_{3}\}.\]
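The reformulation above also suggests a direct numerical treatment: discretize the Fokker-Planck equation (2.2), propagate \(\rho^{u}\), and evaluate \(I(u)\) by quadrature. The following one-dimensional sketch uses an explicit finite-difference scheme with a time-independent controller; the coefficients \(a\), \(f\), the costs \(g,h,g_{0}\), and the controller \(u\) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal 1D sketch (illustrative, not the paper's scheme): propagate the
# Fokker-Planck equation (2.2) with an explicit finite-difference scheme on a
# truncated grid and evaluate the cost I(u) of (2.4) by the rectangle rule.
# The coefficients a, f, the costs g, h, g0 and the controller u are assumptions.

L, nx, T, nt = 10.0, 401, 1.0, 20000
x = np.linspace(-L, L, nx)
dx, dt = x[1] - x[0], T / nt

a = np.ones(nx)                                # a(x) = 1, so (2.3) holds with gamma_0 = 1
f = 0.5 * np.tanh(x)                           # bounded drift
u = 1.0 + 0.5 * np.exp(-x**2)                  # time-independent admissible controller

g  = np.exp(-x**2)                             # running state cost
h  = lambda v: (v - 1.0) ** 2                  # convex, nonnegative control cost
g0 = np.exp(-x**2)                             # terminal cost

rho = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # initial density rho_0

def d2(v):                                     # centered second difference (zero boundary)
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
    return out

def d1(v):                                     # centered first difference (zero boundary)
    out = np.zeros_like(v)
    out[1:-1] = (v[2:] - v[:-2]) / (2 * dx)
    return out

running = 0.0
for _ in range(nt):
    rho = rho + dt * (0.5 * d2(u * a * rho) - d1(f * rho))
    running += dt * np.sum((g + h(u)) * rho) * dx

I_u = running + np.sum(g0 * rho) * dx
print(f"I(u) ~ {I_u:.4f},  total mass ~ {np.sum(rho) * dx:.4f}")
```

Wrapping such an evaluation of \(u\mapsto I(u)\) in a standard optimizer would give a crude numerical counterpart of the minimization problem, but the analysis below is concerned with the existence of a minimizer, not with its computation.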
**Remark 2.1**.: _In Problem (1.1), \(u=u(t,X(t))\) is a Markov stochastic feedback control, while in (2.4) it is an open distributed controller._
_The difference from the optimal control problem of [1] is that, while the author there applied the control function to the deterministic part, our control problem places the control function in the stochastic part. In this way, the control appears in the second-order derivative instead of the divergence term._
### Well-posedness of the state system (2.2)
In view of Theorem A.1, we consider (2.2) in the variational setting w.r.t. the following spaces:
\[V =L^{2}(\mathbb{R}^{d})\] \[H =H^{-1}(\mathbb{R}^{d})\]
Thus, we have \(V^{*}=(L^{2})^{*}=(I+A_{0})V\), where the dual is taken with respect to the \(H^{-1}\)-duality functional
\[{}_{V^{*}}\langle u,v\rangle_{V}=\int_{\mathbb{R}^{d}}[(I+A_{0})^{-1}u]vdx=((I+ A_{0})^{-1}u,v)_{L^{2}(\mathbb{R}^{d})}=(u,v)_{H^{-1}(\mathbb{R}^{d})}\]
for \(u\in V^{*},v\in V\), where \(A_{0}\) is the operator
\[A_{0}\rho=-\sum_{i,j=1}^{d}D_{i}(a_{ij}D_{j}\rho)\ \forall\rho\in H\]
considered in the sense of distributions on \(\mathbb{R}^{d}\). We note that by (2.3), it follows that \(A_{0}\) is an isomorphism of \(H^{1}\) onto \(H^{-1}\). We have
\[V=L^{2}(\mathbb{R}^{d})\subset H=H^{-1}(\mathbb{R}^{d})\subset V^{*}=(L^{2}( \mathbb{R}^{d}))^{*},\]
with dense and continuous embeddings. The operator \(A(t)\colon V\to V^{*}\) is defined for \(\rho\in L^{2}(\mathbb{R}^{d})\) as
\[A(t)\rho=-\frac{1}{2}\sum_{i,j=1}^{d}D_{ij}^{2}(u(t,x)a_{ij}(x)\rho)+\operatorname {div}(f(x)\rho)\text{ in }\mathcal{D}^{\prime}(\mathbb{R}^{d}).\]
Therefore, the bilinear form in question is defined as follows:
\[a(t;\rho,\psi) ={}_{V^{*}}\langle A\rho,\psi\rangle_{V}\] \[=-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}(I+A_{0})^{-1} D_{ij}^{2}(ua_{ij}\rho)\psi dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname {div}f\rho\cdot\psi dx,\ \rho,\psi\in V.\]
The goal of this section is to prove the following existence result:
**Proposition 2.2**.: _Assume that there exist constants \(0<\gamma_{1}<\gamma_{2}\) and \(\gamma_{0},\gamma_{3}>0\) such that \(\gamma_{1}\leq u\leq\gamma_{2}\) and (2.3). Furthermore, assume that \(a_{ij}\in W^{1,\infty}(\mathbb{R}^{d})\), that is,_
\[\overline{a}:=\max\{\max_{i,j}\|D_{i}a_{ij}\|_{\infty},\|a\|_{\infty}\}<\infty.\]
_Let \(f\in L^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) and \(\rho_{0}\in H=H^{-1}\). Then there exists a unique \(H^{-1}\)-weak solution \(\rho\) of (2.2), that is,_
\[\rho\in L^{2}(0,T;V)\cap C([0,T];H)\text{ and }\frac{d}{dt}\rho\in L^{2}(0,T;V^{*}).\]
Taking into account Theorem A.1, we must check the following two conditions:
* \(|a(t,\rho,\varphi)|\leq M\|\rho\|_{V}\|\varphi\|_{V}\)
* \(a(t,\rho,\rho)\geq\alpha\|\rho\|_{V}^{2}-C\|\rho\|_{H}^{2}\)
for \(\rho,\varphi\in\mathcal{D}(\mathbb{R}^{d})\). Since \(\mathcal{D}(\mathbb{R}^{d})\subset V\) is dense, there exists a unique continuous extension of \(a(t;\cdot,\cdot)\colon\mathcal{D}(\mathbb{R}^{d})\times\mathcal{D}(\mathbb{ R}^{d})\to\mathbb{R}\) which is given by our operator. The next subsections are devoted to each of the estimates.
#### 2.2.1 Upper bound
This section is devoted to Condition 2 of Theorem A.1. By density, it suffices to show the upper bound for \(\rho,\varphi\in\mathcal{D}(\mathbb{R}^{d})\). To this end, we need to rewrite the differential operator in a more suitable form.
**Lemma 2.3**.: _For all test functions \(\psi\in\mathcal{D}(\mathbb{R}^{d})\), we have_
\[\sum_{i,j=1}^{d}a_{ij}D_{ij}^{2}(I+A_{0})^{-1}\psi=-\psi+(I+A_{0})^{-1}\psi- \sum_{i,j=1}^{d}(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\psi.\]
Proof.: We have by the product rule (denote \(D_{ij}^{2}=D_{i}D_{j}=D_{j}D_{i}\))
\[\sum_{i,j=1}^{d}a_{ij}D_{ij}^{2}(I+A_{0})^{-1}\psi =\sum_{i,j=1}^{d}a_{ij}D_{ij}^{2}(I+A_{0})^{-1}\psi+\sum_{i,j=1}^{ d}(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\psi\] \[\qquad-\sum_{i,j=1}^{d}(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\psi\] \[=\sum_{i,j=1}^{d}D_{i}[a_{ij}D_{j}(I+A_{0})^{-1}\psi]-\sum_{i,j=1 }^{d}(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\psi\] \[=-A_{0}(I+A_{0})^{-1}\psi-\sum_{i,j=1}^{d}(D_{i}a_{ij})D_{j}(I+A_ {0})^{-1}\psi\] \[=-(I+A_{0})^{-1}\psi-A_{0}(I+A_{0})^{-1}\psi+(I+A_{0})^{-1}\psi\] \[\qquad-\sum_{i,j=1}^{d}(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\psi\] \[=-\psi+(I+A_{0})^{-1}\psi-\sum_{i,j=1}^{d}(D_{i}a_{ij})D_{j}(I+A_ {0})^{-1}\psi\]
Note that with this approach, we circumvent the regularity assumption on \(u\), as can be easily seen by the lemma.
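As a sanity check of the identity in Lemma 2.3, one can verify it numerically in the one-dimensional case with a constant coefficient \(a(x)\equiv a_{0}\) (so that the term involving \(D_{i}a_{ij}\) vanishes), applying \((I+A_{0})^{-1}\) and the second derivative as Fourier multipliers on a periodic torus. This is purely illustrative; the periodic setting, the value of \(a_{0}\), and the test function are assumptions made only for the check.

```python
import numpy as np

# Hypothetical 1D sanity check of the identity in Lemma 2.3 on a periodic torus
# with a constant coefficient a(x) = a0 (so the D_i a_ij term vanishes): both
# (I + A_0)^{-1} and the second derivative act as Fourier multipliers.
# The domain size, a0 and the test function psi are assumptions for the check.

n, L, a0 = 512, 20.0, 2.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)                 # Fourier frequencies

psi = np.exp(-x**2) * np.cos(3 * x)                        # smooth test function

psi_hat = np.fft.fft(psi)
res_hat = psi_hat / (1 + a0 * k**2)                        # symbol of (I + A_0)^{-1}
lhs = np.real(np.fft.ifft(-a0 * k**2 * res_hat))           # a0 * D^2 (I + A_0)^{-1} psi
rhs = -psi + np.real(np.fft.ifft(res_hat))                 # -psi + (I + A_0)^{-1} psi

print("max |lhs - rhs| =", np.max(np.abs(lhs - rhs)))      # ~ machine precision
```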
Let now \(\rho,\psi\in\mathcal{D}(\mathbb{R}^{d})\subset V\). Using Lemma 2.3, we may proceed in the estimation required for Theorem A.1:
\[a(t;\rho,\psi) =\nu_{\ast}\langle A\rho,\psi\rangle_{V}\] \[=-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}(I+A_{0})^{-1} D_{ij}^{2}(ua_{ij}\rho)\psi dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{ div}(f\rho)\psi dx\] \[=-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}D_{ij}^{2}(ua_ {ij}\rho)(I+A_{0})^{-1}\psi dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname {div}(f\rho)\psi dx\] \[=-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}u\rho a_{ij}D_{ ij}^{2}(I+A_{0})^{-1}\psi dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{ div}(f\rho)\psi dx\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}u\rho\psi dx-\frac{1}{2}\int_{ \mathbb{R}^{d}}u\rho(I+A_{0})^{-1}\psi dx\] \[\qquad+\frac{1}{2}\int_{\mathbb{R}^{d}}u\rho\sum_{i,j=1}^{d}(D_{ i}a_{ij})D_{j}(I+A_{0})^{-1}\psi dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1} \operatorname{div}(f\rho)\psi dx\]
We may estimate the four terms separately (Recall that \(V=L^{2}(\mathbb{R}^{d})\)). Note that
* Differential operators \(D_{i}\colon H^{1}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})\) are continuous with \(\|D_{i}\|\leq 1\).
* The mapping \((I+A_{0})^{-1}\colon(L^{2})^{*}\to L^{2}\) is continuous. Denote its operator norm by \(\|(I+A_{0})^{-1}\|=\alpha\).
* Also, \((I+A_{0})^{-1}\colon H^{-1}\to H^{1}\), i.e. for elements from \(H^{-1}\), we have higher regularity.
Using these facts, we obtain
1. \[\Big{|}\int_{\mathbb{R}^{d}}u\rho\psi dx\Big{|}\leq\gamma_{2}\int_{ \mathbb{R}^{d}}|\rho\psi|dx\leq\gamma_{2}\|\rho\|_{V}\|\psi\|_{V}\]
2. \[\Big{|}\int_{\mathbb{R}^{d}}u\rho(I+A_{0})^{-1}\psi dx\Big{|}\leq \gamma_{2}\alpha\int_{\mathbb{R}^{d}}|\rho\psi|dx\leq\gamma_{2}\alpha\|\rho \|_{V}\|\psi\|_{V}\]
3. \[\Big{|}\int_{\mathbb{R}^{d}}u\rho\sum_{i,j=1}^{d}(D_{i}a_{ij})D_{ j}(I+A_{0})^{-1}\psi dx\Big{|} \leq d\overline{a}\gamma_{2}\int_{\mathbb{R}^{d}}|\rho\sum_{j=1}^ {d}D_{j}(I+A_{0})^{-1}\psi|dx\] \[\leq d^{2}\overline{a}\gamma_{2}\alpha\|\rho\|_{V}\|\psi\|_{V}\]
4. \[\Big{|}\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{div}(f \rho)\psi dx\Big{|} \leq\int_{\mathbb{R}^{d}}|f\rho\nabla(I+A_{0})^{-1}\psi|dx\] \[\leq\|f\|_{\infty}\int_{\mathbb{R}^{d}}|\rho\sum_{j=1}^{d}D_{j}(I +A_{0})^{-1}\psi|dx\] \[\leq\|f\|_{\infty}\alpha d\|\rho\|_{V}\|\psi\|_{V}\]
In total, we have
\[|a(t,\rho,\psi)|\leq\Big{(}\frac{\gamma_{2}}{2}+\frac{\gamma_{2} \alpha}{2}+\frac{d^{2}\overline{a}\gamma_{2}\alpha}{2}+\|f\|_{\infty}d\alpha \Big{)}\|\rho\|_{V}\|\psi\|_{V}\]
which is the desired upper bound. Using that \(\mathcal{D}(\mathbb{R}^{d})\subset V\) is dense, we obtain the upper bound for all \(\rho,\varphi\in V\).
#### 2.2.2 Lower bound
Let \(\rho\in\mathcal{D}(\mathbb{R}^{d})\). Again using Lemma 2.3, we obtain the following:
\[a(t;\rho,\rho) =-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}D_{ ij}^{2}(ua_{ij}\rho)\rho dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{ div}(f\rho)\rho dx\] \[=-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}ua_{ij}\rho D_{ ij}^{2}(I+A_{0})^{-1}\rho dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{ div}(f\rho)\rho dx\] \[=\frac{1}{2}\int_{\mathbb{R}^{d}}u\rho^{2}dx-\frac{1}{2}\int_{ \mathbb{R}^{d}}u\rho(I+A_{0})^{-1}\rho dx\] \[\qquad+\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}u\rho(D_{ i}a_{ij})D_{j}(I+A_{0})^{-1}\rho dx+\int_{\mathbb{R}^{d}}(I+A_{0})^{-1} \operatorname{div}(f\rho)\rho dx\] \[\geq\frac{\gamma_{1}}{2}\|\rho\|_{V}^{2}-\frac{\gamma_{2}}{2}\| \rho\|_{H}^{2}\] \[\qquad+\underbrace{\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^ {d}}u\rho(D_{i}a_{ij})D_{j}(I+A_{0})^{-1}\rho dx+\int_{\mathbb{R}^{d}}(I+A_{0} )^{-1}\operatorname{div}(f\rho)\rho dx}_{(*)}\]
For the remaining term \((*)\), we proceed in a similar fashion as [1]. Consider the two terms separately:
1. \[\Big{|}\int_{\mathbb{R}^{d}}u\rho(D_{i}a_{ij})D_{j}(I+A_{0})^{-1} \rho dx\Big{|} \leq\int_{\mathbb{R}^{d}}|u\rho(D_{i}a_{ij})D_{j}(I+A_{0})^{-1} \rho|dx\] \[\leq\gamma_{2}\overline{a}\int_{\mathbb{R}^{d}}|\rho D_{j}(I+A_{0 })^{-1}\rho|dx\] \[\leq\|\rho\|_{L^{2}}\|(I+A_{0})^{-1}\rho\|_{L^{2}}\] \[\leq\alpha\|\rho\|_{V}\|\rho\|_{H}\]
2. \[\Big{|}\int_{\mathbb{R}^{d}}(I+A_{0})^{-1}\operatorname{div}(f \rho)\rho dx\Big{|} \leq\|(I+A_{0})^{-1}\operatorname{div}(f\rho)\|_{H^{1}}\|\rho\|_{H ^{-1}}\] \[\leq\alpha\|f\rho\|_{L^{2}}\|\rho\|_{H^{-1}}\leq\alpha\|f\|_{ \infty}\|\rho\|_{L^{2}}\|\rho\|_{H^{-1}}\]
Hence, in total, we have
\[|(*)|\leq\underbrace{\Big{(}\frac{d^{2}\alpha}{2}+\alpha\|f\|_{\infty}\Big{)}}_{=:C_{2}}\|\rho\|_{V}\|\rho\|_{H}\leq\frac{C_{2}\beta}{2}\|\rho\|_{V}^{2}+\frac{C_{2}}{2\beta}\|\rho\|_{H}^{2}\]
where the last inequality is an application of Young's inequality, which is valid for any \(\beta>0\). Turning this inequality around, we obtain
\[(*)\geq-\frac{C_{2}\beta}{2}\|\rho\|_{V}^{2}-\frac{C_{2}}{2\beta}\|\rho\|_{H}^{2}\]
Putting everything together, we are left with
\[a(t;\rho,\rho)\geq\frac{\gamma_{1}-C_{2}\beta}{2}\|\rho\|_{V}^{2}-\Big{(}\frac{\gamma_{2}}{2}+\frac{C_{2}}{2\beta}\Big{)}\|\rho\|_{H}^{2}.\]
Choosing \(\beta\) small enough such that \(\gamma_{1}-C_{2}\beta>0\), we obtain the desired estimate for Theorem A.1 for \(\rho\in\mathcal{D}(\mathbb{R}^{d})\). Using the density of \(\mathcal{D}(\mathbb{R}^{d})\subset V\), the bound also holds for all \(\rho\in V\).
**Remark 2.4**.: _The existence result of Proposition 2.2 can be shown without the additional requirement on the second derivatives of \(u\)._
### Existence of an optimal controller
The goal is to show the existence of a minimizer for Problem (2.4), i.e.,
\[\text{minimize }I(u)=\left[\int_{0}^{T}\int_{\mathbb{R}^{d}}(g(x)+h(u(s,x)) \rho^{u}(s,x)dsdx+\int_{\mathbb{R}^{d}}g_{0}(x)\rho^{u}(T,x)dx\right]\]
subject to
\[u\in\mathcal{U}=\{u\in L^{\infty}((0,T)\times\mathbb{R}^{d})\colon 0<\gamma_{1} \leq u(t,x)\leq\gamma_{2},\sum_{i,j=1}^{d}D^{2}_{ij}(a_{ij}u(t,x))\leq\gamma_{ 3}\}.\]
We shall assume that (2.1),(2.3) hold and also that \(f\in L^{\infty}(\mathbb{R}^{d})\), \(\operatorname{div}f\in L^{\infty}(\mathbb{R}^{d})\).
More precisely, in this section, we wish to prove the following statement:
**Theorem 2.5**.: _For any \(\rho_{0}\in L^{2}(\mathbb{R}^{d})\), there exists a solution \((u,\rho^{u})\) to the optimal control problem (2.4)._
Due to the assumptions on \(g,g_{0}\) and \(h\), there exists \(m^{*}\in\mathbb{R}\) s.t.
\[\inf_{u\in\mathcal{U}}I(u)=m^{*}.\]
Furthermore, we find a sequence \(\{u_{k}\}_{k\in\mathbb{N}}\subset\mathcal{U}\) such that
\[m^{*}\leq I(u_{k})\leq m^{*}+\frac{1}{k}\text{ for all }k\in\mathbb{N}. \tag{2.5}\]
In order to pass to the limit in (2.5), we shall first prove some preliminary results given in lemmas which follow.
**Lemma 2.6**.: _Assume that \(\rho_{0}\in L^{2}(\mathbb{R}^{d})\) and \(u\in\mathcal{U}\). Then the solution \(\rho=\rho^{u}\) to (2.2) satisfies_
\[\rho\in L^{2}(0,T;H^{1}(\mathbb{R}^{d})) \tag{2.6}\]
\[\|\rho\|^{2}_{L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))}+\int_{0}^{t}\int_{ \mathbb{R}^{d}}|\nabla_{x}\rho(s,x)|^{2}dsdx\leq C\|\rho_{0}\|^{2}_{L^{2}( \mathbb{R}^{d})}\ \forall t\in(0,T), \tag{2.7}\]
_where \(C\) is independent of \(u\)._
Proof.: We approximate \(u\) in \(L^{\infty}((0,T)\times\mathbb{R}^{d})\) by a sequence \(\{u_{\varepsilon}\}_{\varepsilon}\subset C^{2}([0,T]\times\mathbb{R}^{d})\) with
\[\sum_{i,j}D^{2}_{ij}(a_{ij}u_{\varepsilon}(t,x))\leq\gamma_{3}\ \forall(t,x)\in(0,T)\times\mathbb{R}^{d}.\]
For \(u_{\varepsilon}\in C^{2}\), we can apply as above Thm. A.1 on the spaces
\[V=H^{1}(\mathbb{R}^{d}),\ H=L^{2}(\mathbb{R}^{d}),\ V^{*}=H^{-1}(\mathbb{R}^{d}).\]
Then it follows that Equation (2.2) with \(u_{\varepsilon}\) instead of \(u\) has a unique solution
\[\rho_{\varepsilon}\in L^{2}(0,T;H^{1}(\mathbb{R}^{d}))\cap C([0,T],L^{2}( \mathbb{R}^{d}))\text{ with }\frac{d}{dt}\rho_{\varepsilon}\in L^{2}(0,T;H^{-1}(\mathbb{R}^{d}))\]
Moreover, as easily seen, for \(\varepsilon\to 0\), we have
\[\rho_{\varepsilon}\to\rho\text{ strongly in }L^{2}(0,T;L^{2}(\mathbb{R}^{d})) \cap C([0,T];H^{-1}(\mathbb{R}^{d})).\]
Taking into account that
\[\frac{1}{2}\frac{d}{dt}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}={}_{H^{1}}(\rho_{ \varepsilon}(t),\frac{d\rho_{\varepsilon}}{dt}(t))_{H^{-1}}\text{ a.e. }t\in(0,T)\]
we get by (2.2)
\[\frac{1}{2}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}+\int_{0}^{t} \int_{\mathbb{R}^{d}}\rho_{\varepsilon}(s,x)\Big{(}-\frac{1}{2}\sum_{i,j=1}^{d }D_{ij}^{2}(u(s,x)\rho_{\varepsilon}(s,x)a_{ij}(x))\\ +\operatorname{div}(f(x)\rho_{\varepsilon}(s,x))\Big{)}dsdx=\frac {1}{2}\|\rho_{0}\|_{L^{2}(\mathbb{R}^{d})}^{2}.\]
On the other hand,
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}D_{ij}^{2}(u\rho_{\varepsilon}a_{ij}) dx=-\int_{\mathbb{R}^{d}}ua_{ij}D_{i}\rho_{\varepsilon}D_{j}\rho_{\varepsilon} dx-\int_{\mathbb{R}^{d}}\rho_{\varepsilon}D_{i}\rho_{\varepsilon}D_{j}(a_{ij}u)dx\]
and by (2.3),
\[\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}D_{ij}^{2 }(u\rho_{\varepsilon}a_{ij})dx =-\int_{\mathbb{R}^{d}}u\sum_{i,j=1}^{d}a_{ij}(D_{i}\rho_{ \varepsilon})(D_{j}\rho_{\varepsilon})dx-\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d} }\rho_{\varepsilon}D_{i}\rho_{\varepsilon}D_{j}(ua_{ij})dx\] \[\leq-\int_{\mathbb{R}^{d}}u\gamma_{0}|\nabla\rho_{\varepsilon}|^ {2}dx-\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}D_{i}(\rho_{\varepsilon }^{2})D_{j}(ua_{ij})dx\] \[\leq-\gamma_{1}\gamma_{0}\int_{\mathbb{R}^{d}}|\nabla\rho_{ \varepsilon}|^{2}dx+\frac{1}{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}\rho_{ \varepsilon}^{2}D_{ij}^{2}(ua_{ij})dx\] \[\leq-\gamma_{1}\gamma_{0}\int_{\mathbb{R}^{d}}|\nabla\rho_{ \varepsilon}|^{2}dx+\frac{\gamma_{3}}{2}\int_{\mathbb{R}^{d}}\rho_{ \varepsilon}^{2}dx.\]
We have also for any \(\delta>0\),
\[-\int_{\mathbb{R}^{d}}\rho_{\varepsilon}\operatorname{div}(f\rho_ {\varepsilon})dx =\int_{\mathbb{R}^{d}}\nabla\rho_{\varepsilon}f\rho_{\varepsilon}dx\] \[\leq\frac{\delta}{2}\int_{\mathbb{R}^{d}}(\nabla\rho_{\varepsilon })^{2}dx+\frac{1}{2\delta}\int_{\mathbb{R}^{d}}f^{2}\rho_{\varepsilon}^{2}dx\] \[\leq\frac{\delta}{2}\int_{\mathbb{R}^{d}}(\nabla\rho_{\varepsilon })^{2}dx+\frac{\|f\|_{\infty}^{2}}{2\delta}\int_{\mathbb{R}^{d}}\rho_{ \varepsilon}^{2}dx\,,\]
therefore, above calculations together with Young's inequality, give us
\[\frac{1}{2}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2} =\frac{1}{2}\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(s,x)\sum_{i,j=1}^{d}D_{ij}^{2}(u(s,x)\rho_{\varepsilon}(s,x)a_{ij}(x))dsdx\] \[\qquad-\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}(s,x) \operatorname{div}(f(x)\rho_{\varepsilon}(s,x))dxds+\frac{1}{2}\|\rho_{0}\|_{ L^{2}}^{2}\] \[\leq-\gamma_{0}\gamma_{1}\int_{0}^{t}\int_{\mathbb{R}^{d}}( \nabla\rho_{\varepsilon})^{2}dxds+\frac{\gamma_{3}}{2}\int_{0}^{t}\int_{ \mathbb{R}^{d}}\rho_{\varepsilon}^{2}dxds\] \[\qquad+\frac{\delta}{2}\int_{0}^{t}\int_{\mathbb{R}^{d}}( \nabla\rho_{\varepsilon})^{2}dxds+\frac{\|f\|_{\infty}}{2\delta}\int_{0}^{t} \int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{2}dxds+\frac{1}{2}\|\rho_{0}\|_{L^{2 }}^{2}\] \[=\Big{(}\frac{\delta}{2}-\gamma_{0}\gamma_{1}\Big{)}\int_{0}^{t} \int_{\mathbb{R}^{d}}(\nabla\rho_{\varepsilon})^{2}dxds+\Big{(}\frac{\gamma_ {3}}{2}+\frac{\|f\|_{\infty}}{2\delta}\Big{)}\int_{0}^{t}\|\rho_{\varepsilon}( s)\|_{L^{2}}^{2}ds.\]
Let us choose \(\delta>0\) s.t. \(\delta-2\gamma_{0}\gamma_{1}<0\), then rearranging:
\[\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}+(2\gamma_{0}\gamma_{1}-\delta)\int_{0}^{t }\int_{\mathbb{R}^{d}}(\nabla\rho_{\varepsilon})^{2}dxds\leq\|\rho_{0}\|_{L^{ 2}}^{2}+\left(\gamma_{3}+\frac{\|f\|_{\infty}^{2}}{\delta}\right)\int_{0}^{t} \|\rho_{\varepsilon}(s)\|_{L^{2}}^{2}ds\]
and exploiting the Gronwall's lemma, we have:
\[\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}+(2\gamma_{0}\gamma_{1}-\delta)\int_{0}^ {t}\int_{\mathbb{R}^{d}}|\nabla\rho_{\varepsilon}(s,x)|^{2}dsdx\leq C\|\rho_{ 0}\|_{L^{2}}^{2}\ \forall t\in(0,T)\]
where \(C\) is independent of both \(\varepsilon\) and \(u\). Letting \(\varepsilon\to 0\), we get (2.6). Using the weak lower semicontinuity of the \(L^{2}\)- and \(H^{1}\)-norms, we also obtain (2.7) as claimed.
**Lemma 2.7**.: _There exists a subsequence \(\{u_{k_{r}}\}_{r\in\mathbb{N}}\) and \(u^{*}\in\mathcal{U}\) such that for any \(R>0\):_
\[u_{k_{r}}\xrightarrow{r\to\infty}u^{*}\text{ weakly in }L^{2}((0,T)\times B _{R}(0),\mathbb{R})\text{ and weak* in }L^{\infty}((0,T)\times\mathbb{R}^{d}). \tag{2.8}\]
Proof.: Fix \(R>0\).
1. \(\{u_{k}\}_{k\in\mathbb{N}}\) is weak*-precompact on \(L^{2}(B_{R}(0),\mathbb{R})\): By Alaoglu's theorem, any bounded set in a normed space is weak*-precompact. Since \(\{u_{k}\}_{k\in\mathbb{N}}\subset\mathcal{U}\), we have \[\sup_{k\in\mathbb{N}}\|u_{k}\|_{L^{2}(B_{R}(0))}\leq\sqrt{\lambda(B_{R}(0))} \gamma_{2}\] and hence, the sequence is weak*-precompact.
2. \(\{u_{k}\}_{k\in\mathbb{N}}\) is weakly precompact on \(L^{2}(B_{R}(0),\mathbb{R})\): Since the weak and weak*-topology are equivalent on \(L^{2}\), this statement follows directly from the first step.
3. \(\{u_{k}\}_{k\in\mathbb{N}}\) has a weakly convergent subsequence: As \(\{u_{k}\}_{k\in\mathbb{N}}\) is weakly compact, this statement follows by Eberlein-Shmulyan. Since \(\mathcal{U}\) is closed, \(u^{*}\in\mathcal{U}\) is clear.
4. A posteriori, the limit \(u^{*}\) of the sequence will again have \(\Delta_{x}u^{*}\leq\gamma_{3}\).
Thus, the statement is shown.
**Lemma 2.8**.: _Assume that \(\rho_{0}\in L^{2}(\mathbb{R}^{d})\). Then there exists a subsequence \(\{\tilde{u}_{s}\}_{s\in\mathbb{N}}\subset\{u_{k}\}_{k\in\mathbb{N}}\) such that_
\[\tilde{u}_{s}\rho^{\tilde{u}_{s}}\to u^{*}\rho^{u^{*}}\text{ strongly in }L^{2}(0,T;H^{-1}_{\rm loc}(\mathbb{R}^{d}))\text{ as }s\to\infty.\]
_Furthermore, we have_
\[\rho^{\tilde{u}_{s}}\to\rho^{u^{*}}\text{ weakly in }L^{2}(0,T;L^{2}_{\rm loc}( \mathbb{R}^{d})).\]
Proof.: By Lemma 2.6, we have
\[\|\rho^{u_{k_{r}}}(t)\|_{L^{2}(\mathbb{R}^{d})}^{2}+\|\rho^{u_{k_{r}}}\|_{L^{ 2}(0,T;H^{1}(\mathbb{R}^{d}))}\leq C \tag{2.9}\]
where \(\{u_{k_{r}}\}_{r\in\mathbb{N}}\) is the subsequence from Lemma 2.7. Therefore,
\[\left\|\frac{d\rho^{u_{k_{r}}}}{dt}(t)\right\|_{H^{-1}}\leq C_{1}\|\rho^{u_{k_ {r}}}(t)\|_{H^{1}}+C_{2}. \tag{2.10}\]
Moreover, by Estimate (2.9), it follows that \(\{\rho^{u_{k_{r}}}\}\) is weakly compact in \(L^{2}(0,T;H^{1})\) and \(L^{2}(0,T;L^{2})\), hence by Aubin-Lions theorem (see, e.g., [3, Thm. 1.3.5]) there exists a subsequence \(\{\tilde{u}_{s}\}_{s\in\mathbb{N}}\) and a function \(\rho^{*}\) s.t.
\[\rho^{\tilde{u}_{s}}\to\rho^{*}\text{ weakly in }L^{2}(0,T;H^{1})\text{ and strongly in }L^{2}(0,T;L^{2}_{\rm loc}). \tag{2.11}\]
Furthermore, due to (2.9) together with (2.10), we also get
\[\frac{d\rho^{u_{k_{r}}}}{dt}\to\frac{d\rho^{*}}{dt}\text{ weakly in }L^{2}(0,T;H^{-1}).\]
Taking into account that \(\tilde{u}_{s}\to u^{*}\) weak-star in \(L^{\infty}\), it follows by (2.11) that
\[\tilde{u}_{s}\rho^{\tilde{u}_{s}}\to u^{*}\rho^{*}\text{ weakly in }L^{2}_{\text{loc}}((0,T)\times\mathbb{R}^{d}).\]
Here is the argument: Let \(\psi\in L^{2}_{\text{loc}}\). Then \(\rho^{*}\psi\in L^{1}_{\text{loc}}\) and so for any \(R>0\),
\[\Big{|}\int_{B_{R}}(\tilde{u}_{s}\rho^{\tilde{u}_{s}}-u^{*}\rho^{ *})\psi dx\Big{|} =\Big{|}\int_{B_{R}}(\tilde{u}_{s}\rho^{\tilde{u}_{s}}-\tilde{u}_{ s}\rho^{*}+\tilde{u}_{s}\rho^{*}-u^{*}\rho^{*})\psi dx\Big{|}\] \[\leq\int_{B_{R}}|\tilde{u}_{s}||(\rho^{\tilde{u}_{s}}-\rho^{*}) \psi|dx+\int_{B_{R}}|\tilde{u}_{s}-u^{*}|\rho^{*}\psi dx\] \[\xrightarrow[s\to\infty]{s\to\infty}0.\]
It is left to argue that \(\rho^{*}=\rho^{u^{*}}\). But this follows by letting \(s\to\infty\) in (2.2) where \(u=\tilde{u}_{s}\) and we see that \(\rho^{*}=\rho^{u^{*}}\).
Proof of Theorem 2.5.: Set \(L^{u}(t,x)=g(x)+h(u(t,x))\). Let \(R>0\) and \(\{k_{s}\}_{s\in\mathbb{N}}\) the sequence corresponding to \(\{\tilde{u}_{s}\}_{s\in\mathbb{N}}\) as in (2.5). By a similar calculation to [1, Page 22], we obtain
\[m^{*}+\frac{1}{k_{r_{s}}} \geq\underbrace{\int_{0}^{T}\int_{B(0,R)}\hskip-14.226378ptL^{ \tilde{u}_{s}}\rho^{u^{*}}dxdt}_{=:I_{1}(\tilde{u}_{s})}+\underbrace{\int_{0} ^{T}\int_{B(0,R)}L^{\tilde{u}_{s}}(\rho^{\tilde{u}_{s}}-\rho^{u^{*}})dxdt}_{=: I_{2}(\tilde{u}_{s})}\] \[\qquad+\langle g_{0},\rho^{\tilde{u}_{s}}(T)\rangle_{V^{*},V}\]
The convergence of the terminal expression is clear. Hence, we focus on the integral expressions.
1. Since \(h\) is convex and bounded below, via Fatou, the function \(I_{1}(u)\) is lower semicontinuous in \(L^{p}(\mathbb{R}^{d})\) for every \(1\leq p\leq\infty\), so \[\liminf_{s\to\infty}I_{1}(\tilde{u}_{s})\geq I_{1}(u^{*})\]
2. By Lemma 2.8 together with Assumption (2.1), we obtain the desired convergence. \[\lim_{s\to\infty}I_{2}(\tilde{u}_{s})=0.\]
By taking \(R\to\infty\), this means that \(u^{*}\) is a minimizer as claimed.
## 3 Optimal Control of McKean-Vlasov-equation: The nonlinear diffusion case
The goal of this section is to prove the existence of an optimal controller for the McKean-Vlasov-Equation, i.e. solve the minimizing problem
\[\text{minimize }\mathbb{E}(J(X,u))=\mathbb{E}\Big{[}\int_{0}^{T}g(X)+h(u(X)) dt+g_{0}(X(T))\Big{]}\]
subject to
\[u\in\mathcal{U}:=\{u\in L^{\infty}(\mathbb{R}^{d})\colon 0<\gamma_{1}\leq u\leq \gamma_{2},\Delta u(x)\leq\gamma_{3}\} \tag{3.1}\]
and
\[\begin{split} dX&=b\Big{(}\frac{d\mathcal{L}_{X}}{d \lambda}\Big{)}D(X)dt+\Big{(}\frac{2u(X)\beta(\mathcal{L}_{X})}{\mathcal{L}_{X }}\Big{)}^{\frac{1}{2}}dW\\ X(0)&=X_{0}\end{split} \tag{1.3}\]
where \(\mathcal{L}_{X}\) is the law of \(X\) and \(\frac{d\mathcal{L}_{X}}{d\lambda}\) is its density. To stay in the spirit of Section 2.3, we focus here on a solution in \(H^{-1}\).
The McKean-Vlasov equation (1.3) is equivalent to the Fokker-Planck equation
\[\begin{split}&\frac{d}{dt}\rho-\Delta(u\beta(\rho))+\text{div}(D(x )b(\rho)\rho)=0\text{ in }(0,\infty)\times\mathbb{R}^{d}\\ &\rho(0)=\rho_{0}\text{ in }\mathbb{R}^{d}\end{split} \tag{3.2}\]
where \(\rho_{0}dx=\mathcal{L}_{X_{0}}\). As explained in [7], by using [24, Theorem 2.5], the existence of a solution to (3.2) yields a probability measure on the canonical space \(C([0,T];\mathbb{R}^{d})\) solving the corresponding martingale problem. This implies the existence of a weak solution to (1.3) using, e.g., [23, Theorem 4.5.2]. The other direction, from (1.3) to (3.2), follows from Itô's formula.
Hence, the optimal control problem reduces to
\[\text{minimize }I(u)=\int_{0}^{T}\int_{\mathbb{R}^{d}}(g(x)+h(u(x)))\rho(t,x) dtdx+\int_{\mathbb{R}^{d}}g_{0}(x)\rho(T,x)dx \tag{3.3}\]
subject to (3.1) and (3.2).
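For illustration, the nonlinear dynamics (1.3) can be simulated with an interacting-particle scheme in which the law \(\mathcal{L}_{X}\) is replaced by a kernel density estimate of the empirical measure of \(N\) particles. The sketch below is a hypothetical one-dimensional Euler-Maruyama discretization; the choices of \(\beta\), \(b\), \(D\), the controller \(u\), the kernel bandwidth, and the initial law are illustrative assumptions, only meant to be compatible with the standing assumptions formulated below.

```python
import numpy as np

# Hypothetical 1D interacting-particle sketch of the dynamics (1.3): the law
# L_X is replaced at every step by a Gaussian kernel density estimate of the
# empirical measure of N particles.  All model choices (beta, b, D, u,
# bandwidth, initial law) are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
N, T, nt = 2000, 1.0, 200
dt = T / nt

beta = lambda r: 2.0 * r + np.sin(r)          # beta(0) = 0, 1 <= beta' <= 3
b    = lambda r: 1.0 / (1.0 + np.abs(r))      # bounded; b(r)r is 1-Lipschitz
D    = lambda x: -np.tanh(x)                  # bounded drift field
u    = lambda x: 1.0 + 0.5 * np.exp(-x**2)    # candidate controller, 1 <= u <= 1.5

X = rng.normal(0.0, 1.0, size=N)              # X_0 ~ rho_0

def kde(points, query, h=0.3):
    """Gaussian kernel density estimate of the particle law, evaluated at `query`."""
    z = (query[:, None] - points[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

for _ in range(nt):
    rho = np.maximum(kde(X, X), 1e-6)         # density of the empirical law at each particle
    drift = b(rho) * D(X)
    diffusion = np.sqrt(2.0 * u(X) * beta(rho) / rho)
    X = X + drift * dt + diffusion * np.sqrt(dt) * rng.normal(size=N)

print("terminal mean and spread:", X.mean(), X.std())
```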
While it is desirable from a modelling perspective to have solutions in \(L^{1}\), the appearance of the controller within the diffusion term forces us to rely on \(H^{-1}\)-methods in the variational approach for the construction of a solution and a controller.
To show the existence of an optimal controller, we proceed as in Section 2.3. Namely, we use a nonlinear version of Lions' theorem to show the existence of an \(H^{-1}\)-solution to (3.2) for any \(u\in\mathcal{U}\). Then we consider an approximating sequence \(\{u_{s}\}_{s\in\mathbb{N}}\) and show that its limit \(u^{*}\) attains the infimum of the cost functional.
Let us now formulate the standing assumptions for this section.
**Assumption 1**.: _We assume the following conditions on the coefficients:_
1. \(\beta\in C^{1}(\mathbb{R})\)_,_ \(0<\alpha_{0}\leq\beta^{\prime}(r)\) _for all_ \(r\in\mathbb{R}\) _and_ \(\beta(0)=0\)_._
2. \(b\in C(\mathbb{R}),|b|_{\infty}<\infty\)_,_ \(D\in L^{\infty}\)_._
3. _The function_ \(\beta\) _is Lipschitz continuous with constant_ \(|\beta|_{\rm Lip}\)_._
4. _There exists_ \(\alpha_{2}\geq 0\) _such that for all_ \(x,y\in\mathbb{R}\)_,_ \[|b(x)x-b(y)y|\leq\alpha_{2}|\beta(x)-\beta(y)|.\]
**Remark 3.1**.: _Actually, the condition \(|b|_{\infty}<\infty\) is redundant: If everything else is assumed, for any \(x\neq 0\),_
\[|b(x)|=\frac{|b(x)x|}{|x|}=\frac{|b(x)x-b(0)0|}{|x|}\leq\alpha_{2}\frac{|\beta (x)-\beta(0)|}{|x|}\leq\alpha_{2}|\beta|_{\rm Lip}\frac{|x|}{|x|}=\alpha_{2}| \beta|_{\rm Lip}.\]
_By continuity, this bound also holds for \(x=0\) and we get \(|b|_{\infty}\leq\alpha_{2}|\beta|_{\rm Lip}\)._
_Furthermore, it follows directly that the function \(y\mapsto b(y)y\) is Lipschitz continuous with constant \(\alpha_{2}|\beta|_{\rm Lip}\)._
### Well-posedness of the state system (3.2)
To show existence of an \(H^{-1}\)-solution to (3.2), we wish to apply Theorem A.2, again in the case of \(V=L^{2},H=H^{-1},V^{*}=(L^{2})^{*}\). We consider the nonlinear operator \(A\colon V\to V^{*}\), defined by
\[{}_{V^{*}}(Ay,z)_{V}=\int_{\mathbb{R}^{d}}(I-\Delta)^{-1}(-\Delta u\beta(y))z \,dx+\int_{\mathbb{R}^{d}}(I-\Delta)^{-1}\operatorname{div}(D(b(y)y))z\,dx\]
for \(y,z\in V\). This leads to the following existence result:
**Theorem 3.2**.: _Under Assumption 1, for any \(\rho_{0}\in H\), there exists a solution \(\rho\colon[0,T]\to V^{*}\) to (3.2) satisfying_
\[\rho\in C([0,T];H^{-1})\cap L^{2}(0,T;L^{2}),\ \frac{d\rho}{dt}\in L ^{2}(0,T;(L^{2})^{*})\] \[\frac{d\rho}{dt}(t)+Ay(t)=0\text{ a.e. }t\in(0,T),\ \rho(0)=\rho_{0}.\]
Proof.: We show that the conditions of Theorem A.2 for \(p=2\) are fulfilled for the operator
\[A(y)=-\Delta(u\beta(y))+\operatorname{div}(Db(y)y),\ y\in V.\]
We start by showing that \(A\) is demicontinuous, i.e., for any sequence \(\{y_{n}\}\subset V=L^{2}(\mathbb{R}^{d})\), with \(y_{n}\to y\) in \(V\) and any \(z\in V\), we show that
\[(A(y_{n})-A(y),z)\xrightarrow{n\to\infty}0.\]
Hence, let \(z\in V\). Integration by parts and Cauchy-Schwarz yields
\[|(Ay_{n}-Ay,z)| \leq\Big{|}\int(I-\Delta)^{-1}z(-\Delta(u\beta(y_{n})-u\beta(y))) dx\Big{|}\] \[\qquad+\Big{|}\int z(I-\Delta)^{-1}\operatorname{div}(Db(y_{n})y _{n}-Db(y)y)))dx\Big{|}\] \[\leq\int|\Delta(I-\Delta)^{-1}z||u(\beta(y_{n})-\beta(y))|dx\] \[\qquad+\int|\nabla(I-\Delta)^{-1}zD(x)(b(y_{n})y_{n}-b(y)y)|dx\] \[\leq\|\Delta(I-\Delta)^{-1}z\|_{L^{2}}\gamma_{2}|\beta|_{\rm Lip }\|y_{n}-y\|_{L^{2}}\] \[\qquad+|D|_{\infty}\|\nabla(I-\Delta)^{-1}z\|_{L^{2}}\|b(y_{n})y _{n}-b(y)y\|_{L^{2}}\]
Since \(y\mapsto b(y)y\) is Lipschitz continuous, both terms converge to zero due to the \(L^{2}\)-convergence of \(\{y_{n}\}_{n\in\mathbb{N}}\). Hence, the demicontinuity is shown.
Let us underline that, for the demicontinuity, the Lipschitz condition on \(y\mapsto b(y)y\) is not necessary. For the second term, we may instead apply the continuous mapping theorem, which only holds on finite measure spaces. Note that we have \(y_{n}\to y\) in \(L^{2}\) by assumption, implying that \(y_{n}\to y\) locally in measure; hence, by the continuous mapping theorem, \(b(y_{n})\to b(y)\) locally in measure, which also means that \(b(y_{n})y\to b(y)y\) locally in measure. Since \(b(y_{n})y\in L^{2}\) and \(|b(y_{n})y|\leq|b|_{\infty}|y|\in L^{2}\), we obtain by Pratt's theorem (see e.g. [13, Satz 5.3]) that
\[b(y_{n})y\to b(y)y\text{ in }L^{2}.\]
**Quasi-Monotonicity:** We need to show that there exists some \(\gamma>0\) such that
\[(A(y_{1})-A(y_{2}),y_{1}-y_{2})\geq-\gamma|y_{1}-y_{2}|_{H}^{2}\]
for all \(y_{1},y_{2}\in D(A)\). First of all, we have
\[(A(y_{1})-A(y_{2}),y_{1}-y_{2}) =\int(I-\Delta)^{-1}(A(y_{1})-A(y_{2}))(y_{1}-y_{2})dx\] \[=\int(I-\Delta)^{-1}(-\Delta(u\beta(y_{1}))+\Delta(u\beta(y_{2})) (y_{1}-y_{2})dx\] \[\qquad+\int(I-\Delta)^{-1}[\operatorname{div}(Db(y_{1})y_{1}- \operatorname{div}(Db(y_{2})y_{2}](y_{1}-y_{2})dx\] \[=-\int(I-\Delta)^{-1}\Delta[u\beta(y_{1})-u\beta(y_{2})](y_{1}-y_ {2})dx\] \[\qquad+\int(I-\Delta)^{-1}[\operatorname{div}(Db(y_{1})y_{1})- \operatorname{div}(Db(y_{2})y_{2})](y_{1}-y_{2})dx\] \[=\underbrace{\int u(\beta(y_{1})-\beta(y_{2}))(y_{1}-y_{2})dx}_{ \geq 0}\] \[\qquad-\int(I-\Delta)^{-1}(u\beta(y_{1})-u\beta(y_{2}))(y_{1}-y_ {2})dx\] \[\qquad+\int(I-\Delta)^{-1}[\operatorname{div}(Db(y_{1})y_{1})- \operatorname{div}(Db(y_{2})y_{2})](y_{1}-y_{2})dx\]
Since the first term is \(\geq 0\), we only need to estimate the other two. For the second term, we have
\[-\int (I-\Delta)^{-1}((u\beta(y_{1})-u\beta(y_{2}))(y_{1}-y_{2})dx\] \[=-\int(u\beta(y_{1})-u\beta(y_{2}))(I-\Delta)^{-1}(y_{1}-y_{2})dx\] \[\geq-\gamma_{2}\int(\beta(y_{1})-\beta(y_{2}))(I-\Delta)^{-1}(y_ {1}-y_{2})dx\] \[\geq-\gamma_{2}|\beta|_{\operatorname{Lip}}\int(y_{1}-y_{2})(I- \Delta)^{-1}(y_{1}-y_{2})dx\] \[=-\gamma_{2}|\beta|_{\operatorname{Lip}}|y_{1}-y_{2}|_{H^{-1}}^{2}.\]
The divergence term can be estimated as follows:
\[\int (I-\Delta)^{-1}\operatorname{div}[(Db(y_{1})y_{1})-(Db(y_{2})y_{2 })]dx\] \[\leq\|\operatorname{div}\|_{H^{1}\to L^{2}}((I-\Delta)^{-1}(Db(y_{1})y_ {1}-Db(y_{2})y_{2}),y_{1}-y_{2})_{L^{2}}\] \[=\|\operatorname{div}\|\big{(}Db(y_{1})y_{1}-Db(y_{2})y_{2},(I- \Delta)^{-1}(y_{1}-y_{2})\big{)}_{L^{2}}\] \[\leq n\|D\|_{\infty}\big{|}\big{(}b(y_{1})y_{1}-b(y_{2})y_{2},(I- \Delta)^{-1}(y_{1}-y_{2})\big{)}_{L^{2}}\big{|}\] \[\leq n\|D\|_{\infty}\alpha_{2}|\beta|_{\operatorname{Lip}}|y_{1}- y_{2}|_{H^{-1}}^{2}\]
Turning the inequality around, we obtain
\[\int(I-\Delta)^{-1}\operatorname{div}[Db(y_{1})y_{1}-Db(y_{2})y_{2}](y_{1}-y_ {2})dx\geq-n\|D\|_{\infty}\alpha_{2}|\beta|_{\operatorname{Lip}}|y_{1}-y_{2}| _{H^{-1}}^{2}.\]
In total, we have
\[(A(y_{1})-A(y_{2}),y_{1}-y_{2})_{H}\geq-|\beta|_{\operatorname{Lip}}(\gamma_{2 }+n\|D\|_{\infty}\alpha_{2})|y_{1}-y_{2}|_{H}^{2}.\]
**Inequalities from Theorem A.2:** It is left to show that the two inequalities stated in the theorem hold.
1. For the first one, we may repeat the calculation for the quasi-monotonicity to obtain \[(Ay,y)\geq\alpha_{0}\gamma_{1}\|y\|_{V}-\gamma|y|_{H}\] where \(\gamma\) is the constant from the calculation of quasi-monotonicity.
2. Repeating the calculation used for demicontinuity by replacing \(y_{n}-y\) by \(y\in V\), we obtain \[\|Ay\|_{*} =\sup_{\|z\|_{V}=1}|(Ay,z)|\] \[\leq\|\Delta(I-\Delta)^{-1}z\|_{L^{2}}\gamma_{2}|\beta|_{\mathrm{ Lip}}\|y\|_{V}\] \[\qquad+|D|_{\infty}\|\nabla(I-\Delta)^{-1}z\|_{L^{2}}\|b(y)y\|_{V}\] \[\leq C_{2}\|y\|_{V}\] for some constant \(C_{2}\), where we used that \(\|b(y)y\|_{V}\leq\alpha_{2}\|\beta(y)\|_{V}\leq\alpha_{2}|\beta|_{\mathrm{Lip }}\|y\|_{V}\).
Since \(u>0\) by assumption, the coercivity condition can be omitted in view of (A.3). Therefore, all conditions of Theorem A.2 are fulfilled and we have proven the existence of a solution to (3.2).
### Existence of an optimal controller
Now that we have established the existence of a solution to (3.2), it remains to show the following theorem:
**Theorem 3.3**.: _Under Assumption 1, for any \(\rho_{0}\in L^{2}(\mathbb{R}^{d})\), there exists a solution \((u,\rho^{u})\) to the optimal control problem (3.3)._
Due to the assumptions on \(g,g_{0}\) and \(h\), we again obtain the existence of \(m^{*}\in\mathbb{R}\) such that
\[\inf_{u\in\mathcal{U}}I(u)=m^{*}\]
as well as a sequence \(\{u_{k}\}\subset\mathcal{U}\) such that
\[m^{*}\leq I(u_{k})\leq m^{*}+\frac{1}{k}\text{ for all }k\in\mathbb{N}.\]
Lemma 2.7 is applicable and we get the existence of a subsequence \(\{u_{k_{r}}\}_{r\in\mathbb{N}}\) as well as \(u^{*}\in\mathcal{U}\) such that
\[u_{k_{r}}\xrightarrow{r\to\infty}u^{*}\text{ weakly in }L^{2}(B_{R}(0), \mathbb{R})\text{ and weak* in }L^{\infty}(\mathbb{R}^{d}).\]
Similarly to Lemma 2.6, we start out with the following a priori bound on \(\rho\):
**Lemma 3.4**.: _Assume that \(\rho_{0}\in L^{2}(\mathbb{R}^{d})\) and \(u\in\mathcal{U}\). Then the solution \(\rho=\rho^{u}\) to Equation (3.2) satisfies_
\[\rho\in L^{2}(0,T;H^{1}(\mathbb{R}^{d})) \tag{3.4}\]
\[\|\rho\|_{L^{\infty}(0,T;L^{2}(\mathbb{R}^{d}))}^{2}+\int_{0}^{t}\int_{\mathbb{ R}^{d}}|\nabla_{x}\rho(s,x)|^{2}dxds\leq C\|\rho_{0}\|_{L^{2}(\mathbb{R}^{d})}^{2} \ \forall t\in(0,T), \tag{3.5}\]
_where \(C\) is independent of \(u\)._
Proof.: Let \(\{u_{\varepsilon}\}_{\varepsilon}\subset C^{2}([0,T]\times\mathbb{R}^{d})\cap \mathcal{U}\) be a sequence approximating \(u\) as \(\varepsilon\to 0\). Then (3.2) yields a solution \(\rho_{\varepsilon}\in L^{2}\) as in Lemma 2.6. Furthermore, due to
\[\frac{1}{2}\frac{d}{dt}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}={}_{H^{1}}\big{(} \rho_{\varepsilon}(t),\frac{d\rho_{\varepsilon}}{dt}(t))_{H^{-1}}\]
we get by (3.2)
\[\frac{1}{2}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}+\int_{0}^{t}\int _{\mathbb{R}^{d}}\rho_{\varepsilon}(s,x)\Big{(}-\Delta(u_{\varepsilon}(s,x) \beta(\rho_{\varepsilon}(s,x))\] \[\qquad\qquad\qquad+\operatorname{div}(D(x)b(\rho_{\varepsilon}(s,x))\rho_{\varepsilon}(s,x))\Big{)}dxds=\frac{1}{2}\|\rho_{0}\|_{L^{2}}^{2}.\]
Denoting \(\bar{\alpha}:=\max\{\alpha_{0},|\beta|_{\operatorname{Lip}}\}\), we have
\[\int_{\mathbb{R}^{d}}\rho_{\varepsilon}D_{ii}^{2}(u_{\varepsilon }\beta(\rho_{\varepsilon}))dx =-\int_{\mathbb{R}^{d}}D_{i}(\rho_{\varepsilon})D_{i}(u\beta( \rho_{\varepsilon}))dx\] \[=-\int_{\mathbb{R}^{d}}D_{i}\rho_{\varepsilon}(D_{i}u_{ \varepsilon})\beta(\rho_{\varepsilon})dx-\int_{\mathbb{R}^{d}}(D_{i}\rho_{ \varepsilon})\cdot u_{\varepsilon}\cdot D_{i}(\beta(\rho_{\varepsilon}))dx\] \[=-\int_{\mathbb{R}^{d}}D_{i}\rho_{\varepsilon}(D_{i}u_{ \varepsilon})\beta(\rho_{\varepsilon})dx-\int_{\mathbb{R}^{d}}(D_{i}\rho_{ \varepsilon})\cdot u_{\varepsilon}\cdot\beta^{\prime}(\rho_{\varepsilon})(D_{i }\rho_{\varepsilon})dx\] \[\leq-\bar{\alpha}\int_{\mathbb{R}^{d}}(D_{i}\rho_{\varepsilon}) \cdot\rho_{\varepsilon}\cdot(D_{i}u_{\varepsilon})dx-\alpha_{0}\gamma_{1}\int _{\mathbb{R}^{d}}(D_{i}\rho_{\varepsilon})^{2}dx\] \[=-\bar{\alpha}\int_{\mathbb{R}^{d}}D_{i}(\rho_{\varepsilon}^{2}) \cdot(D_{i}u_{\varepsilon})dx-\alpha_{0}\gamma_{1}\int_{\mathbb{R}^{d}}(D_{i} \rho_{\varepsilon})^{2}dx\] \[=\bar{\alpha}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{2}D_{ii}^{ 2}u_{\varepsilon}dx-\alpha_{0}\gamma_{1}\int_{\mathbb{R}^{d}}(D_{i}\rho_{ \varepsilon})^{2}dx\] \[\leq\bar{\alpha}\gamma_{3}\int_{\mathbb{R}^{d}}\rho_{\varepsilon }^{2}dx-\alpha_{0}\gamma_{1}\int_{\mathbb{R}^{d}}(D_{i}\rho_{\varepsilon})^{2}dx.\]
For the divergence term, we get using Young's inequality
\[-\int_{\mathbb{R}^{d}}\rho_{\varepsilon}\operatorname{div}(Db(\rho_{\varepsilon})\rho_{\varepsilon})dx =\int_{\mathbb{R}^{d}}\nabla\rho_{\varepsilon}\cdot[Db(\rho_{\varepsilon})\rho_{\varepsilon}]dx\] \[\leq\frac{\delta}{2}\int_{\mathbb{R}^{d}}[\nabla\rho_{\varepsilon}]^{2}dx+\frac{1}{2\delta}\int_{\mathbb{R}^{d}}D^{2}(b(\rho_{\varepsilon})\rho_{\varepsilon})^{2}dx\] \[\leq\frac{\delta}{2}\int_{\mathbb{R}^{d}}[\nabla\rho_{\varepsilon}]^{2}dx+\frac{\|D\|_{\infty}^{2}}{2\delta}\int_{\mathbb{R}^{d}}(b(\rho_{\varepsilon})\rho_{\varepsilon})^{2}dx\] \[\leq\frac{\delta}{2}\int_{\mathbb{R}^{d}}[\nabla\rho_{\varepsilon}]^{2}dx+\frac{|b|_{\infty}^{2}\|D\|_{\infty}^{2}}{2\delta}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{2}dx.\]
In total, we get
\[\frac{1}{2}\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}-\frac{1}{2}\|\rho_{0}\|_{L^{2}}^{2} =\sum_{i=1}^{d}\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}D_{ii}^{2}(u_{\varepsilon}\beta(\rho_{\varepsilon}))dxds-\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}\operatorname{div}(Db(\rho_{\varepsilon})\rho_{\varepsilon})dxds\] \[\leq\sum_{i=1}^{d}\int_{0}^{t}\bar{\alpha}\gamma_{3}\|\rho_{\varepsilon}\|_{L^{2}}^{2}ds-\sum_{i=1}^{d}\alpha_{0}\gamma_{1}\int_{0}^{t}\int_{\mathbb{R}^{d}}(D_{i}\rho_{\varepsilon})^{2}dxds\] \[\qquad+\frac{\delta}{2}\int_{0}^{t}\int_{\mathbb{R}^{d}}[\nabla\rho_{\varepsilon}]^{2}dxds+\frac{|b|_{\infty}^{2}\|D\|_{\infty}^{2}}{2\delta}\int_{0}^{t}\int_{\mathbb{R}^{d}}\rho_{\varepsilon}^{2}dxds\] \[=\bar{\alpha}\gamma_{3}d\int_{0}^{t}\|\rho_{\varepsilon}\|_{L^{2}}^{2}ds-\alpha_{0}\gamma_{1}\int_{0}^{t}\|\nabla\rho_{\varepsilon}\|_{L^{2}}^{2}ds\] \[\qquad+\frac{\delta}{2}\int_{0}^{t}\|\nabla\rho_{\varepsilon}\|_{L^{2}}^{2}ds+\frac{|b|_{\infty}^{2}\|D\|_{\infty}^{2}}{2\delta}\int_{0}^{t}\|\rho_{\varepsilon}\|_{L^{2}}^{2}ds\]
Rearranging, this yields
\[\|\rho_{\varepsilon}(t)\|_{L^{2}}^{2}+(2\alpha_{0}\gamma_{1}-\delta)\int_{0}^{t}\|\nabla\rho_{\varepsilon}\|_{L^{2}}^{2}ds\leq\|\rho_{0}\|_{L^{2}}^{2}+\big{(}2\bar{\alpha}\gamma_{3}d+\frac{|b|_{\infty}^{2}\|D\|_{\infty}^{2}}{\delta}\big{)}\int_{0}^{t}\|\rho_{\varepsilon}\|_{L^{2}}^{2}ds.\]
As in Lemma 2.6, we proceed by using Gronwall's inequality and send \(\varepsilon\to 0\) to obtain (3.4) and (3.5) for arbitrary \(u\in\mathcal{U}\).
Comparing with Section 2.3, we can conclude the proof of existence of a controller as follows:
Proof of Theorem 3.3.: In view of Lemma 3.4, the statement of Lemma 2.8 directly carries over to the nonlinear case. Note that due to the Lipschitz properties of \(\beta\) and \(y\mapsto b(y)y\), it follows from the convergences appearing in Lemma 2.8 together with (2.11) that \(\rho^{*}=\rho^{u^{*}}\). Therefore, the proof of Theorem 3.3 follows the same arguments as the proof of Theorem 2.5.
## Appendix A The existence theorems
### Lions' theorem: Linear case
The following theorem is classical; see, e.g., [11, Thm 10.9].
**Theorem A.1**.: _Let \(V,H\) be Hilbert spaces. Assume that \(V\subset H\) dense and continuous such that_
\[V\subset H\subset V^{*}.\]
_Let \(T>0\) and for almost all \(t\in[0,T]\), we are given a bilinear form \(a(t;\rho,\varphi)\colon V\times V\to\mathbb{R}\) satisfying the following properties:_
1. _For all_ \(\rho,\varphi\in V\)_, the function_ \(t\mapsto a(t;\rho,\varphi)\) _is measurable,_
2. _There exists_ \(M>0\) _such that_ \[|a(t;\rho,\varphi)|\leq M\|\rho\|_{V}\|\varphi\|_{V}\] (A.1) _for almost every_ \(t\in[0,T]\) _and all_ \(\rho,\varphi\in V\)_._
3. _There exist constants_ \(\alpha>0\) _and_ \(C\in\mathbb{R}\) _such that_ \[a(t;\rho,\rho)\geq\alpha\|\rho\|_{V}^{2}-C\|\rho\|_{H}^{2}\] _for almost every_ \(t\in[0,T]\) _and all_ \(\rho\in V\)_._
_Given \(\tilde{f}\in L^{2}(0,T;V^{*})\) and \(\rho_{0}\in H\), there exists a unique function \(\rho\) satisfying_
\[\rho\in L^{2}(0,T;V)\cap C([0,T];H)\text{ and }\frac{d}{dt}\rho\in L^{2}(0,T;V^{*})\]
_and the differential equation_
\[\begin{cases}\langle\frac{d\rho}{dt}(t),\varphi\rangle+a(t;\rho(t),\varphi)& =\langle\tilde{f}(t),\varphi\rangle\text{ a.e. }t\in(0,T),\ \forall\varphi\in V\\ \rho(0)&=\rho_{0}.\end{cases}\] (A.2)
### Lions' theorem: Nonlinear case
For reference of the following theorems, see e.g. [4, Thm. 4.10] or [19, Thm. 2.1.2, p. 162]. We again assume that there exists a triple
\[V\subset H\subset V^{*}\]
and we denote the norms on \(V\) and \(H\) by \(\|\cdot\|\) and \(|\cdot|\), respectively. The norm on \(V^{*}\) is denoted by \(\|\cdot\|_{*}\).
**Theorem A.2**.: _Let \(A\colon V\to V^{*}\) be a demicontinuous, coercive and quasi-monotone operator, i.e.,_
\[{}_{V^{*}}(Au-Av,u-v)\geq-\gamma|u-v|_{H}^{2}\text{ for all }u,v\in V.\]
_Furthermore, assume that \(A\) satisfies the conditions_
\[(Ay,y)\geq\omega\|y\|^{p}-C_{1}|y|_{H}^{2}\text{ for all }y\in V\] (A.3) \[\|Ay\|_{*}\leq C_{2}(1+\|y\|^{p-1})\text{ for all }y\in V\]
_where \(\omega>0\) and \(p>1\). Given \(y_{0}\in H\) and \(\frac{1}{p}+\frac{1}{q}=1\), there exists a unique absolutely continuous function \(y\colon[0,T]\to V^{*}\) that satisfies_
\[y\in C([0,T];H)\cap L^{p}(0,T;V),\ \frac{dy}{dt}\in L^{2}(0,T;V^{*})\] \[\frac{dy}{dt}(t)+Ay(t)=0\text{ a.e. }t\in(0,T),\ y(0)=y_{0},\]
To keep the article as self-contained as possible, we provide an English translation of [19, Theorem 2.1.2, p. 162]. For the following theorem, we assume that we have the triple
\[V\subset H\subset V^{\prime}\]
where \(V\) is a Banach space densely and continuously embedded in the Hilbert space \(H\). We identify \(H\) with its dual, and let \(V^{\prime}\) be the dual of \(V\) (w.r.t. \(H\)).
**Definition A.3**.: _An operator \(A\colon V\to V^{\prime}\) is called monotone if_
\[(A(u)-A(v),u-v)\geq 0\text{ for all }u,v\in V.\]
**Theorem A.4**.: _Let \(V,H\) be as stated above with \(V\) separable, and let \(A\colon V\to V^{\prime}\) have the following properties: for some \(1<p<\infty\),_
* \(A\) _is hemicontinuous from_ \(V\) _to_ \(V^{\prime}\)_, i.e.,_ \[\|A(v)\|_{*}\leq c\|v\|^{p-1}\]
* \(A\) _is monotone from_ \(V\) _to_ \(V^{\prime}\)__
* _For all_ \(v\in V\)_, we have_ \[(A(v),v)\geq\alpha\|v\|^{p},\ \alpha>0\] (A.4)
_Let \(f\in L^{p^{\prime}}(0,T;V^{\prime})\) and \(u_{0}\in H\). Then there exists a unique function \(u\) with_
* \(u\in L^{p}(0,T;V)\)__
* \(u^{\prime}+A(u)=f\)__
* \(u(0)=u_{0}\)_._
_It is worth noticing that, exploiting the first two properties, we have \(u^{\prime}\in L^{p^{\prime}}(0,T;V^{\prime})\), so that the last condition makes sense._
_In applications, the above hypotheses may be too strong. Therefore, it is useful to state the following variant of the theorem: Assume that there exists a seminorm \([v]\) on \(V\) such that_
\[\text{there exists }\lambda>0\text{ and }\beta>0\text{ such that }[v]+\lambda|v|\geq\beta\|v\|\] (A.5)
_for all \(v\in V\). (Here, \(|\cdot|\) and \(\|\cdot\|\) denote the norms on \(H\) and \(V\), respectively.) Also, assume that_
\[(A(v),v)\geq\alpha[v]^{p}.\] (A.6)
_Then we may replace hypothesis (A.4) by (A.5) and (A.6), and the conclusions of this theorem stay valid._
|
2305.11536 | PORTRAIT: a hybrid aPproach tO cReate extractive ground-TRuth summAry
for dIsaster evenT | Disaster summarization approaches provide an overview of the important
information posted during disaster events on social media platforms, such as,
Twitter. However, the type of information posted significantly varies across
disasters depending on several factors like the location, type, severity, etc.
Verification of the effectiveness of disaster summarization approaches still
suffer due to the lack of availability of good spectrum of datasets along with
the ground-truth summary. Existing approaches for ground-truth summary
generation (ground-truth for extractive summarization) relies on the wisdom and
intuition of the annotators. Annotators are provided with a complete set of
input tweets from which a subset of tweets is selected by the annotators for
the summary. This process requires immense human effort and significant time.
Additionally, this intuition-based selection of the tweets might lead to a high
variance in summaries generated across annotators. Therefore, to handle these
challenges, we propose a hybrid (semi-automated) approach (PORTRAIT) where we
partly automate the ground-truth summary generation procedure. This approach
reduces the effort and time of the annotators while ensuring the quality of the
created ground-truth summary. We validate the effectiveness of PORTRAIT on 5
disaster events through quantitative and qualitative comparisons of
ground-truth summaries generated by existing intuitive approaches, a
semi-automated approach, and PORTRAIT. We prepare and release the ground-truth
summaries for 5 disaster events which consist of both natural and man-made
disaster events belonging to 4 different countries. Finally, we provide a study
about the performance of various state-of-the-art summarization approaches on
the ground-truth summaries generated by PORTRAIT using ROUGE-N F1-scores. | Piyush Kumar Garg, Roshni Chakraborty, Sourav Kumar Dandapat | 2023-05-19T09:07:52Z | http://arxiv.org/abs/2305.11536v1 | # PORTRAIT: a hybrid aPproach to cReate extractive ground-TRuth summAry for dIsaster evenT
###### Abstract
Disaster summarization approaches provide an overview of the important information posted during disaster events on social media platforms, such as Twitter. However, the type of information posted significantly varies across disasters depending on several factors like the location, type, severity, etc. Verification of the effectiveness of disaster summarization approaches still suffers due to the lack of availability of a good spectrum of datasets along with the ground-truth summary. Existing approaches for ground-truth summary generation (ground-truth for extractive summarization) rely on the wisdom and intuition of the annotators. Annotators are provided with a complete set of input tweets from which a subset of tweets is selected by the annotators for the summary. This process requires immense human effort and significant time. Additionally, this intuition-based selection of the tweets might lead to a high variance in summaries generated across annotators. Therefore, to handle these challenges, we propose a hybrid (semi-automated) approach (PORTRAIT) where we partly automate the ground-truth summary generation procedure. This approach reduces the effort and time of the annotators while ensuring the quality of the created ground-truth summary. We validate the effectiveness of PORTRAIT on \(5\) disaster events through quantitative and qualitative comparisons of ground-truth summaries generated by existing intuitive approaches, a semi-automated approach, and PORTRAIT. We prepare and release the ground-truth summaries for \(5\) disaster events which consist of both natural and man-made disaster events belonging to \(4\) different countries. Finally, we provide a study about the performance of various state-of-the-art summarization approaches on the ground-truth summaries generated by PORTRAIT using ROUGE-N F1-scores.
Disaster tweet summarization, Ground-truth summary, Social media, Hybrid approach
## I Introduction
Social media platforms, such as Twitter, are important mediums where users share information during disaster events [1]. People from the affected locations share messages about their urgent needs while government organizations, volunteers and humanitarian agencies share information about the availability of resources and services. Government agencies utilize this information from the affected locations to ensure immediate relief operations [2]. Several research works have highlighted the role of social media websites, such as Twitter, for effective disaster management [3, 4, 5]. However, tweets are inherently short and contain grammatical errors, abbreviations and informal language, making it highly challenging to identify the relevant information. Additionally, the huge volume of these messages increases the challenges for government organizations, humanitarian agencies, and volunteers to identify relevant information manually [6, 7].
To mitigate these issues, recent research works [8, 9, 10, 11, 12] have proposed automated tweet summarization approaches which can handle the huge number of user tweets posted during a disaster event. The summary generated by these approaches can aid government agencies in identifying important information, such as the required resources across affected locations, infrastructural damage, etc. However, the quality of the summary produced by existing approaches varies significantly across different disaster datasets. This is mainly because of the high variance across datasets in terms of the location, type and severity of disasters. Furthermore, due to the lack of ground-truth summaries, existing algorithms cannot be thoroughly tested for robustness. To check the effectiveness and robustness of a summarization approach, we require a good number of ground-truth summaries of disaster events from different locations and of different types. Although [11] and [13] have provided ground-truth summaries of \(6\) datasets (shown in Table I), which are of huge help to the research community, these are not sufficient for testing. Although the addition of new datasets will surely improve this scenario, ground-truth summary generation is a costly task in terms of time and manual effort. This scenario motivates us to come up with a strategy which can reduce human effort and time.
A good summary of an event must capture relevant and diverse aspects of the event, and it should cover all the important aspects/topics 1 of the event. Therefore, to come up with a good ground-truth summary, an annotator initially needs to identify the topics of each tweet, then determine the relative importance of each topic with respect to the other topics, and finally select tweets from different topics for the final summary based on the importance of a tweet within its own topic and the importance of the topic with respect to the event. This process requires extensive manual effort and a significant amount of time from the annotators. Moreover, the quality of the final summary depends on the wisdom and understanding of the annotator, as all the intermediate steps followed by annotators are subjective. Therefore, we cannot rely on a summary generated by a single annotator [14, 13, 11]. Existing approaches suggest that we should have at least \(3\) annotators to generate \(3\) different summaries. An automatically generated summary should then be compared with each of the individual summaries, and the comparison scores averaged across the individual summaries, to ensure that the evaluation is consistent and fair.
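As an illustration of this evaluation protocol (not the exact implementation used in this paper), the sketch below computes a simple ROUGE-N-style F1 score of a system summary against each annotator's ground-truth summary and averages the scores; the tokenization, the toy summaries, and the function names are assumptions made only for the example.

```python
from collections import Counter

# Illustrative sketch (not the paper's evaluation code): score a system summary
# against each annotator's ground-truth summary with a simple ROUGE-N-style F1
# and average the scores across annotators.  Tokenization, the toy summaries
# and all names below are assumptions made for this example.

def ngrams(text, n):
    tokens = text.lower().split()
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_f1(candidate, reference, n=2):
    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())            # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def averaged_score(candidate, annotator_summaries, n=2):
    scores = [rouge_n_f1(candidate, ref, n) for ref in annotator_summaries]
    return sum(scores) / len(scores)

# toy usage with three hypothetical annotator summaries
references = ["flood waters rising in the city shelters open",
              "shelters open as flood waters rise residents evacuated",
              "residents evacuated flood damage reported city shelters open"]
system = "flood waters rising residents evacuated shelters open"
print(round(averaged_score(system, references, n=2), 3))
```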
Although there are several existing research works [15, 16, 17, 18, 19] which create a ground-truth summary of an event, only a few existing research works [20, 21] discuss guidelines/approaches on how to generate a ground-truth summary. These existing works can further be segregated, based on their proposed approach, into a fully-automated approach [21] for ground-truth creation of a news multi-document summarization dataset guided by tweets, a semi-automated approach [20] for ground-truth creation of Twitter social events, and completely manual approaches [19, 17, 22, 23]. However, the fully automated ground-truth creation method [21] is practically a summarization approach without any human intervention. Therefore, there is no justified reason to treat the created summary as ground-truth. In the semi-automated method [20], the authors used a number of summarization methods to select a subset of tweets for annotators. There are a few practical issues with this approach: i) it relies on a specific set of summarization algorithms, which might work well for one dataset and poorly for another; ii) it identifies topics by unsupervised clustering methods, which suffer from the vocabulary overlap issue [8]. Moreover, these existing ground-truth creation guidelines/approaches are not directly applicable to ground-truth summary creation for disaster events. This is mainly due to non-fulfilment of the summary objectives, high vocabulary overlap across clusters in fully-automated and semi-automated approaches, domain-dependent annotation instructions, and high variance in generated summaries across annotators. There are a few existing disaster summarization approaches [11, 13, 24] which provide the ground-truth summary. However, in the above-mentioned approaches, the ground-truth summary is generated based on the wisdom and intuition of the annotators, where the annotators are provided with all the tweets related to the disaster and have to select the summary tweets manually.
In this paper, we propose a hybrid (semi-automated) approach (PORTRAIT) to generate the ground-truth summary, in which we partly automate the process (without compromising the quality of the ground truth) so that the annotator's effort is reduced. Along with that, we provide guidelines to ensure consistent summaries. Therefore, we propose a systematic semi-automated approach for ground-truth summary generation. We validate the effectiveness of PORTRAIT on \(5\) disaster events by comparing the ground-truth summaries generated by PORTRAIT with the ground-truth summaries generated by existing approaches. We perform both qualitative and quantitative comparisons on the three most important characteristics of a summary, namely _coverage, relevance, and diversity_[25, 26]. We perform the qualitative comparison with the help of \(3\) meta-annotators who rate the summaries for _coverage, relevance, and diversity_, and we additionally utilize metrics that capture _coverage, relevance, and diversity_ for the quantitative comparison. Both the qualitative and quantitative comparisons confirm that the quality of the ground-truth summary generated by PORTRAIT is better than that of the summary generated by annotators' intuition as well as the ground-truth summary generated by the existing semi-automated approach. Additionally, we release the ground-truth summaries for \(5\) disaster events, which belong to different types and come from different countries, such as the United States of America, Haiti, Mexico, and Pakistan. Our major contributions can be summarized as follows:
1. We propose a semi-automated approach (PORTRAIT) to generate the ground-truth summary for disaster events. PORTRAIT reduces the effort and time of annotators.
2. We provide a quantitative and qualitative analysis of the effectiveness of PORTRAIT in ground-truth summary generation. The comparison results confirm that PORTRAIT ensures a high-quality ground-truth summary.
3. We prepare and release the ground-truth summary for \(5\) disaster datasets of different locations and types, which would be highly helpful for the research community.
4. To verify the quality of the ground-truth summaries generated by PORTRAIT, we add two additional fields, namely _relevance label_ and _explanation_. _Relevance label_ is a categorical variable which can take the values _high_, _medium_, or _low_, and _explanation_ provides the possible reasoning behind the _relevance label_. We provide this information for the \(5\) datasets which we release.
5. We also compare \(13\) existing summarization approaches on these datasets, which might help the research community in understanding the performance of existing
summarization algorithms.
The rest of the paper is organized as follows. We discuss related works in Section II. In Section III, we provide the details of datasets and discuss the details of PORTRAIT in Section IV. In Section V, we discuss results where we provide the qualitative and quantitative comparison results of PORTRAIT summary in Section V-A and Section V-B, respectively. We discuss the experiment details, and results for performance comparison of the existing summarization approaches on the ground-truth summaries generated by PORTRAIT in Section V-C. Finally, we conclude the paper in Section VI.
## II Related Works
Summarization provides a comprehensive gist that includes all the important aspects of an event. This becomes very important when an event comprises a sufficiently large amount of text/tweets, where there is a high chance of duplicate information and noise. The problem has attracted a large group of researchers, and there is a very rich literature on summarization for different event types.
Tweet summarization approaches proposed for disaster events can be broadly categorized, in terms of methodology, into content- and context-based approaches [24, 27], graph-based approaches [28], and deep learning-based approaches [29]. However, irrespective of the approach, any disaster tweet summarization approach requires a good number of ground-truth summaries of disaster events from different locations and of different types for testing its robustness. An important point to note is that disaster datasets collected from different locations and of different types exhibit a high variance [8]. Hence, it is quite likely that a proposed summarization algorithm might be suitable for one set of input datasets while being inappropriate for another. To date, we have found only a very limited number of ground-truth datasets for disaster events, and hence there is an immediate need to create an adequate number of ground-truth summaries of disaster events from different locations and of different types. However, the generation of ground-truth summaries for disaster events poses several challenges, and very few disaster summarization approaches discuss the procedure used to generate the ground-truth summary. Therefore, we initially discuss the existing literature on ground-truth summary generation for different applications, such as multiple documents, customer-agent interactions, social media interactions, etc., which provides critical insights into how to develop ground-truth summary generation algorithms. We finally discuss ground-truth summary generation for tweets related to disaster events specifically.
Existing ground-truth summary generation approaches for different applications are either extractive [19, 30, 15, 31] or abstractive [14, 32, 33, 34, 35]. Existing extractive ground-truth summary generation approaches can be further categorized as automated [21], semi-automated [20], or manual annotation-based approaches [19, 17, 22, 23], whereas the abstractive approaches found in the literature are only manual annotation-based [14, 36, 32, 33, 37]. Manual annotation-based approaches provide the complete set of input sentences to an annotator, who selects the sentences for the summary on the basis of their wisdom and intuition. While some of these approaches provide a specific set of instructions [19, 14, 33, 38, 39] to the annotators, others do not provide any specific instruction [30, 16, 17, 31, 36, 35, 40, 37]. Extractive ground-truth summarization approaches without instructions [16, 18, 17, 23] ask the annotator to gauge the importance of a sentence to decide whether it should be selected for the summary, while abstractive approaches without instructions [34, 37] ask the annotator to gauge the importance of a keyword to decide whether it should be included in the summary. However, understanding the importance of a sentence or a keyword only on the basis of intuition and wisdom can be very difficult for an annotator and, further, can lead to inconsistent summaries across annotators. To handle this challenge, a few existing manual annotation-based ground-truth summary generation approaches provide more detailed guidelines to help the decision-making of the annotators, such as examples of informative and uninformative summaries [19], descriptions of the summary objectives, like coherence, readability, abstractivity, coverage, and diversity [14], or specific instructions related to the application, such as understanding the customer requirements and the desired agent response [39]. Although these guidelines are immensely helpful for the annotators, none of them intends to reduce the effort of the annotators. Additionally, since all of these guidelines are subjective and generic, they cannot ensure consistency across annotators, and therefore the summaries generated by different annotators might vary.
To reduce human effort and the inconsistency across ground-truth summaries generated by different annotators, several existing works have proposed automated or semi-automated approaches in different applications. For example, Cao et al. [21] proposed an automated approach which initially segregates tweets into clusters, followed by the selection of representative tweets from each cluster by an Integer Linear Programming (ILP) based optimization technique to generate the summary. Although an automated approach removes human effort completely, it does not include the human wisdom and intuition required to resolve the subjective task of ground-truth summarization. Therefore, it is only a summarization approach and cannot be treated as a ground-truth summary generation approach. On the basis of these existing approaches, we observe that neither automated nor manual approaches can ensure consistent ground-truth summaries with minimum human effort. In order to resolve this, Nguyen et al. [20] proposed a semi-automated approach which initially segregates the tweets into clusters on the basis of their topic. In the next step, Nguyen et al. [20] employ \(3\) existing summarization algorithms such that each algorithm selects the most informative tweets from each cluster into a reference tweet set. Therefore, the reference tweet set includes all the tweets deemed informative by the \(3\) summarization algorithms across all the clusters. Finally, the annotator manually selects tweets from the reference tweet set into the ground-truth summary on the basis of their wisdom and intuition. Although this approach integrates both automation and manual-based ground-truth
summary generation, which reduces human effort, it has a few shortcomings. For example, identifying topics by clustering is error-prone, as clustering primarily groups tweets based on vocabulary, and it is often found that the same words are used in different contexts with different meanings. Moreover, this approach relies on \(3\) specific summarization approaches to select important tweets from each cluster. There is a high chance that this approach is highly data dependent, which means it might produce good results for certain datasets while performing poorly for others.
Similarly, there are several existing disaster ground-truth summary creation approaches, either abstractive [41, 42, 43] or extractive [24, 11, 13]. To the best of our knowledge, all of these approaches generate the ground-truth summary manually, without the help of any instructions. As previously discussed, manual ground-truth summary generation approaches might not ensure consistency across annotators, may fail to ensure the objectives of summarization, and require a huge amount of human effort and time. Further, the generation of a ground-truth summary is a subjective task, so we cannot depend on only one annotator for the summary and require at least \(3\) annotators to provide their individual summaries [14, 11, 13], thereby increasing the effort and time of the annotators by at least \(3\) times. Therefore, in this paper, we propose a semi-automated approach (PORTRAIT) wherein we provide a formalized set of steps to be followed to generate a summary and, furthermore, provide automated solutions for several of these steps, which reduces the annotators' effort and time and can ensure consistency across annotators. We discuss the dataset details next.
## III Datasets
In this Section, we discuss the disaster events for which the ground-truth summaries are available as well as the disaster events for which we prepare the ground-truth summary.
Dutta et al. [11] provided the ground-truth summaries for _Sandy Hook Elementary School Shooting_2, _Uttarakhand Flood_3, _Hagupit Typhoon_4, and _Hyderabad Blast_5, and Rudra et al. [13] provided those for _Harda Twin Train Derailment_6 and _Nepal Earthquake_7, respectively. We show the details of these \(6\) disaster events in Table I.
Footnote 2: [https://en.wikipedia.org/wiki/Sandy_Hook_Elementary_School_shooting](https://en.wikipedia.org/wiki/Sandy_Hook_Elementary_School_shooting)
Footnote 3: [https://en.wikipedia.org/wiki/2013_North_India_floods](https://en.wikipedia.org/wiki/2013_North_India_floods)
Footnote 4: [https://en.wikipedia.org/wiki/Typhoon_Hagupit_(2014)](https://en.wikipedia.org/wiki/Typhoon_Hagupit_(2014))
Footnote 5: [https://en.wikipedia.org/wiki/2013_Hyderabad_Blasts](https://en.wikipedia.org/wiki/2013_Hyderabad_Blasts)
In this paper, we propose a hybrid approach (PORTRAIT) to generate the ground-truth summary with minimum human intervention and prepare ground-truth summaries of \(5\) disaster events, namely _Los Angeles International Airport Shooting_\((D_{1})\), _Hurricane Matthew_\((D_{2})\), _Puebla Mexico Earthquake_\((D_{3})\), _Pakistan Earthquake_\((D_{4})\), and _Midwestern U.S. Floods_\((D_{5})\). We have taken the \(D_{1}\) and \(D_{2}-D_{5}\) disaster datasets from [44] and [7], respectively. We specifically select datasets such that they cover different types of disasters, such as natural and man-made, and different regions, such as Asia and the USA. We provide the details of these disaster events in Table II.
1. \(D_{1}\): This dataset is based on the tweets related to the _Los Angeles International Airport Shooting_8, a terrorist attack in November \(2013\) in California in which \(1\) person was killed and more than \(15\) people were injured [44]. Footnote 8: [https://en.wikipedia.org/wiki/2013_Los_Angeles_International_Airport_shooting](https://en.wikipedia.org/wiki/2013_Los_Angeles_International_Airport_shooting)
2. \(D_{2}\): This dataset is based on the tweets related to the devastating impact of _Hurricane Matthew_9 in October \(2016\) in Haiti, which caused the death of \(603\) people, left around \(128\) people missing, and produced an estimated damage of around \(\$2.8\) billion USD [7]. Footnote 9: [https://en.wikipedia.org/wiki/Hurricane_Matthew](https://en.wikipedia.org/wiki/Hurricane_Matthew)
3. \(D_{3}\): This dataset is based on the tweets related to the _Puebla Mexico Earthquake_10 in September \(2017\) in Mexico City, in which \(370\) people died and more than \(6000\) people were injured [7]. Footnote 10: [https://en.wikipedia.org/wiki/2017_Puebla_earthquake](https://en.wikipedia.org/wiki/2017_Puebla_earthquake)
4. \(D_{4}\): This dataset is based on the tweets related to the _Pakistan Earthquake_11 in September \(2019\), in which around \(40\) people died, \(850\) people were injured, and around \(319\) houses were damaged [7].
5. \(D_{5}\): This dataset is based on the tweets related to the _Midwestern U.S. Floods_12 in which around \(14\) million people were affected, and the estimated damage was around \(\$2.9\) billion USD [7]. Footnote 12: [https://en.wikipedia.org/wiki/2019_Midwestern_U.S._](https://en.wikipedia.org/wiki/2019_Midwestern_U.S._) floods
For pre-processing, we perform conversion of cases, lemmatization, removal of URLs, stop words, white-spaces, punctuation marks, and emoticons. We remove Twitter-specific keywords [45], such as usernames and hashtags, as we consider
only the text of the tweets. We also remove duplicate tweets and retweets and follow Alam et al. [46] to remove noise, i.e., we remove any word consisting of fewer than \(3\) characters except disaster-specific keywords [8]. We show the details of \(D_{1}\)-\(D_{5}\) and the gold standard summary length in Table II and make them publicly available 13. We show some examples of tweets for \(D_{2}\) and \(D_{4}\) in Table III.
Footnote 13: [https://drive.google.com/drive/folders/15x-bfddxVlu7b44znXvYuCCiFvCSmFZ?usp=sharing](https://drive.google.com/drive/folders/15x-bfddxVlu7b44znXvYuCCiFvCSmFZ?usp=sharing)
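For concreteness, the pre-processing pipeline described above can be sketched in a few lines of Python. This is a minimal illustration only; the NLTK resources, the retweet filter, and the disaster-specific keyword set `DISASTER_KEYWORDS` are assumptions that stand in for the exact resources used in our experiments.

```python
import re
import string
from nltk.corpus import stopwords          # requires: nltk.download('stopwords')
from nltk.stem import WordNetLemmatizer    # requires: nltk.download('wordnet')

# Hypothetical disaster-specific keyword set; short words in this set are kept
# even though all other words with fewer than 3 characters are dropped.
DISASTER_KEYWORDS = {"aid", "sos"}

STOP_WORDS = set(stopwords.words("english"))
LEMMATIZER = WordNetLemmatizer()

def preprocess(tweet: str) -> str:
    text = tweet.lower()                                   # case conversion
    text = re.sub(r"http\S+|www\.\S+", " ", text)          # remove URLs
    text = re.sub(r"[@#]\w+", " ", text)                   # remove usernames and hashtags
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation
    text = re.sub(r"[^\x00-\x7f]", " ", text)              # emoticons / non-ASCII symbols
    tokens = [LEMMATIZER.lemmatize(tok) for tok in text.split()]
    tokens = [tok for tok in tokens if tok not in STOP_WORDS]
    # drop noisy short words, keeping disaster-specific keywords
    tokens = [tok for tok in tokens if len(tok) >= 3 or tok in DISASTER_KEYWORDS]
    return " ".join(tokens)

def deduplicate(tweets):
    """Remove retweets and exact duplicates after pre-processing."""
    seen, kept = set(), []
    for raw in tweets:
        if raw.startswith("RT "):          # simple retweet filter (assumption)
            continue
        clean = preprocess(raw)
        if clean and clean not in seen:
            seen.add(clean)
            kept.append(clean)
    return kept
```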
## IV Proposed Approach
In this Section, we elaborate the process of the hybrid ground-truth summary generation approach (PORTRAIT), along with a justification of which parts are automated and which parts are left to the human annotators. We also provide a detailed discussion of the process adopted for annotator selection.
### _Ground-truth Summary Generation_
To ensure a good quality summary, an annotator needs to make multiple decisions for various tasks: identification of the topic of each tweet, assessment of the importance of the topic with respect to the disaster event, determination of the importance of a tweet with respect to the topic and, finally, selection or rejection of the tweet for the ground-truth summary on the basis of both the importance of the tweet with respect to the topic and the importance of the topic with respect to the disaster. These tasks may be performed either explicitly or implicitly by intuition. We observe that in all existing research works an annotator [24, 11, 47] manually identifies the importance of each tweet with respect to the disaster event and then decides whether it should be part of the summary or not based on intuition. These approaches mainly depend on the wisdom of the annotators to select tweets from a flat set of tweets related to a disaster. This might lead to high variance in the summaries generated by different annotators, as every step depends on human intuition, which varies across annotators. Moreover, it might also fail to preserve all the intended features of a good summary. Along with that, it requires extensive manual effort and time from the annotators. Therefore, we propose PORTRAIT to generate the ground-truth summary, where we reduce the effort and time of the annotators by providing a subset of the most informative tweets from each topic. Additionally, this can also ensure consistency among the summaries of different annotators. We discuss the proposed PORTRAIT next.
As discussed earlier, a number of steps are required to come up with the summary from a flat set of tweets which includes topic identification of each tweet, assessment of topic importance, and final selection of tweets to ensure all the
important aspects are covered. From the existing literature, it is well understood that the first step of this sequential process, topic identification, can be automated with very high accuracy. There are a number of approaches that can be adopted for automated topic identification. We have chosen Garg et al. [8] to automatically identify the category/topic of a tweet, as it was specially designed for disaster tweet classification based on a disaster ontology and reports a very high F1-score (\(0.98\)) as classification accuracy when considering only those tweets which could be classified using this approach. Our observations indicate that the tweets which are not classified by Garg et al. [8] are either irrelevant or contain very little information. Therefore, for the subsequent tasks, we do not consider the tweets whose category could not be determined by the automated method. We show the number of tweets which we classified using the automated method in Table II. The next task in the sequential process of PORTRAIT is the assessment of the relative importance of each topic with respect to the disaster event. We find that the relative importance of topics with respect to the corresponding disaster event varies significantly across disasters [8], and identifying it automatically could be highly error-prone, so we believe this task should be performed by human annotators to ensure a high-quality ground-truth summary. In the sequential process of annotation, understanding the importance of a tweet with respect to its topic is the next task. However, this becomes highly time-consuming for the annotators if a topic consists of a huge number of tweets. For example, the numbers of tweets that belong to the topics _Volunteering Support_ and _Affected Population_ are \(1113\) and \(440\) in _Midwestern U.S. Floods_14 and _Pakistan Earthquake_15, respectively, as shown in Table IV. So, to reduce the effort of an annotator, we provide only a subset of the highly ranked tweets (on the basis of informativeness) from all the tweets that belong to that topic, as highly ranked tweets are more likely to be selected into the summary. Although there are a number of existing approaches for ranking tweets [48, 42, 10, 24], we adopt Disaster-specific Maximal Marginal Relevance (DMMR) [8]. We choose DMMR over other approaches as it considers the specific information of each topic related to disaster events and has been shown to be the most effective for disaster events. We use this automated ranking for selection only if the number of tweets in a topic/category is more than \(25\). For a topic with more than \(25\) tweets, we select the top \(25\%\) most informative tweets by DMMR; however, if the number of tweets in the top \(25\%\) is less than \(25\), then we keep the top \(25\) tweets based on the DMMR score. We finally provide the selected tweets to the annotators. By this automatic selection of the most informative tweets by DMMR from each topic, we significantly reduce the number of tweets to be read by the annotators: an annotator only reads around \(26.37-30.59\%\) of the classified tweets for the \(D_{1}\) to \(D_{5}\) datasets. Additionally, as we select a significant percentage of tweets from each topic, it is most unlikely that we will lose any important tweet which was supposed to be part of the summary. We experimentally validate this in Section V-B.
However, we do not provide the associated DMMR scores of the selected tweets to the annotators, as they might be misleading. Finally, we rely on the annotator to select the set of tweets from each topic into the summary, as this is a highly subjective task (a sketch of the automated selection step is given after the guidelines below). We provide annotators with the following set of instructions/guidelines to help them.
Footnote 14: [https://en.wikipedia.org/wiki/2019_Midwestern_U.S._floods](https://en.wikipedia.org/wiki/2019_Midwestern_U.S._floods)
1. Annotators are instructed to read about the disaster event from external and trusted sources of information.
2. Annotators are also instructed to go through a set of example tweets and corresponding topic descriptions created by us. This is done for all the topics. An overview of this information is shown in Table V.
3. Annotators need to select the tweets from each topic on the basis of their wisdom and intuition. The annotator must consider the importance of the topic with respect to the disaster and the importance of a tweet with respect to the topic to decide whether the tweet should be selected or not. An annotator can even decide not to select any tweet from a topic if he/she feels that the topic, or the tweets of that topic/category, is not important for the disaster event.
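As a concrete illustration, the automated per-topic candidate selection described before these guidelines (keep the top \(25\%\) of a topic's tweets by DMMR, but never fewer than \(25\), whenever the topic holds more than \(25\) tweets) can be sketched as follows; here `dmmr_score` is a placeholder for the DMMR ranking of Garg et al. [8], not the actual implementation.

```python
def select_candidates(topic_tweets, dmmr_score, min_keep=25, top_frac=0.25):
    """Return the subset of a topic's tweets that is shown to the annotators.

    topic_tweets : list of tweets belonging to one topic/category
    dmmr_score   : callable giving the DMMR informativeness of a tweet
                   (placeholder for the ranking method of Garg et al. [8])
    """
    if len(topic_tweets) <= min_keep:
        # small topics are passed to the annotators in full
        return list(topic_tweets)
    ranked = sorted(topic_tweets, key=dmmr_score, reverse=True)
    n_keep = max(min_keep, int(top_frac * len(ranked)))
    # note: the DMMR scores themselves are *not* shown to the annotators
    return ranked[:n_keep]
```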
### _Annotator Selection_
We observe that existing research works [24, 11, 13, 42] on ground-truth summary generation for disaster events do not provide any quality checking strategy for annotator selection. However, as the quality of the ground-truth summary depends on the intuition and understanding of the annotators, we propose a _Quality Assessment Evaluation_ to select annotators. For the _Quality Assessment Evaluation_, we evaluate an annotator's performance on a subset of tweets, \(T^{\prime}\), from the Hurricane Matthew 16 (\(D_{2}\)) dataset. \(T^{\prime}\) comprises \(2\%\) of the tweets from each topic of the dataset. To handle fractions, we round up the \(2\%\) of tweets; however, if the round-up results in zero tweets being selected for a topic, we change it to \(1\).
Footnote 16: [https://en.wikipedia.org/wiki/Hurricane_Matthew](https://en.wikipedia.org/wiki/Hurricane_Matthew)
In the _Quality Assessment Evaluation_, we ask the annotators \({}^{17}\) to \(1\)) identify the topic of a given tweet, and \(2\)) select the tweets from each topic into a summary. To identify the topic, we provide the annotators with a list of the possible topics along with descriptions and examples, as shown in Table V. On the basis of this provided information, the annotators assign the topic that seems most relevant to the tweet text. To select the tweets into the summary, the annotator needs to identify the importance of a topic to determine its representation in the summary and select the most representative tweets from each topic on the basis of the importance of that topic. We measure the annotator's performance on the basis of the quality of the generated summary through the objectives of text summarization [25], namely _Coverage_, _Relevance_, and _Diversity_, through the opinion of a meta-annotator. _Relevance_ refers to the identification of the importance of each tweet with respect to a disaster event, _Coverage_ refers to the selection of the important aspects in the summary, and _Diversity_ requires that all
selected tweets in the summary have diverse/unique information, i.e., no two tweets convey the same information. We follow the existing summarization works [14, 19], where a meta-annotator scores the summary generated by an annotator in the range of \(1\) (worst score) to \(10\) (best score) on the basis of the fulfillment of the objectives _Coverage_, _Relevance_, and _Diversity_. A meta-annotator is a university graduate in the age group \(20-30\) who is well-versed in English and conversant with Twitter. We consider an annotator to have passed the _Quality Assessment Evaluation_ if he/she scores more than \(7\). For our ground-truth summary generation, \(6\) out of \(10\) annotators passed the _Quality Assessment Evaluation_, and we selected the top-ranked \(3\) annotators from them. We refer to these annotators as \(P_{1}\), \(P_{2}\), and \(P_{3}\) in the rest of the paper.
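A minimal sketch of how the evaluation subset \(T^{\prime}\) can be drawn (\(2\%\) of the tweets of each topic, rounded up, with at least one tweet per topic); the uniform random sampling shown here is an illustrative assumption.

```python
import math
import random

def quality_assessment_subset(tweets_by_topic, frac=0.02, seed=0):
    """Draw the evaluation set T' used for annotator selection."""
    rng = random.Random(seed)
    subset = {}
    for topic, tweets in tweets_by_topic.items():
        n = max(1, math.ceil(frac * len(tweets)))   # round up; never zero
        subset[topic] = rng.sample(tweets, min(n, len(tweets)))
    return subset
```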
### _Summary Length_
We set the length of the summary to \(40\) tweets on the basis of existing disaster summarization works [11, 24]. We do not use any automated procedure to determine the number of tweets in the summary from the disaster tweets themselves.
## V Results and Discussions
In this Section, we evaluate the effectiveness of PORTRAIT by comparing the ground-truth summary generated by PORTRAIT with the ground-truth summary generated by an existing semi-automated approach [20] and by existing research works specific to disaster events [24, 11, 47]. We refer to the summary generated by the existing semi-automated approach as _Semi-automated Summary_, to that of existing approaches specific to disaster events as _Baseline Summary_, and to the summary generated by PORTRAIT as _Proposed Summary_. As both the _Semi-automated Summary_ and the _Baseline Summary_ require at least \(3\) annotators, we employ \(3\) annotators for each of them. We refer to the annotators for the _Semi-automated Summary_ as \(S_{1}\), \(S_{2}\) and \(S_{3}\) and for the _Baseline Summary_ as \(B_{1}\), \(B_{2}\) and \(B_{3}\). As previously discussed, we refer to the annotators for PORTRAIT as \(P_{1}\), \(P_{2}\) and \(P_{3}\).
We have considered \(3\) metrics, namely _Coverage_, _Relevance_, and _Diversity_ for performance evaluation of PORTRAIT. For qualitative comparison, we employ \(3\) meta-annotators for the subjective understanding of each summary on the basis of considered metrics _Coverage_, _Relevance_, and _Diversity_ in subsection V-A. Additionally, we compare the summaries through the quantitative understanding of _Coverage_, _Relevance_, and _Diversity_ in subsection V-B. We, finally, provide a case study where we evaluate the existing summarization approaches on the ground-truth summaries generated by PORTRAIT for \(D_{1}-D_{5}\) datasets in subsection V-C.
### _Qualitative Comparison_
Qualitative assessment is a well-accepted method to evaluate summary quality. For quality assessment, we gave the input tweets related to the disaster event, _Proposed Summary_, _Semi-automated Summary_ and _Baseline Summary_ to \(3\) meta-annotators. We asked the meta-annotator to rate the summary on the basis of three factors, namely _Coverage_, _Relevance_, and _Diversity_. We also provide annotators with the definition of these three factors as follows - 1) _Coverage_ indicates the percentage of important sub-events/aspects present in the input tweets that are covered in summary, 2) _Relevance_ of a tweet indicates how much relevant a tweet is with respect to the corresponding disaster event. So, the _Relevance_ of a summary depends on the percentage of tweets in the summary which are relevant to the disaster event, and 3) _Diversity_ indicates that tweets in summary comprise of diverse information. We asked the meta-annotators to rate the _Proposed Summary_, _Semi-automated Summary_ and _Baseline Summary_ on each factor in the range of \(1\) (worst rating) - 5 (best rating) for the \(5\) disaster datasets. We also asked them to choose a fractional score if required. In Table VI, we show the aggregated (average) score of \(3\) annotators for all the three factors on \(5\) datasets. We observe that the aggregated score for all factors are more than \(4\) for all the \(5\) datasets for the _Proposed Summary_. Additionally, we observe that the Aggregated coverage score for all datasets ranges between \(4.49\)-\(4.83\), the relevance score between \(4.25\)-\(4.84\) and the diversity score between \(4.71\)-\(4.85\) for the _Proposed Summary_ whereas the Aggregated coverage score ranges between \(3.67\)-\(4.47\) and \(3.69\)-\(4.33\), the relevance score between \(3.64\)-\(4.44\) and \(3.31\)-\(4.22\), the diversity score between \(3.47\)-\(4.25\) and \(3.38\)-\(4.42\) for _Semi-automated Summary_ and _Baseline Summary_ respectively. Therefore, our observations indicate that the quality of the _Proposed Summary_ is very high.
### _Quantitative Comparison_
In this Section, we present the quantitative comparison among _Proposed Summary_, _Semi-automated Summary_ and _Baseline Summary_ in terms of coverage, relevance and diversity.
_Coverage:_ As mentioned earlier, a good quality summary should cover all the important sub-events/aspects/topics of the event. In order to assess this, we compare the topic coverage among all the summaries. We utilize the topics identified by PORTRAIT in Section IV-A for the _Proposed Summary_, _Semi-automated Summary_ and _Baseline Summary_. We show the number of topics for the \(D_{1}-D_{5}\) datasets in Table VII. We found that at most one topic is not captured in the _Proposed Summary_ with respect to all the topics in the input tweets. However, on inspecting the tweets related to the topic which is not captured, we found that both the number of such tweets and their relevance with respect to the disaster are very low. For example, for \(D_{1}\), which comprises tweets related to the disaster event _Los Angeles International Airport Shooting_18, we found that no tweet belonging to the topic _Infrastructure Damage_ appears in the _Proposed Summary_. However, as the event name suggests, there was no major infrastructure damage during the _Los Angeles International Airport Shooting_, and the number of tweets that belong to this topic was very low, i.e., \(1\) tweet. Additionally, we observe that there was no tweet belonging to _Infrastructure Damage_ in the _Semi-automated Summary_ and _Baseline Summary_ for \(D_{1}\) either. However, other topics, such as _Emotional Distress_, which comprised \(1\) tweet and \(5\) tweets for _Hurricane Matthew_19 (\(D_{2}\)) and _Puebla Mexico Earthquake_20 (\(D_{3}\)), respectively, and _Humanitarian Event_, which comprised \(5\) tweets for _Midwestern U.S. Floods_21 (\(D_{5}\)), were missing in both the _Semi-automated Summary_ and _Baseline Summary_. We observe similar findings across all the \(5\) datasets: any topic which was not captured by the _Proposed Summary_ was also not captured by either the _Semi-automated Summary_ or the _Baseline Summary_. However, both the _Semi-automated Summary_ and _Baseline Summary_ missed several additional topics which were covered by the _Proposed Summary_. Therefore, our observations show that the _Proposed Summary_ has a higher topic coverage than both the _Semi-automated Summary_ and _Baseline Summary_ across all datasets.
Footnote 19: [https://en.wikipedia.org/wiki/Hurricane_Matthew](https://en.wikipedia.org/wiki/Hurricane_Matthew)
Footnote 20: [https://en.wikipedia.org/wiki/2017_Puebla_earthquake](https://en.wikipedia.org/wiki/2017_Puebla_earthquake)
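The topic-coverage comparison above amounts to counting how many of the input topics appear at least once in a summary. A minimal sketch, assuming the topic labels come from the automated classification of Section IV-A:

```python
def topic_coverage(summary_tweets, tweet_topic, all_topics):
    """Fraction of input topics represented by at least one summary tweet.

    tweet_topic : dict mapping a tweet to its (automatically assigned) topic
    all_topics  : set of topics present in the input tweets
    """
    if not all_topics:
        return 0.0
    covered = {tweet_topic[t] for t in summary_tweets if t in tweet_topic}
    return len(covered & set(all_topics)) / len(all_topics)
```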
_Relevance:_ A summary should ensure that the relevant tweets of the disaster event are captured. In order to understand the relevance of each tweet, we ask meta-annotators to annotate all the tweets in the input dataset with a _relevance label_, which is _high_, _medium_ or _low_, on the basis of their wisdom and intuition. Additionally, we ask the meta-annotators to provide explainables, i.e., explanations behind their decision on the _relevance label_ of each tweet, to support the _relevance label_ annotation. A meta-annotator has a good knowledge of English and was not a part of this project. We show a few examples of this annotation in Table VIII. In order to compare the _Proposed Summary_ with the _Semi-automated Summary_ and _Baseline Summary_ with respect to _relevance_, we check the distribution of _high_, _medium_ and _low relevance label_ tweets in the respective summaries. We show the percentage of each _relevance label_ for all the summaries of all \(3\) annotators for the \(D_{1}-D_{5}\) datasets in Table IX. Our observations indicate that \(82.50\%-92.50\%\) of tweets in the _Proposed Summary_ have _high relevance labels_, whereas \(22.50\%-75.00\%\) of tweets in the _Semi-automated Summary_ and \(30.00\%-70.00\%\) of tweets in the _Baseline Summary_ have _high relevance labels_. Similarly, \(7.50\%-17.50\%\) of tweets in the _Proposed Summary_ have _medium relevance labels_, whereas \(2.50\%-30.00\%\) of tweets in the _Semi-automated Summary_ and \(7.50\%-22.50\%\) of tweets in the _Baseline Summary_ have _medium relevance labels_. We further observe that none of the tweets in the _Proposed Summary_ has a _low relevance label_ across the disasters, whereas \(15.00\%-62.50\%\) of tweets in the _Semi-automated Summary_ and \(22.50\%-65.00\%\) of the tweets in the _Baseline Summary_ have _low relevance labels_. Therefore, based on these observations, we can say that PORTRAIT ensures more _high relevance label_ tweets and no _low relevance label_ tweets in the summary.
_Diversity:_ A summary should ensure that the tweets selected in the summary capture diverse information. In order to quantify the diversity of a summary \(S\), we calculate the aggregate (average) diversity score \(AvgDiv(S)\), which is the average of the diversity \(Div(T_{i},T_{j})\) over all pairs of tweets \(T_{i}\) and \(T_{j}\) in \(S\). We calculate \(Div(T_{i},T_{j})\) as:
\[Div(T_{i},T_{j})=1-Sim(T_{i}^{x},T_{j}^{x}) \tag{1}\]
where, \(Sim(T_{i}^{x},T_{j}^{x})\) represents the semantic similarity between a pair of tweets explainables, \(T_{i}^{x}\) and \(T_{j}^{x}\) of \(T_{i}\) and \(T_{j}\), respectively, by:
\[Sim(T_{i}^{x},T_{j}^{x})=\frac{\vec{E_{i}}\cdot\vec{E_{j}}}{|\vec{E_{i}}|\ |\vec{E_{j}}|} \tag{2}\]
where \(\vec{E_{i}}\) and \(\vec{E_{j}}\) are the embeddings of \(T_{i}^{x}\) and \(T_{j}^{x}\), respectively. We calculate \(\vec{E_{i}}\) and \(\vec{E_{j}}\) as the average of the embeddings of the tweet _explainable_ keywords of \(T_{i}^{x}\) and \(T_{j}^{x}\), respectively. We obtain the embedding of an _explainable_ keyword of a tweet using a pre-trained Word2Vec model provided by CrisisNLP [1], which is trained on \(52\) million crisis-related messages of various disaster events. However, as tweets do not inherently contain _explainables_ which can represent the information present in the tweet about a disaster event, we rely on the _explainables_ provided by the meta-annotators (as discussed in subsection V-B) for all the tweets in the summary. We calculate \(AvgDiv(S)\) of the _Proposed Summary_, _Semi-automated Summary_ and _Baseline Summary_ of the \(3\) meta-annotators for all the \(5\) datasets. Our observations, as shown in Table X, indicate that \(AvgDiv(S)\) ranges from \(0.45-0.69\) in the _Proposed Summary_, whereas it ranges from \(0.40-0.66\) in the _Semi-automated Summary_ and \(0.43-0.66\) in the _Baseline Summary_. Therefore, the _Proposed Summary_ obtains a \(2.62\%-8.12\%\) and \(2.28\%-5.68\%\) higher aggregate diversity score as compared to the _Semi-automated Summary_ and _Baseline Summary_, respectively, which implies that PORTRAIT ensures more diverse tweets in the summary than existing ground-truth summary techniques.
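Equations (1)-(2) translate directly into the following sketch, which averages the pairwise diversity over all tweet pairs of a summary. The Word2Vec file path and the explainable-keyword lists are placeholders for the CrisisNLP embeddings [1] and the meta-annotator annotations, respectively.

```python
from itertools import combinations

import numpy as np
from gensim.models import KeyedVectors

# Placeholder path for the CrisisNLP crisis-domain Word2Vec embeddings [1].
W2V = KeyedVectors.load_word2vec_format("crisisNLP_word2vec.bin", binary=True)

def embed(explainable_keywords):
    """Average embedding E_i of a tweet's explainable keywords (Eq. 2)."""
    vecs = [W2V[w] for w in explainable_keywords if w in W2V]
    return np.mean(vecs, axis=0) if vecs else np.zeros(W2V.vector_size)

def avg_diversity(summary_explainables):
    """AvgDiv(S): mean of Div(T_i, T_j) = 1 - cosine similarity (Eqs. 1-2)."""
    embs = [embed(kw) for kw in summary_explainables]
    divs = []
    for ei, ej in combinations(embs, 2):
        denom = np.linalg.norm(ei) * np.linalg.norm(ej)
        sim = float(ei @ ej) / denom if denom > 0 else 0.0
        divs.append(1.0 - sim)
    return float(np.mean(divs)) if divs else 0.0
```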
### _Case Study : Evaluation of Existing Summarization Approaches_
In this subsection, we initially discuss the details of the existing state-of-the-art summarization approaches. Then, we provide a performance comparison of these approaches on the ground-truth summaries generated by PORTRAIT for \(5\) disaster datasets.
#### V-C1 Existing Summarization Approaches
We segregate these approaches into _content-based_, _graph-based_, _matrix factorization-based_, _semantic similarity-based_, _ontology-based_ and _deep learning-based_ approaches. We select a few prominent tweet summarization approaches from each type, which we discuss next.
1. _Content-based Approaches:_ We discuss the existing content-based summarization approaches as follows:
1. _LUHN_: Luhn et al. [49] propose a frequency-based summarization approach which initially determines the term frequency score of each word in a document (after removing stopwords and stemming) and then generates a summary by selecting into the summary those sentences which contain the highest-frequency-scoring words.
2. _SumBasic_: Nenkova et al. [50] initially identify the probability of occurrence of each word in a document and then select into the summary those tweets which contain the words with the maximum probability of occurrence.
3. _COWTS_: Rudra et al. [24] initially calculate the score of each keyword (i.e., nouns, main verbs and numerals) using TF-IDF and then select a tweet into the summary if it contains the keywords with the maximum score.
4. _DEPSUB_: Rudra et al. [47] initially identify the sub-events from the tweets and select those representative tweets from each sub-event into summary, which can ensure maximum coverage of the sub-event.
2. _Graph-based Approaches_: We discuss the existing graph-based summarization approaches as follows: 1. _Cluster Rank_: Garg et al. [51] initially segment a document into clusters and then use the PageRank [52] algorithm to identify the tweets from each cluster to be selected into the summary. 2. _LexRank_: Erkan et al. [53] initially construct a graph where the nodes are the sentences and the edges represent the cosine similarity between each pair of sentences, and finally select into the summary those sentences which have the highest Eigenvector [54] centrality score. 3. \(EnSum\): Dutta et al. [11] propose an ensemble graph-based tweet summarization approach, \(EnSum\), in which they initially identify candidate tweets by \(9\) summarization algorithms and then create a tweet graph that comprises these tweets as nodes, with edges representing their similarity. Finally, they select the tweets with the highest representativeness score from the tweet graph into the summary. 4. _COWEXABS_: Rudra et al. [10] initially identify the most relevant disaster-specific keywords and then select into the summary those tweets that provide maximum information coverage of these keywords.
* _MEAD_: Radev et al. [55] propose a centroid-based summarization approach which initially identifies the clusters by agglomerative clustering and then, selects tweets from each cluster into the summary on the basis of centrality score and diversity score.
* _Matrix factorization-based Approaches:_ We discuss the most popular matrix factorization-based summarization approaches. 1. _LSA_: Gong et al. [56] propose a document summarization approach, _LSA_, which selects the tweets with the largest eigenvalues after Singular Value Decomposition (SVD) of the keyword matrix created from all the tweets. 2. _SumDSDR_: He et al. [57] propose a data reconstruction-based document summarization approach. _SumDSDR_ measures the relationship among the sentences using linear reconstruction and non-linear reconstruction objective functions and then creates a summary by minimizing the reconstruction error.
* _Ontology-based Approach:_ Garg et al. [8] propose an ontology-based tweet summarization approach, _OntoDSumm_, which initially identifies the category of each tweet using an ontology-based pseudo-relevance feedback approach, followed by determination of the importance of each category with respect to a disaster. Finally, it selects the representative tweets from each category using the Disaster-specific Maximal Marginal Relevance (DMMR) based approach to create a summary.
* _Deep learning-based Approach:_ Nguyen et al. [42] propose a disaster-specific abstractive tweet summarization approach, _RATSUM_, which identifies the key-phrases present in tweets using a pre-trained BERT model [58] and then generates the word-level summary by maximizing the coverage of key-phrases in the final summary. For our experiments, we select into the summary those tweets which provide the maximum coverage of key-phrases in the final summary.
#### V-C2 Comparison Results and Discussions
To evaluate the performance of the various state-of-the-art summarization approaches, we compare the summaries generated by the different approaches using ROUGE-N [59] scores. The ROUGE-N score is a well-known measure in text summarization tasks, which computes a score on the basis of the overlapping words between the system-generated summary and the ground-truth summary. We use the F1-score for \(3\) different variants of the ROUGE-N score, i.e., N=\(1\), \(2\) and L, respectively. The higher the ROUGE score, the better the quality of the summary. Our observations from Table XI indicate that _OntoDSumm_ ensures the best ROUGE-N F1-scores on \(D_{1}-D_{5}\), followed by _RATSUM_. The reason behind this high performance is that _OntoDSumm_ utilizes ontology knowledge with respect to each topic to identify the importance of each tweet in a topic. Additionally, it captures the representation of each topic in the summary and handles the information diversity in the summary tweets. Further, our observations indicate that _RATSUM_ ensures the next best ROUGE-N F1-scores on \(D_{1}\) and \(D_{3}\), followed by _LexRank_. The reason for its high performance is that _RATSUM_ better captures the content and context information present in the tweets to predict tweet importance; however, it does not cover the information diversity in the summary tweets. The performance of _MEAD_ and _COWTS_ is the worst for \(D_{1}\) and \(D_{3}-D_{5}\), and for \(D_{2}\), respectively, because they do not cover category representation and information diversity in the summary.
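For completeness, a sketch of how the ROUGE-1, ROUGE-2 and ROUGE-L F1-scores of Table XI can be computed with the `rouge-score` package; the two strings below are placeholders for a system summary and a PORTRAIT ground-truth summary.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def rouge_f1(system_summary: str, ground_truth_summary: str):
    """F1-scores of ROUGE-1, ROUGE-2 and ROUGE-L for one system/reference pair."""
    scores = scorer.score(ground_truth_summary, system_summary)
    return {name: s.fmeasure for name, s in scores.items()}

# Placeholder example; real inputs are the 40-tweet summaries of Section IV-C.
print(rouge_f1("flood relief camps opened", "relief camps opened for flood victims"))
```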
## VI Conclusions and Future works
In this paper, we propose a hybrid approach, PORTRAIT, which partially automates extractive ground-truth summary generation for disaster events. With this hybrid approach, we can handle both of the inherent challenges of ground-truth summary generation, i.e., reduce the effort and time of human annotators and ensure consistency in the summary irrespective of the annotators. In order to understand whether the adoption of automation and the reduction of human effort and time in ground-truth summary generation affect the ground-truth summary quality, we compare the performance of PORTRAIT with the existing approaches for ground-truth summary generation by \(3\) annotators, both quantitatively and qualitatively, on \(5\) disaster event datasets. Our observations indicate that the summary quality of PORTRAIT is better than that of the existing approaches by both quantitative and qualitative measures. Additionally, we observed that the variance among the ground-truth summaries generated by the \(3\) annotators for the \(5\) disaster event datasets is very small, which indicates that PORTRAIT can ensure consistent summaries across annotators. Further, on the basis of these observations, we can explore a new direction in ground-truth summary generation for disaster events such that there is no requirement for multiple annotators.
Apart from PORTRAIT, in this paper, we generate and publicly provide ground-truth summaries for \(5\) different disaster datasets of different types, including earthquake, hurricane, flood, and mass shooting, which occurred in various countries, such as the United States of America, Haiti, Mexico, and Pakistan. We believe this will help in the development and evaluation of disaster tweet summarization approaches. Additionally, we perform a case study where we evaluate the performance of \(13\) state-of-the-art summarization approaches on these \(5\) disaster dataset summaries using ROUGE-N F1-scores.
## Acknowledgements
The authors would like to express their gratitude to the annotators who provided us with the ground-truth summary. The authors thank Aditya Kumar, Juhi Rani, and Thiyagura Pragathi for their help in the implementation of some existing summarization approaches.
|
2305.10628 | The Magic of Networks Grown by Redirection | We highlight intriguing features of complex networks that are grown by
\emph{redirection}. In this mechanism, a target node is chosen uniformly at
random from the pre-existing network nodes and the new node attaches either to
this initial target or to a neighbor of this target. This exceedingly simple
algorithm generates preferential attachment networks in an algorithmic time
that is linear in the number of network nodes $N$. Even though preferential
attachment ostensibly requires \emph{global knowledge} of the network,
redirection requires only \emph{local knowledge}. We also show that changing
just a \emph{single} attachment rate in linear preferential attachment leads to
a non-universal degree distribution. Finally, we present unexpected
consequences of redirection in networks with undirected links, where highly
modular and non-sparse networks arise. | P. L. Krapivsky, S. Redner | 2023-05-18T00:40:27Z | http://arxiv.org/abs/2305.10628v4 | # The Magic of Networks Grown by Redirection
###### Abstract
We highlight intriguing features of complex networks that are grown by redirection. In this mechanism, a target node is chosen uniformly at random from the pre-existing network nodes and the new node attaches either to this initial target or to a neighbor of this target. This exceedingly simple algorithm generates preferential attachment networks in an algorithmic time that is linear in the number of network nodes \(N\). Even though preferential attachment ostensibly requires global knowledge of the network, redirection requires only local knowledge. We also show that changing just a single attachment rate in linear preferential attachment leads to a non-universal degree distribution. Finally, we present unexpected consequences of redirection in networks with undirected links, where highly modular and non-sparse networks arise.
## I Introduction
Redirection is a natural mechanism to create growing networks. In a social setting, you may meet somebody and ultimately befriend one of the friends of your initial acquaintance. This redirection also underlies a growth mechanism in Facebook, where you are encouraged to create new links to some of the friends of your initial Facebook friend [1; 2]. The simplest implementation of redirection for networks where each link has a prescribed directionality is the following (Fig. 1):
1. A new node n picks a pre-existing node x from the network uniformly at random.
2. With probability \(0<1-r<1\), n attaches to x.
3. Otherwise, with probability \(r\), n attaches to the (unique) ancestor node y of x.
These steps are repeated until a network of a desired size is generated.
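A minimal simulation sketch of this growth rule (an illustration, not part of the original algorithm description beyond what is stated above): each node stores only its unique ancestor, so every attachment step costs \(O(1)\) time and a tree of \(N\) nodes is built in a time linear in \(N\).

```python
import random

def grow_redirection_tree(N, r, seed=None):
    """Grow a directed tree of N nodes by uniform attachment with redirection.

    parent[i] is the ancestor of node i; node 0 is the root (its own parent).
    With probability 1-r the new node attaches to a uniformly random target x,
    and with probability r it attaches to the ancestor of x.
    """
    rng = random.Random(seed)
    parent = [0]              # root points to itself
    degree = [0]              # total degree of each node
    for n in range(1, N):
        x = rng.randrange(n)                  # uniformly random provisional target
        target = parent[x] if rng.random() < r else x
        parent.append(target)
        degree.append(1)                      # the new node carries one (outgoing) link
        degree[target] += 1
    return parent, degree

# Example: r = 1/2 gives lambda = 1/r - 2 = 0, i.e., strictly linear preferential
# attachment, with degree-distribution exponent nu = 1 + 1/r = 3.
parent, degree = grow_redirection_tree(100_000, r=0.5, seed=1)
```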
By construction, a network with a tree topology always remains a tree. While it is straightforward to generalize to networks with loops by the new node choosing multiple provisional targets, we focus on trees both for their simplicity and because they illustrate many of the intriguing features of networks that are grown by preferential attachment.
Without the redirection step, the above growth rules define a model that is known as the random recursive tree (RRT). We discuss this fundamental null model [3] in Sec. II. Redirection represents a minimalist extension of the RRT; this idea was suggested in [4] and developed in [5]. (Alternative extensions of growth mechanisms that are still local in character [6; 7; 8; 9] have also yielded networks with broad degree distributions.) The standard redirection is equivalent [5] to shifted linear preferential attachment, in which the rate of attaching to a pre-existing network node of degree \(k\) is proportional to \(k+\lambda\), with \(\lambda=\frac{1}{r}-2\). This connection highlights a fascinating aspect of redirection--it transforms a purely local growth mechanism--namely the RRT plus redirection to the ancestor--into the global mechanism of linear preferential attachment. The motivation for preferential attachment stems from the "rich get richer" parable [10; 11]; that is, popular high-degree nodes are more likely to attract additional links merely by virtue of being popular. While enormous effort has been devoted to understanding the properties of these types of
Figure 1: Illustration of redirection. The rate at which a new node n attaches to the ancestor y of node x by redirection is proportional to the number of upstream neighbors of y (green nodes).
networks (see, e.g., Refs. [12; 13; 14; 15; 16; 17; 18; 19]), we will present, in Sec. III, a number of surprising and under-appreciated features of preferential attachment.
As we will discuss in this section, networks that are built by the redirection mechanism of Fig. 1 have a degree distribution that possesses a non-universal algebraic tail,
\[N_{k}\sim\frac{N}{k^{\nu}}\,,\qquad\nu=1+\frac{1}{r}>2\,. \tag{1}\]
The exponent must satisfy \(\nu>2\) for all sparse networks whose degree distribution has an algebraic tail. This bound follows from the identity \(\sum_{k\geq 1}kN_{k}=2L\) and the linear growth of the number of links \(L\) with \(N\), which is the defining property of sparse networks. For trees, in particular, \(\sum_{k\geq 1}kN_{k}=2L=2(N-1)\).
However, a disconcerting feature of several complex networks [20] is that they are apparently characterized by degree distributions with tail exponent \(\nu<2\), which violates the bound in Eq. (1). Mathematically, this implies that the sum \(\sum_{k\geq 1}kN_{k}\) grows superlinearly with \(N\), which cannot occur in sparse networks with \(N_{k}\sim N/k^{\nu}\). An exponent value \(\nu<2\) may arise in densifying networks [21; 22; 23], where \(L\) increases superlinearly with \(N\). Intriguingly, such an anomalously small exponent also occurs in undirected growing trees that are generated by complete redirection (Sec. IV.3). To be consistent with the constraint \(\sum_{k\leq N}kN_{k}\sim N\), the amplitude of the degree distribution must grow sublinearly with \(N\), namely
\[N_{k}\sim\frac{N^{\nu-1}}{k^{\nu}}\,,\qquad\nu<2\,. \tag{2}\]
Networks grown by this parameter-free complete redirection mechanism: (a) are highly modular; (b) have numerous macrohubs; (c) consist almost entirely of leaves (nodes of degree 1); (d) the "core" of the network (nodes of degree \(k\geq 2\)) comprises a vanishingly small fraction of the network as \(N\to\infty\); and (e) are non-self-averaging, namely, basic characteristics, such as \(N_{k}\) for any \(k>1\), exhibit huge fluctuations from realization to realization. In spite of the simplicity of complete redirection, there is little analytical understanding of its intriguing consequences and these represent an appealing future challenge.
We emphasize that the redirection algorithm is extremely efficient. Building a network of \(N\) nodes requires a computation time that scales linearly with \(N\), with a prefactor of the order of one. Redirection also allows one to build networks with more general preferential attachment mechanisms, such as sublinear preferential attachment, with nearly the same efficiency as the original redirection algorithm (Sec. IV.2).
## II The random recursive tree (RRT)
We begin our discussion with the RRT, first introduced by Otter [3], in which nodes are added to the network one by one. Each new node attaches to a single "target" node that is chosen uniformly at random among the already existing nodes; that is, the attachment rate is \(A_{k}=1\) for any degree \(k\). By the restriction that each new node has a single attachment point (equivalently, the out degree of every node equals 1), the resulting network is a tree. If a new node attaches to more than one pre-existing node, loops could form. The degree distribution of a network with loops is modified only in its amplitude compared to growing trees. On the other hand, topological features of networks with loops are different than trees, but our focus is on the degree distribution, for which it is simplest to focus on tree networks.
The growth rules of the RRT thus are:
1. Pick one of the nodes of the RRT--defined as the target--with uniform probability.
2. Introduce a new node that links to the target node.
Starting with a single node, these two steps are repeated until the tree reaches a desired number of nodes \(N\).
### The degree distribution
We first outline how to derive the exact degree distribution and then determine the degree distribution in the limit \(N\to\infty\). The degree state of any network is characterized by the vector \(\mathbf{N}\equiv\{N_{1},N_{2},\ldots\}\), where \(N_{k}\) denotes the number of nodes of degree \(k\). When a new node is introduced, the changes in the network state vector \(\mathbf{N}\) are [24; 25]:
\[\text{attach to node of degree 1}\colon (N_{1},N_{2})\to(N_{1},N_{2}+1)\] \[\text{attach to node of degree }k>1\colon (N_{1},N_{k},N_{k+1})\to(N_{1}+1,N_{k}-1,N_{k+1}+1)\,, \tag{3}\]
while the states of all other network nodes are unchanged. Typically we are not interested in the full probability distribution \(P(\mathbf{N})\), but just the average number of nodes of a given degree, \(\left\langle N_{k}\right\rangle\), namely, the degree distribution; the angle brackets denote an average over all possible growth histories of the network.
Let us determine how the \(N_{k}\) change when a new node is added to the network. As indicated by Eq. (3), we need to separately consider nodes of degree 1 and nodes of degree greater than 1. The number of nodes of degree 1, \(N_{1}(N)\), i.e., the number of leaves, is a random variable that changes with each node addition according to
\[N_{1}(N+1)=\begin{cases}N_{1}(N)&\text{probability}\quad\frac{N_{1}}{N}\\ N_{1}(N)+1&\text{probability}\quad 1-\frac{N_{1}}{N}\,.\end{cases} \tag{4}\]
These equations apply for \(N\geq 2\), while the natural initial condition is \(N_{1}(2)=2\). This equation expresses the two possibilities when a new node joins the network: with probability \(N_{1}/N\), the new node attaches to a node of degree 1 and the number of such nodes does not change, while with probability \((1-N_{1}/N)\), the new node attaches to a node of degree \(k>1\) and \(N_{1}\) increases by 1.
The evolution equation for the average number of leaves is therefore
\[\left\langle N_{1}(N+1)\right\rangle=\left\langle\frac{N_{1}}{N}\,N_{1}+\Big{(}1-\frac{N_{1}}{N}\Big{)}\big{(}N_{1}+1\big{)}\right\rangle=1+\Big{(}1-\frac{1}{N}\Big{)}\big{\langle}N_{1}(N)\big{\rangle}\,. \tag{5}\]
Because the relevant time-like variable that characterizes the network size is the total number of nodes \(N\), we will always use \(N\) as the time variable. The solution to this recursion, for \(N\geq 2\), is
\[\left\langle N_{1}(N)\right\rangle=\frac{N}{2}+\frac{1}{N-1}. \tag{6}\]
The discrete approach can be used to determine higher moments of the random variable \(N_{1}(N)\). The second moment \(\left\langle N_{1}^{2}(N)\right\rangle\) is especially important as we can obtain the variance and thereby quantify degree fluctuations. From Eq. (4), we deduce the recurrence for the second moment
\[\left\langle N_{1}^{2}(N+1)\right\rangle=1+\left(1-\frac{2}{N}\right)\left\langle N _{1}^{2}(N)\right\rangle+\left(2-\frac{1}{N}\right)\left\langle N_{1}(N) \right\rangle,\]
whose solution is
\[\left\langle N_{1}^{2}(N)\right\rangle=\frac{N(3N+1)}{12}+\frac{N}{N-1}. \tag{7}\]
From the first two moments, the variance, for \(N\geq 3\), is
\[\left\langle N_{1}^{2}(N)\right\rangle_{c}\equiv\left\langle N_{1}^{2}(N) \right\rangle-\left\langle N_{1}(N)\right\rangle^{2}=\frac{N}{12}-\frac{1}{(N -1)^{2}}\, \tag{8}\]
so the deviation of \(N_{1}(N)\) from its average is of the order of \(\sqrt{N}\). Higher cumulants of the number of leaves also grow at most linearly with \(N\), so the typical fluctuations remain of order \(\sqrt{N}\). The cumulants \(\left\langle N_{1}^{p}(N)\right\rangle_{c}\) with arbitrary integer \(p\geq 1\) are given by the remarkably simple formula
\[\left\langle N_{1}^{p}(N)\right\rangle_{c}=p^{-1}B_{p}N+\frac{(-1)^{p-1}(p-1)!} {(N-1)^{p}} \tag{9}\]
Figure 2: A random recursive tree of 9 nodes, showing the ordering of the nodes and each of their attachment points.
applicable when \(N\geq p+1\). Here \(B_{p}\) are Bernoulli numbers defined [26] as the coefficients in the power series
\[\frac{z}{e^{z}-1}+z=\sum_{p\geq 0}B_{p}\,\frac{z^{p}}{p!}\]
Thus for large \(N\), the number of nodes of degree \(1\) is sharply distributed about its average value. For this reason, one may ignore fluctuations and focus on the average. The same holds for nodes of all higher degrees as long as the number of such nodes is large, \(N_{k}(N)\gg 1\). Thus we again focus on the average.
By reasoning similar to that used for \(N_{1}\), the number of nodes of degree \(k\geq 2\) evolves according to
\[N_{k}(N\!+\!1)=\begin{cases}N_{k}(N)-1&\text{probability}\quad\frac{N_{k}}{N} \\ N_{k}(N)+1&\text{probability}\quad\frac{N_{k-1}}{N}\\ N_{k}(N)&\text{probability}\quad 1-\frac{N_{k-1}+N_{k}}{N}\end{cases} \tag{10}\]
after each node addition. Following the same steps that led to Eq. (5), the evolution equation for \(\langle N_{k}\rangle\) is
\[\langle N_{k}(N\!+\!1)\rangle=\langle N_{k}(N)\rangle+\bigg{\langle}\frac{N_{k -1}(N)-N_{k}(N)}{N}\bigg{\rangle}. \tag{11}\]
While this equation can again be solved to give the exact degree distribution for finite networks, we now restrict ourselves to the leading behavior of the degree distribution for \(N\to\infty\). For simplicity, we drop the angle brackets and the argument \(N\), so that we write \(N_{k}\) for the average number of nodes of degree \(k\) in a network that contains \(N\) nodes. Next, we replace the discrete differences with derivatives in Eqs. (5) and (11), so that the asymptotic degree distribution evolves according to the master equation
\[\dot{N}_{k}\equiv\frac{dN_{k}}{dN}=\frac{N_{k-1}-N_{k}}{N}+\delta_{k,1}\,. \tag{12a}\]
The first equation is \(\dot{N}_{1}=-N_{1}/N+1\), with solution \(N_{1}=N/2\). Then \(\dot{N}_{2}=(N_{1}-N_{2})/N\), with solution \(N_{2}=N/4\). Continuing, one finds that all the \(N_{k}\) are proportional to \(N\). Thus we write \(n_{k}\equiv N_{k}/N\) and reduce Eq. (12a) to
\[n_{k}=n_{k-1}-n_{k}+\delta_{k,1}\,, \tag{12b}\]
leading to the exponential degree distribution \(n_{k}=2^{-k}\).
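The exponential form \(n_{k}=2^{-k}\) is easy to confirm numerically. The short simulation below (illustrative only; the network size and seed are arbitrary choices) grows a random recursive tree by uniform attachment and compares the empirical degree distribution with the prediction.

```python
import random
from collections import Counter

def rrt_degrees(N, seed=0):
    """Grow a random recursive tree of N nodes and return the list of node degrees."""
    rng = random.Random(seed)
    degree = [1, 1]                          # two seed nodes joined by a single link
    for _ in range(N - 2):
        target = rng.randrange(len(degree))  # uniform attachment
        degree[target] += 1
        degree.append(1)                     # the new node enters as a leaf
    return degree

N = 200_000
counts = Counter(rrt_degrees(N))
for k in range(1, 7):
    print(f"k={k}: simulated n_k={counts[k]/N:.4f},  predicted 2^-k={2**-k:.4f}")
```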
## III Preferential attachment
In preferential attachment, the rate \(A_{k}\) at which a node attaches to a pre-existing node of degree \(k\) is an increasing function of \(k\). A ubiquitous feature of preferential attachment networks is that their degree distributions have broad tails, a fact that sparked much interest in this class of networks over the past two decades. We now derive this scale-free degree distribution using the approach of Ref. [5].
### Master equation
The evolution of the degree distribution for a network whose growth is governed by an attachment rate \(A_{k}\) is (compare with Eq. (12a) for the RRT):
\[\dot{N}_{k}=\frac{A_{k-1}N_{k-1}-A_{k}N_{k}}{A}+\delta_{k1}. \tag{13}\]
The first term on the right accounts for the new node connecting to a pre-existing node that already has \(k-1\) links, thereby increasing \(N_{k}\) by one. Since there are \(N_{k-1}\) nodes of degree \(k-1\), the total rate at which such processes occur is proportional to \(A_{k-1}N_{k-1}\). The total rate \(A\equiv A(N)\equiv\sum_{j\geq 1}A_{j}N_{j}\) in the denominator means that \(A_{k-1}/A\) is the probability for a node of degree \(k-1\) to become a node of degree \(k\). A corresponding role is played by the second term on the right. The overall amplitude of \(A_{k}\) is immaterial, since only the ratio \(A_{k}/A\) appears in the master equation. The last term accounts for the introduction of a new node that has one outgoing link and no incoming links.
To determine the degree distribution, we need to specify the attachment rate \(A_{k}\). We focus on power-law preferential attachment, \(A_{k}=k^{\gamma}\), with \(\gamma\geq 0\). We will show that different behaviors arise for sublinear (\(\gamma<1\)), superlinear (\(\gamma>1\)), and linear (\(\gamma=1\)) attachment rates. The linear case is especially rich because the degree distribution is nonuniversal.
When confronted with determining a non-trivial distribution, it is often instructive to first deal with the simpler problem of determining low-order moments of the degree distribution \(M_{\alpha}(N)\equiv\sum_{j}j^{\alpha}N_{j}\). The zeroth and first moments of this distribution have particularly simple \(N\) dependences: \(\dot{M}_{0}=\sum_{j}\dot{N}_{j}=1\) and \(\dot{M}_{1}=\sum_{j}j\,\dot{N}_{j}=2\). The equation for \(M_{0}\) states that the total number of nodes (of any degree) increases by \(1\) each time a new node is introduced. Similarly, the equation for \(M_{1}\) states the total degree of the network, \(\sum jN_{j}\), increases by two when the single link associated with the new node is added to the network. Since both the zeroth and first moments of the degree distribution increase linearly with \(N\), the total rate \(A=\sum_{j}j^{\gamma}N_{j}\) also grows linearly with \(N\), because \(A\) is intermediate to the zeroth and first moments. Asymptotically, \(A\simeq\mu N\), with the as yet-undetermined amplitude \(\mu\) that must range between \(1\) and \(2\) as \(\gamma\) increases from \(0\) to \(1\).
Solving for the first few \(N_{k}\) from Eq. (13), it becomes clear that each \(N_{k}\) is also proportional to \(N\). This fact suggests substituting \(N_{k}(N)=n_{k}N\) and \(A\simeq\mu N\) into these master equations. With this step, the overall \(N\) dependence cancels, leaving behind the recursion relations \(n_{k}=(A_{k-1}n_{k-1}-A_{k}n_{k})/\mu\) for \(k>1\) and \(n_{1}=1-A_{1}n_{1}/\mu\). After straightforward algebra, the degree distribution is
\[n_{k}=\frac{\mu}{A_{k}}\prod_{1\leq j\leq k}\left(1+\frac{\mu}{A_{j}}\right)^{-1}. \tag{14a}\]
Using the definition \(\mu=\sum_{j\geq 1}A_{j}n_{j}\) in (14a) we obtain
\[\sum_{k\geq 1}\prod_{1\leq j\leq k}\left(1+\frac{\mu}{A_{j}}\right)^{-1}=1. \tag{14b}\]
To extract the physical meaning of the general solution (14a), with \(\mu\) implicitly determined by (14b), we examine the asymptotic behavior for the three generic cases of sublinear, superlinear, and linear preferential attachment.
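Before turning to these cases, it may help to see that Eqs. (14a)-(14b) are directly computable. The snippet below is an illustrative numerical aside (the truncation cutoff and bisection bounds are assumptions): it solves Eq. (14b) for the amplitude \(\mu\) when \(A_{k}=k^{\gamma}\), recovering \(\mu=1\) for the RRT limit \(\gamma=0\) and \(\mu=2\) for strictly linear attachment \(\gamma=1\).

```python
import numpy as np

def lhs_14b(mu, gamma, kmax=100_000):
    """Left-hand side of Eq. (14b) for A_k = k**gamma, truncated at kmax."""
    A = np.arange(1, kmax + 1) ** gamma
    log_prod = np.cumsum(np.log1p(mu / A))     # log of the running product over j <= k
    return np.exp(-log_prod).sum()

def solve_mu(gamma, lo=1.0, hi=2.0, tol=1e-10):
    """Bisection for mu; lhs_14b is a decreasing function of mu."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs_14b(mid, gamma) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

for gamma in (0.0, 0.5, 1.0):
    print(f"gamma = {gamma}: mu = {solve_mu(gamma):.4f}")
```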
#### iii.1.1 Sublinear preferential attachment
For \(A_{k}=k^{\gamma}\) with \(\gamma<1\), we rewrite the product in Eq. (14a) as the exponential of a sum of logarithms, convert the sum to an integral, and then expand the logarithm inside the integral in a Taylor series. These straightforward steps lead to
\[n_{k}\sim\begin{cases}k^{-\gamma}\exp\left[-\mu\left(\frac{k^{1-\gamma}-2^{1- \gamma}}{1-\gamma}\right)\right]&\frac{1}{2}<\gamma<1,\\ \\ k^{(\mu^{2}-1)/2}\exp\left[-2\mu\,\sqrt{k}\right]&\gamma=\frac{1}{2},\\ \\ k^{-\gamma}\exp\left[-\mu\,\frac{k^{1-\gamma}}{1-\gamma}+\frac{\mu^{2}}{2}\, \frac{k^{1-2\gamma}}{1-2\gamma}\right]&\frac{1}{3}<\gamma<\frac{1}{2}\,,\end{cases} \tag{15}\]
with similar, but more complicated expressions for \(n_{k}\) for still smaller values of \(\gamma\). Each time \(\gamma\) decreases through \(\frac{1}{m}\), where \(m\) is an integer, an additional term is generated in the exponential that is an increasing function of \(k\). Nevertheless, for any value of \(\gamma<1\), the leading behavior is always the universal stretched exponential decay, \(\exp(-\text{const.}\times k^{1-\gamma})\).
#### iii.1.2 Superlinear preferential attachment
For \(\gamma>1\), a gelation-like phenomenon occurs in which nearly all links attach to a single node. Let us first treat the ultra singular behavior that arises for \(\gamma>2\), for which there is a non-zero probability for a "bible" to occur--a node that links to every other node in an infinite network, while only a finite number of links exist between all other nodes. To determine the probability for a bible, suppose that a network of \(N+1\) nodes contains a bible (Fig. 3). The probability that the next node links to the bible is \(N^{\gamma}/(N+N^{\gamma})\), and the probability that this pattern of connections
continues indefinitely is \(\mathcal{P}=\prod_{N\geq 1}N^{\gamma}/(N+N^{\gamma})\). Using the same asymptotic analysis as above, where we write the product as the exponential of a sum of logarithms, expand the logarithm for large \(N\), and approximate the sum as an integral, the asymptotic behavior of this product is \(\mathcal{P}=0\) for \(\gamma\leq 2\), and \(\mathcal{P}>0\) for \(\gamma>2\). Strikingly, there is a non-zero probability for a bible to exist in an infinite network for \(\gamma>2\)!
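The onset of the bible regime can also be seen numerically. The following snippet (illustrative; the truncation cutoff is an arbitrary choice) evaluates the truncated product \(\mathcal{P}=\prod_{N\geq 1}N^{\gamma}/(N+N^{\gamma})\): the result is essentially zero for \(\gamma\leq 2\) and clearly non-zero for \(\gamma>2\).

```python
import numpy as np

def bible_probability(gamma, N_max=10**6):
    """Truncated product P = prod_{N>=1} N**gamma / (N + N**gamma)."""
    N = np.arange(1.0, N_max + 1.0)
    return np.exp(np.sum(np.log1p(-N / (N + N ** gamma))))

for gamma in (1.5, 2.0, 2.5, 3.0):
    print(f"gamma = {gamma}: truncated P ~ {bible_probability(gamma):.4g}")
```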
When \(1<\gamma<2\), the attachment pattern of low-degree nodes is not as simple as in Fig. 3, but there continues to be a single node whose degree is of the order of \(N\). There is also an infinite sequence of transition points when \(\gamma\) passes through \(\frac{m}{m-1}\), with \(m\) an integer greater than \(2\), in which the number of nodes of degree \(k\leq m\) grows as \(N^{k-(k-1)\gamma}\), while the number of nodes of degree \(k>m\) remains finite for \(N\to\infty\) (Fig. 4). To understand this behavior in a simple way, it is instructive to study the governing equations for each \(N_{k}\) one by one. For \(N_{1}\) we have
\[\dot{N}_{1}=1-\frac{N_{1}}{A}\,.\]
We now make the assumption that the total attachment rate is governed by the single highest-degree node, with degree of the order of \(N\). Thus \(A=\sum j^{\gamma}N_{j}\sim N^{\gamma}\). Since \(N_{1}\) can at most be of the order of \(N\), the second term in the above equation is negligible, so that \(\dot{N}_{1}\sim 1\) or \(N_{1}\sim N\). Similarly, the equation for \(N_{2}\) is
\[\dot{N}_{2}\simeq\frac{N_{1}-2^{\gamma}N_{2}}{N^{\gamma}}\,.\]
Again neglecting the second term gives \(\dot{N}_{2}\simeq N^{1-\gamma}\), from which \(N_{2}\sim N^{2-\gamma}\). We can then verify that the term that we dropped is indeed negligible. Continuing this self-consistent procedure for general degree \(k\), we find
\[N_{k}\sim N^{k-(k-1)\gamma}\,, \tag{16}\]
as long as the exponent of \(N_{k}\) is positive, while \(N_{k}\) will be finite for \(N\to\infty\) for values of \(k\) for which \(k-(k-1)\gamma\) is negative (Fig. 4).
Figure 4: Illustration of the sequence of phase transitions that arise in superlinear preferential attachment. Starting with an ultra-condensed network for \(\gamma>2\), the network contains progressively more low-degree nodes each time \(\gamma\) passes through \(m/(m-1)\). The network becomes sparse when \(\gamma\) reaches \(1\), where the number of nodes of any degree are all proportional to \(N\).
Figure 3: Creation of a “bible” in which each new node attaches only to the bible (red).
Thus we predict an infinite sequence of transitions at \(\gamma=\gamma_{m}=\frac{m}{m-1}\). For \(\gamma>\gamma_{m}\), the numbers of nodes of degree \(k>m\) are all of \(\mathcal{O}(1)\), while the number of nodes of degree \(k\leq m\) grows sublinearly with \(N\), as \(N^{k-(k-1)\gamma}\). This set of transitions becomes progressively denser as \(\gamma\to 1\) from above. At \(\gamma=1\), the network changes its character from condensed, where a hub node has degree of \(\mathcal{O}(N)\), to sparse, where the number of nodes of any degree is proportional to \(N\).
#### iii.1.3 Linear preferential attachment
Here, it is important to distinguish between strictly linear preferential attachment, \(A_{k}=k\), and asymptotically linear preferential attachment, \(A_{k}\simeq k\). In the former case, the total attachment rate is \(A=\sum_{k}A_{k}N_{k}=\sum_{k}kN_{k}=2N\). Substituting this value of \(\mu=2\) into Eq. (14a) and performing some simple algebra immediately leads to the discrete power-law form of the degree distribution
\[n_{k}=\frac{4}{k(k+1)(k+2)}=\frac{4\,\Gamma(k)}{\Gamma(k+3)}\, \tag{17}\]
where \(\Gamma\) is the Euler gamma function. From this power-law degree distribution, the mean degree \(\langle k\rangle=\sum_{k\geq 1}kn_{k}=2\), as it must, but the mean-square degree \(\langle k^{2}\rangle=\infty\). Thus fluctuations in the mean degree, namely, the spread in the mean degree for different realizations of large networks of \(N\) nodes, diverge for \(N\to\infty\).
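The stated properties of Eq. (17) are straightforward to verify numerically; the short check below is illustrative only, and the truncation cutoff is arbitrary. The normalization sums (telescopically) to 1, the mean degree approaches 2, while the partial sums of \(\langle k^{2}\rangle\) keep growing logarithmically with the cutoff.

```python
kmax = 10**6
ks = range(1, kmax + 1)
nk = [4.0 / (k * (k + 1) * (k + 2)) for k in ks]                 # Eq. (17)
print("normalization :", sum(nk))                                 # -> 1
print("mean degree   :", sum(k * n for k, n in zip(ks, nk)))      # -> 2
print("<k^2> partial :", sum(k * k * n for k, n in zip(ks, nk)))  # ~ 4 ln(kmax), diverges
```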
The surprising feature of asymptotically linear preferential attachment growth is that the degree distribution exponent is non-universal. This non-universality is at odds with the common wisdom of statistical physics in which the absence of a characteristic scale leads to universal scaling properties. One natural form for an asymptotically linear attachment rate is \(A_{k}=k+\lambda\), with \(\lambda\) a constant. This modification implies that the amplitude \(\mu\) in \(A=\mu N\) is no longer equal to \(2\), but can assume a wide range of values (see below). Now Eq. (14a) becomes
\[n_{k}=\frac{\mu}{A_{k}}\prod_{1\leq j\leq k}\left(1+\frac{\mu}{ A_{j}}\right)^{-1} \sim\frac{\mu}{k}\exp\left[-\int_{1}^{k}\ln\left(1+\frac{\mu}{j} \right)\,dj\right]\] \[\sim\frac{\mu}{k}\exp\left[-\mu\int_{1}^{k}\frac{dj}{j}\right]\] \[\sim k^{-(1+\mu)}. \tag{18}\]
Thus the degree exponent \(\nu=1+\mu\) can take any value larger than \(2\) merely by tuning the amplitude \(\mu\).
As an explicit and surprising example, consider the attachment rate \(A_{k}=k\) for \(k\geq 2\), while \(A_{1}\equiv\alpha\) is arbitrary. It is now convenient to separate \(A_{1}\) and \(A_{k}\) for \(k\geq 2\) in Eq. (14a) to recast this equation as
\[\mu=A_{1}\sum_{k=2}^{\infty}\prod_{j=2}^{k}\left(1+\frac{\mu}{A_{j}}\right)^{ -1}=\ \alpha\ \sum_{k=2}^{\infty}\Gamma(2+\mu)\,\frac{\Gamma(1+k)}{\Gamma(1+\mu+k)}, \tag{19}\]
where we express the product as the ratio of gamma functions.
The sum can be evaluated by employing the identity [26]
\[\sum_{k=2}^{\infty}\frac{\Gamma(a+k)}{\Gamma(b+k)}=\frac{\Gamma(a+2)}{(b-a-1) \Gamma(b+1)}\,\]
so that Eq. (19) becomes \(\mu(\mu-1)=2\alpha\), with solution \(\mu=(1+\sqrt{1+8\alpha})/2\). Thus the degree exponent \(\nu=1+\mu\) is
\[\nu=\frac{3+\sqrt{1+8\alpha}}{2}. \tag{20}\]
As examples, the degree distribution exponent is \(\nu=4\) for \(\alpha=3\) and \(\nu=5/2\) for \(\alpha=3/8\). For \(0<\alpha<1\), the exponent lies in the range \(2<\nu<3\), while for \(\alpha>1\), \(\nu>3\). While the degree distribution exponent must satisfy the lower bound \(\nu>2\), there is no upper bound for \(\nu\); in particular, \(\nu\to\sqrt{2\alpha}\) as \(\alpha\to\infty\). We emphasize that changing just a single attachment rate leads to a global effect on the degree distribution. This global effect arises because the amplitude \(\mu\) appears inside the infinite product in Eq. (14a). This multiplicative nature strongly affects the degree distribution itself and thereby the degree distribution exponent.
## IV Network growth by redirection
We now discuss a deceptively simple modification of the RRT with profound consequences. This is the notion of redirection, where a new node may attach to a pre-existing target node, or to a neighbor of this target [4; 5].
### Constant redirection probability
First we treat the redirection algorithm [5] that was outlined in the introduction. There is one subtlety in this algorithm because redirection requires that every node has an ancestor. To ensure this condition always holds, the initial state, for example, could consist of at least two nodes and one link, with each node defined as the ancestor of the other. Other simple starting graphs are equally suitable, such as a triangle with cyclic links.
According to the redirection algorithm, the degree distribution evolves according to
\[\dot{N}_{k}=\frac{1-r}{N}\Big{[}N_{k-1}-N_{k}\Big{]}+\frac{r}{N}\Big{[}(k-2)N_{k-1}-(k-1)N_{k}\Big{]}+\delta_{k,1}\,. \tag{21a}\]
The terms within the first square brackets correspond to attachment to the initially selected node, whose evolution equation is just that of the RRT (12a) for redirection probability \(r=0\). The terms within the second square brackets account for the change in \(N_{k}\) due to redirection. To understand their origin, consider first the gain term. Since the initial node is chosen uniformly, if redirection does occur, then the probability that a node of degree \(k-1\) receives the newly redirected link is proportional to the number of its upstream neighbors (green nodes in Fig. 1), which equals \(k-2\). A parallel argument applies for the redirection-driven loss term. The crucial point is that the rate at which attachment occurs to a given node is proportional to the number of its upstream neighbors, which, in turn, is proportional to its degree. Thus linear preferential attachment is implicit in this purely local redirection rule.
The redirection mechanism has an unexpected connection to the friendship paradox [27; 28], which states that the neighbors of a randomly selected node are more popular (have higher degrees), on average, than the initially selected node. As illustrated in Fig. 1, there are three distinct ways to attach node \(\mathbf{y}\) by redirection from upstream nodes. The higher the degree of node \(\mathbf{y}\), the more likely attachment to it by redirection occurs. Thus we expect that node \(\mathbf{y}\) will have more neighbors, on average, than the initial node \(\mathbf{x}\).
By a straightforward rearrangement of terms, (21a) may be re-expressed as
\[\dot{N}_{k} =\frac{r}{N}\left\{\left[k-1+\left(\frac{1}{r}-2\right)\right]N_ {k-1}-\left[k+\left(\frac{1}{r}-2\right)\right]N_{k}\right\}+\delta_{k,1}\] \[\equiv\frac{1}{A}\Big{\{}\left(k-1+\lambda\right)N_{k-1}-\left(k +\lambda\right)N_{k}\Big{\}}+\delta_{k,1}\,, \tag{21b}\]
with \(\lambda=\frac{1}{r}-2\) and total attachment rate \(A=N/r=(2+\lambda)N\). Thus uniform attachment, in conjunction with redirection, generates shifted linear preferential attachment, with \(A_{k}=k+\lambda\). The particular case of strictly linear preferential attachment arises for the choice \(r=\frac{1}{2}\). When we now substitute attachment rate \(A_{k}=k+\lambda\) and \(\mu=2+\lambda\) into the general formula (14a) for the degree distribution, we obtain
\[n_{k}=\frac{\mu}{A_{k}}\prod_{1\leq j\leq k}\left(1+\frac{\mu}{A_{j}}\right)^ {-1}=(2+\lambda)\frac{\Gamma(3+2\lambda)}{\Gamma(1+\lambda)}\frac{\Gamma(k+ \lambda)}{\Gamma(k+3+2\lambda)}\sim k^{-(3+\lambda)}\,. \tag{21c}\]
Since the redirection probability lies between \(0\) and \(1\), the additive shift \(\lambda\) lies between \(-1\) and \(\infty\). Thus the degree distribution exponent can take on any value that is greater than \(2\). In the extreme case of \(r=1\) a star-like network arises whose detailed structure depends on the initial condition.
It is also worth mentioning the many intriguing results that emerge from simple extensions of this redirection mechanism. Starting with the RRT, each node has a genealogical tree of ancestors. It is natural to grow a network in which redirection can occur equiprobably to any node in the genealogical tree of an initial target node [29], or to all nodes in this genealogical tree [30]. The latter leads to a network that is no longer sparse, as the number of links \(L\) grows as \(N\ln N\). Amusingly, this redirection mechanism to all ancestors is isomorphic to a basic hypergraph model, known as the random recursion hypergraph [31].
Finally, we wish to emphasize the extreme simplicity of this redirection algorithm. Each node addition requires only two elemental operations: (i) select a target node, and (ii) choose to attach either to this target or to its ancestor. This algorithm allows one to generate a network of \(N\) nodes in roughly \(2N\) algorithmic steps. It is therefore possible to quickly generate very large networks. Crucially, a purely local rule--tracking the ancestor of each node--is equivalent
to the global rule that underlies preferential attachment. Ostensibly, one needs to know the degrees of all the nodes in the network to implement preferential attachment. As the redirection algorithm shows, this global information is not needed.
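The two elemental operations translate almost verbatim into code. The sketch below is an illustrative implementation (the network size, redirection probability, and seed are arbitrary choices, and the two-node initial condition is the one suggested above); for \(r=1/2\) the measured degree distribution should approach the \(k^{-3}\) tail of strictly linear preferential attachment.

```python
import random
from collections import Counter

def grow_by_redirection(N, r=0.5, seed=0):
    """Directed redirection: a new node picks a uniformly random target and attaches
    to it with probability 1-r, or to the target's unique ancestor with probability r."""
    rng = random.Random(seed)
    ancestor = [1, 0]                  # two seed nodes, each the ancestor of the other
    degree = [1, 1]
    for new in range(2, N):
        target = rng.randrange(new)    # step (i): select a target uniformly at random
        if rng.random() < r:           # step (ii): possibly redirect to its ancestor
            target = ancestor[target]
        degree[target] += 1
        degree.append(1)
        ancestor.append(target)
    return degree

N = 500_000
counts = Counter(grow_by_redirection(N, r=0.5))
for k in (1, 2, 4, 8, 16, 32, 64):
    print(f"k={k:3d}:  n_k = {counts[k]/N:.2e}")
```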
### Degree-based redirection
To illustrate the utility and generality of redirection, we exploit the local information that is readily available--the degree \(a\) of the initial target node and the degree \(b\) of the ancestor--to efficiently generate sublinear preferential attachment networks. In degree-based redirection [32], we merely define the redirection probability \(r\) to be a suitably chosen function of these two degrees \(a\) and \(b\); that is \(r=r(a,b)\) (see Fig. 5).
To show how sublinear preferential attachment can be achieved from this still-local information, we define \(f_{k}\) as the total probability that an incoming link is redirected from a randomly selected target node of degree \(k\) to the parent of this target. Similarly, we define \(t_{k}\) as the total probability that an incoming link is redirected to a parent node of degree \(k\) after the incoming node initially selected one of the child nodes of this parent. Formally, these probabilities are defined in terms of the redirection probabilities by
\[f_{k}=\sum_{b\geq 1}\frac{r(k,b)N(k,b)}{N_{k}}\,,\qquad\qquad t_{k}=\sum_{a \geq 1}\frac{r(a,k)N(a,k)}{(k-1)N_{k}}\,, \tag{22}\]
where \(N_{k}=\sum_{b\geq 1}N(k,b)\) and \(N(a,b)\) is the correlation function that specifies the number of nodes of degree \(a\) that have a parent of degree \(b\). Thus \(f_{k}\) is the mean redirection probability averaged over all \(N_{k}\) possible target nodes of degree \(k\). Likewise, since each node of degree \(k\) has \(k-1\) children, there are \((k-1)N_{k}\) possible target nodes whose redirection probabilities are averaged to give \(t_{k}\).
In terms of these probabilities \(f_{k}\) and \(t_{k}\), the master equation that governs the evolution of \(N_{k}\) is
\[\dot{N}_{k}=\frac{(1\!-\!f_{k\!-\!1})N_{k-1}-(1\!-\!f_{k})N_{k}}{N}+\frac{(k\!- \!2)t_{k-1}N_{k-1}-(k-1)t_{k}N_{k}}{N}+\delta_{k,1}. \tag{23}\]
The first ratio corresponds to instances of network growth for which the incoming node actually attaches to the initial target. For example, the term \((1-f_{k})N_{k}/N\) gives the probability that one of the \(N_{k}\) target nodes of degree \(k\) is randomly selected and that the link from the new node is not redirected away from this target. Similarly, the second ratio corresponds to instances in which the link to the target node is redirected to the ancestor. For example, the term \((k-1)t_{k}N_{k}/N\) gives the probability that one of the \((k-1)N_{k}\) children of nodes of degree \(k\) is chosen as the target and that the new node is redirected. Lastly, the term \(\delta_{k,1}\) accounts for the newly added node of degree \(1\).
By rearranging terms, we express (23) in the generic form of Eq. (13), with the attachment rate
\[\frac{A_{k}}{A}=\frac{(k-1)t_{k}+1-f_{k}}{N}\,. \tag{24a}\]
Since the quantities \(f_{k}\) and \(t_{k}\) are normalized probabilities, the asymptotic behavior of the above expression is \(A_{k}\sim k\,t_{k}\). Thus a redirection probability \(r(a,b)\) for which \(t_{k}\) is a decreasing function of \(k\) will asymptotically correspond to sublinear preferential attachment. A natural choice for such a redirection probability is \(r(a,b)=b^{\gamma-1}\), with \(0<\gamma<1\), so that the redirection probability decreases as the degree of the parent node increases. Because
Figure 5: Illustration of degree-based redirection. A new node (blue) attaches to a random target of degree \(a\) with probability \(1-r(a,b)\) and attaches to the ancestor node (degree \(b\)) of the target with probability \(r(a,b)\).
\(r\) depends only on the degree of the parent node (Fig. 5), Eq. (22) reduces to \(t_{k}=k^{\gamma-1}\). Using this form of \(t_{k}\) in Eq. (24a) yields
\[\frac{A_{k}}{A}=\frac{k^{\gamma}-k^{\gamma-1}+1-f_{k}}{N}\,, \tag{24b}\]
whose leading behavior is indeed sublinear preferential attachment, \(A_{k}\sim k^{\gamma}\). This equivalence to sublinear preferential attachment allows one to generate a network of \(N\) nodes with a stretched exponential degree distribution in an algorithmic time that is also of the order of \(N\).
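A minimal implementation of degree-based redirection only needs the ancestor's degree, as sketched below (the network size, \(\gamma\), and seed are illustrative choices). In contrast with constant-\(r\) redirection, the largest degree stays modest, as expected for a stretched exponential tail.

```python
import random

def degree_based_redirection(N, gamma=0.5, seed=0):
    """Redirection with probability r(a,b) = b**(gamma-1), where b is the degree
    of the ancestor of the uniformly chosen target node."""
    rng = random.Random(seed)
    ancestor = [1, 0]
    degree = [1, 1]
    for new in range(2, N):
        target = rng.randrange(new)
        parent = ancestor[target]
        if rng.random() < degree[parent] ** (gamma - 1.0):
            target = parent                        # redirect to the ancestor
        degree[target] += 1
        degree.append(1)
        ancestor.append(target)
    return degree

deg = degree_based_redirection(500_000, gamma=0.5)
print("largest degree with degree-based redirection:", max(deg))
```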
What happens in the opposite case of enhanced redirection, in which the redirection probability is an increasing function of the degree of the parent node [32, 33]? This attachment rule leads to highly modular networks that contain multiple macrohubs, with most nodes having degree 1 (leaves). Furthermore, the degree distribution exhibits the anomalous scaling given in Eq. (2), with \(\nu\) strictly less than 2. Similar phenomenology also occurs in the simpler example of the redirection rule for undirected networks (see below).
### Complete redirection in undirected networks
Link directionality is important in social and technological networks, but there are many situations where networks are undirected [34, 35, 36]. The influence of redirection on undirected networks is profound and there is little analytical understanding of this enigmatic case.
The growth rule for isotropic redirection is nearly the same as that given in Sec. IV for directed networks, but with a small but profound difference [37, 38] that is embodied by the following growth rule:
1. Pick a pre-existing node x from the network uniformly at random.
2. With probability \(1-r\), the new node n attaches to x.
3. Otherwise, with probability \(r\), the new node n attaches to any neighbor of x, chosen uniformly at random.
Repeat these steps until a network of the desired size is generated. The growth rules for directed and undirected redirection are illustrated in Fig. 6.
We focus on the limit of \(r=1\), which we term complete redirection. This limiting case leads to the most striking phenomenology. Simulation data also suggest that it is only the special case of \(r=1\) that gives rise to emergent modularity. The network realizations shown in Fig. 7 for \(r=1\) are highly modular and each consists of a number of well-resolved modules. Each module contains a central macrohub whose degree is a finite fraction of the total number of nodes \(N\); thus each macrohub is connected to a large number of leaves (nodes of degree 1). Typical networks consist almost entirely of leaves as \(N\to\infty\); that is, the number of leaves satisfies \(N_{1}/N\to 1\) as \(N\to\infty\). Nodes with degrees \(k\geq 2\) constitute what we term the network "core". This core comprises an infinitesimal fraction of the network, viz., the number of core nodes \(\mathcal{C}=\sum_{k\geq 2}N_{k}\) grows as \(N^{\nu-1}\), with \(\nu\approx 1.567\), as determined by numerical simulations [37].
The degree distribution for complete redirection has an algebraic tail \(N_{k}\propto k^{-\nu}\) with \(\nu\approx 1.567\). As discussed in the introduction, a degree distribution with \(\nu<2\) cannot occur in sparse networks, which exhibit standard extensive \(N_{k}\propto N\) scaling. However, a degree distribution with such a fat tail can arise if the amplitude grows sub-extensively with network size, that is, \(N_{k}\sim N^{\nu-1}/k^{\nu}\). Thus the number of nodes of any fixed degree \(k\) with \(k\geq 2\) grows sublinearly in \(N\), with \(N_{k}\sim N^{\nu-1}\).
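For concreteness, complete redirection in an undirected network can be simulated with a few lines (an illustrative sketch; the network size and seed are arbitrary), which makes the dominance of leaves, the small core, and the macrohubs easy to observe.

```python
import random

def complete_redirection(N, seed=0):
    """Undirected growth with r = 1: the new node always attaches to a uniformly
    random neighbor of a uniformly random target node."""
    rng = random.Random(seed)
    neighbors = [[1], [0]]                       # adjacency lists of the two seed nodes
    for new in range(2, N):
        target = rng.randrange(new)
        attach = rng.choice(neighbors[target])   # redirect to a random neighbor
        neighbors[attach].append(new)
        neighbors.append([attach])
    return [len(nbrs) for nbrs in neighbors]

deg = complete_redirection(100_000)
leaves = sum(1 for d in deg if d == 1)
print(f"leaf fraction = {leaves/len(deg):.3f}, core size = {len(deg)-leaves}, "
      f"largest degree = {max(deg)}")
```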
Several features of networks grown by complete redirection can be understood analytically [37], while others, such as the exponent \(\nu\), currently appear to be beyond the reach of available techniques. The difficulty in making theoretical
Figure 6: Comparison of redirection for (a) directed and (b) undirected networks. (a) The new node (blue) attaches by redirection to the unique ancestor (black) of the target (red). (b) With the same target in an undirected network, the new node attaches to any one of the red neighboring nodes.
progress is that the change in the degree of a specific node depends on the degrees of all its neighbors. This inherent non-locality in the growth rule means that it is not possible to write a master equation for the degree distribution alone. Instead, the equation for the degree distribution must involve degree correlation functions between neighboring nodes, and this quantity, in turn, involves higher-order correlation functions.
## V Concluding remarks
Preferential attachment networks have been the focus of intense investigation for the past two decades. Part of the reason for this explosion of interest stemmed from the confluence of theoretical insights that were inspired by the existence of new datasets about networked systems. The network paradigm is alluring and a large number of seemingly unrelated many-body systems are now studied within the context of complex networks.
While the field has advanced significantly, some basic facts about the simplest network models seem under-appreciated. One is that the degree distribution of linear preferential attachment networks sensitively depends on microscopic details of the network growth mechanism. While the earliest theoretical studies of linear preferential attachment networks found a degree distribution exponent of \(\nu=3\), any exponent value with \(\nu>2\) can be achieved by linear preferential attachment. This non-universality is surprising because the standard lore from statistical physics suggests that exponent values should be universal and independent of the details of the network growth process.
Another important facet of complex networks that has yet to be fully exploited is that they can be generated by simple redirection algorithms. When a new node joins the network, it either attaches with a given probability to a pre-existing node that is chosen uniformly at random, or it attaches to the ancestor of this target node with the complementary probability. This algorithm is simple to implement and efficient because it generates networks of \(N\) nodes in an algorithmic time that is also of the order of \(N\). We showed how to generate networks that are equivalent to sublinear preferential attachment and to shifted linear preferential attachment by suitable redirection rules. Our redirection perspective provides crucial insights that relate the random recursive tree to preferential attachment networks.
We also briefly discussed undirected networks that are grown by complete redirection. The resulting networks have a highly modular structure (Fig. 7): the number of core nodes (nodes of degree \(\geq 2\)) scales sublinearly with the total number of nodes, as \(N^{\nu-1}\), with \(\nu\approx 1.567\). A natural question here is: why does this redirection mechanism lead to such singular networks? We really don't know. The master equation approach, which works so well for directed networks, is inadequate to describe the structure of this class of networks. This inadequacy stems from the effective non-locality in the growth mechanism, and different approaches seem to be needed to truly understand the behavior of this network. An even deeper reason is the lack of self-averaging: the random quantities \(N_{k}\) for any \(k>1\) exhibit huge fluctuations
Figure 7: Examples of tree networks of \(10^{4}\) nodes that are grown by complete redirection. Green: nodes of degree \(k=1\) (leaves); yellow, \(2\!\leq\!k\!\leq\!10\); cyan, \(11\!\leq\!k\!\leq\!99\); blue \(100\!\leq\!k\!\leq\!500\); violet \(\to\) red, \(k\!>\!501\). The node radius also indicates its degree.
from realization to realization. Therefore averages \(\langle N_{k}\rangle\) incompletely characterize each \(N_{k}\), and, by construction, the master equation approach only gives average quantities.
The oldest and perhaps still most famous complex network is the evolving random graph or Erdos-Renyi (ER) random graph [39]. The same model appeared earlier in the work of Flory and Stockmayer [40; 41; 42]; this model turns out to be equivalent to aggregation with the product kernel [25]. The percolation transition manifested by the emergence of the giant component [43] in evolving random graphs is equivalent to gelation in aggregation [25].
The ER random graph initially consists of \(N\) disjoint nodes, and it evolves by drawing randomly chosen pairs of nodes and connecting them. Thus only the number of links increases. Combining the ER graph with preferential attachment, one may postulate that nodes of degree \(i\) and \(j\) connect with probability proportional to \((i+\lambda)(j+\lambda)\). This evolving graph undergoes a percolation transition and later a condensation transition when the entire system condenses into a single component [44]. Closer to our modeling is a network that grows via two distinct mechanisms: (i) a new node is added with probability \(p\), and (ii) a new link between existing nodes is created with probability \(1-p\). Both of these steps can incorporate redirection in a natural way. Earlier work on similar models [45] was focused on network characteristics, such as the degree distribution. The distribution of components remains mostly unexplored and it would be interesting to analyze percolation and condensation transitions for this type of network. There are indications [45] that the percolation transition could be different from the standard Curie-type transition appearing in the ER graphs [43], viz., a Berezinskii-Kosterlitz-Thouless infinite-order transition [46; 47; 48; 49] that often appears in growing networks.
This research was partially supported by various NSF awards over the past two decades, and most recently by NSF grant DMR-1910736.
|
2307.06633 | Joint Estimation and Control for Multi-Target Passive Monitoring with an
Autonomous UAV Agent | This work considers the problem of passively monitoring multiple moving
targets with a single unmanned aerial vehicle (UAV) agent equipped with a
direction-finding radar. This is in general a challenging problem due to the
unobservability of the target states, and the highly non-linear measurement
process. In addition to these challenges, in this work we also consider: a)
environments with multiple obstacles where the targets need to be tracked as
they manoeuvre through the obstacles, and b) multiple false-alarm measurements
caused by the cluttered environment. To address these challenges we first
design a model predictive guidance controller which is used to plan
hypothetical target trajectories over a rolling finite planning horizon. We
then formulate a joint estimation and control problem where the trajectory of
the UAV agent is optimized to achieve optimal multi-target monitoring. | Savvas Papaioannou, Christos Laoudias, Panayiotis Kolios, Theocharis Theocharides, Christos G. Panayiotou | 2023-07-13T09:01:12Z | http://arxiv.org/abs/2307.06633v1 | # Joint Estimation and Control for Multi-Target Passive Monitoring
###### Abstract
This work considers the problem of passively monitoring multiple moving targets with a single unmanned aerial vehicle (UAV) agent equipped with a direction-finding radar. This is in general a challenging problem due to the unobservability of the target states, and the highly non-linear measurement process. In addition to these challenges, in this work we also consider: a) environments with multiple obstacles where the targets need to be tracked as they manoeuvre through the obstacles, and b) multiple false-alarm measurements caused by the cluttered environment. To address these challenges we first design a model predictive guidance controller which is used to plan hypothetical target trajectories over a rolling finite planning horizon. We then formulate a joint estimation and control problem where the trajectory of the UAV agent is optimized to achieve optimal multi-target monitoring.
## I Introduction
Increased mobility, flexibility, and rapid deployment are highly desirable properties in many application domains. Nowadays, unmanned aerial vehicles (UAVs) have demonstrated their potential in a wide variety of applications including surveillance and security tasks [1, 2, 3, 4], search-and-rescue missions [5, 6, 7], and situational awareness for first responders [8]. To meet the requirements in the above-mentioned applications, the UAV agents often carry on-board various sensors such as direction-finders that passively scan the spectrum to detect and resolve the direction of a target transmission, and radars that provide the direction and/or distance by processing the reflections on objects by either purposefully transmitted signals (i.e., _active_ radar) or by ambient signals of opportunity (i.e., _passive_ radar).
Passive systems, e.g., based on radar technology [9], are less power demanding as no signal transmissions are required, thus extending the UAV agent's flight time. While this makes them preferable in many practical scenarios, such systems typically provide bearing-only measurements, i.e., the _angle_ between the target-agent line and a reference direction (e.g., magnetic north). In this case several challenges appear due to the highly nonlinear measurement process, and the unobservability of the target states, especially when a single UAV agent is considered.
Over the past years a plethora of approaches have been proposed in the literature for the problem of passive target monitoring/tracking. In this work we will focus mainly on the task of passive target monitoring with a single UAV agent which utilizes angle measurements (i.e., bearings), for estimating the target's state. A recent survey paper on this topic can be found in [10]. Regarding the bearings-only passive target monitoring, the authors in [11] provide a thorough analysis of 3 different state-estimation approaches for tracking a single target with a single sensor. The authors in [12] design a particle-filter based estimator that uses multiple radar measurements with glint noise in order to passively monitor a single moving target, and the work in [13] proposes a robust fuzzy extended-Kalman filter for monitoring a moving target.
With respect to the agent/observer control aspect which appears in the passive target monitoring applications, the work in [14] proposes a greedy algorithm for optimally choosing the measurement locations in order to localize a stationary target in the least amount of time. Similarly, in [15] the authors first use the geometric dilution of precision to characterise the uncertainty of passive target localization using angle measurements, and then they propose a measurement gathering strategy that jointly minimizes the target localization error of a stationary target, and the time spent in gathering the measurements. The problem of optimally controlling an autonomous agent/observer for accurate passive monitoring of a moving target is further investigated in [16, 17]. Specifically, in [16] various particle-filter estimators are proposed based on the multiple model jump Markov system framework to tackle the various manoeuvres of the target, whereas in [17] the observer control is posed as a stochastic optimal control problem which aims at maximizing the tracking accuracy. Finally, the authors in [18] formulate the problem of observer control for bearings-only target localization as an optimal control problem which maximizes the determinant of the Fisher information matrix.
Complementary to the related works discussed above, in this paper we propose a multi-target passive-monitoring approach in which an autonomous UAV agent optimally decides its control inputs such that the combined uncertainty over the states of all targets is minimized. Contrary to existing solutions, this work investigates the problem in complex environments with obstacles, that need to be avoided by the targets, and which also cause multiple false-alarm measurements that need to be rejected by the estimator.
The rest of the paper is structured as follows. The system model is discussed in Section II and the proposed controller for planning the target trajectories is described in Section III. Section IV presents the proposed joint estimation and control approach, and Section V discusses the performance evaluation of the proposed approach. Finally, Section VI provides concluding remarks.
## II System Modelling
### _Target Dynamics_
In this work we assume that a known number of \(M\) ground targets \(\mathbf{x}^{j},\ j\in[1,..,M]\) operate inside a bounded surveillance environment \(\mathcal{E}\subset\mathbb{R}^{3}\) according to the following stochastic discrete-time dynamical model:
\[\mathbf{x}^{j}_{t}=A\mathbf{x}^{j}_{t-1}+B\mathbf{u}^{j}_{t-1}+\mathbf{\nu}_{t-1},\ j\in[1,..,M] \tag{1}\]
where \(\mathbf{x}^{j}_{t}=[x^{j}_{t}(x),x^{j}_{t}(y),x^{j}_{t}(z),\dot{x}^{j}_{t}(x),\dot {x}^{j}_{t}(y),\dot{x}^{j}_{t}(z)]^{\top}\in\mathbb{R}^{6}\) denotes the state of the \(j_{\text{th}}\) target at time \(t\), which is composed of the target's position \((x^{j}_{t}(x),x^{j}_{t}(y),x^{j}_{t}(z))\), and velocity \((\dot{x}^{j}_{t}(x),\dot{x}^{j}_{t}(y),\dot{x}^{j}_{t}(z))\) components in 3D Cartesian coordinates. The control input \(\mathbf{u}^{j}_{t}\in\mathbb{R}^{3}\) denotes the applied control force which allows the target to change its direction and speed, and the term \(\mathbf{\nu}_{t}\) is the process noise which models the uncertainty on the target's state, and which is distributed according to a zero mean multi-variate Gaussian distribution with covariance matrix \(Q\), i.e., \(\mathbf{\nu}_{t}\sim\mathcal{N}(0,Q)\). Without loss of generality we assume that the process noise profile is the same for all targets. The matrices \(A\) and \(B\) are defined as:
\[A=\begin{bmatrix}I_{3\times 3}&\Delta t\cdot I_{3\times 3}\\ 0_{3\times 3}&(1-\varepsilon)\cdot I_{3\times 3}\end{bmatrix},\ B=\begin{bmatrix}0_{3 \times 3}\\ \frac{\Delta t}{m}\cdot I_{3\times 3}\end{bmatrix}, \tag{2}\]
where \(\Delta t\) is the sampling interval, \(\varepsilon\in[0,1]\) models the effect of friction on the target's velocity, and \(m\) is the target mass which for brevity we assume to be the same for all targets. Moreover, \(I_{3\times 3}\), and \(0_{3\times 3}\) denote the identity and zero matrices of size 3-by-3 respectively. Finally, it is assumed that during a reconnaissance phase, the approximate target initial location, and final destination have been acquired and made available to the UAV agent. Therefore, we assume that: a) target's \(j\) initial state \(\mathbf{x}^{j}_{0}\) is distributed according to \(\mathbf{x}^{j}_{0}\sim\mathcal{N}(\mathbf{\mu}^{j}_{0},\Sigma^{j}_{0})\), and b) the target \(j\) is moving towards a goal region on the ground denoted hereafter as \(\mathcal{G}^{j}\subset\mathbb{R}^{3}\).
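For illustration, the discrete-time model of Eqs. (1)-(2) can be set up as in the sketch below; the numerical values of \(\Delta t\), \(\varepsilon\), \(m\), the noise covariance \(Q\), and the initial state are placeholders and are not taken from the paper.

```python
import numpy as np

def target_matrices(dt=1.0, eps=0.1, mass=1.0):
    """Build the matrices A and B of Eq. (2)."""
    A = np.block([[np.eye(3), dt * np.eye(3)],
                  [np.zeros((3, 3)), (1.0 - eps) * np.eye(3)]])
    B = np.vstack([np.zeros((3, 3)), (dt / mass) * np.eye(3)])
    return A, B

def step_target(x, u, A, B, Q, rng):
    """One step of the stochastic target dynamics in Eq. (1)."""
    return A @ x + B @ u + rng.multivariate_normal(np.zeros(6), Q)

rng = np.random.default_rng(0)
A, B = target_matrices()
Q = 0.01 * np.eye(6)                          # assumed process-noise covariance
x = np.array([0.0, 0.0, 0.0, 1.0, 0.5, 0.0])  # position and velocity components
u = np.array([0.1, 0.0, 0.0])                 # constant control force along x
for _ in range(5):
    x = step_target(x, u, A, B, Q, rng)
print("target state after 5 steps:", np.round(x, 2))
```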
### _Agent Dynamics_
An autonomous UAV agent/observer, equipped with a passive direction-finding radar which is calibrated for a certain altitude \(h\), is deployed inside the surveillance environment \(\mathcal{E}\) with the purpose of monitoring the trajectories of the \(M\) targets on the ground. The state of the UAV agent at time-step \(t\) i.e., \(\mathbf{s}_{t}=[s_{t}(x),s_{t}(y),s_{t}(z)]^{\top}\in\mathcal{E}\) which is composed of the agent's position in cartesian coordinates, evolves in time according to:
\[\begin{bmatrix}s_{t}(x)\\ s_{t}(y)\\ s_{t}(z)\end{bmatrix}=\begin{bmatrix}s_{t-1}(x)\\ s_{t-1}(y)\\ h\end{bmatrix}+\begin{bmatrix}\lambda\Delta_{r}\text{cos}(\kappa\Delta_{\theta})\\ \lambda\Delta_{r}\text{sin}(\kappa\Delta_{\theta})\\ 0\end{bmatrix},\quad\begin{array}{l}\lambda\in[0,..,N_{r}]\\ \kappa\in[1,..,N_{\theta}]\end{array}, \tag{3}\]
where \(\Delta_{r}\) is the radial step size, \(\Delta_{\theta}=2\pi/N_{\theta}\), and the parameters \((N_{r},N_{\theta})\) specify the set \(\mathcal{S}_{t}\) containing all possible states \(\mathbf{s}_{t}\in\mathcal{S}_{t}\) which the agent can take at time-step \(t\). Therefore, the set \(\mathcal{S}_{t}\) is given by: \(\mathcal{S}_{t}=\{(s_{t-1}(x)+\lambda\Delta_{r}\text{cos}(\kappa\Delta_{\theta }),s_{t-1}(y)+\lambda\Delta_{r}\text{sin}(\kappa\Delta_{\theta}),h)\}\), \(\forall\lambda\in[0,..,N_{r}],\ \forall\kappa\in[0,..,N_{\theta}]\).
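The finite set of admissible agent states \(\mathcal{S}_{t}\) in Eq. (3) can be enumerated directly, as in the short sketch below; the radial step, \(N_{r}\), \(N_{\theta}\), and the altitude are hypothetical values used only for illustration.

```python
import numpy as np

def admissible_states(s_prev, h, delta_r=5.0, N_r=2, N_theta=8):
    """Enumerate the candidate agent positions of Eq. (3) around the previous state."""
    d_theta = 2.0 * np.pi / N_theta
    states = []
    for lam in range(N_r + 1):
        for kap in range(1, N_theta + 1):
            states.append((s_prev[0] + lam * delta_r * np.cos(kap * d_theta),
                           s_prev[1] + lam * delta_r * np.sin(kap * d_theta),
                           h))
    return np.unique(np.round(states, 6), axis=0)  # hovering (lam = 0) collapses to one state

S_t = admissible_states(s_prev=(0.0, 0.0, 30.0), h=30.0)
print(f"{len(S_t)} candidate agent states at this time-step")
```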
### _Agent Sensing Model_
As already mentioned the UAV agent is equipped with a passive radar (i.e., a direction-finder) which is used for monitoring nearby ground targets operating inside its sensing range. Specifically, at each time-step \(t\), the UAV agent receives a set of noisy angular measurements (i.e., bearings) from each target \(j\), denoted as \(\Phi^{j}_{t}=\{\phi^{j}_{t,1},..,\phi^{j}_{t,|\Phi^{j}_{t}|}\},\ \phi^{j}_{t,i}\in(-\pi,\pi]\) rad, where the number of total received measurements, i.e., \(|\Phi^{j}_{t}|\) (\(|.|\) denotes the set cardinality), is random. In particular, it is assumed that due to various obstacles and clutter in the environment the UAV agent receives at each time-step \(t\): a) with a Poisson rate \(\Lambda\) multiple false-alarm measurements (denoted as \(\tilde{\phi}^{j}_{t,i}\in\Phi^{j}_{t}\)) which are distributed over the measurement space according to the probability distribution \(p_{\tilde{\phi}}(\tilde{\phi}^{j}_{t,i})\), and b) a single bearing measurement \(\hat{\phi}^{j}_{t}\in\Phi^{j}_{t}\) from the target \(j\) with probability \(p_{D}\). The target generated measurement \(\hat{\phi}^{j}_{t}\) is related to the target and agent states according to the measurement model \(\hat{\phi}^{j}_{t}=\ell(\mathbf{x}^{j}_{t},\mathbf{s}_{t})+w_{t}\), where:
\[\ell(\mathbf{x}^{j}_{t},\mathbf{s}_{t})=\tan^{-1}\left(\frac{x^{j}_{t}(x)-s_{t}(x)}{x^ {j}_{t}(y)-s_{t}(y)}\right), \tag{4}\]
and \(w_{t}\) is a Gaussian random variable which models the measurement noise, and which is distributed according to \(w_{t}\sim\mathcal{N}(0,\sigma^{2}_{\phi})\). Without loss of generality we assume that the same target detection probability, false-alarm rate, and the measurement noise applies for all targets, since all targets are sensed by the same radar equipment. In addition, we assume in this work that the targets are sensed by the UAV agent through different communication channels.
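A possible simulation of this sensing process is sketched below. It assumes, purely for illustration, that the clutter density \(p_{\tilde{\phi}}\) is uniform over \((-\pi,\pi]\) and uses placeholder values for \(p_{D}\), \(\Lambda\), and \(\sigma_{\phi}\); the target-originated bearing follows Eq. (4).

```python
import numpy as np

def sense_target(x_target, s_agent, p_D=0.9, clutter_rate=2.0,
                 sigma_phi=np.deg2rad(2.0), rng=None):
    """Generate one measurement set Phi_t: Poisson clutter plus, with probability p_D,
    a single noisy bearing to the target (Eq. (4))."""
    rng = rng if rng is not None else np.random.default_rng(1)
    phis = list(rng.uniform(-np.pi, np.pi, rng.poisson(clutter_rate)))  # false alarms
    if rng.random() < p_D:
        bearing = np.arctan2(x_target[0] - s_agent[0],   # note: atan2(dx, dy),
                             x_target[1] - s_agent[1])   # matching Eq. (4)
        phis.append(bearing + rng.normal(0.0, sigma_phi))
    return phis

print(sense_target(x_target=(40.0, 25.0, 0.0), s_agent=(10.0, 10.0, 30.0)))
```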
### _Obstacle Model_
We consider the existence of multiple convex obstacles \(\xi_{n}\in\Xi,\ n\in[1,..,|\Xi|]\) inside the surveillance area \(\mathcal{E}\), which are represented in this work as cuboids of arbitrary sizes. In particular, a regular cuboid \(\xi\) is a box-shaped object with six rectangular faces, and 8 right angles; therefore, a point \(\mathbf{p}=[p(x),p(y),p(z)]^{\top}\in\mathbb{R}^{3}\) that resides inside the convex-hull of cuboid \(\xi_{n}\) must satisfy the following 6 linear inequalities:
\[a^{n}_{1}(x)p(x)+a^{n}_{1}(y)p(y)+a^{n}_{1}(z)p(z) \leq b^{n}_{1},\] \[a^{n}_{2}(x)p(x)+a^{n}_{2}(y)p(y)+a^{n}_{2}(z)p(z) \leq b^{n}_{2},\] \[\vdots\] \[a^{n}_{6}(x)p(x)+a^{n}_{6}(y)p(y)+a^{n}_{6}(z)p(z) \leq b^{n}_{6},\]
where \(\mathbf{a}^{n}_{i}=[a^{n}_{i}(x),a^{n}_{i}(y),a^{n}_{i}(z)],\ i\in[1,..,6]\) is the outward unit normal vector on the \(i_{\text{th}}\) face of the \(n_{\text{th}}\) cuboid obstacle, and \(b^{n}_{i}\) is a constant obtained from the dot product between \(\mathbf{a}^{n}_{i}\) and a known point on the plane which contains the \(i_{\text{th}}\) face. This obstacle model has the flexibility to create 3D objects of varying dimensions, thus adequately representing real-world settings.
Suppose now that \(\mathbf{p}\) describes the position of a target \(\mathbf{x}^{j}_{t}\) at time-step \(t\). This target, can avoid a potential collision with obstacle \(\xi_{n}\) when the following condition holds:
\[\exists\ i\in[1,..,6]:\text{dot}(\mathbf{a}^{n}_{i},\mathbf{p})>b^{n}_{i}, \tag{5}\]
where dot\((a,b)\) is the dot product between vectors \(a\) and \(b\). In essence, we require that the target's position resides outside the convex-hull of obstacle \(\xi_{n},\ n\in[1,..,|\Xi|]\).
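The avoidance test of Eq. (5) reduces to a handful of dot products; a minimal check for an axis-aligned cuboid (with made-up dimensions) is shown below.

```python
import numpy as np

def outside_cuboid(p, normals, offsets):
    """Eq. (5): the point p avoids the cuboid if a_i . p > b_i for at least one face i."""
    return bool(np.any(normals @ p > offsets))

# illustrative cuboid occupying [20, 30] x [20, 30] x [0, 5]
normals = np.vstack([np.eye(3), -np.eye(3)])               # outward face normals a_i
offsets = np.array([30.0, 30.0, 5.0, -20.0, -20.0, 0.0])   # corresponding b_i

print(outside_cuboid(np.array([25.0, 25.0, 2.0]), normals, offsets))   # False: inside
print(outside_cuboid(np.array([35.0, 25.0, 2.0]), normals, offsets))   # True: outside
```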
## III Target Trajectory Planning
As we have already mentioned in Sec. II-A, for each target \(j\) we consider the availability of the following information: a) its approximate initial location, i.e., we know that the state of target \(j\) is initially distributed according to \(\mathbf{x}_{0}^{j}\sim\mathcal{N}(\mathbf{\mu}_{0}^{j},\Sigma_{0}^{j})\), and b) its final destination, i.e., we know that target's \(j\) objective is to move towards, and reach a specific goal region \(\mathcal{G}^{j}\). Based on these information, and in combination with a known map of the environment (i.e., in this work we use information regarding the position, and dimensions of various obstacles), the objective is to generate a hypothetical trajectory for each target, which can then be passively monitored through sensing, i.e., via the received bearing measurements, as discussed in Sec. II-C.
To do that, the target trajectory hypothesis generation is formulated in this work as a model predictive control problem, where we seek to find target's \(j\) hypothetical control inputs \(U_{t}^{j}=\{\mathbf{u}_{t+\tau|t}^{j}\},\forall\tau\in[0,..,T-1]\) inside a rolling finite planning horizon of length \(T\) time-steps, which enable the guidance of the target to its goal region, subject to kinematic and collision avoidance constraints.
Let us denote the future hypothetical trajectory of target \(j\) over a planning horizon of length \(T\) time-steps, as \(X_{t}^{j}=\{\mathbf{x}_{t+\tau+1|t}^{j}\},\forall\tau\in[0,..,T-1]\), where the notation \(\mathbf{x}_{t\cdot|t}\) is used here to denote the predicted target state at time-step \(t^{\prime}\) which was generated at time-step \(t\). Now, based on Eq. (1), observe that the target trajectory \(X_{t}^{j}\) is in fact a stochastic process, with each future target state \(\mathbf{x}_{t+\tau+1|t}^{j},\forall\tau\), to be distributed according to \(\mathbf{x}_{t+\tau+1|t}^{j}\sim\mathcal{N}(\mathbf{\mu}_{t+\tau+1|t}^{j},\Sigma_{t+\tau +1|t}^{j})\), where \(\mathbf{\mu}_{t+\tau+1|t}^{j}\), and \(\Sigma_{t+\tau+1|t}^{j}\) are given by:
\[\mathbf{\mu}_{t+\tau+1|t}^{j} =A^{\tau+1}\mathbf{\mu}_{t}^{j}+\sum_{k=0}^{\tau}A^{\tau-k}B\mathbf{u}_{t+ k|t}^{j}, \tag{6}\] \[\Sigma_{t+\tau+1|t}^{j} =A^{\tau+1}\Sigma_{t}^{j}(A^{\top})^{\tau+1}\!+\!\sum_{k=0}^{\tau }A^{\tau-k}Q(A^{\top})^{\tau-k}.\]
Observe that Eq. (6), has been obtained from the recursive application of Eq. (1). The parameters \(\mathbf{\mu}_{t}^{j}\), and \(\Sigma_{t}^{j}\) are respectively the mean, and covariance matrix of the target state at time-step \(t\), which for time-step \(t=0\) are given by \(\mathbf{\mu}_{0}^{j}\) and \(\Sigma_{0}^{j}\) respectively. In order to generate the trajectory which guides target \(j\) to its goal region \(\mathcal{G}^{j}\), the following cost function is minimized for the control inputs \(U_{t}^{j}=\{\mathbf{u}_{t+\tau|t}^{j}\},\forall\tau\in[0,..,T-1]\):
\[\begin{split}\operatorname*{arg\,min}_{U_{t}}\ \mathbb{E}[\mathcal{J}^{j}(X_{t}^{j},U_{t}^{j})]&=\|\mathbf{\mu}_{t+ T|t}^{j,\text{pos}}-\mathcal{G}_{0}^{j}\|_{2}^{2}\\ &\qquad\qquad+\sum_{\tau=1}^{T-1}\|\mathbf{u}_{t+\tau|t}^{j}-\mathbf{u}_{t +\tau-1|t}^{j}\|_{2}^{2},\end{split} \tag{7}\]
where \(\mathbb{E}\) is the expectation operator, \(\|.\|_{2}\) is the 2-norm, \(\mathbf{\mu}_{t+T|t}^{j,\text{pos}}\) is the predicted mean of the target's position at the end of the planning horizon computed with Eq. (6), and \(\mathcal{G}_{0}^{j}\) is the centroid point of the goal region \(\mathcal{G}^{j}\) on the ground. The second term in Eq. (7) is used in order to minimize abrupt changes in the target's direction and speed, and thus produce more realistic smooth trajectories. The predicted trajectory for target \(j\) is then generated with the guidance controller shown in Problem (P1): at each time-step \(t\) the optimal control inputs \(U_{t}^{j}=\{\mathbf{u}_{t+\tau|t}^{j}\},\forall\tau\in[0,..,T-1]\) are computed over a rolling planning horizon of length \(T\) time-steps by solving the open-loop optimal control problem shown, which essentially drives the target to its goal region while at the same time considering obstacle avoidance constraints.
**Problem (P1)**: Guidance Controller
\[\min_{U_{t}^{j}}\ \mathbb{E}[\mathcal{J}^{j}(X_{t}^{j},U_{t}^{j})]\qquad\forall j \tag{8a}\]
**subject to** \(\tau\in[0,..,T-1]\):
\[\mathbf{\mu}_{t+\tau+1|t}^{j}=A^{\tau+1}\mathbf{\mu}_{t}^{j}+\sum_{k=0}^{\tau}A^{\tau-k}B\mathbf{u}_{t+k|t}^{j},\qquad\forall\tau,j \tag{8b}\]
\[\mathbf{\mu}_{t}^{j}=\mathbf{\tilde{\mu}}_{t|t-1}^{j},\ \Sigma_{t}^{j}=\hat{\Sigma}_{t|t-1}^{j},\qquad\forall j \tag{8c}\]
\[\text{dot}(\mathbf{a}_{i}^{n},\mathbf{\mu}_{t+\tau+1|t}^{j,\text{pos}})>b_{i}^{n}-Hy_{\tau,i}^{j,n},\qquad\forall\tau,j,n,i \tag{8d}\]
\[\sum_{i=1}^{6}y_{\tau,i}^{j,n}\leq 5,\qquad\forall\tau,j,n \tag{8e}\]
\[X_{t}^{j}\in\mathcal{X},\ U_{t}^{j}\in\mathcal{U} \tag{8f}\]
\[y_{\tau,i}^{j,n}\in\{0,1\},\ n=[1,..,|\Xi|],\ i=[1,..,6]\]
Specifically, in Problem (P1) the constraints in Eq. (8b)(8c) compute the expected state of target \(j\) (i.e., \(\mathbf{\mu}_{t+\tau+1|t}^{j}\)) inside the planning horizon, which has an associated covariance matrix \(\Sigma_{t+\tau+1|t}^{j}\). Observe that the covariance matrix does not depend on the generated control inputs, and thus can be pre-computed as shown in Eq. (6). The constraints in Eq. (8d)-(8e) enable the generation of collision-free trajectories, by making sure that all targets avoid collisions with the obstacles in the environment. As a reminder, a collision with some obstacle \(\xi_{n},n=[1,..,|\Xi|]\), which is represented as a cuboid, is avoided at time-step \(t\) when the target state (i.e., its position coordinates) resides outside the convex hull of \(\xi_{n}\) as explained in Sec. II-D. In order to enable this functionality we use the binary variable \(y_{\tau,i}^{j,n}\in\{0,1\}\) which is activated i.e., \(y_{\tau,i}^{j,n}=1\) when the inequality dot\((\mathbf{a}_{i}^{n},\mathbf{\mu}_{t+\tau+1|t}^{j,\text{pos}})>b_{i}^{n}\) is not satisfied for target \(j\) with position \(\mathbf{\mu}_{t+\tau+1|t}^{j,\text{pos}}\) at time-step \(t+\tau+1|t\), and the \(i_{\text{th}}\) face of the \(n_{\text{th}}\) obstacle. In such cases the activation of \(y_{\tau,i}^{j,n}\) makes the constraint shown in Eq. (8d) valid with the utilization of a large positive constant \(H\in\mathbb{Z}^{+}\). Now, as discussed in Sec. II-D a collision is avoided at time-step \(t+\tau+1|t\) between the target \(j\) with state \(\mathbf{\mu}_{t+\tau+1|t}^{j,\text{pos}}\), and the obstacle \(\xi_{n}\) when \(\exists\ i\in[1,..,6]:\text{dot}(\mathbf{a}_{i}^{n},\mathbf{\mu}_{t+\tau+1|t}^{j,\text{pos}})>b_{i}^{n}\), which is achieved via the constraint in Eq. (8e) by enforcing the binary variable \(y_{\tau,i}^{j,n}\) to take the value of zero for at least one face i.e., \(\exists i\in[1,..,6]:y_{\tau,i}^{j,n}=0\). Finally, the constraints in Eq. (8f) restrict the target's speed and control inputs within the desired limits. We should point out that Problem (P1) is
a mixed integer quadratic program (MIQP), which can be solved efficiently using off-the-shelf optimization tools [19].
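As a rough illustration of how Problem (P1) can be set up in practice, the sketch below encodes the cost of Eq. (7) and the constraints (8b)-(8f) for a single target and a single cuboid obstacle. It is a minimal, hypothetical example: it assumes the cvxpy modelling package together with a mixed-integer-capable solver (e.g., GUROBI, MOSEK, or SCIP), writes the dynamics recursively rather than in the summed form of Eq. (8b), replaces the strict inequality of Eq. (8d) by a small margin, and uses placeholder numerical values that are not from the paper.

```python
import numpy as np
import cvxpy as cp

# illustrative data (not from the paper): horizon, dynamics, one cuboid obstacle
dt, eps, mass, T, H = 1.0, 0.1, 1.0, 15, 1e4
A = np.block([[np.eye(3), dt*np.eye(3)], [np.zeros((3, 3)), (1-eps)*np.eye(3)]])
B = np.vstack([np.zeros((3, 3)), (dt/mass)*np.eye(3)])
mu0 = np.zeros(6)                                    # current mean target state
goal = np.array([50.0, 50.0, 0.0])                   # centroid of the goal region
a_n = np.vstack([np.eye(3), -np.eye(3)])             # outward face normals of the cuboid
b_n = np.array([30.0, 30.0, 5.0, -20.0, -20.0, 0.0])

u = cp.Variable((3, T))                              # hypothetical control inputs U_t
mu = cp.Variable((6, T + 1))                         # predicted mean target states
y = cp.Variable((6, T), boolean=True)                # big-M indicator variables

cons = [mu[:, 0] == mu0]
for tau in range(T):
    cons += [mu[:, tau + 1] == A @ mu[:, tau] + B @ u[:, tau]]      # Eq. (8b), recursive form
    cons += [cp.norm(u[:, tau], "inf") <= 5.0]                      # input limits, Eq. (8f)
    cons += [a_n @ mu[0:3, tau + 1] >= b_n + 1e-2 - H * y[:, tau]]  # Eq. (8d), small margin
    cons += [cp.sum(y[:, tau]) <= 5]                                # Eq. (8e)

cost = cp.sum_squares(mu[0:3, T] - goal) + cp.sum_squares(cp.diff(u, axis=1))  # Eq. (7)
prob = cp.Problem(cp.Minimize(cost), cons)
prob.solve()   # needs a mixed-integer QP solver, e.g. GUROBI, MOSEK, or SCIP
print("planned final target position:",
      None if mu.value is None else mu.value[0:3, T].round(2))
```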
## IV Autonomous UAV Control for Passive Multi-Target Monitoring
### _Target State Estimation_
For each target \(j\) the UAV agent maintains a Bayes filter [20], which it uses in order to compute, and recursively update over time, its belief (i.e., a probability distribution) on the state of each target. This is shown in Eq. (9) where we denote as \(bel(\mathbf{x}_{t+1}^{j})\) the agent's initial belief on the state of target \(j\) for the next time-step \(t+1\), and with \(\hat{bel}(\mathbf{x}_{t+1}^{j})\) we denote the posterior belief on the target's state after incorporating the received target measurements.
\[bel(\mathbf{x}_{t+1}^{j}) =\int f(\mathbf{x}_{t+1}^{j}|\mathbf{x}_{t}^{j},\mathbf{u}_{t}^{j})\hat{bel}( \mathbf{x}_{t}^{j})d\mathbf{x}_{t}^{j} \tag{9a}\] \[\hat{bel}(\mathbf{x}_{t+1}^{j}) =\eta^{-1}g(\Phi_{t+1}^{j}|\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1})bel(\mathbf{ x}_{t+1}^{j}) \tag{9b}\]
The agent's initial belief \(bel(\mathbf{x}_{t+1}^{j})\) is computed through the prediction step shown in Eq. (9a), where \(f(\mathbf{x}_{t+1}^{j}|\mathbf{x}_{t}^{j},\mathbf{u}_{t}^{j})\) is the target state transition density which is governed by the target dynamics in Eq. (1), and therefore is given by \(f(\mathbf{x}_{t+1}^{j}|\mathbf{x}_{t}^{j},\mathbf{u}_{t}^{j})=\mathcal{N}(A\mathbf{x}_{t}^{j}+ B\mathbf{u}_{t}^{j},Q)\). On the other hand, \(\hat{bel}(\mathbf{x}_{t}^{j})\) is the posterior belief of the current time-step i.e., \(\hat{bel}(\mathbf{x}_{t}^{j})=\mathcal{N}(\hat{\mathbf{\mu}}_{t}^{j},\hat{\Sigma}_{t} ^{j})\), and thus \(bel(\mathbf{x}_{t+1}^{j})=\mathcal{N}(A\hat{\mathbf{\mu}}_{t}^{j}+B\mathbf{u}_{t}^{j},A \hat{\Sigma}_{t}^{j}A^{\top}+Q)\). Observe that this result is also obtained from Eq. (6) by setting \(\tau=0\), to obtain the one step look-ahead predictive density for the state of target \(j\) computed at time-step \(t\) i.e., \(\mathbf{x}_{t+1|t}^{j}\sim\mathcal{N}(\mathbf{\mu}_{t+1|t}^{j},\Sigma_{t+1|t}^{j})=bel (\mathbf{x}_{t+1}^{j})\).
Subsequently, at time-step \(t+1\) the agent with state \(\mathbf{s}_{t+1}\) receives from each target \(j\) the measurement set \(\Phi_{t+1}^{j}\), and updates its belief by computing \(\hat{bel}(\mathbf{x}_{t+1})\) with the update step shown in Eq.(9b). Specifically, \(\eta=\int g(\Phi_{t+1}^{j}|\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1})bel(\mathbf{x}_{t+1}^{j}) d\mathbf{x}_{t+1}^{j}\) is a normalizing constant, and the measurement likelihood function \(g(\Phi_{t+1}^{j}|\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1})\) gives the likelihood that the agent with state \(\mathbf{s}_{t+1}\) will receive at time-step \(t+1\) the measurement set \(\Phi_{t+1}^{j}\) from target \(j\) with state \(\mathbf{x}_{t+1}^{j}\).
To compute this likelihood function, first observe that the measurement set \(\Phi_{t+1}^{j}\) contains a random number of random measurements, i.e., multiple false-alarm measurements \(\tilde{\phi}_{t+1,i}^{j}\in\Phi_{t+1}^{j}\) coming with a Poisson rate \(\Lambda\), which are distributed according to \(p_{\hat{\mathcal{G}}}(\tilde{\phi}_{t+1,i}^{j})\), and up to one target measurement \(\hat{\phi}_{t+1}^{j}\in\Phi_{t+1}^{j}\) which is received with probability \(p_{D}\), and which is distributed according to \(c(\hat{\phi}_{t+1}^{j})=\mathcal{N}(\hat{\phi}_{t+1}^{j};\ell(\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1}),\sigma_{\phi}^{2})\), as discussed in Sec. II-C. That said, the measurement likelihood function is derived as:
\[g(\Phi_{t+1}^{j}|\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1})=(1-p_{D})\,n^{j}!\,\Psi(n^{j};\Lambda)\prod_{\phi\in\Phi_{t+1}^{j}}p_{\hat{\mathcal{G}}}(\phi)\] \[+(n^{j}-1)!\,\Psi(n^{j}-1;\Lambda)\,p_{D}\!\!\sum_{\phi\in\Phi_{t+1}^{j}}c(\phi)\!\!\prod_{\begin{subarray}{c}\varphi\in\Phi_{t+1}^{j}\\ \varphi\neq\phi\end{subarray}}\!\!p_{\hat{\mathcal{G}}}(\varphi) \tag{10}\]
where \(n^{j}=|\Phi_{t+1}^{j}|\) is the total number of received measurements, and \(\Psi(n^{j};\Lambda)\) is the probability mass function of the Poisson distribution with rate parameter \(\Lambda\) and input argument \(n^{j}\). Therefore, the first term in Eq. (10) accounts for the event of receiving at time-step \(t+1\) exactly \(n^{j}\) false-alarm measurements (i.e., \(\Psi(n^{j};\Lambda)\prod_{\phi\in\Phi_{t+1}^{j}}p_{\hat{\mathcal{G}}}(\phi)\)) and no measurement from target \(j\), i.e., the target is not detected, with probability \((1-p_{D})\); the factor \(n^{j}!\) accounts for all possible permutations of the measurements in the set. On the other hand, the second term in Eq. (10) accounts for the event where the measurement set \(\Phi_{t+1}^{j}\) contains a single target measurement \(\hat{\phi}\) with likelihood \(p_{D}c(\hat{\phi})\), and \((n^{j}-1)\) false-alarm measurements. Finally, the posterior mean \((\hat{\mathbf{\mu}}_{t+1}^{j})\) and covariance \((\hat{\Sigma}_{t+1}^{j})\) of the state of target \(j\) for time-step \(t+1\) are extracted from \(\hat{bel}(\mathbf{x}_{t+1}^{j})\) and used to initialize the guidance controller for the next time-step; the recursion shown in Eq. (9) is then repeated.
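The set likelihood in Eq. (10) can be made concrete with a small sketch, e.g., for weighting particles in a particle filter. The uniform clutter density on \((-\pi,\pi]\) and the helper `expected_bearing` below are our assumptions, not code from the paper.

```python
import numpy as np
from math import factorial
from scipy.stats import poisson, norm

def expected_bearing(x_pos, s_pos):
    # Bearing from the agent to a target position, matching the form of Eq. (11).
    return np.arctan2(x_pos[0] - s_pos[0], x_pos[1] - s_pos[1])

def measurement_set_likelihood(Phi, x_pos, s_pos, p_D, Lam, sigma_phi):
    """Sketch of Eq. (10) for a single target; Phi is the list of received bearings."""
    n = len(Phi)
    clutter = np.full(n, 1.0 / (2 * np.pi))                # p_G(phi) for uniform false alarms
    z_bar = expected_bearing(x_pos, s_pos)                 # ell(x, s)
    # Event 1: target not detected, all n measurements are false alarms.
    lik = (1 - p_D) * factorial(n) * poisson.pmf(n, Lam) * np.prod(clutter)
    # Event 2: one measurement is target-originated, the remaining n-1 are false alarms.
    for i, phi in enumerate(Phi):
        others = np.prod(np.delete(clutter, i))
        lik += factorial(n - 1) * poisson.pmf(n - 1, Lam) * p_D * norm.pdf(phi, z_bar, sigma_phi) * others
    return lik
```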
### _Monitoring Control_
In order to optimize the monitoring performance at time-step \(t+1\) for a particular target \(j\), it suffices to select the agent's next state \(\hat{\mathbf{s}}_{t+1}\in\mathcal{S}_{t+1}\) that results in the future measurement set \(\Phi_{t+1}^{j}\) which maximizes the observability of the target state. This strategy, however, cannot be applied directly, since the measurement set \(\Phi_{t+1}^{j}\) becomes available only after the agent moves to its new state \(\hat{\mathbf{s}}_{t+1}\). To overcome this limitation, we follow the procedure described next: for each admissible agent state \(\mathbf{s}_{t+1}\in\mathcal{S}_{t+1}\), we generate for each target \(j\) the hypothetical ideal (i.e., noise-free, no false-alarms) measurement set \(Z_{t+1}^{j}=\{z_{t+1}^{j}\}\) which would have been received if the agent moved at time-step \(t+1\) to state \(\mathbf{s}_{t+1}\), and target \(j\) were distributed according to \(bel(\mathbf{x}_{t+1}^{j})\) (computed with Eq. (9a)), with expected position denoted as \(\mu_{t+1}^{j,\text{pos}}\). That said, the hypothetical measurement set \(Z_{t+1}^{j}=\{z_{t+1}^{j}\}\) is generated as:
\[z_{t+1}^{j}=\tan^{-1}\left(\frac{\mu_{t+1}^{j,\text{pos}}(x)-s_{t+1}(x)}{\mu_{t+1 }^{j,\text{pos}}(y)-s_{t+1}(y)}\right). \tag{11}\]
Then, for each pair \((\mathbf{s}_{t+1},z_{t+1}^{j})_{i}\), \(i\in[1,...,|\mathcal{S}_{t+1}|]\) we compute the pseudo-posterior distribution \(\hat{bel}(\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1},z_{t+1}^{j})_{i}\) according to Eq. (9b), where the measurement likelihood function \(g(z_{t+1}^{j})\) is now given by \(g(z_{t+1}^{j}|\mathbf{x}_{t+1}^{j},\mathbf{s}_{t+1})=\mathcal{N}(z_{t+1}^{j};\ell(\mathbf{x}_{t +1}^{j},\mathbf{s}_{t+1}),\sigma_{\phi}^{2})\). Finally, we extract the pseudo-posterior target state mean and covariance \((\hat{\mathbf{\mu}}_{t+1}^{j},\hat{\Sigma}_{t+1}^{j})_{i}\). The optimal state \(\hat{\mathbf{s}}_{t+1}\) of the UAV agent for time-step \(t+1\) which achieves optimized monitoring performance is then obtained as:
\[\hat{\mathbf{s}}_{t+1}=\operatorname*{arg\,min}_{\mathbf{s}\in\mathcal{S}_{t+1}}\sum_{j= 1}^{M}\operatorname{tr}\left(\tilde{\Sigma}_{t+1}^{j}(\mathbf{s})\right), \tag{12}\]
where \(\operatorname{tr}(\Sigma)\) is the trace of matrix \(\Sigma\), and the notation \(\tilde{\Sigma}_{t+1}^{j}(\mathbf{s})\) denotes the covariance matrix associated with the pseudo-posterior distribution of the state of the \(j_{\text{th}}\) target, which was obtained under the assumption that the agent moved to state \(\mathbf{s}\in\mathcal{S}_{t+1}\).
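A minimal sketch of this selection rule is given below, assuming a particle representation of \(bel(\mathbf{x}_{t+1}^{j})\) (cf. Sec. V-A) and ignoring angle wrap-around for brevity; all function and variable names are illustrative.

```python
import numpy as np

def bearing(p_xy, s_xy):
    # Angle from the agent to a target position, matching the form of Eq. (11).
    d = np.atleast_2d(p_xy) - s_xy
    return np.arctan2(d[:, 0], d[:, 1])

def select_next_state(candidate_states, particles, weights, sigma_phi):
    """Evaluate Eq. (12): pick the admissible agent state minimizing the summed trace of the
    pseudo-posterior position covariances. particles[j]: (P, 2) samples from bel(x_{t+1}^j);
    weights[j]: (P,) prior particle weights."""
    best_state, best_cost = None, np.inf
    for s in candidate_states:                                  # s = (x, y, z) of an admissible UAV state
        cost = 0.0
        for pts, w in zip(particles, weights):
            mu = np.average(pts, axis=0, weights=w)             # expected target position under bel()
            z_hyp = bearing(mu, s[:2])[0]                       # hypothetical noise-free measurement, Eq. (11)
            lik = np.exp(-0.5 * ((bearing(pts, s[:2]) - z_hyp) / sigma_phi) ** 2)
            w_post = w * lik
            w_post /= w_post.sum()                              # pseudo-update, cf. Eq. (9b)
            cost += np.trace(np.cov(pts.T, aweights=w_post))    # trace of pseudo-posterior covariance
        if cost < best_cost:
            best_state, best_cost = s, cost
    return best_state
```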
## V Evaluation
### _Simulation Setup_
To evaluate the proposed approach we have used the following simulation setup. The surveillance area \(\mathcal{E}\subset\mathbb{R}^{3}\) is given by a cube with a total volume of 1km\({}^{3}\). The target dynamics are given by Eq. (1) with \(\Delta t=1\)s, \(\varepsilon=0.2\), and \(m=1300\)kg, and are the same for all \(M=4\) targets. The process noise \(\mathbf{\nu}_{t}\) is distributed according to \(\mathbf{\nu}_{t}\sim\mathcal{N}(0,Q)\), with \(Q=\text{diag}([30~{}30~{}eps~{}3~{}3~{}eps])\), where \(eps\) is a very small number i.e., \(eps=1\text{E}-10\), which indicates our knowledge that the targets evolve on the ground plane. Initially it is assumed that the four targets are distributed according to \(\mathbf{x}_{0}^{1}\sim\mathcal{N}(\mathbf{\mu}_{0}^{1},\Sigma_{0})\), \(\mathbf{x}_{0}^{2}\sim\mathcal{N}(\mathbf{\mu}_{0}^{2},\Sigma_{0})\), \(\mathbf{x}_{0}^{3}\sim\mathcal{N}(\mathbf{\mu}_{0}^{3},\Sigma_{0})\), and \(\mathbf{x}_{0}^{4}\sim\mathcal{N}(\mathbf{\mu}_{0}^{4},\Sigma_{0})\), where \(\mathbf{\mu}_{0}^{1}=[281,925,0]\)m, \(\mathbf{\mu}_{0}^{2}=[238,706,0]\)m, \(\mathbf{\mu}_{0}^{3}=[901,925,0]\)m, and \(\mathbf{\mu}_{0}^{4}=[885,676,0]\)m. The covariance matrix \(\Sigma_{0}\) is given by \(\Sigma_{0}=\text{diag}([200~{}200~{}eps~{}20~{}20~{}eps])\) for all targets.
The control input \(\mathbf{u}_{t}\) is bounded in the \(x\) and \(y\) dimensions inside the interval \([-6000,6000]\)N for all targets, and is zero in the \(z\) dimension. The targets can reach a ground speed of up to 16m\(/\)s. The agent dynamics are given by Eq. (3) with \(\Delta_{r}=5\)m, \(N_{\theta}=15\), \(N_{r}=4\), and \(h=40\)m. In addition, the measurement noise \(w_{t}\) is distributed according to \(w_{t}\sim\mathcal{N}(0,\sigma_{\phi}^{2})\), with \(\sigma_{\phi}=1\deg\), and the target detection probability is set to \(p_{D}=0.95\). The false-alarms are uniformly distributed inside the measurement space \((-\pi,\pi]\), and arrive with a Poisson rate \(\Lambda=1\). Finally, we note that the stochastic filtering recursion in Eq. (9) has been implemented as a particle filter [21], mainly for handling the non-linear measurement model, i.e., Eq. (4), and the guidance problem, i.e., Problem (P1), was solved with Gurobi's MIQP solver.
### _Performance Evaluation_
Figure 1(a) shows, in 3D and top-down views, the initial positions of targets \(\mathbf{x}^{1}\), \(\mathbf{x}^{2}\), \(\mathbf{x}^{3}\), and \(\mathbf{x}^{4}\), which are marked with a red, blue, purple, and green \(\star\) respectively. The initial covariance of the target states is drawn as an error-ellipse around the mean of the target location as shown. The obstacles in the environment are shown as grey coloured cuboids, and the goal region, which in this scenario is the same for all targets, is shown with the red rectangular region. The UAV agent is initialized in this example at \(\mathbf{s}_{0}=[150,200,40]\)m, as shown with the black \(\diamond\). Figure 1(b) shows the output of the proposed guidance controller as depicted in Problem (P1), which has generated the hypothetical planned trajectories for the 4 targets over a planning horizon of \(T=50\) time-steps. As shown in the figure, this optimization allows the targets to avoid the obstacles in the environment, and based on their mobility capabilities to reach the goal region as soon as possible. Fig. 1(c) shows the uncertainty on the targets' states over the planning horizon as computed at time-step \(t=1\) with Eq. (6). In particular, the figure shows the target position as particles sampled from \(\mathcal{N}(\mathbf{\mu}_{1+\tau+1|1}^{j},\Sigma_{1+\tau+1|1}^{j}),\forall\tau,\forall j\). Finally, Fig. 1(d) shows the optimal control inputs (\(x\) and \(y\) dimensions) over the planning horizon that guide the targets to the goal region, while producing smooth trajectories without abrupt changes in the speed and direction.
Next, we demonstrate the performance of the proposed approach for the task of passively monitoring the four ground targets. The objective now becomes the selection of the optimal UAV control inputs at each time-step such that the collective uncertainty on the target states is minimized. To achieve this, the UAV agent makes a prediction on the targets' next states as discussed in Sec. III, and then uses the received target measurements to update those predictions using the filtering procedure discussed in Sec. IV. Figures 2(a)-(b) show the result of the optimization problem in Eq. (12), i.e., the UAV's optimal trajectory, which maximizes the monitoring performance by minimizing the uncertainty on the target states. Essentially, the UAV agent seeks at each time-step to select the next state from which it will obtain the most informative bearing measurement, which in turn allows the estimation of the target state. We define the
Fig. 1: The figure illustrates the proposed target trajectory hypothesis generation approach, which is realized with the guidance controller shown in Problem (P1), and allows the 4 targets to be guided to the goal region while avoiding collisions with the obstacles in the environment.
root mean square error (RMSE) on the target position at time-step \(t\) as \(\epsilon_{t}=\sqrt{N^{-1}\sum_{n=1}^{N}||\hat{\mathbf{x}}_{t}^{\text{pos}}(n)-\mathbf{x}_{t}^{\text{pos}}||_{2}^{2}}\), where \(||.||_{2}^{2}\) is the squared 2-norm, \(N\) is the number of Monte-Carlo trials, \(\hat{\mathbf{x}}_{t}^{\text{pos}}(n)\) denotes the estimated \((x,y)\) target coordinates at time-step \(t\) on the \(n_{\text{th}}\) trial, and \(\mathbf{x}_{t}^{\text{pos}}\) is the true target position at the same time-step. Figure 2(c) shows the average positional RMSE obtained for tracking the four targets during the scenario depicted in Figs. 2(a)-(b). This scenario was simulated for \(N=100\) trials, where in each trial the UAV's initial position was randomly initialized inside the surveillance area. This result is then compared with the positional error obtained from a 3-sensor tracking system. Specifically, we assume that three fixed direction-finding sensors located at \([150,200]\)m, \([800,200]\)m, and \([500,900]\)m receive three bearing measurements from each target at each time-step, and localize the targets according to the procedure discussed in Sec. IV by combining their individual measurement likelihood functions. The measurement noise profile in this case is as discussed in Sec. V-A, but without false-alarms. As shown in the graph, although the 3-sensor system achieves better results (note that in this case the target state is fully observable), the proposed single-sensor system, by optimizing the measurement collection process, achieves comparable performance (solid black line) despite the presence of false-alarms. Finally, the black dotted line shows the achievable performance of the proposed approach in scenarios with a higher false-alarm rate, i.e., \(\Lambda=8\). Although the rate of false-alarms degrades the overall monitoring performance, as shown in the figure, the targets can still be tracked with reasonable accuracy, which can be adequate for certain application domains.
## VI Conclusion
In this work we propose a joint estimation and control approach for passively monitoring multiple targets of interest in challenging conditions (i.e., environments with obstacles, and false-alarm measurements) with a single UAV agent equipped with a direction-finding sensor. Model predictive control is used for generating hypothetical target trajectories inside a rolling finite planning horizon, which are then refined through stochastic filtering. In particular, we show how the agent's path can be optimized in order to minimize the collective uncertainty over the target states. Future work includes the implementation of the proposed approach on UAV hardware platforms and its validation in real-world settings.
|
2307.09500 | Exact hole-induced $SU(N)$ flavor-singlets in certain $U=\infty$ $SU(N)$
Hubbard models | We prove that the motion of a single hole induces $SU(N)$ flavor-singlets in
the $U=\infty$ $SU(N)$ (Fermi) Hubbard model on a Husimi-like tree graph. The
result is also generalized to certain $t$-$J$ models with antiferromagnetic
interactions and singlet hopping terms typically neglected in the literature.
This is an $SU(N)$ generalization of the "counter-Nagaoka theorem" introduced
in [Phys. Rev. B 107, L140401 (2023)]. Our results suggest the existence of
antiferromagnetic or resonating-valence-bond (RVB)-like polarons in the $t$-$J$
models on a more realistic non-bipartite lattice. Such antiferromagnetic/RVB
polarons may be relevant for a novel strong-coupling mechanism of
superconductivity or other exotic fractionalized phases of matter. | Kyung-Su Kim, Hosho Katsura | 2023-07-18T18:00:00Z | http://arxiv.org/abs/2307.09500v2 | # Exact hole-induced \(Su(n)\) flavor-singlets in certain \(U=\infty\)\(Su(n)\) Hubbard models
###### Abstract
We prove that the motion of a single hole induces \(SU(N)\) flavor-singlets in the \(U=\infty\)\(SU(N)\) (Fermi) Hubbard model on a Husimi-like tree graph. The result is also generalized to certain \(t\)-\(J\) models with antiferromagnetic interactions and singlet hopping terms typically neglected in the literature. This is an \(SU(N)\) generalization of the "counter-Nagaoka theorem" introduced in [Phys. Rev. B **107**, L140401 (2023)]. Our results suggest the existence of antiferromagnetic or resonating-valence-bond (RVB)-like polarons in the \(t\)-\(J\) models on a more realistic non-bipartite lattice. Such antiferromagnetic/RVB polarons may be relevant for a novel strong-coupling mechanism of superconductivity or other exotic fractionalized phases of matter.
The \(SU(2)\) Hubbard model in the presence of a dilute hole doping has been the subject of extensive study, especially as it is expected to capture essential features of the high-temperature superconductivity in cuprate superconductors [1; 2; 3; 4]. However, even such a simple model leads to a notorious challenge and complexity due to competing tendencies to develop distinct ordered phases in the intermediate coupling regime [5]. Even in the strong coupling limit, \(U=\infty\), the analytical solution on a square lattice exists only for a single hole doping on a finite-sized system--the so-called "Nagaoka theorem" states that such a system leads to a fully polarized ferromagnet [6; 7; 8; 9; 10; 11]. More physically, Nagaoka's theorem implies the formation of the ferromagnetic Nagaoka polaron, which has been observed in numerics [12; 13] and in cold-atom experiments [14].
On the other hand, it is known that the hole motion in the \(U=\infty\)\(SU(2)\) Hubbard model on a _non-bipartite_ lattice (e.g., triangular lattice) induces antiferromagnetic correlations around it [15; 16; 17; 18]. However, for such a non-bipartite lattice, even the single-hole problem is poorly understood due to the frustration inherent in antiferromagnetism. The problem has been recently solved in a frustration-free version of a non-bipartite lattice, which unambiguously demonstrated that a hole is surrounded by resonating-valence-bond-like (RVB) correlations [19]. Such a result suggests the formation of an antiferromagnetic/RVB polaron on a more realistic non-bipartite lattice.
For systems with an emergent (or exact) \(SU(N)\) symmetry with \(N>2\)[20; 21; 22; 23; 24; 25], e.g., systems with degenerate multiple valleys or flavors [26; 27; 28; 29; 30], their physics may be characterized by the \(SU(N)\) Hubbard model or its generalizations under suitable circumstances. If so, the magnetism at \(\frac{1}{N}\)th filling (one fermion per site) in the strong coupling regime, \(U\gg t\), can be understood in terms of the \(SU(N)\) Heisenberg model with exchange interactions \(J=4t^{2}/U\). However, when \(t\gg J\) (\(U\to\infty\) limit), it is the motion of a hole that is responsible for the magnetism upon hole doping of such a Mott insulator. Therefore, \(SU(N)\) generalizations of the Nagaoka and counter-Nagaoka theorems are needed. In Ref. [31; 32], it is shown that, with the "unusual" sign of the hopping matrix element, \(t<0\), such a \(U=\infty\)\(SU(N)\) Hubbard model hosts a fully polarized Nagaoka ground state due to a single hole motion. However, less is understood for the same problem with the "usual" sign of hopping \(t>0\), again due to the frustration inherent in antiferromagnetism.
In this paper, we study the dynamics of a single hole doped away from the \(\frac{1}{N}\)th filling of the \(U=\infty\)\(SU(N)\) Hubbard and the \(t\)-\(J\) models on certain solvable graphs. We first consider such a problem on an \((N+1)\)-site graph that satisfies the connectivity condition (as defined later), and show that the ground state is in the \(SU(N)\) flavor
Figure 1: (a-b) Examples of a complete graph with fully connected edges. (c) An example of a non-complete graph which nevertheless satisfies the connectivity condition. (d) The ground state of the \(U=\infty\)\(SU(3)\) Hubbard model in the presence of a single hole on a tetrahedron with uniform \(t_{ij}=t\) and \(\hat{V}=0\) in Eq. (1). Magenta trimers denote \(SU(3)\) flavor-singlets, i.e., 3 fermions with complete flavor-antisymmetry, and circles denote the location of the hole. The signs associated with the many-body states appearing in \(|\Psi_{0}\rangle\) are defined implicitly in Eq. (5).
singlet sector. Any other flavor configurations frustrate the hole motion. Then, from such an \((N+1)\)-site subgraph, we construct a _subgraph tree_, on which the single hole problem in the \(SU(N)\)\(t\)-\(J\) model is exactly solvable. The ground state is a positive superposition of \(SU(N)\) flavor-singlet covering states. We then speculate on the possibility of exotic phases of matter that can arise from such a mechanism in the presence of dilute but finite hole concentration.
We note that the exact solvability of the single hole problem in a subgraph tree is due to the existence of an extensive number of local \(SU(N)\) symmetries -- in some sense, this is Hilbert space fragmentation [33; 34; 35] from a restricted hole motion.
\(SU(N)\)_flavor-singlet in an \((N+1)\)-site graph_. We start by solving a single hole problem in the \(U=\infty\)\(SU(N)\) Hubbard models (\(N\geq 2\)) on an \((N+1)\)-site graph that satisfies the connectivity condition (to be defined below). We assume that the hopping matrix elements are positive but otherwise arbitrary \(t_{ij}>0\):
\[\hat{H}=-\sum_{\langle i,j\rangle}\sum_{a=1}^{N}t_{ij}\left(c_{i,a }^{\dagger}c_{j,a}+\text{H.c.}\right)+\hat{V}(\{n_{i}\})+[U=\infty]. \tag{1}\]
Here, \(a=1,2,...,N\) is a flavor index of a fermion in the fundamental representation, \(i=0,1,2,...,N\) is a site index, and \(\langle i,j\rangle\) is an edge of the graph. \(\hat{V}(\{n_{i}\})\) describes arbitrary density-density interactions (\(n_{i}\equiv\sum_{a=1}^{N}c_{i,a}^{\dagger}c_{i,a}\)):
\[V(\{n_{i}\})=\sum_{i}\epsilon_{i}n_{i}+\sum_{i,j}V_{ij}n_{i}n_{j}+\cdots. \tag{2}\]
The last \(U=\infty\) condition requires that each site be occupied by at most one fermion.
_Lemma_: The ground state of the Hamiltonian (1) in the presence of a single hole on an \((N+1)\)-site graph that satisfies the connectivity condition is unique and is an \(SU(N)\) flavor-singlet state.
In order to prove the _Lemma_, it is convenient to work in a particular many-body basis in a single hole sector. In doing so, we restrict ourselves to a _flavor-balanced subspace_, where each flavor \(a=1,2,...,N\) index appears exactly once. (It can be shown straightforwardly that such a flavor-balanced subspace contains a state in every irreducible representation (irrep) that appears in \(N\)-direct products of the fundamental representation \(\mathbf{N}\), \(\mathbf{N}^{N}\equiv\mathbf{N}\times\mathbf{N}\times\cdots\times\mathbf{N}\). Since any state within the same irrep can be reached by repeated applications of raising/lowering operators, it suffices to restrict ourselves in the flavor-balanced subspace.) For example,
\[|\cdot,1,2,...N\rangle \equiv c_{1,1}^{\dagger}c_{2,2}^{\dagger}\cdots c_{N,N}^{\dagger }\left|\emptyset\right\rangle\equiv|0,1,...,N\rangle\] \[\equiv c_{0,0}c_{0,0}^{\dagger}c_{1,1}^{\dagger}c_{2,2}^{\dagger} \cdots c_{N,N}^{\dagger}\left|\emptyset\right\rangle \tag{3}\]
is a flavor-balanced state, where \(\left|\emptyset\right\rangle\) is the vacuum state with no fermions and \(0\) in the third expression denotes that the site \(i=0\) is unoccupied. This can be re-expressed as the final expression by creating a ghost fermion with flavor index \(a=0\) at the hole site and annihilating it. This is a useful notation that will be used throughout the paper. From this state, we form a complete orthonormal basis in a flavor-balanced subspace by applying a permutation of \((N+1)\) objects (a hole and \(N\) fermions), \(\sigma\in S_{N+1}\), where \(S_{N+1}\) is the symmetric group of \((N+1)\) objects:
\[|\sigma\rangle \equiv|\sigma(0),\sigma(1),...,\sigma(N)\rangle\equiv(-1)^{i} \text{sgn}(\sigma)c_{0,\sigma(0)}^{\dagger}c_{1,\sigma(1)}^{\dagger}\times\] \[\quad\cdots\times c_{i-1,\sigma(i-1)}^{\dagger}c_{i+1,\sigma(i+1)} ^{\dagger}\cdots c_{N,\sigma(N)}^{\dagger}\left|\emptyset\right\rangle\] \[\equiv\text{sgn}(\sigma)c_{\sigma^{-1}(0),0}c_{0,\sigma(0)}^{ \dagger}c_{1,\sigma(1)}^{\dagger}\cdots c_{N,\sigma(N)}^{\dagger}\left| \emptyset\right\rangle, \tag{4}\]
where we assumed that the \(i\)th site is occupied by a hole, i.e., \(\sigma(i)=0\) and again we introduced a ghost fermion with flavor \(a=0\) in the last expression for convenience.
Among the states in the flavor-balanced subspace are the completely flavor-antisymmetric, \(SU(N)\) flavor-singlet (FS) states with the hole at site \(i\),
\[|i,N\text{-FS}\rangle\equiv\frac{1}{\sqrt{N!}}\sum_{\sigma\in S_{N+1},\sigma^ {-1}(0)=i}|\sigma(0),\sigma(1),...,\sigma(N)\rangle\,. \tag{5}\]
_Connectivity condition_: The graph is said to satisfy the connectivity condition if all the basis states in Eq. (4) can be reached from one another by repeated applications of the hopping operators in Eq. (1), \(\hat{T}_{ij}\equiv-t_{ij}\sum_{a=1}^{N}\left(c_{i,a}^{\dagger}c_{j,a}+\text{H.c.}\right).\) For example, Fig. 1(c) is an example of a graph that satisfies the connectivity condition: starting from the state \(\left|0,1,2,...N\right\rangle\), moving a hole around the triangular loop induces a transposition (1 2), and moving it around the largest, length-\((N+1)\) loop induces the \((N+1)\)-cycle (0 1 2... \(N\)). These two permutations generate the \(S_{N+1}\) group.
_Proof of the Lemma_: Any two basis states in the flavor-balanced subspace, \(\left|\sigma\right\rangle\) and \(\left|\tau\right\rangle\), have a nonzero hopping matrix element only when they differ by one transposition involving a hole: \(\sigma^{-1}(0)=\tau^{-1}(a)\) and \(\tau^{-1}(0)=\sigma^{-1}(a)\) for some flavor \(a\), and \(\sigma^{-1}(k)=\tau^{-1}(k)\) for \(k\neq 0,a\). Let \(\sigma^{-1}(0)=i\) and \(\tau^{-1}(0)=j\). Then, any nonzero off-diagonal matrix element is negative:
\[\left\langle\sigma\right|\hat{T}_{ij}\left|\tau\right\rangle=-t_{ij}<0. \tag{6}\]
Also, the interaction term \(V(\{n_{i}\})\) only contributes to diagonal matrix elements. Therefore, the Perron-Frobenius theorem ensures that there exists a _unique_ ground state \(\left|\Psi_{0}\right\rangle\) which is a positive superposition of all the basis states (\(A_{\sigma}>0\)):
\[\left|\Psi_{0}\right\rangle=\sum_{\sigma\in S_{N+1}}A_{\sigma}\left|\sigma \right\rangle. \tag{7}\]
Since this state has a nonzero overlap with a flavor-singlet state \(\left|i,N\text{-FS}\right\rangle\), it must be a flavor-singlet state (if it were instead a superposition of multiple irreps of \(SU(N)\), then it is possible to construct degenerate ground states, in contradiction to the uniqueness of the ground state). Therefore, it is possible to rewrite Eq. (7) as a positive superposition (\(A(i)>0\)) of \(\left|i,N\text{-FS}\right\rangle\):
\[\left|\Psi_{0}\right\rangle=\sum_{i}A(i)\left|i,N\text{-FS}\right\rangle. \tag{8}\]
See Fig. 1 (d) for the illustration of such a state. \(\square\)
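The Lemma can be checked numerically on the tetrahedron of Fig. 1(d) (\(N=3\), uniform \(t_{ij}=1\), \(\hat{V}=0\)) by diagonalizing the Hamiltonian in the sign-fixed flavor-balanced basis of Eq. (4), in which every nonzero off-diagonal element equals \(-t\). The short script below is an illustrative verification of ours, not part of the original work.

```python
import numpy as np
from itertools import permutations

N = 3                                            # SU(3); tetrahedron = complete graph on N + 1 = 4 sites
sites = list(range(N + 1))
edges = [(i, j) for i in sites for j in sites if i < j]   # all edges, uniform t = 1, V = 0

# Flavor-balanced basis of Eq. (4): sigma(i) = flavor occupying site i, with 0 denoting the hole.
basis = list(permutations(range(N + 1)))
index = {p: k for k, p in enumerate(basis)}

H = np.zeros((len(basis), len(basis)))
for k, p in enumerate(basis):
    hole = p.index(0)
    for i, j in edges:
        if hole in (i, j):
            q = list(p)
            q[i], q[j] = q[j], q[i]              # hop the hole along edge (i, j)
            H[k, index[tuple(q)]] = -1.0         # Eq. (6): every nonzero off-diagonal element is -t

vals, vecs = np.linalg.eigh(H)
ground = vecs[:, 0] * np.sign(vecs[:, 0].sum())
print("ground-state energy:", vals[0], " unique:", not np.isclose(vals[0], vals[1]))
print("all amplitudes positive (Perron-Frobenius):", bool(np.all(ground > 0)))
```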
\(SU(N)\)_flavor-singlets in a subgraph tree_. It is now straightforward to generalize the previous result to a "subgraph tree" constructed as follows. Starting from an \((N+1)\)-site subgraph that satisfies the connectivity condition, attach other \((N+1)\)-site subgraphs to some (or all) of the vertices of the initial subgraph, in such a way that the original and newly added subgraphs only share one vertex. Also, newly added subgraphs must not share any vertex. This generates a depth-\(1\) tree of \((N+1)\)-site subgraphs. Continuing this \(n\) times generates a depth-\(n\) subgraph tree, which has the property that the only cycles (a loop of length \(l\geq 3\) in which only the first and the last vertices are equal) of the graph are contained within each subgraph. Let \(N_{\text{SG}}\) be the number of subgraphs constituting such a subgraph tree. The number of sites in such a graph is \(NN_{\text{SG}}+1\). Figures 2(a-b) are examples of such graphs. We will consider the Hamiltonian (1) on such a graph in the presence of a single hole.
The advantage of such a subgraph tree is that there is an \(SU(N)\) symmetry associated with each subgraph, as can be seen as follows (Ref. [19] deals with the special case \(N=2\)). First, a many-body basis can be constructed by locating the site of the hole \(i\), and then specifying the flavor configuration on the rest of the sites. Once the hole location is specified, it is easy to see that there is a unique \(N\)-mer covering of the lattice (see Fig. 2(c) for the illustration of such a covering). In any step in which the hole hops to a neighboring site, one \(N\)-mer is moved, but in such a way that it remains inside the initial \((N+1)\)-site subgraph in which it was contained. Thus, we can label the \(N\)-mers uniquely by a subgraph index \(s=1,...,N_{\text{SG}}\), and the total flavor \(SU(N)\) symmetry is preserved for the \(N\) fermions contained in each \(s\)\(N\)-mer. Therefore, the total symmetry group is \(SU(N)^{N_{\text{SG}}}=SU(N)\otimes SU(N)\otimes...\otimes SU(N)\).
Thanks to such \(SU(N)^{N_{\text{SG}}}\) symmetry, it is enough to consider a subspace that is flavor-balanced _in each \(s\)\(N\)_-mer. Any other states in the Hilbert space can be reached by repeated applications of raising and lowering operators on each \(s\)\(N\)-mer. (See the supplementary material for the expression of those raising/lowering operators.) We now construct a many-body basis restricted to such a flavor-balanced subspace analogously to Eqs. (3) and (4). We start by occupying a hole at a particular location (call it \(i=0\)). Then, for each \(s\)\(N\)-mer defined accordingly, we label the sites contained in it by \(i=(s-1)\cdot N+1,(s-1)\cdot N+2,\cdots,s\cdot N\) (see Fig. 2(a) for the illustration of such a site numbering scheme along with subgraph indices \(s\)) and occupy them with fermions with flavors \(a=1,\cdots,N\), respectively. This defines one of the basis states:
\[\left|0,\left(1,...,N\right),\left(1,...,N\right),...,\left(1,..., N\right)\right\rangle\] \[\equiv c_{0,0}c_{0,0}^{\dagger}\left(c_{1,1}^{\dagger}c_{2,2}^{\dagger} \cdots c_{N,N}^{\dagger}\right)\left(c_{N+1,1}^{\dagger}\cdots c_{2N,N}^{ \dagger}\right)\] \[\times\cdots\times\left(c_{N\cdot(N_{\text{SG}}-1)+1,1}^{\dagger} \cdots c_{N\cdot N_{\text{SG}},N}^{\dagger}\right)\left|\emptyset\right\rangle, \tag{9}\]
where again, the ghost flavor index \(a=0\) is introduced for convenience in \(c_{0,0}\). Using the fact that fermions in different \(N\)-mers do not exchange one another due to a special geometry, we might as well treat them as distinguishable and rename a flavor index \(a\) in \(s\)\(N\)-mer to be \((s-1)N+a\). Hence, the basis state (9) can be denoted by
\[\left|0,1,...,N,N+1...,...,N\cdot N_{\text{SG}}\right\rangle\] \[\equiv c_{0,0}c_{0,0}^{\dagger}\left(c_{1,1}^{\dagger}c_{2,2}^{\dagger} \cdots c_{N,N}^{\dagger}\right)\left(c_{N+1,N+1}^{\dagger}\cdots c_{2N,2N}^{ \dagger}\right)\] \[\times\cdots\times\left(c_{N(N_{\text{SG}}-1)+1,N(N_{\text{SG}} -1)+1}^{\dagger}\cdots c_{NN_{\text{SG}},NN_{\text{SG}}}^{\dagger}\right) \left|\emptyset\right\rangle. \tag{10}\]
From this state, any other basis state that is flavor-balanced for each \(s\)\(N\)-mer can be reached by repeated applications of hopping operators \(\hat{T}_{ij}\). There are \((NN_{\text{SG}}+1)(N!)^{N_{\text{SG}}}\) different such (orthonormal) basis
Figure 2: (a-b) Examples of subgraph tree. In (a), sites are numbered in the way specified in the main text above Eq. (9). (c) The ground state in a single hole sector of the \(U=\infty\)\(SU(3)\) Hubbard and certain \(t\)-\(J\) models is a positive (\(A(i)>0\)) superposition of the \(SU(3)\) flavor-singlet covering states.
states. Each basis state has a permutation operator \(\sigma\in S_{NN_{\text{SG}}+1}\) associated with it, given by the shuffling of the flavors from the initial configuration Eq. (10) (We emphasize that flavor indices are renamed to have values \(a=0,1,...,NN_{\text{SG}}\)). That is, if site \(i\) is occupied by the flavor \(a\), then \(\sigma(i)\equiv a\). Let \(P\) be the collection of all such permutations. Now we define the basis \(\{\left|\sigma\right\rangle:\;\sigma\in P\}\) labelled by \(\sigma\) with a particular sign structure analogous to Eq. (4):
\[\left|\sigma\right\rangle\equiv \left|\sigma(0),...,\sigma(NN_{\text{SG}})\right\rangle\equiv(-1) ^{i}\text{sgn}(\sigma)c_{0,\sigma(0)}^{\dagger}c_{1,\sigma(1)}^{\dagger}\] \[\times\cdots\times c_{i-1,\sigma(i-1)}^{\dagger}c_{i+1,\sigma(i +1)}^{\dagger}\cdots c_{NN_{\text{SG}},\sigma(NN_{\text{SG}})}^{\dagger} \left|\emptyset\right\rangle\] \[= \text{sgn}(\sigma)c_{\sigma^{-1}(0),0}c_{0,\sigma(0)}^{\dagger} c_{1,\sigma(1)}^{\dagger}\cdots c_{NN_{\text{SG}},\sigma(NN_{\text{SG}})}^{ \dagger}\left|\emptyset\right\rangle, \tag{11}\]
where we again assumed that the \(i\)th site is occupied by a hole, i.e., \(\sigma(i)=0\). The sign structure again allows us to write the \(SU(N)\) flavor-singlet covering (\(N\)-FSC) state, the state with an \(SU(N)\) flavor-singlet on every \(N\)-mer, as a uniform superposition of the basis states:
\[\left|i,N\text{-FSC}\right\rangle\equiv\left|i,N\text{-FS}_{1}, \cdots,N\text{-FS}_{N_{\text{SG}}}\right\rangle\] \[=\frac{1}{\sqrt{(N!)^{N_{\text{SG}}}}}\sum_{\sigma\in P\atop\sigma (i)=0}\left|\sigma(0),\sigma(1),...,\sigma(NN_{\text{SG}})\right\rangle. \tag{12}\]
The following Theorem is the main result of our paper.
_Theorem_: The ground state of the Hamiltonian (1) in the presence of a single hole on a "subgraph tree" is unique and is a positive (\(A(i)>0\)) superposition of the \(SU(N)\) flavor-singlet covering (\(N\)-FSC) states [36]:
\[\left|\Psi_{0}\right\rangle=\sum_{i}A(i)\left|i,N\text{-FSC}\right\rangle. \tag{13}\]
(See Fig. 2 (c) for the illustration of this state)
_Proof of the Theorem_: It is straightforward to show that any nonzero off-diagonal element of the Hamiltonian matrix is negative, \(\left\langle\sigma\right|\hat{T}_{ij}\left|\tau\right\rangle=-t_{ij}<0\), as in Eq. (6). Also, since any basis state labelled by \(\sigma\) can be reached one another by a repeated applications of \(\hat{T}_{ij}\), one concludes from the Perron-Frobenius theorem that the ground state is unique and is a positive (\(A_{\sigma}>0\)) superposition of all the basis states:
\[\left|\Psi_{0}\right\rangle=\sum_{\sigma\in P}A_{\sigma}\left|\sigma\right\rangle. \tag{14}\]
As in the proof of the Lemma, this has a positive overlap with a flavor-singlet covering state \(\left|i,N\text{-FSC}\right\rangle\), and hence \(N\) fermions in every \(N\)-mer must be a flavor-singlet. Hence, \(\left|\Psi_{0}\right\rangle\) can be rewritten as a superposition of flavor-singlet covering states as in Eq. (13). \(\square\)
\(SU(N)\)\(t\)-\(J\)_model_. Now we generalize the previous results to the \(SU(N)\)\(t\)-\(J\) model. In the presence of a finite but large \(U\) (\(\gg t\)) term, \(\frac{U}{2}\sum_{i}\hat{n}_{i}(\hat{n}_{i}-1)\), one can obtain the \(SU(N)\)\(t\)-\(J\) model from the \(SU(N)\) Hubbard model by projecting out the states with multiply occupied sites [37; 38; 39]:
\[\hat{H}_{t\text{-}J} =\hat{H}+\sum_{\left\langle i,j\right\rangle}J_{ij}\left(\hat{ \mathbf{\lambda}}_{i}\cdot\hat{\mathbf{\lambda}}_{j}-\frac{N-1}{2N}\hat{n}_{i}\hat{n}_ {j}\right)\] \[-\sum_{\left\langle i,j,k\right\rangle,1\leq a<b\leq N}K_{ijk} \hat{\Delta}_{jk}^{ab\dagger}\hat{\Delta}_{ij}^{ab}+O\left(\frac{t^{3}}{U^{2}}\right)\] \[\equiv\hat{H}+\sum_{\left\langle i,j\right\rangle}\hat{J}_{ij}+ \sum_{\left\langle i,j,k\right\rangle}\hat{K}_{ijk}+O\left(\frac{t^{3}}{U^{2}} \right). \tag{15}\]
Here \(\hat{H}\) is the Hamiltonian for the \(U=\infty\) Hubbard model (1), \(J_{ij}=4t_{ij}^{2}/U\) and \(K_{ijk}=2t_{ij}t_{jk}/U\), \(\left\langle i,j,k\right\rangle\) denotes the triplet of sites such that \(j\) is a nearest neighbor to \(i\) and \(k\), and \(\hat{\Delta}_{ij}^{ab}\equiv\frac{1}{\sqrt{2}}(c_{i,a}c_{j,b}-c_{i,b}c_{j,a})\) is the annihilation operator of a flavor-antisymmetric state on a bond \(\left\langle i,j\right\rangle\). \(\hat{\mathbf{\lambda}}_{i}=(\hat{\lambda}_{i}^{(1)},...,\hat{\lambda}_{i}^{(N^{2}-1)})\) denotes \((N^{2}-1)\) generators of the \(SU(N)\) group at site \(i\) with the normalization \(\text{Tr}(\lambda_{i}^{(r)}\lambda_{j}^{(r^{\prime})})=\frac{1}{2}\delta_{r,r^{\prime}}\delta_{i,j}\)[40]. The Heisenberg operator can be rewritten in terms of a flavor-permutation operator \(\hat{P}_{ij}\) as \(J_{ij}(\hat{\mathbf{\lambda}}_{i}\cdot\hat{\mathbf{\lambda}}_{j}-\frac{N-1}{2N}\hat{n}_{i}\hat{n}_{j})=\frac{1}{2}J_{ij}(\hat{P}_{ij}-\hat{\mathbbm{1}})\hat{n}_{i}\hat{n}_{j}\). In the last line, we defined \(\hat{J}_{ij}\equiv J_{ij}(\hat{\mathbf{\lambda}}_{i}\cdot\hat{\mathbf{\lambda}}_{j}-\frac{N-1}{2N}\hat{n}_{i}\hat{n}_{j})\) and \(\hat{K}_{ijk}\equiv K_{ijk}\sum_{1\leq a<b\leq N}\hat{\Delta}_{jk}^{ab\dagger}\hat{\Delta}_{ij}^{ab}\). Note that the \(\hat{J}\) and \(\hat{K}\) terms in Eq. (15) lower the energy only when they act on a flavor-antisymmetric bond, enhancing the tendency towards flavor-singlet formation. The following two Corollaries summarize this observation.
_Corollary 1_: Let us study \(\hat{H}_{t\text{-}J}\) on an \((N+1)\)-site graph that satisfies the connectivity condition. \(J_{ij}\geq 0\) and \(K_{ijk}\geq 0\) do not have to be related to one another and can be arbitrary independent parameters. Then, the ground state of \(\hat{H}_{t\text{-}J}\) in the presence of a single hole is unique and is a positive superposition of flavor-singlet states (8) as in the Lemma.
_Corollary 2_: For \(\hat{H}_{t\text{-}J}\) defined on a subgraph tree, let \(J_{ij}=J_{s}\geq 0\) be uniform within each subgraph and connect any two sites within it. Also, let \(K_{ijk}\geq 0\) terms act only on three sites \(\left\langle i,j,k\right\rangle\) fully contained within a subgraph. Again, \(J_{ij}\) and \(K_{ijk}\) can be independent parameters, except for the above constraints. Then, the ground state of \(\hat{H}_{t\text{-}J}\) in the presence of a single hole is unique and is a positive superposition of flavor-singlet covering states (13) as in the Theorem.
_Proof of the Corollary 1_: For a single hole problem in an \((N+1)\)-site graph, the total \(SU(N)\) symmetry is intact even in the presence of \(\hat{J}\) and \(\hat{K}\) terms, and one can work in the flavor-balanced basis (4). As in the proof of the Lemma, it is sufficient to show that all the nonzero off-diagonal elements are negative. In particular, for \(\sigma\neq\tau\), \(\left\langle\sigma\right|\hat{J}_{ij}\left|\tau\right\rangle\) is nonzero only when \(\sigma\) and \(\tau\) differ by one transposition between occupied sites: \(\sigma(i)=\tau(j)\neq 0\), \(\sigma(j)=\tau(i)\neq 0\) and \(\sigma(k)=\tau(k)\) for \(k\neq i,j\). In such a
case, one obtains
\[\left\langle\sigma\right|\hat{J}_{ij}\left|\tau\right\rangle=-J_{ij}/2<0. \tag{16}\]
Similarly, any nonzero off-diagonal element of \(\hat{K}_{ijk}\) is negative
\[\left\langle\sigma\right|\hat{K}_{ijk}\left|\tau\right\rangle=-K_{ijk}/2<0. \tag{17}\]
This completes the proof. \(\square\)
_Proof of the Corollary 2_: For a subgraph tree, consider first the case when \(\hat{J}=0\). When \(K_{ijk}\) are nonzero only for triplets of sites \(\left\langle i,j,k\right\rangle\) fully contained in a subgraph, \(SU(N)^{N_{\rm SG}}\) symmetry is intact. Thus, one can still work in the flavor-balanced basis (11) and the same proof as in the Theorem can be applied to prove Corollary 2.
When \(\hat{J}\neq 0\), the \(SU(N)^{N_{\rm SG}}\) symmetry is lost. However, for the special case where \(J_{ij}=J_{s}\) is uniform within each subgraph and connects any two sites within it, one can rewrite the Heisenberg term as (density-density interactions in \(\hat{J}\) can be absorbed in \(\hat{V}\) term in Eq. (1)):
\[\sum_{\left\langle i,j\right\rangle}J_{ij}\hat{\mathbf{\lambda}}_{i}\cdot\hat{\mathbf{\lambda}}_{j}=\sum_{s=1}^{N_{\rm SG}}\frac{J_{s}}{2}\left[\left(\sum_{i=1}^{N+1}\hat{\mathbf{\lambda}}_{\left(s,i\right)}\right)^{2}-\sum_{i=1}^{N+1}\left(\hat{\mathbf{\lambda}}_{\left(s,i\right)}\right)^{2}\right]. \tag{18}\]
Here \(\left(s,i\right)\) denotes the site \(i=1,2,...,N+1\) in a subgraph \(s\). This Heisenberg operator takes the lowest possible eigenvalue for the flavor-singlet covering states (12):
\[\sum_{\left\langle k,l\right\rangle}J_{kl}\hat{\mathbf{\lambda}}_{k}\cdot\hat{\mathbf{\lambda}}_{l}\left|i,N\text{-FSC}\right\rangle=\left(-\frac{N^{2}-1}{4}\sum_{s=1}^{N_{\rm SG}}J_{s}\right)\left|i,N\text{-FSC}\right\rangle. \tag{19}\]
This means that the ground state of \(\hat{H}_{t\text{-}J}\) is still in the flavor-singlet covering subspace spanned by the states in Eq. (12), and is still of the form of Eq. (13) with positive \(A(i)>0\). \(\square\)
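The eigenvalue in Eq. (19) can also be verified symbolically from the permutation-operator identity below Eq. (15): on a flavor-singlet every occupied pair is flavor-antisymmetric, so \(\hat{P}_{ij}=-1\), and only the \(N(N-1)/2\) fully occupied bonds of each subgraph contribute. A short check (ours, not from the paper) is:

```python
from sympy import symbols, Rational, simplify

N = symbols('N', positive=True)
# On occupied sites, lambda_i . lambda_j = (1/2) P_ij - 1/(2N); a flavor-antisymmetric pair has P_ij = -1.
pair_value = -Rational(1, 2) - Rational(1, 2) / N
# In one (N+1)-site subgraph only the N(N-1)/2 bonds avoiding the hole contribute.
total = N * (N - 1) / 2 * pair_value
print(simplify(total + (N**2 - 1) / 4))   # 0, i.e. total = -(N^2 - 1)/4 per subgraph, as in Eq. (19)
```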
The results of Ref. [19] on the \(SU(2)\)\(t\)-\(J\) model on a triangular cactus is the \(N=2\) case of Corollary 2.
_Discussion._ Our result demonstrates the fundamental importance of the sign of the hopping matrix elements \(t_{ij}\) on a kinetic magnetism in the \(U=\infty\) limit, which in turn, manifests as a particle-hole asymmetry in the magnetic phase digram. More precisely, in the usual \(SU(2)\) Hubbard model, the particle-hole transformation \(c_{i,\sigma}\to c_{i,\sigma}^{\dagger}\), with \(\sigma=\uparrow,\downarrow\), maps the single doublon problem to the single hole problem with the opposite sign of the hopping matrix element \(t_{ij}\)[4]. This implies that for a bipartite lattice--where the sign of \(t_{ij}\) can be changed by a gauge transformation--the phase diagram is particle-hole symmetric around half-filling. On the other hand, for a non-bipartite lattice the phase diagram exhibits a particle-hole asymmetry. For example, the single hole dynamics in the triangular lattice \(U=\infty\) Hubbard model leads to a \(120^{\circ}\) antiferromagnetic ordering [16; 17] whereas the single doublon problem satisfies the Nagaoka's theorem and leads to a fully polarized ferromagnet (except for one singlet for a doublon). Performing such a particle-hole transformation to the \(SU(N)\) Hubbard model, one maps a single hole problem at \(1/N\) filling to a single \(N\)-on (\(N\) fermions at a site) problem at \((N-1)/N\) filling with the opposite sign of \(t_{ij}\). Also, since \((N-1)\) fermions at the same site must be completely flavor-antisymmetric, such \(N-1\) electrons form a complex conjugate representation \(\bar{\mathbf{N}}\) of the fundamental representation. Hence we see that for the usual sign of the hopping, while the Nagaoka state appears for a single fermion doping of the \(\frac{N-1}{N}\) filled Mott insulator, a single hole dynamics at \(\frac{1}{N}\) filling induces antiferromagnetic/RVB correlations.
We note that in a more realistic non-bipartite lattice (e.g. a triangular or pyrochlore lattice), it is likely that the hopping operators \(\hat{T}\) and exchange interactions \(\hat{J}\) (or singlet hopping terms \(\hat{K}\)) favor different local magnetic correlations. In such a case, the hole can only delocalize in a finite number of sites due to the competition with other local magnetic tendencies, leading to the formation of an antiferromagnetic/RVB polaron.
Going from such a single RVB/antiferromagnetic polaron problem to a multi-polaron (or multi-hole) problem requires yet another technical development, but we can speculate on possible outcomes (apart from a trivial phase separation scenario). First, it is possible to have a broken-\(SU(N)\)-symmetry phase with a long-range flavor-antiferromagnetic order when flavor-singlets are supported over a sufficiently long distance [16; 17; 41; 18]. If \(SU(N)\) flavor-singlets are supported only on a short enough distance, one can have a flavor-disordered phase with a topological order [42; 43; 44; 45; 46]. The flavor-disordered state with a broken translation symmetry corresponds to various topologically ordered flavor-disordered crystalline phases [47]. Finally, it is possible to have various exotic liquid phases with a topological character such as a \(\mathbb{Z}_{N}\) topologically ordered Fermi liquid (FL* phase) [48] or a high-temperature superconductivity.
## Acknowledgement
K-S.K. acknowledges the hospitality of Massachusetts Institute of Technology, where most of the work is done. K-S.K. thanks Samuel Alipour-fard for teaching him the representation theory of \(SU(N)\) group. K-S.K. appreciates helpful discussions with Zhaoyu Han. K-S.K. was supported by the Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering, under contract DE-AC02-76SF00515. H.K. was supported by JSPS KAKENHI Grants No. JP18K03445, No. JP23H01093, No. 23H01086, and MEXT KAKENHI Grant-in-Aid for Transformative Research Areas A "Extreme Universe" (KAKENHI Grant No. JP21H05191). |
2302.13386 | NBA2Vec: Dense feature representations of NBA players | Understanding a player's performance in a basketball game requires an
evaluation of the player in the context of their teammates and the opposing
lineup. Here, we present NBA2Vec, a neural network model based on Word2Vec
which extracts dense feature representations of each player by predicting play
outcomes without the use of hand-crafted heuristics or aggregate statistical
measures. Specifically, our model aimed to predict the outcome of a possession
given both the offensive and defensive players on the court. By training on
over 3.5 million plays involving 1551 distinct players, our model was able to
achieve a 0.3 K-L divergence with respect to the empirical play-by-play
distribution. The resulting embedding space is consistent with general
classifications of player position and style, and the embedding dimensions
correlated at a significant level with traditional box score metrics. Finally,
we demonstrate that NBA2Vec accurately predicts the outcomes to various 2017
NBA Playoffs series, and shows potential in determining optimal lineup
match-ups. Future applications of NBA2Vec embeddings to characterize players'
style may revolutionize predictive models for player acquisition and coaching
decisions that maximize team success. | Webster Guan, Nauman Javed, Peter Lu | 2023-02-26T19:05:57Z | http://arxiv.org/abs/2302.13386v1 | # NBA2Vec: Dense Feature Representations of NBA Players
###### Abstract
Understanding a player's performance in a basketball game requires an evaluation of the player in the context of their teammates and the opposing lineup. Here, we present _NBA2Vec_, a neural network model based on _Word2Vec_[1] which extracts dense feature representations of each player by predicting play outcomes without the use of hand-crafted heuristics or aggregate statistical measures. Specifically, our model aimed to predict the outcome of a possession given both the offensive and defensive players on the court. By training on over 3.5 million plays involving 1551 distinct players, our model was able to achieve a 0.3 K-L divergence with respect to the empirical play-by-play distribution. The resulting embedding space is consistent with general classifications of player position and style, and the embedding dimensions correlated at a significant level with traditional box score metrics. Finally, we demonstrate that NBA2Vec accurately predicts the outcomes to various 2017 NBA Playoffs series, and shows potential in determining optimal lineup match-ups. Future applications of NBA2Vec embeddings to characterize players' style may revolutionize predictive models for player acquisition and coaching decisions that maximize team success.
## I Introduction
Successful coaches construct optimal lineups for given situations in basketball games based on a deep understanding of each player's play-style, strengths, and weaknesses in the context of all other players on the court. Studying the distribution of contexts and their outcomes in which a player takes part may provide insights into aspects of player's performance and play style that are not otherwise reflected in traditional basketball statistics. While much of basketball analytics relies on the use of hand-crafted advanced statistics (e.g. Wins Above Replacement and Offensive/Defensive rating) and aggregate statistics (e.g. FG%, assists), they tend to not capture these contextual influences and effects not present in box scores. Models capable of characterizing players based on these contextual factors would offer greater insight into individual player performance and play-style, and may shed light on how to construct optimal lineups for specific situations. Constructing such frameworks may be possible given the wealth of play-by-play game data and recent advances in machine learning and natural language processing (NLP) algorithms.
In particular, the problem of generating accurate representations of players in sports analytics is analogous to the problem of word embeddings in NLP. Word embedding models aim to create real-valued vector embeddings of words that encode complex semantic structure. _Word2Vec_ is a class of word embedding models that extract useful features of words given the sentences, known as the "context," in which the words are commonly used [1]. This allows Word2Vec to be trained in an unsupervised way on a large corpus of written works and extract meaningful relationships. Once trained, the word embeddings can then be applied to a variety of other tasks as a pretrained initial transformation in manner analogous to transfer learning.
In the training phase, the way in which the context of each word is used can be different; in particular, Word2Vec uses either a continuous bag-of-words (CBOW) model, or a skip-gram model. The skip-gram method (Figure 1(a)) takes the target word as input to a neural network's hidden layer(s) and attempts to predict the words that immediately surround the target word in a given sentence. On the other hand, the CBOW method (Figure 1(b)) takes the words around a target word as input, predicting the target word as output. The result of training these models is a dense vectorial word representation that captures the meaning of each word. For example, Word2Vec finds more similarity between the words "king" and "queen" than between "king" and "vindication." The ability of word embeddings to accurately capture the relationships and analogies among words is shown by Word2Vec arithmetic: for instance, _Paris \(-\) France \(+\) Italy \(=\) Rome_.
The success of word embeddings in NLP has inspired their recent application in sports analytics to characterize batter and pitcher performance and play style in baseball [2]. In that study, a neural network was trained using pitcher-batter matchups as inputs and the outcome of each at-bat (e.g. single, home run, flyout to right field) as outputs to create the player embeddings. The author was able to successfully visualize macroscopic differences in embedding clusters (e.g. right-handed vs. left-handed vs. switch hitters, power hitters vs. on-base hitters) and model previously unseen at-bat matchups, suggesting that the word embedding concept may be feasible and promising for creating player representations.
In this study, we applied this concept to extract representations of different NBA players by producing an embedding of every player, which we term _NBA2Vec_. Similar to [2], the embedding for each player was generated by training a neural network aimed at predicting the outcome of each possession given the ten players on the court. Unlike in [2], which takes advantage of the mostly one-on-one nature of baseball dynamics, we used all players on the court to ensure accurate modeling of holistic relationships and dynamics between players on the same and different teams. This increased the complexity of the network, and due to the
n-body nature of the problem, required the network to exhibit permutation invariance. Unlike previous attempts to generate NBA player representations [3] using purely high level, aggregate statistics and hand-picked features (e.g. number of shots taken in the paint, FG%, FT%, assists, etc.), our embedding approach learns directly from raw play-by-play data by automatically generating rich features which account for the "context" that affects a player's style and statistics. The latent features encoded in these player embeddings can shed light on both the play style and effectiveness of different types of players, and can be used as inputs for further downstream processing and prediction tasks.
## II Methods
### _Data Sets and Preprocessing_
We used play-by-play and players-on-court data provided by the NBA, which featured over 9 million distinct plays, with 1551 distinct players taking the court in these plays. To create the input to the network, we denoted each player with an index from 0 to 1550. For the outputs to the network, we needed to encode the possible outcomes of each play. In order to encourage learning, we only considered key outcomes, omitting rebounds and defensive plays. We chose to use 23 distinct outcomes, some examples of which are included in Table 1. The provided raw play-by-play data was more specific on outcomes of plays, but we grouped many of these plays (e.g. "driving layup shot, dunk shot, reverse dunk shot, hook shot" were all considered "close-up shots") together for simplicity. This preprocessing resulted in 4.5 million plays, of which we used 3.7 million for a training set and the remainder as a validation set. We used the Pandas library to preprocess the data [4].
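A preprocessing step of this kind might look as follows; the column names, file name, and outcome groupings are purely illustrative, since the raw play-by-play schema is not reproduced here.

```python
import pandas as pd

plays = pd.read_csv("play_by_play.csv")   # hypothetical export of the NBA play-by-play data

# Map each of the 1551 distinct players to an integer index in [0, 1550].
lineup_cols = [f"off_player_{k}" for k in range(5)] + [f"def_player_{k}" for k in range(5)]
player_ids = pd.unique(plays[lineup_cols].values.ravel())
player_index = {p: i for i, p in enumerate(player_ids)}

# Group fine-grained raw outcomes into the 23 coarser classes (cf. Table 1).
outcome_groups = {"driving layup shot": "close-up shot", "dunk shot": "close-up shot",
                  "reverse dunk shot": "close-up shot", "hook shot": "close-up shot"}
plays["outcome_class"] = plays["raw_outcome"].replace(outcome_groups)
```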
Fig. 1: (a) Skip-gram word embedding model. The embedding is extracted from the hidden layer after training to a target of \(n\) context words. \(v\) represents the vocabulary size, or length of the word vectors, while \(h\) represents the embedding length. (b) CBOW model. A given word’s embedding is computed by averaging the hidden layer representations of all contexts in which it appears.
### _NBA2Vec: Player Embedding Model_
To train informative embeddings for players in the NBA, we created a neural network architecture that predicts the distribution of play outcomes given a particular offensive and defensive lineup (Figure 2). For each play, we first embed the 10 players on the court using an 8 dimensional shared player embedding. We then separately average the 5 offensive and 5 defensive player embedding vectors. These two mean player embeddings (i.e. an offensive and a defensive lineup embedding) are concatenated and fed through one additional hidden layer of size 128 with a ReLU activation before outputting 23 outcome scores. Applying a softmax activation to the scores produces probabilities that we interpret as the distribution of play outcomes (Table 1). The entire network is trained end-to-end with a cross entropy loss function that stochastically minimizes the K-L divergence between the true play outcomes from the data and the predicted distribution from the model. This model was built and trained using the PyTorch framework [5].
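A minimal PyTorch sketch of the architecture described above (dimensions as in Fig. 2; class and variable names are ours, not the authors' code) is:

```python
import torch
import torch.nn as nn

class NBA2Vec(nn.Module):
    def __init__(self, n_players=1551, emb_dim=8, hidden=128, n_outcomes=23):
        super().__init__()
        self.embed = nn.Embedding(n_players, emb_dim)      # shared player embedding
        self.hidden = nn.Linear(2 * emb_dim, hidden)
        self.out = nn.Linear(hidden, n_outcomes)

    def forward(self, offense_ids, defense_ids):
        # offense_ids, defense_ids: LongTensors of shape (batch, 5)
        off = self.embed(offense_ids).mean(dim=1)          # average offensive lineup embedding
        deff = self.embed(defense_ids).mean(dim=1)         # average defensive lineup embedding
        h = torch.relu(self.hidden(torch.cat([off, deff], dim=1)))
        return self.out(h)                                 # raw outcome scores; softmax applied in the loss

model = NBA2Vec()
criterion = nn.CrossEntropyLoss()                          # stochastically minimizes the K-L divergence
scores = model(torch.randint(0, 1551, (32, 5)), torch.randint(0, 1551, (32, 5)))
loss = criterion(scores, torch.randint(0, 23, (32,)))
loss.backward()
```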
### _Validation and Post-processing_
#### Ii-C1 Validation of NBA2Vec
To evaluate the efficacy of the NBA2Vec model used to generate the embeddings, we characterized the difference between the predicted and empirical distributions of play outcomes. The Kullback-Leibler (K-L) divergence was used as the metric to compare the distributions. K-L divergence is given by
\[D_{KL}\left(p\|q\right)=\sum_{i=1}^{N}p(x_{i})\log\frac{p(x_{i})}{q(x_{i})}. \tag{1}\]
This measures the number of encoded bits lost when modeling a target distribution \(p(x)\) (in this case, the empirical distribution) with some approximate distribution \(q(x)\) (in this case, our predictive model). Thus, a low K-L divergence value (\(D_{KL}\approx 0\)) implies a good approximation of the target distribution, while a larger value (\(D_{KL}\gg 0\)) implies poor approximation.
Due to the large number of unique lineup-matchup combinations, some of which do not appear enough for a proper empirical distribution to be generated, we decided to only look at K-L divergences for lineup-matchup combinations with more than 15 plays. This analysis was performed on the last 25 games of the data set (corresponding to the last 25 playoff games in the 2018 NBA playoffs, and 5102 plays).
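For reference, the comparison of Eq. (1) between the empirical and predicted distributions can be computed with a few lines of NumPy; the small constant guarding against zero predicted probabilities is our assumption.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(p || q) as in Eq. (1); p is the empirical distribution, q the model's prediction.
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                      # terms with p(x_i) = 0 contribute nothing
    return np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps)))
```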
#### Ii-C2 Embedding Analysis
After training the model, we extracted the shared embedding layer and used dimensionality reduction, clustering, and visualization methods to explore the learned player embeddings. In particular, we used t-SNE--a dimensionality reduction method based on local neighbor structure--to visualize our 8-dimensional embeddings in 2 dimensions [6]. We also used 2D principal component analysis (PCA) for dimensionality reduction before performing k-means clustering. PCA is a statistical method that uses orthogonal decomposition to transform a set of observations of correlated variables into a set of observations of uncorrelated variables, or principal components. Each successive principal component explains the maximal variance in the data while remaining orthogonal to the preceding component. K-means clustering is a simple clustering method that aims to partition n observations into k clusters, where each observation belongs to the cluster with the nearest mean. Our approaches are further described in Section III-A. These dimensionality and clustering techniques were conducted using implementations from the Scikit-learn library [7].
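A sketch of this post-processing pipeline with scikit-learn, using a random placeholder in place of the trained embedding matrix, might look as follows:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

embeddings = np.random.randn(1551, 8)                         # placeholder for the trained 8-D player embeddings
scaled = StandardScaler().fit_transform(embeddings)           # scale and center the embedding dimensions
coords_2d = TSNE(n_components=2).fit_transform(scaled)        # 2-D visualization of local structure
pcs = PCA(n_components=2).fit_transform(scaled)               # top two principal components
labels = KMeans(n_clusters=3, n_init=10).fit_predict(pcs)     # 3 clusters, chosen from the elbow analysis
```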
#### Ii-C3 Exploring Lineup Combinations
To further explore the macroscopic predictive nature allowed by these embeddings, we used the neural network model to predict the outcomes of games based on each team's most frequent 5-player lineup. For each pair of teams, the model would output the distribution of possible play outcomes. We would then sample these distributions to determine which plays would occur in a given game, and based on this, predict a game score. Assuming 100 possessions for each team and that no substitutions are ever made, we ran the model on various playoff series match-ups from the 2016-17 season, simulating 1000 best-of-7 series between each pair of teams. Certain playoff series were not simulated because the most frequent lineups contained players that were not among the raw data's most common 500 players.
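The series simulation can be sketched as follows; the outcome-to-points mapping and the tie-breaking rule are illustrative assumptions, with the true mapping given by Table 1 and the outcome distributions supplied by the trained model.

```python
import numpy as np

POINTS = np.zeros(23)          # points scored for each outcome class; fill in per Table 1
POINTS[:4] = [2, 3, 1, 0]      # hypothetical example: made close-up shot, made three, made free throw, miss

def simulate_series(p_home, p_away, n_possessions=100, n_series=1000, seed=0):
    """p_home / p_away: model-predicted outcome distributions (summing to 1) for each team's
    most frequent lineup on offense. Returns the fraction of best-of-7 series won by the home team."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_series):
        home_games = away_games = 0
        while home_games < 4 and away_games < 4:
            home_pts = POINTS[rng.choice(23, size=n_possessions, p=p_home)].sum()
            away_pts = POINTS[rng.choice(23, size=n_possessions, p=p_away)].sum()
            if home_pts >= away_pts:      # ties broken in favor of the home team for simplicity
                home_games += 1
            else:
                away_games += 1
        wins += home_games == 4
    return wins / n_series
```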
## III Results and Discussion
### _Embedding Analysis_
In order to better understand the generated player representations, we used t-SNE (described in section II-C2) to visualize the 8 dimensional feature vector in 2D. As depicted in the 2D t-SNE plot in Figure 3, we see that the centers and guards separate nicely, while forwards (yellow) are scattered throughout both groups. This is consistent with our intuition about the roles and play-styles of these classes of players--guards and centers fulfill very distinct roles while forwards can be more center-like or guard-like and fulfill multiple roles (e.g., LeBron James). Despite t-SNE's utility in preserving high dimensional local structure, it is not effective at preserving the global structure of data, which, along with the high dimensionality of our player representations, may help explain why it is difficult to visualize clear separation between different groups of players.
To further characterize the learned embeddings, we applied k-means clustering to the 8 scaled and centered embedding dimensions. We selected 3 clusters for our initial analysis after observing a decreased rate of decline in the variance as a function of the number of clusters, as seen in the elbow plot in Figure 4. In Figure 5 we see that k-means yields 3 distinct clusters, which roughly group players with similar roles/playing styles. For example, the yellow cluster seems to correspond to guards, the green cluster to centers/forwards, and the red cluster to forwards. Note that because of the high dimensional nature of our player representations and the fact that we are projecting players onto two dimensions, the Euclidean distance between points on the shown plot is not entirely representative of player similarity.
Fig. 2: NBA2Vec model architecture. We have \(v=1551\) players, which are mapped to \(h=8\) dimensional embeddings. After the \(n=5\) offensive and defensive player embeddings are separately averaged and then concatenated, they are fed through an \(i=128\) hidden layer with a ReLU activation. The final output layer with a softmax activation predicts a probability distribution over \(o=23\) play outcomes.
The observed clusters also seem to suggest that, as opposed to fitting neatly within the traditional 5 positions of basketball, players actually perform diverse roles that would place them in multiple categories. As seen in Figure 5, each of the clusters comprises multiple groups of players--clusters 1 and 2 comprise positions 1-5, while cluster 2 corresponds mostly to centers/forwards. Roughly, this may reflect that successful players are rarely one-trick ponies--centers must be able to shoot, and point guards must be able to score. In general, we can observe that the learned embeddings roughly correspond to general basketball intuition. However, the embeddings also capture player characteristics that may not be entirely reflected in traditional metrics such as box score.
Exploring the structure of the embedded space by calculating the nearest neighbors by distance for various players further validates the learned player representations. For example, Chris Paul, a canonical point guard, has nearest neighbors including other point guards such as Steve Nash, Jose Calderon, and Jason Terry. Shaquille O'Neal, a classic big man, has nearest neighbors including centers such as Dwight Howard, Roy Hibbert, Tiago Splitter, and Rudy Gobert.
Next, we calculated Pearson's correlation between the top two PCA dimensions and player metrics including minute adjusted rates for field goals made, three pointers, assists, rebounds, and plus--minus, as shown in figure 7. Rudimentary analysis revealed that PCA dimension 1 correlated at a significant level (corrected \(\alpha=5\times 10^{-4}\)) with rebounds, assists, and three pointers, while PCA dimension 2 correlated at a significant level with rebounds and assists (corrected \(\alpha=5\times 10^{-4}\)). For both dimensions, the noted correlations remained significant even with Bonferroni adjustment.
Our exploratory analysis of the embeddings reveals that the learned player representations encode meaningful information that corresponds roughly to our intuition about various players. Through a rudimentary analysis, the embedded dimensions seem to correspond to a complex combination of player characteristics as well as real player performance metrics.
### _Validation of NBA2Vec_
Validation of the NBA2Vec network was performed on plays in the final 25 games of the data set, and a mean K-L divergence of \(0.301\pm 0.162\) was achieved. Some example predicted vs. empirical distributions of play outcomes are shown in Figure 9, showing that the model is able to closely approximate the target distribution.
We also wanted to determine the minimum number of plays needed to create an accurate empirical distribution that can be modeled by the predictive network. Plotting the K-L divergence against the number of plays used in the empirical distribution (Figure 8), we estimate that around 30 plays are needed to reach the minimum K-L divergence.
### _Exploring Lineup Combinations_
The results (Table 2) show that even with some crude assumptions, the winner of any given 7-game series and the average margin of victory can be approximated using NBA2Vec embeddings and this neural network model. More accurate game outcomes would require more precise sequence modeling of game-by-game dynamics instead of our current play-by-play treatment; however, this demonstrates
Fig. 4: Elbow method showing variance in data as a function of the number of clusters in order to optimize \(k\).
Fig. 5: Points projected on to first two principal components, colored by clusters identified by k-means clustering with \(k=3\).
Fig. 3: Two-dimensional t-SNE visualization of player representations, colored by position. (G = guards, C = centers, F = forwards, G-F = guard-forwards, F-C = forward-centers).
the potential of NBA2Vec embeddings and the play outcome predictive network.
While serviceable as a predictor of game and series outcomes, there is also potential use for NBA2Vec as a lineup optimizer. Given an opposing lineup, this model facilitates selection of a corresponding optimally matched lineup for both offense and defense. This optimization can be accomplished by sampling the model's predicted distribution many times for a given pair of lineups. As an example, we optimize a lineup to face the Golden State Warriors' "death lineup" (Stephen Curry, Klay Thompson, Andre Iguodala, Kevin Durant, and Draymond Green) for the Houston Rockets, where we fix the first four players (James Harden, Chris Paul, Eric Gordon, Clint Capela) and vary the fifth. From this analysis, we can predict the Rockets' best possible 5th man, and also compare his performance
Fig. 8: K–L divergence vs. Number of plays used in empirical distribution. The K–L divergence reaches a minimum plateau after about 30 plays.
Fig. 6: Different types of players comprising each identified cluster in Figure 5.
Fig. 7: Correlations and p-values of different metrics with the two top PCA dimensions for each player. Each raw metric is summed for each player and normalized by the total number of minutes played. dim1 = PCA dimension 1, dim2 = PCA dimension 2, fg = field goal. Reported p-values are not Bonferroni corrected but, after correction, remain significant at \(\alpha=5\times 10^{-4}\) with 5 comparisons.
to that of previous starting forward Trevor Ariza. As the simulated win percentages show (Table 3), the ideal 5th man that is currently on the Rockets roster--and is also among the data's 500 most common players--for combating this Warriors lineup is Nene. Compared to Trevor Ariza, only Nene is predicted to add more value. Interestingly, offseason acquisition and superstar Carmelo Anthony is projected to add slightly less value for the Rockets when facing the Warriors than Trevor Ariza (Table 3).
tions, this could be extrapolated to predictive algorithms for projections of each team's win-loss record given the players on its roster.
In addition to applications in predictive tasks, we have shown that the generated NBA2Vec embeddings are able to reveal underlying features of players without using aggregate statistics such as points, FG%, and assists. Clustering on the embeddings generally groups players in agreement with their position and our priors about their play style/characteristics. Furthermore, the embeddings in part reflect traditional performance metrics, as we are able to show that they correlate at a significant level with box score statistics including rebounds, assists, and field goal rate. Given enough G-League and NCAA training data, player embeddings for potential recruits could also be generated. By examining the nearest neighbor embeddings in the NBA player space, the recruit's "equivalent NBA player" representation could aid scouts in characterizing him and how he would contribute to a given NBA roster.
There are various improvements that can be made to potentially extract better player embeddings. Instead of training to predict a singular outcome to every play, a more complex model would train to predict a series of outcomes to each play (e.g. missed shot followed by defensive rebound, shooting foul followed by 2/2 free throws made). To further increase the richness of the embeddings, the network could also be modified to predict the player who commits each action. Finally, with the appropriate player tracking data, a recurrent neural network could be used to take as input time series of player spatial positions and attempt to predict play outcomes and later player spatial positions. Similar to the embeddings generated in this study, these improvements would use only raw data to capture each player's features and "identity." Ultimately, we envision a future for basketball analytics in which player embeddings allow for unprecedented levels of player characterization, driving predictive models that revolutionize the way front office and coaching decisions are made.
## V Acknowledgements
We would like to thank the NBA for organizing the 2018 NBA Hackathon and providing the data for this analysis. We would also like to extend our thanks to Caitlin Chen for her generosity during our trip.
|
2305.18274 | Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning
and Diffusion Priors | We present MindEye, a novel fMRI-to-image approach to retrieve and
reconstruct viewed images from brain activity. Our model comprises two parallel
submodules that are specialized for retrieval (using contrastive learning) and
reconstruction (using a diffusion prior). MindEye can map fMRI brain activity
to any high dimensional multimodal latent space, like CLIP image space,
enabling image reconstruction using generative models that accept embeddings
from this latent space. We comprehensively compare our approach with other
existing methods, using both qualitative side-by-side comparisons and
quantitative evaluations, and show that MindEye achieves state-of-the-art
performance in both reconstruction and retrieval tasks. In particular, MindEye
can retrieve the exact original image even among highly similar candidates
indicating that its brain embeddings retain fine-grained image-specific
information. This allows us to accurately retrieve images even from large-scale
databases like LAION-5B. We demonstrate through ablations that MindEye's
performance improvements over previous methods result from specialized
submodules for retrieval and reconstruction, improved training techniques, and
training models with orders of magnitude more parameters. Furthermore, we show
that MindEye can better preserve low-level image features in the
reconstructions by using img2img, with outputs from a separate autoencoder. All
code is available on GitHub. | Paul S. Scotti, Atmadeep Banerjee, Jimmie Goode, Stepan Shabalin, Alex Nguyen, Ethan Cohen, Aidan J. Dempster, Nathalie Verlinde, Elad Yundler, David Weisberg, Kenneth A. Norman, Tanishq Mathew Abraham | 2023-05-29T17:49:00Z | http://arxiv.org/abs/2305.18274v2 | # Reconstructing the Mind's Eye: fMRI-to-Image with Contrastive Learning and Diffusion Priors
###### Abstract
We present MindEye, a novel fMRI-to-image approach to retrieve and reconstruct viewed images from brain activity. Our model comprises two parallel submodules that are specialized for retrieval (using contrastive learning) and reconstruction (using a diffusion prior). MindEye can map fMRI brain activity to any high dimensional multimodal latent space, like CLIP image space, enabling image reconstruction using generative models that accept embeddings from this latent space. We comprehensively compare our approach with other existing methods, using both qualitative side-by-side comparisons and quantitative evaluations, and show that MindEye achieves state-of-the-art performance in both reconstruction and retrieval tasks. In particular, MindEye can retrieve the exact original image even among highly similar candidates indicating that its brain embeddings retain fine-grained image-specific information. This allows us to accurately retrieve images even from large-scale databases like LAION-5B. We demonstrate through ablations that MindEye's performance improvements over previous methods result from specialized submodules for retrieval and reconstruction, improved training techniques, and training models with orders of magnitude more parameters. Furthermore, we show that MindEye can better preserve low-level image features in the reconstructions by using img2img, with outputs from a separate autoencoder. All code is available on GitHub.
## 1 Introduction
The problem of decoding environmental inputs and cognitive states from brain activity is fundamental to the field of neuroscience, where improved computational approaches allow for further understanding of brain mechanisms [1]. A neuroimaging methodology that has seen significant success in this domain is functional magnetic resonance imaging (fMRI), where neural activity is measured by detecting changes in blood oxygenation. fMRI decoding is already being used in
real-time clinical domains [2] and has potential for novel mind reading applications in brain-computer interfaces. Previous works mapped fMRI activity to the embeddings of image generation models via relatively simple mappings, usually ridge regression [3; 4; 5]. Here we propose MindEye, a novel approach that involves mapping via large-scale multilayer perceptrons (MLPs), contrastive learning, and diffusion models to achieve state-of-the-art image reconstruction. See Figure 1 for select samples of reconstructions.1
Footnote 1: Images containing each subject’s 982 test set reconstructions and retrievals are available on GitHub.
MindEye learns to map flattened spatial patterns of fMRI activity across voxels (3-dimensional cubes of cortical tissue) to the image embedding latent space of a pretrained CLIP [7] model. MindEye has an MLP backbone and 2 specialized submodules for retrieval and reconstruction. The retrieval submodule is contrastively trained and produces "disjointed CLIP fMRI" embeddings that have high cosine similarity with the corresponding image embeddings but differ in magnitude. To reconstruct images, we train a diffusion prior [8] to take in the outputs from the MLP backbone and produce aligned embeddings suitable as inputs to any pretrained image generation model that accepts CLIP image embeddings. In order to ensure that our reconstructions also match the original images' low-level features (e.g., color, texture, spatial position), we train a separate encoder that directly maps voxels to the embedding space of Stable Diffusion's [9] variational autoencoder (VAE), obtaining blurry image reconstructions that lack high-level semantic content but perform state-of-the-art on low-level image metrics. Combining the high-level "semantic" pipeline with the low-level "perceptual" pipeline in an img2img [10] setting allows MindEye to output state-of-the-art reconstructions across both low- and high-level image metrics.
In addition to image reconstruction metrics, our disjointed CLIP fMRI embeddings attain state-of-the-art performance on image retrieval and brain retrieval metrics. Image retrieval refers to finding the original seen image out of a pool of other images given a brain sample, while brain retrieval refers to finding the brain sample given an image. MindEye finds exact (top-1) matches in the pool of NSD test samples with >90% accuracy for both image and brain retrieval, outperforming previous state-of-the-art [11; 4] which showed <50% retrieval accuracies. These results demonstrate that MindEye brain embeddings possess fine-grained exemplar-level signal.
Our main findings are: (1) Specialized submodules for retrieval (using contrastive learning) and reconstruction (using a diffusion prior) enable a single model to achieve state-of-the-art results across both tasks even though the objectives exhibit a tradeoff. (2) Mapping to a deep MLP with a parameter count orders of magnitude higher than previous methods does not produce overfitting and instead directly benefits model performance. (3) A novel bidirectional version of mixup contrastive data augmentation further improves model performance in this low-sample setting. (4) State-of-the-art reconstructions for low-level image metrics can be obtained by independently mapping to Stable Diffusion's VAE latent space. (5) fMRI-to-image retrieval can find the exact original image even among highly similar candidates, suggesting that fine-grained image-specific information is contained in brain embeddings, thus allowing retrieval to be scaled up to large-scale databases like LAION-5B to output images without generative models.
Figure 1: Example images reconstructed from human brain activity corresponding to passive viewing of natural scenes. Reconstructions depict outputs from Versatile Diffusion [6] given CLIP fMRI embeddings generated by MindEye for Subject 1. See Figure 4 and Appendix A.3 for more samples.
## 2 MindEye
MindEye consists of two pipelines (see Figure 2), a high-level (semantic) pipeline where fMRI voxels are mapped to the CLIP ViT-L/14 image space and a low-level (perceptual) pipeline where the voxels are mapped to the image embedding space of a VAE. Both pipelines follow a common structure: a residual MLP backbone followed by two task-specific submodules. For the high-level pipeline the submodules are an MLP projector and a diffusion prior. For the low-level pipeline the submodules are an MLP projector and a CNN decoder that performs 4x upsampling. For both pipelines we observe that training the projector submodule with a contrastive loss and the second submodule with mean squared error (MSE) loss gives best performance.
### High-Level (Semantic) Pipeline
The high-level pipeline is the core of MindEye as it maps voxels to CLIP image space to be fed through pretrained image generation models. We refer to it as a "high-level" pipeline because CLIP embeddings are inherently more semantic than perceptual, since CLIP image encoders were trained to maximize similarity with text captions (low-level features like color and object location are not typically preserved in these captions). MindEye can be used without the low-level pipeline, which simply aids to better preserve low-level image features during reconstruction.
The MLP backbone for our high-level pipeline maps flattened voxels to an intermediate space of size \(257\times 768\), corresponding to the last hidden layer of CLIP ViT/L-14 (see Appendix 1 for PyTorch model code). The backbone consists of a linear layer followed by 4 residual blocks and a final linear projector. The embeddings from the backbone are fed to an MLP projector and a diffusion prior in parallel. The whole model is trained end-to-end with the prior getting an MSE loss and the projector getting a bidirectional CLIP loss. The projector outputs can be used for retrieval tasks and the diffusion prior outputs can be used by generative models to reconstruct images.
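A rough PyTorch sketch of such a backbone is given below. Only the overall structure (initial linear layer, 4 residual blocks with skip connections, and a final projection to \(257\times 768\)) follows the text; the hidden width, normalization, activation, and dropout rate are assumptions, and the paper's appendix contains the actual model code.

```python
import torch
import torch.nn as nn

class ResidualMLPBackbone(nn.Module):
    """Maps flattened fMRI voxels to a 257 x 768 CLIP ViT-L/14 hidden-state space."""
    def __init__(self, num_voxels, hidden=4096, n_blocks=4, clip_tokens=257, clip_dim=768):
        super().__init__()
        self.lin0 = nn.Linear(num_voxels, hidden)
        self.blocks = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden, hidden),
                nn.LayerNorm(hidden),
                nn.GELU(),
                nn.Dropout(0.15),
            )
            for _ in range(n_blocks)
        ])
        self.lin1 = nn.Linear(hidden, clip_tokens * clip_dim)
        self.clip_tokens, self.clip_dim = clip_tokens, clip_dim

    def forward(self, voxels):
        x = self.lin0(voxels)
        for block in self.blocks:
            x = x + block(x)  # skip connection around each residual block
        x = self.lin1(x)
        return x.view(-1, self.clip_tokens, self.clip_dim)
```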
**Contrastive Learning:** Contrastive learning is an effective method for learning representations across modalities by maximizing cosine similarity for positive pairs while minimizing similarity for negative pairs. CLIP [7] is a multimodal contrastive model that maps images and text captions to a shared embedding space. MindEye is trained to introduce fMRI as an additional modality to the
Figure 2: MindEye overall schematic. A high-level “semantic” pipeline maps voxels to CLIP embeddings for image reconstruction (outputs from a diffusion prior feed through generative models like Versatile Diffusion) or retrieval tasks (such as K-nearest neighbor querying of brain embeddings to the CLIP embeddings of LAION-5B images). A low-level “perceptual” pipeline maps voxels to the variational autoencoder used by Stable Diffusion to obtain blurry reconstructions, which are used as the initialization for subsequent diffusion-based image generation. The contrastive loss for the low-level pipeline is omitted for simplicity; see Appendix A.2.2 for details.
embedding space of a pretrained CLIP model, keeping the CLIP image space frozen as done with locked-image text tuning (LiT) [12]. We use the CLIP loss [7] as our contrastive objective. This loss is bidirectional and helps improve both image and brain retrieval.
Recent work [13; 14; 15; 16] has explored novel data augmentation techniques that offer several benefits like improving performance, increasing robustness, and reducing training data requirements. Mixup [13] is one such technique which trains models on synthetic data created through convex combinations of two datapoint-label pairs [17]. Kim et al. [18] introduce MixCo, an extension of mixup that uses the InfoNCE loss, and show that MixCo improves classification performance in a semi-supervised setting. Based on the same principle, we modify the bidirectional CLIP loss to use MixCo. While Kim et al. [18] observed that MixCo gives largest performance benefit for smaller models, we observe that it also helps large models in low data regimes.
To combine MixCo with CLIP loss, we mix voxels using a factor \(\lambda\) sampled from the Beta distribution with \(\alpha=\beta=0.15\).
\[x_{\text{mix}_{i,k_{i}}}=\lambda_{i}\cdot x_{i}+(1-\lambda_{i})\cdot x_{k_{i}},\quad p_{i}^{*}=f(x_{\text{mix}_{i,k_{i}}}),\quad p_{i}=f(x_{i}),\quad t_{i}= \text{CLIP}_{\text{Image}}(y_{i}) \tag{1}\]
Here, \(x_{i}\) and \(y_{i}\) represent the \(i\)-th fMRI sample and image respectively. \(k_{i}\in[1,N]\) is an arbitrary mixing index for the \(i\)-th datapoint and \(f\) represents the combined MLP and projector. \(p^{*}\), \(p\) and \(t\) are L2-normalized. The CLIP loss with MixCo is defined as:
\[\begin{split}\mathcal{L}_{\text{BiMixCo}}=-\sum_{i=1}^{N}\left[\lambda_{i}\cdot\log\left(\frac{\exp\left(\frac{p_{i}^{*}\cdot t_{i}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{p_{i}^{*}\cdot t_{m}}{\tau}\right)}\right)+(1-\lambda_{i})\cdot\log\left(\frac{\exp\left(\frac{p_{i}^{*}\cdot t_{k_{i}}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{p_{i}^{*}\cdot t_{m}}{\tau}\right)}\right)\right]\\ -\sum_{j=1}^{N}\left[\lambda_{j}\cdot\log\left(\frac{\exp\left(\frac{p_{j}^{*}\cdot t_{j}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{p_{m}^{*}\cdot t_{j}}{\tau}\right)}\right)+\sum_{\{l\,|\,k_{l}=j\}}(1-\lambda_{l})\cdot\log\left(\frac{\exp\left(\frac{p_{l}^{*}\cdot t_{j}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{p_{m}^{*}\cdot t_{j}}{\tau}\right)}\right)\right]\end{split} \tag{2}\]
We term this bidirectional loss as BiMixCo. Here \(\tau\) is a temperature hyperparameter, and \(N\) is the batch size.
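The voxel mixing of Eq. 1 is simple to implement; below is a minimal PyTorch sketch in which the permutation-based choice of mixing partners and the tensor shapes are our own simplifications.

```python
import torch

def mixco_inputs(voxels, alpha=0.15):
    """Convex-combine each voxel sample with a randomly chosen partner (Eq. 1).

    voxels: tensor of shape (N, num_voxels).
    Returns the mixed voxels, the partner indices k_i, and the mixing weights lambda_i,
    which are later needed to weight the positive pairs in the BiMixCo loss (Eq. 2).
    """
    n = voxels.shape[0]
    lam = torch.distributions.Beta(alpha, alpha).sample((n,)).to(voxels.device)
    k = torch.randperm(n, device=voxels.device)  # mixing partner for each sample
    mixed = lam.view(-1, 1) * voxels + (1 - lam).view(-1, 1) * voxels[k]
    return mixed, k, lam
```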
Recent works [19; 20] have shown that stopping mixup augmentation after a certain number of epochs leads to better classification performance. Following these findings, we stop using mixup and switch from a hard contrastive loss to a soft contrastive loss one-third of the way through training. This improves our reconstructions without harming our retrieval performance (see Table 4).
Our soft contrastive loss is inspired by knowledge distillation [21], where the authors argue that the softmax probability distribution produced by a powerful teacher model acts as a better teaching signal for a student than hard labels. To generate the soft labels we take the dot product of CLIP image embeddings in a batch with themselves. The loss (with bidirectional component omitted for brevity) is calculated between CLIP-CLIP and Brain-CLIP matrices as:
\[\mathcal{L}_{\text{SoftCLIP}}=-\sum_{i=1}^{N}\sum_{j=1}^{N}\left[\frac{\exp \left(\frac{t_{i}\cdot t_{j}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{t_{i} \cdot t_{m}}{\tau}\right)}\cdot\log\left(\frac{\exp\left(\frac{p_{i}\cdot t_ {j}}{\tau}\right)}{\sum_{m=1}^{N}\exp\left(\frac{p_{i}\cdot t_{m}}{\tau} \right)}\right)\right] \tag{3}\]
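A compact PyTorch sketch of this soft objective is shown below; it implements one direction of Eq. 3 only, and the temperature value and batch-mean normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def soft_clip_loss(brain_emb, clip_emb, temp=0.05):
    """Soft contrastive loss: CLIP-CLIP similarities act as soft targets (Eq. 3).

    brain_emb, clip_emb: L2-normalized tensors of shape (N, D).
    """
    target = F.softmax(clip_emb @ clip_emb.T / temp, dim=-1)        # soft labels from t_i . t_j
    log_pred = F.log_softmax(brain_emb @ clip_emb.T / temp, dim=-1)  # predictions from p_i . t_j
    return -(target * log_pred).sum(dim=-1).mean()
```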
**Diffusion Prior:** Using a diffusion model to align the outputs of a contrastive learning model was inspired by DALL-E 2 [8], where a "diffusion prior" was used to map CLIP text embeddings to CLIP image space before using an unCLIP decoder to reconstruct images. We modify an open-source implementation of the DALL-E 2 diffusion prior available on GitHub (see Appendix A.2.1). We use the same prior loss as Ramesh et al. [8]. Our total end-to-end loss is defined as:
\[\mathcal{L}=\mathcal{L}_{\text{BiMixCo}|\text{SoftCLIP}}+\alpha\cdot\mathcal{L }_{\text{prior}} \tag{4}\]
We use \(\alpha=0.3\) and switch from BiMixCo to SoftCLIP after one-third of the train cycle. All our models are trained on a single A100 GPU for 240 epochs with a batch size of 32.
The diffusion prior is critical for reconstruction because contrastive learning only incentivizes the CLIP fMRI embeddings to match the vector direction of the associated CLIP image embeddings. This generates disjointed embeddings as observed by Ramesh et al. [8]. To rectify this issue, the diffusion prior learns a distribution of CLIP image embeddings conditioned on CLIP fMRI embeddings. UMAP [22] plots of disjointed CLIP fMRI embeddings next to aligned CLIP fMRI embeddings in Appendix A.4 show how the diffusion prior addresses the disjointed embedding spaces problem. We observe that the prior's role cannot be fulfilled by simply adding MSE loss to the MLP projector in Table 4. This is because there is a tradeoff between reconstruction and retrieval objectives and a model cannot effectively learn a single embedding space that does well on both.
### Low-Level (Perceptual) Pipeline
The low-level pipeline maps voxels to the embedding space of Stable Diffusion's VAE. The output of this pipeline can be fed into the VAE decoder to produce blurry image reconstructions that lack high-level semantic content but exhibit state-of-the-art low-level image metrics. We use img2img [10] to improve our final image reconstructions in terms of low-level metrics, with minimal impairment to high-level metrics, such that we start the diffusion process from the noised encodings of our blurry reconstructions rather than pure noise.
The MLP backbone for our low-level pipeline follows the same architecture as our high-level pipeline, except that the final outputs are of size \((16,16,64)\). These are upsampled to \((64,64,4)\) by a CNN upsampler. An MLP projector projects the backbone outputs to a \(512\) dimensional space where an auxiliary contrastive loss is applied. For more information on the low-level pipeline see Appendix A.2.2. See Appendix Figure 7 for example blurry reconstructions and Appendix Table 5 to see the effect of changing img2img strength on subsequent reconstruction metrics.
## 3 Results
For all experiments, we used the Natural Scenes Dataset (NSD) [23], a public fMRI dataset containing the brain responses of human participants passively viewing natural scenes from MS-COCO [24]. By utilizing MS-COCO, this dataset provides measured brain responses to rich naturalistic stimuli, allowing us to study how well low- and high-level image features are reconstructed by MindEye. We used the same standardized train/test splits as other NSD reconstruction papers [3; 4; 25], training subject-specific models for each of 4 participants. We averaged across three same-image repetitions for the test set (leaving 982 test samples) but not the training set (24,980 training samples), similar to Takagi and Nishimoto [3]. For more information on NSD and data preprocessing see Appendix A.1; for single-trial reconstructions see Appendix A.8.
### Image/Brain Retrieval
Image retrieval evaluations reveal the level of fine-grained image-specific information contained in the predicted brain embeddings. For example, if the model is given a dozen pictures of zebras and the brain sample corresponding to viewing one of those zebras, can the model correctly find the corresponding zebra? If the model can correctly deduce that the brain sample corresponds to an image of a zebra but cannot deduce the specific image amongst various candidates, this would suggest that category-level information but not exemplar-specific information is preserved in the CLIP fMRI embedding. MindEye not only succeeds in this zebra example but also demonstrates 93.2% overall accuracy for Subject 1 in finding the exact original image within the 982 test images (see Figure 3).
Although we use the full test dataset for retrievals in Figure 3, to compare our retrieval performance to other papers we average top-1 performance across batches of 300 random test samples. For image retrieval we compute cosine similarity in CLIP space between a given brain sample and each of a random batch of 300 image candidates from the test set. This process is repeated for each of the 982 brain samples in the test set, and we average the overall accuracy across all samples and across 30 loops of this process to account for the variability in random sampling of batches. Each sample is marked as correct if the correct corresponding paired image sample yielded the highest cosine similarity, such that chance performance would be 1/300. For brain retrieval, the same process is
used except image and brain samples are flipped such that the goal is to find the corresponding paired brain sample for a given image out of 300 brain samples. MindEye outperforms similar models by a large margin on both image retrieval and brain retrieval evaluations (see Table 1).
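This evaluation can be sketched as follows; the simplified version below scores retrieval within randomly sampled batches of 300 pairs at a time, and all function and variable names are ours.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def image_retrieval_accuracy(brain_emb, image_emb, batch_size=300, n_loops=30, seed=0):
    """Top-1 image retrieval: for each brain sample, is its paired image the most
    cosine-similar among `batch_size` random candidates? Chance = 1 / batch_size.
    Swap the two arguments to evaluate brain retrieval instead."""
    g = torch.Generator().manual_seed(seed)
    brain = F.normalize(brain_emb.flatten(1), dim=-1)
    image = F.normalize(image_emb.flatten(1), dim=-1)
    accs = []
    for _ in range(n_loops):
        idx = torch.randperm(brain.shape[0], generator=g)[:batch_size]
        sims = brain[idx] @ image[idx].T            # (batch, batch) cosine similarities
        correct = sims.argmax(dim=-1) == torch.arange(batch_size)
        accs.append(correct.float().mean())
    return torch.stack(accs).mean().item()
```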
We can scale up image retrieval using a pool of billions of image candidates. In Figure 3 we show results querying the LAION-5B dataset [26] using our CLIP fMRI embeddings. The final layer CLIP ViT-L/14 embeddings for all 5 billion images are available at knn.laion.ai, and can be queried for K-nearest neighbor lookup via the CLIP Retrieval client [27]. For each test sample, we first retrieve 16 candidate images using this method (using a variant of MindEye that maps voxels to the final layer of CLIP, see Appendix A.6). The best image is then selected based on having the highest CLIP embedding cosine similarity to the CLIP fMRI embedding. This image retrieval approach is especially well-suited for tasks involving fine-grained classification, and can be used as an alternative to image reconstruction without a generative model (evaluations in Table 1).
### fMRI-to-Image Reconstruction
The diffusion prior outputs from MindEye are aligned CLIP fMRI embeddings that can be used with any pretrained image generation model that accepts latents from CLIP image space. We evaluate the outputs of MindEye reconstructions across several models including Versatile Diffusion [6], Stable Diffusion (Image Variations) [28], and Lafite [29; 11]. Here we report results from Versatile Diffusion since it yielded the best results, and we report results from the other models in Appendix A.6. We qualitatively compare our reconstructions side-by-side with outputs from other fMRI-to-image reconstruction models in Figure 4 and quantitatively compare against other models in Table 1, demonstrating state-of-the-art MindEye reconstructions.
For each subject, for each test brain sample, we output 16 CLIP image embeddings from MindEye and feed these embeddings through the image variations pipeline of Versatile Diffusion. This produces 16 image reconstructions per brain sample. For our reconstructions we use 20 denoising timesteps with UniPCMultistep noise scheduling [30] and start the denoising process from the noised output of our low-level pipeline (img2img). We then select the best of 16 reconstructions by computing last hidden
Figure 3: MindEye image retrieval. Given a pool of candidate images, nearest neighbor search in CLIP space enables searching for the original image based on brain activity. Top section depicts how, given 982 test NSD images (many containing very similar looking images, e.g., over a dozen zebras), MindEye top-1 performance is 93.2% for Subject 1. The ability to distinguish among confusable candidates suggests brain embeddings retain fine-grained, image-specific information. Bottom section depicts scaling up to the LAION-5B dataset (see Appendix A.3 for more examples). Even with billions of images, MindEye finds images similar to the original.
layer CLIP embeddings and picking the image with the highest cosine similarity to the disjointed CLIP fMRI embedding. This automatic second-order selection was inspired by DALL-E 2 [8], which used a similar process of selecting the best of 2 generated samples.
Two-way identification refers to percent correct across comparisons gauging if the original image embedding is more similar to its paired brain embedding or a randomly selected brain embedding. Comparison was performed for AlexNet [34] (second and fifth layers), InceptionV3 [35] (last pooling layer), and CLIP (final layer of ViT-L/14). We use the same settings as Ozcelik and VanRullen [4] for our metrics. For more details refer to Appendix A.5.
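A minimal sketch of this metric is given below (our own implementation; the per-metric feature extraction, e.g., AlexNet, InceptionV3, or CLIP activations, is assumed to have been done beforehand).

```python
import numpy as np

def two_way_identification(recon_feats, gt_feats):
    """Percent of ordered pairs (i, j), i != j, for which reconstruction i is more
    correlated with its own ground-truth image's features than with image j's
    features (chance = 50%). Both inputs have shape (n, d)."""
    n = len(recon_feats)
    # Pearson correlation between every reconstruction and every ground-truth feature vector
    r = np.corrcoef(np.vstack([recon_feats, gt_feats]))[:n, n:]
    paired = np.diag(r)
    wins = (paired[:, None] > r).sum()  # diagonal never counts: r[i,i] > r[i,i] is False
    return wins / (n * (n - 1))
```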
### Ablations
In this subsection we use ablations to explain where MindEye's performance improvements come from. To study the effects of architectural changes and training strategies we train only the retrieval pipeline (no diffusion prior) for 120 epochs with batch size 300. All models in this section are trained on Subject 1. Table entries with * correspond to the final version of MindEye's settings.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{4}{c}{Low-Level} & \multicolumn{4}{c}{High-Level} & Retrieval \\ \cline{2-10} & \multicolumn{3}{c}{PixCorr \(\uparrow\) SSIM \(\uparrow\) Alex(2) \(\uparrow\) Alex(5) \(\uparrow\)} & \multicolumn{3}{c}{Incep \(\uparrow\) CLIP \(\uparrow\) Eff \(\downarrow\) SwAV \(\downarrow\)} & \multicolumn{3}{c}{Image \(\uparrow\) Brain \(\uparrow\)} \\ \hline Lin et al. [11] & \(-\) & \(-\) & \(-\) & \(-\) & \(78.2\%\) & \(-\) & \(-\) & \(-\) & \(11.0\%\) & \(49.0\%\) \\ Takagi... [3] & \(-\) & \(-\) & \(83.0\%\) & \(83.0\%\) & \(76.0\%\) & \(77.0\%\) & \(-\) & \(-\) & \(-\) & \(-\) \\ Gu et al. [25] & \(.150\) & \(.325\) & \(-\) & \(-\) & \(-\) & \(-\) & \(.862\) & \(.465\) & \(-\) & \(-\) \\ Ozcelik... [4] & \(.254\) & \(\mathbf{.356}\) & \(94.2\%\) & \(96.2\%\) & \(87.2\%\) & \(91.5\%\) & \(.775\) & \(.423\) & \(21.1\%\) & \(30.3\%\) \\ MindEye & \(\mathbf{.309}\) & \(.323\) & \(\mathbf{94.7}\%\) & \(\mathbf{97.8}\%\) & \(\mathbf{93.8}\%\) & \(\mathbf{94.1}\%\) & \(\mathbf{.645}\) & \(\mathbf{.367}\) & \(\mathbf{93.6}\%\) & \(\mathbf{90.1}\%\) \\ \hline MindEye (Low-Level) & \(\mathbf{.360}\) & \(\mathbf{.479}\) & \(78.1\%\) & \(74.8\%\) & \(58.7\%\) & \(59.2\%\) & \(1.00\) & \(.663\) & \(-\) & \(-\) \\ MindEye (High-Level) & \(.194\) & \(.308\) & \(\mathbf{91.7}\%\) & \(\mathbf{97.4}\%\) & \(\mathbf{93.6}\%\) & \(\mathbf{94.2}\%\) & \(\mathbf{.645}\) & \(\mathbf{.369}\) & \(\mathbf{93.6}\%\) & \(\mathbf{90.1}\%\) \\ MindEye (LAION) & \(.130\) & \(.308\) & \(84.0\%\) & \(92.6\%\) & \(86.9\%\) & \(86.1\%\) & \(.778\) & \(-\) & \(-\) \\ \hline Ozcelik... (Low-, S1) & \(.358\) & \(.437\) & \(\mathbf{97.7}\%\) & \(\mathbf{97.6}\%\) & \(\mathbf{77.0}\%\) & \(\mathbf{71.1}\%\) & \(\mathbf{906}\) & \(\mathbf{.581}\) & \(-\) & \(-\) \\ MindEye (Low-, S1) & \(\mathbf{.456}\) & \(\mathbf{.493}\) & \(87.1\%\) & \(84.1\%\) & \(61.6\%\) & \(62.4\%\) & \(.992\) & \(.638\) & \(-\) & \(-\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of MindEye retrieval and reconstruction performance against other models. Top and middle sections average across the same 4 participants (see Appendix A.7 for individual subject models), except Lin et al. [11] which only analyzed Subject 1. Middle section reflects outputs from only the high- or low-level pipeline, and metrics when evaluating images retrieved from LAION-5B. Bottom section compares our low-level reconstructions to the low-level reconstructions from Ozcelik and VanRullen [4] which only reported metrics for Subject 1. Image retrieval refers to the percent of the time the correct image was retrieved out of 300 candidates, given the associated brain sample (chance=0.3%); vice-versa for brain retrieval. PixCorr=pixelwise correlation between ground truth and reconstructions; SSIM=structural similarity index metric [31]; EfficientNet-B1 (“Eff”) [32] and SwAV-ResNet50 (“SwAV”) [33] refer to average correlation distance; all other metrics refer to two-way identification (chance = 50%). Missing values are from papers not reporting all metrics or metrics being non-applicable. We followed the same image preprocessing as Ozcelik and VanRullen [4]. Previous state-of-the-art Ozcelik and VanRullen [4] results are directly comparable to MindEye as the same test set and Versatile Diffusion model were used. Bold indicates best performance within sections.
Figure 4: Side-by-side comparison of reconstructions from fMRI-to-Image NSD papers. The same test set was used across papers. All reconstructions come from Subject 1.
**Architectural Improvements:** To study the effect of model depth and parameter count we train multiple MLPs of various sizes (Table 2). Among models that map to the last hidden layer of CLIP ViT-L/14, we observe a clear trend of increased performance with added residual blocks. For 2 blocks, the effect of skip connections is not too significant but at 4 blocks the model does significantly worse without them, indicating that skip connections are important for training deeper models.
We also show a comparison with a 4-resblock model that maps to the final layer of CLIP (only the CLS classification token). This model has \(7\times\) fewer parameters and does much worse than all other models. This indicates two things: (1) MindEye strongly benefits from a large parameter count MLP backbone and does not overfit even in the sample constrained settings of the NSD dataset, and (2) the fMRI voxels contain fine-grained information about images, allowing us to effectively predict all \(257\) CLIP image embeddings instead of just the CLS token.
**Training Strategies (Losses and Data Augmentations):** We observe that with InfoNCE, MindEye only does well on brain retrieval (Table 3). A similar trend was observed in Lin et al. [11]. We attribute this to InfoNCE being a one-sided loss that only optimizes for one retrieval objective. Simply replacing InfoNCE with CLIP loss significantly improves image retrieval. MixCo augmentation helps both unidirectional and bidirectional losses.
We also show the effect of training with our SoftCLIP loss. SoftCLIP improves over hard CLIP loss for brain retrieval but performs worse than BiMixCo. Our training regime combining SoftCLIP with BiMixCo gives the best image retrieval performance.
**Reconstruction Strategies:** To demonstrate the need for a separate diffusion prior, we train a version of MindEye where both contrastive and MSE losses are applied to the outputs of the MLP backbone. We observe that this model does poorly in terms of retrieval metrics, reflecting a tradeoff between retrieval and reconstruction objectives that makes it difficult to learn a single embedding space serving both. Inspired by recent works in self-supervised learning [36; 37; 38; 39], we decouple these losses using a separate MLP projector, where MSE loss is applied to the outputs of the MLP backbone and contrastive loss is applied to the outputs of the projector. This model does slightly worse in terms of reconstruction but is much better at retrieval. Finally, we train a model with a diffusion prior but no MLP projector. Contrastive loss is computed for the MLP backbone and MSE loss is computed for the diffusion prior. This model is comparable to high-level MindEye in terms of reconstruction but does worse in retrieval, giving further evidence of a tradeoff. Example reconstructions for these models are in Appendix Figure 8.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Param Count & Image Retrieval & Brain Retrieval \\ \hline No ResBlocks & \(873\)M & \(0.880\) & \(0.820\) \\
2 ResBlocks + No Skip & \(907\)M & \(0.881\) & \(0.822\) \\
2 ResBlocks & \(907\)M & \(0.886\) & \(\mathbf{0.837}\) \\
4 ResBlocks + No Skip & \(940\)M & \(0.836\) & \(0.767\) \\
4 ResBlocks* & \(940\)M & \(\mathbf{0.896}\) & \(0.822\) \\
4 ResBlocks + Only CLS & \(135\)M & \(0.611\) & \(0.576\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Effects of varying the architecture of the MLP backbone on retrieval accuracy.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & Image Retrieval & Brain Retrieval \\ \hline InfoNCE & \(0.237\) & \(0.784\) \\ CLIP Loss & \(0.837\) & \(0.791\) \\ InfoNCE + MixCo & \(0.303\) & \(\mathbf{0.856}\) \\ CLIP Loss + MixCo (BiMixCo) & \(0.884\) & \(0.841\) \\ SoftCLIP Loss & \(0.837\) & \(0.816\) \\ BiMixCo + SoftCLIP (MindEye)* & \(\mathbf{0.896}\) & \(0.822\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Effects of different losses and MixCo augmentation on MLP retrieval performance.
## 4 Related Work
In the 2000s, researchers demonstrated that visual information, such as spatial position [40], orientation [41; 42], and coarse image category [43; 44] could be decoded from fMRI signals using linear classifiers. With the introduction of generative adversarial networks [45], more sophisticated decoding became feasible as researchers mapped brain activity to the latent space of these models to reconstruct handwritten digits [46], human faces [47; 48], and natural scenes [49; 5; 50]. More recently, with the release of multimodal contrastive models like CLIP [7], diffusion models [51; 52] like Stable Diffusion [9], and new large-scale fMRI datasets [23], fMRI-to-image reconstructions have reached an unprecedented level of quality [4; 3; 25].
Lin et al. [11] reconstructed NSD images by mapping voxels to CLIP space (see also Wang et al. [53]) and fed outputs through a fine-tuned Lafite [29] GAN (MindEye reconstructions using Lafite in Appendix A.6). Differences from MindEye include using a convolutional model, no projector to separate contrastive loss from MSE loss, InfoNCE instead of CLIP loss, fine-tuning of a pretrained GAN, no diffusion prior, and mapping to both CLIP image and text space. Ozcelik and VanRullen [4] used a low- and high-level pipeline with Versatile Diffusion [6]. Differences include mapping to CLIP space via ridge regression, no contrastive learning or diffusion prior, and mapping to a VDVAE [54] for low-level reconstructions. Gu et al. [25] used a low- and high-level pipeline and extended on Ozcelik et al. [5] by reconstructing with IC-GAN [55]; they did not flatten voxels and mapped to SwAV [33] features with surface-based convolutional networks. Takagi and Nishimoto [3] used ridge regression to map to Stable Diffusion latents and CLIP text latents, using different voxel selections for different components. Overall, MindEye is unique in its use of reconstruction and retrieval submodules, a deep MLP backbone with 940 million parameters, and a diffusion prior for more accurate translation across brain and image modalities.
## 5 Conclusions
We present MindEye, a novel mental decoding approach that achieves state-of-the-art reconstructions of natural scenes presented to humans in the MRI machine. These reconstructions retain semantic meaning and perceptual similarity to the original images due to the use of a combined high-level and low-level pipeline. The novel use of specialized submodules for contrastive-based retrieval and diffusion-based reconstruction allow MindEye to learn mappings for both tasks in parallel. MindEye can select the ground truth image out of a set of nearly 1,000 possible images (many easily confusable, see Figure 3) with >90% accuracy, suggesting fine-grained image-specific signal contained in the brain embeddings. MindEye retrieval can also be used when the original image is unknown by querying large image databases such as LAION-5B. The diffusion prior submodule allows for accurate translation of brain embeddings into pretrained CLIP space such that any model that accepts CLIP image embeddings can be provided with CLIP fMRI embeddings without fine-tuning. This flexibility suggests that MindEye reconstructions will continue to improve as newer, more powerful image generation models are released.
**Privacy Concerns and Societal Benefits:** The ability to accurately reconstruct perception from brain activity prompts questions about broader societal impacts. For instance, it should be possible to generalize current reconstruction models from perception to mental imagery without training a new model [56; 57; 58; 59]. However, current models are not capable of across-subject decoding and each NSD participant spent up to 40 hours in the MRI machine to procure sufficient training data. Furthermore, non-invasive neuroimaging methods in general require compliance because participants can easily
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline Method & \multicolumn{3}{c}{Low-Level} & \multicolumn{3}{c}{High-Level} & \multicolumn{2}{c}{Retrieval} \\ \cline{2-10} & PixCor & SSIM & Alex(2) & Alex(5) & Incep & CLIP & Image & Brain \\ \hline Only MLP Backbone & \(0.119\) & \(0.346\) & \(73.8\%\) & \(84.1\%\) & \(81.5\%\) & \(82.6\%\) & \(0.133\) & \(0.631\) \\ Backbone + Projector & \(0.154\) & \(0.296\) & \(73.2\%\) & \(85.2\%\) & \(75.2\%\) & \(77.3\%\) & \(0.888\) & \(0.849\) \\ Backbone + Prior & \(\mathbf{0.206}\) & \(\mathbf{0.303}\) & \(\mathbf{92.1\%}\) & \(\mathbf{97.2\%}\) & \(\mathbf{94.8\%}\) & \(\mathbf{95.1\%}\) & \(0.934\) & \(0.901\) \\ MindEye (only BiMixCo) & \(0.195\) & \(0.290\) & \(91.1\%\) & \(96.6\%\) & \(93.7\%\) & \(94.4\%\) & \(\mathbf{0.974}\) & \(0.942\) \\ MindEye (0.33 BiMixCo)* & \(0.198\) & \(0.302\) & \(91.6\%\) & \(96.8\%\) & \(94.6\%\) & \(95.0\%\) & \(0.972\) & \(\mathbf{0.960}\) \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effects of diffusion prior and MLP projector on reconstruction and retrieval metrics.
resist decoding by moving their head or thinking about unrelated information [60]. MindEye is also limited to natural scenes such as those in MS-COCO; for other image distributions additional data collection and specialized generative models would be required. While high-quality image reconstruction via non-invasive neuroimaging is not currently practical for real-world applications, technology is constantly improving and it is important that brain data be carefully protected and companies collecting such data be transparent with their use.
Image reconstruction from brain activity can enable various potential societal benefits. Reconstructions are expected to be systematically distorted due to mental state, neurological conditions, etc. This could potentially enable novel clinical diagnosis and assessment approaches. For example, patients suffering from major depressive disorder might produce reconstructions where emotionally negative aspects of images are more salient [61]. MindEye results also suggest potential for improved locked-in (pseudocoma) patient communication via fine-grained visual communication beyond simple classification [62], as well as brain-computer interface performance if adapted to real-time fMRI analysis [63] or non-fMRI neuroimaging modalities.
## 6 Open Research: 100% Transparent Volunteer-Driven Science
MindEye was openly developed through volunteer contributions in the MedARC Discord server. Source code was always accessible via a public GitHub repository throughout the lifespan of the project. Research discussions were held via public Discord channels, and weekly video conference calls were recorded and shared publicly. We continue to extend a global invitation to contribute to MedARC Mind Reading Lab projects to cultivate an internationally diversified, volunteer-driven research team composed of members from varied backgrounds possessing a wide array of expertise. We contend that fully transparent open-research initiatives such as this and others like EleutherAI, LAION, OpenBioML, and ML Collective could redefine the traditional framework of scientific research, democratizing entry into machine learning and medical research through the harnessing of crowd-sourced collective intelligence and community collaboration.
## 7 Acknowledgements
Thanks to the MedARC community, including Jeremy Howard, Tommaso Furlanello, Mihir Tripathy, and Cesar Torrico for useful discussion and reviewing the manuscript. Thank you to Furkan Ozcelik, author of Brain-Diffuser, for sharing his code and expert knowledge with our group. We thank LAION for being the initial community where this project developed, and thank Romain Beaumont and Zion English for useful discussion during that time. We thank Stability AI for sharing their high-performance computing workplace and giving us the computational resources necessary to develop MindEye. Thank you to Richard Vencu for help navigating the Stability HPC. Collection of the Natural Scenes Dataset was supported by NSF IIS-1822683 and NSF IIS-1822929.
## 8 Author Contributions
PSS devised the project, led the team, developed the models, drafted the manuscript, and otherwise contributed to all parts of MindEye development. AB drafted the manuscript, developed the models, and contributed to all parts of MindEye development including creating the low-level pipeline, conception of BiMixCo and soft CLIP loss, and modification of the DALL-E 2 diffusion prior. JG developed the models, tracked/compared model variants, and significantly contributed to the MindEye codebase. SS conceived of and implemented LAION-5B retrieval using the CLIP Retrieval client and conducted various exploratory experiments. AN implemented the Lafite pipeline for MindEye reconstructions. EC conducted various initial explorations into using a diffusion prior for aligning voxels to CLIP space. AJD created the initial webdatasets used to train MindEye and created various model architectures to compare different mapping approaches. NV conducted various exploratory experiments mapping voxels to StyleGAN-XL [64] latent space. EY shared code to automatically identify identical images for qualitative comparisons and added code to ensure LAION-5B retrieval did not retrieve ground truth images. DW conducted various exploratory experiments and helped with project discussions. KAN oversaw the project and contributed valuable feedback. TMA oversaw the project, conducted initial explorations using VQGAN [65], and helped keep the project on-track through MedARC and Stability AI communication. |
2309.00936 | Fabrication of low-loss III-V Bragg-reflection waveguides for parametric
down-conversion | Entangled photon pairs are an important resource for quantum cryptography
schemes that go beyond point-to-point communication. Semiconductor
Bragg-reflection waveguides are a promising photon-pair source due to mature
fabrication, integrability, large transparency window in the telecom wavelength
range, integration capabilities for electro-optical devices as well as a high
second-order nonlinear coefficient. To increase performance we improved the
fabrication of Bragg-reflection waveguides by employing fixed-beam-moving-stage
optical lithography, low pressure and low chlorine concentration etching, and
resist reflow. The reduction in sidewall roughness yields a low optical loss
coefficient for telecom wavelength light of alpha_reflow = 0.08(6)mm^(-1).
Owing to the decreased losses, we achieved a photon pair production rate of
8800(300)(mW*s*mm)^(-1) which is 15-fold higher than in previous samples. | Hannah Thiel, Marita Wagner, Bianca Nardi, Alexander Schlager, Robert J. Chapman, Stefan Frick, Holger Suchomel, Martin Kamp, Sven Höfling, Christian Schneider, Gregor Weihs | 2023-09-02T13:22:50Z | http://arxiv.org/abs/2309.00936v1 | # Fabrication of low-loss III-V Bragg-reflection waveguides for parametric down-conversion
###### Abstract
Entangled photon pairs are an important resource for quantum cryptography schemes that go beyond point-to-point communication. Semiconductor Bragg-reflection waveguides are a promising photon-pair source due to mature fabrication, integrability, large transparency window in the telecom wavelength range, integration capabilities for electro-optical devices as well as a high second-order nonlinear coefficient. To increase performance we improved the fabrication of Bragg-reflection waveguides by employing fixed-beam-moving-stage optical lithography, low pressure and low chlorine concentration etching, and resist reflow. The reduction in sidewall roughness yields a low optical loss coefficient for telecom wavelength light of \(\alpha_{\mathrm{reflow}}=0.08\,(6)\,\) mm\({}^{-1}\). Owing to the decreased losses, we achieved a photon pair production rate of 8800 (300) \(\,\)(mW\(\cdot\)s\(\cdot\)mm)\({}^{-1}\) which is 15-fold higher than in previous samples.
## 1 Introduction
Nonlinear optics has a multitude of applications, such as all-optical switching, efficient detection, and multiplexing for increased data transfer rates in fiber networks. Beyond these classical uses, nonlinear optics also plays a central role in quantum communication. Protocols that go beyond point-to-point quantum cryptography, like device-independent schemes and quantum repeaters, rely on entangled photon pairs [1, 2]. A leading method for producing entangled photons is parametric down-conversion. As the need for mass-deployable devices with increased complexity grows, integrable on-chip systems become the only practical solution. While the significance of miniaturized systems is undisputed, it remains uncertain what would constitute the ideal platform.
An ideal photon-pair source has to offer integration capabilities in monolithic or hybrid approaches and be scalable as well as affordable in fabrication [3]. The silicon platform and the maturity of its fabrication make it a natural choice for integration. However, for optical applications in communication, silicon has multiple drawbacks, including the lack of a direct bandgap, low transmission across the telecom C-band at high powers due to two-photon absorption and the lack of a second-order nonlinear coefficient. These limitations make silicon a difficult choice for quantum communication systems using entangled photons [4, 5].
The ideal material parameters for an integrated quantum optical device are a direct bandgap, a high \(\chi^{(2)}\) nonlinear coefficient, high index contrast, and low-loss waveguiding in the telecom C-band. Commonly used material platforms are silicon nitride, which, except for the transparency window, inherits most of the drawbacks of silicon, lithium niobate and KTP, which lack the potential for active components, and indium phosphide and gallium arsenide, which suffer from
losses due to a bandgap around \(900\,\mathrm{n}\mathrm{m}\) wavelength. A material platform without these drawbacks is aluminum gallium arsenide (AlGaAs), which has a large transparency window in the telecom wavelength range, integration capabilities for electro-optical devices like light-emitting diodes, lasers and modulators as well as a high second order nonlinear coefficient [6].
In the form of Bragg-reflection waveguides (BRWs), AlGaAs simultaneously offers waveguiding and nonlinear conversion via modal phase matching [7, 8, 9]. This enables the production of polarization, energy-time, and time-bin entangled photon pairs in the telecom wavelength range [10, 11, 12, 13, 14]. Additionally, owing to the direct bandgap of AlGaAs, a pump laser for nonlinear conversion processes or quantum dots for the creation of single photons can be directly integrated on chip [15, 16, 17, 18]. As such, AlGaAs BRWs offer an ideal resource for quantum communication schemes in existing fiber networks.
While low loss is important in classical optical communication, it is paramount in entanglement-based quantum applications, as both photons of an entangled pair need to be preserved. The fabrication of both classical AlGaAs devices, like lasers, photodetectors and modulators, as well as more complex quantum devices has seen significant advancements and has reached the level of sophistication required for large scale integration [19, 20, 21, 22].
In this article, we report on the fabrication of low-loss AlGaAs BRWs, which serve as sources of correlated photon pairs. By optimizing the etching recipe for near-vertical waveguide sidewalls and using resist reflow, we reduce the root-mean-square area sidewall roughness of our BRWs from \(16.05(2)\,\mathrm{nm}\) to \(4.736(7)\,\mathrm{nm}\) and the corresponding optical loss coefficient from between \(\alpha_{\mathrm{no\,reflow}}=0.23\,(9)\,\mathrm{mm}^{-1}\) and \(0.32\,(9)\,\mathrm{mm}^{-1}\) to \(\alpha_{\mathrm{reflow}}=0.08\,(6)\,\mathrm{mm}^{-1}\). Resist reflow increases the photon pair production rate by around six-fold. Overall, the optimized fabrication recipe leads to a 15-fold increase in photon pair production rate compared to previous samples.
The rest of this paper is organized as follows. We first introduce BRWs and their layer structure needed for mode guiding and phase matching. The second part of the article details the fabrication steps and highlights the measures taken to handle the challenges introduced by the layer stack. The third part focuses on the characterization of BRWs including measurements of the sidewall roughness and the loss coefficients via Fourier transforming the Fabry-Perot transmission spectrum [23]. Finally, we demonstrate the increased correlated photon pair production rates.
## 2 Bragg-reflection waveguides
The AlGaAs material platform features a high \(\chi^{(2)}\) coefficient that allows nonlinear processes like second harmonic generation, difference-frequency generation, and parametric down-conversion (PDC). The efficiency of these processes hinges on the momentum conservation between input and output photons as well as the low-loss guiding of the respective modes in the material. We achieve low loss and phase matching in a BRW made of AlGaAs layers with varying aluminum concentrations. Fig. 1 shows the layer structure and guided modes in a BRW.
A high index core realized via a low aluminum fraction guides two fundamental modes, one TE, the other TM polarized, at \(1550\,\mathrm{n}\mathrm{m}\) wavelength via total internal reflection. Two Bragg mirror stacks above and below the waveguide core confine the so-called Bragg mode at \(775\,\mathrm{n}\mathrm{m}\) wavelength in the vertical direction. These consist of six mirror pairs each. We optimize the confinement by sizing the layer thicknesses at one quarter of the wavelength of the transverse component of the electric field at \(775\,\mathrm{n}\mathrm{m}\) and by maximizing the difference in refractive index between the layers [7]. The ridge structure of the waveguide ensures the confinement of the modes in the horizontal direction. The BRW supports multiple spatial modes and the Bragg mode at \(775\,\mathrm{n}\mathrm{m}\) wavelength is actually a higher order spatial mode.
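To make the quarter-wave sizing rule concrete, the sketch below evaluates the thickness \(t=\lambda/(4\sqrt{n_{\mathrm{layer}}^{2}-n_{\mathrm{eff}}^{2}})\) corresponding to one quarter of the transverse wavelength at \(775\,\mathrm{nm}\); the refractive indices and the effective index of the Bragg mode used here are illustrative placeholders, not the values of the actual wafer design.

```python
import math

def quarter_wave_thickness(wavelength_nm, n_layer, n_eff):
    """Layer thickness equal to one quarter of the transverse wavelength:
    t = lambda / (4 * sqrt(n_layer**2 - n_eff**2))."""
    return wavelength_nm / (4.0 * math.sqrt(n_layer**2 - n_eff**2))

wavelength = 775.0   # Bragg-mode wavelength in nm
n_eff = 3.10         # placeholder effective index of the Bragg mode
layers = {
    "Al0.20Ga0.80As (high index)": 3.50,  # placeholder refractive indices,
    "Al0.63Ga0.37As (low index)": 3.25,   # not the actual wafer design values
}
for name, n in layers.items():
    print(f"{name}: t = {quarter_wave_thickness(wavelength, n, n_eff):.1f} nm")
```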
Adjusting the refractive indices and layer thicknesses of the AlGaAs layers enables modal phase matching between the NIR Bragg and telecom fundamental modes of the waveguide. For optimization, we take into account the increased nonlinear conversion efficiency with decreasing
aluminum fraction as well as the simultaneous decrease in band gap and resulting pump absorption. For a comprehensive guide to BRW design, refer to Ref. [9]. While the elaborate layer structure of the BRWs affords wavelength flexibility and phase matching optimization in nonlinear processes, it poses a challenge for fabrication.
## 3 Fabrication
In the following section, three different fabrication recipes are described. One that we call the "previous" recipe uses e-beam lithography with stitching of write fields and metal mask deposition. It is our starting point for improving the fabrication process of BRWs. The new recipe is detailed in this section and is used in two versions: with and without reflowing the photoresist. The epitaxial layers we use to fabricate BRWs are grown via molecular beam epitaxy on (100) GaAs substrates and are subsequently processed as shown in Fig. 2. We first clean the wafer surface in an oxygen plasma in an inductively coupled plasma reactive ion etcher (ICP-RIE) (Sentech SI 500), the same machine that is later used for etching the waveguides.
Figure 1: Top: Schematic of a BRW with layers of different aluminum concentration. The core (bright red) with a composition of Al\({}_{0.43}\)Ga\({}_{0.57}\)As lies between two optical matching layers and six Bragg mirror pairs above and below the core which are made up of Al\({}_{0.20}\)Ga\({}_{0.80}\)As (orange) and Al\({}_{0.63}\)Ga\({}_{0.37}\)As (dark red). Bottom: Simulations of the electric field absolute value for the TE-polarized Bragg mode at 775 nm wavelength (left) and the total internal reflection mode at 1550 nm wavelength for TE polarization (right). The image for the TM polarization at the latter wavelength is not shown as it is very similar to the TE one.

A photo lithography machine (Microtech LaserWriter) exposes the waveguide pattern into the photoresist. Using a direct laser writer and etch resistant resist allows us to readily adapt our design to different requirements. Another advantage of this method is that it does not require an e-beam lithography machine and the more problematic chemicals needed for the deposition of a metal mask and lift-off. Due to diffraction, the 405 nm wavelength of the UV laser writer limits the feature size. This, however, does not pose any problem for straight waveguides. The lithography mode we employ is fixed-beam-moving-stage (FBMS) to avoid stitching errors between write fields. This is the first step towards achieving smooth sidewalls and decreasing losses due to scattering.
We spin the plasma-resistant photoresist AR-P3740 (Allresist) at 4000 rpm to obtain a 1.4 um thick layer. This allows us to etch the 3.34 um trench depth required for the BRWs with sufficient resist left to protect the surface of the wafer. The photoresist is positive, enabling us to realize the structure shown in Fig. 4 (top), which facilitates integration with other components and systems. Each waveguide is defined by trenches etched on either side with most of the material between waveguides remaining intact. This results in a shorter etch time, as less material needs to be removed, and more stable plasma, as less of the etched-away material accumulates in the etching chamber. Removing less material during the etch also means that the chip can be handled more easily in semi-automatic assembly and flip-chip coupling.
After development of the photoresist in AR 300-35 (Allresist), we reflow the resist in a convection oven for 25 minutes at 140\({}^{\circ}\)C. As the fluidity of the resist increases, its surface smoothens leading to a significant reduction in line edge roughness. Any imperfections present in the photoresist transfer directly to the waveguide sidewall during the etching step and are evidenced by vertical striations. Therefore, the reflow step is the most important in reducing sidewall roughness of the waveguides.
After reflow, we transfer the pattern in the resist to the AlGaAs in the ICP-RIE in a plasma mixture of argon and chlorine species. The etch recipe needs to be finely tuned to achieve near-vertical, smooth sidewalls as well as non-selective etching in the lateral direction with respect to layers of different compositions.
Vertical sidewalls are usually obtained in predominantly chemical etch processes where material is removed mainly via reactions with radicals in the plasma. As radicals gather on the surface of the sample, etching occurs in the lateral direction removing material from the base of a waveguide ridge. While such a chemical etch is fast and results in vertical sidewalls, the major drawback is the anisotropy of the etch at the sidewalls. Layers with lower aluminum concentrations etch faster than those with a higher aluminum fraction, as shown in Fig. 3. The aluminum, having a lower electronegativity than the gallium, tends to react with other components available in the plasma. This could be resist material that has been removed from the surface by the etch gases. The resulting aluminum oxide layers then protect the high aluminum content layers from further chemical etching [24]. This roughness is not homogeneous along the length of the waveguide and thus contributes immensely to scattering losses.
Oxidation films on high aluminum fraction layers can best be avoided in physical etch processes.
Figure 2: BRW fabrication steps include oxygen plasma cleaning of the AlGaAs wafer surface (a), spinning of photoresist (b), photolithographic exposure (c), development (d), photoresist reflow (e), and etching (f).
Here, charged atoms and molecules impinging on the sample surface remove material. This role is mainly played by the argon atoms in our etch recipe. This sputter component is increased by replacing the chlorine with boron trichloride. Boron trichloride undergoes dissociation and recombination processes in the etch chamber and thereby reduces the amount of free chlorine available for the etch. In addition, it reacts with aluminum oxide and water vapor lessening the inhomogeneity of the sidewalls [24].
The chemical mixture of the etch gas alone, however, is not enough to attain vertical and non-selectively etched sidewalls at the same time. We therefore tune the other settings of the ICP-RIE. We flood the etch chamber with 40 SCCM of argon and 4 SCCM of boron trichloride at a low pressure of 0.3 Pa. While the effects of the pressure on the etch strongly depend on all other parameters, we find in this regime that the low pressure avoids the accumulation of reactive species on the sample surface.
The plasma is ignited via an RF-coupled induction coil at the top of the etch chamber. The ICP power is set to a commonly employed value of 400 W. In order to accelerate the ions in the plasma towards the sample, we apply an RF signal that leads to a self-biasing of the sample holder. We obtain high-momentum particles impinging on the sample surface anisotropically by setting the self-bias to a high value of 150 V. This makes the etch sufficiently physical for the removal of oxidation products, leading to homogeneous sidewalls. It also means that the ions reach the lower parts of the already etched sidewall, leading to an almost vertical sidewall otherwise not possible in such a physical etch. Two scanning electron micrographs of a BRW fabricated in this way are shown in Fig. 4. The SEM images show no noticeable inhomogeneity between the different layers on the sidewall. Small vertical striations are visible and will be analyzed in the following section.
## 4 Characterization
We assess the quality of the BRWs by determining the sidewall roughness, optical loss coefficient, and photon pair production rate in PDC. In order to quantify the improvement of the fabrication after adding the reflow step into the recipe, we employ two different methods. We first determine the sidewall roughness using an atomic force microscope (AFM) and then measure the optical loss in the BRW via a Fourier transform analysis of the transmission spectrum for the wavelength range around 1550 nm.
Figure 3: Inhomogeneous lateral etch of layers with different aluminum fraction in high chlorine concentration recipes. Note that this is not a BRW sample but the aluminum fractions in the layers are comparable to BRW wafers.

Assessment of the sidewall roughness is done using an AFM (Nanosurf NaioAFM) with a \(<10\,\mathrm{nm}\) radius cantilever tip (BudgetSensors Tap190GD-G). As we would like to separate the effect of the reflow from the inhomogeneity introduced by the layer stack, we perform this analysis on GaAs waveguides etched in the same recipe as the BRWs.
Figure 4: BRWs with homogeneously etched sidewalls in argon-boron trichloride plasma. A waveguide is defined by a trench on either side (top). The close-up of the sidewall shows the smooth corner between waveguide facet and sidewall (bottom).

Figure 5: After cleaving along a trench between waveguides, we rotate our sample such that the AFM tip has access to the full width and height of the waveguide’s sidewall.

We cleave the sample along the trench on one side of the waveguide and then mount it in the AFM sample holder with the waveguide sidewall facing upward. This offers access for the AFM tip to the entire length and width of the sidewall, as illustrated in Fig. 5. We scan squares of 8 µm width on the samples fabricated with and without reflow and compare them. Fig. 6 shows cutouts of two such profiles corresponding to the middle parts of the sidewalls. These can be reproducibly imaged by the AFM as opposed to the sections near the substrate or the upper edge of the sidewall. In order to improve the illustration of the sidewall roughness in this figure, we enhance both images by removing the same low spatial frequency components from each horizontal line profile. These low components result from the way the cantilever moves across the sample, moving slightly upward in the center of a scanned section. For the quantitative investigation of the sidewall roughness, we use the original AFM data. From the AFM profiles, we calculate the root-mean-square area roughness of the sidewalls
\[\sigma_{\mathrm{RMS}}=\sqrt{\frac{\sum_{N}(z-\bar{z})^{2}}{N-1}} \tag{1}\]
where \(N\) is the number of pixels, \(z\) is the height value recorded for a certain pixel and \(\bar{z}\) is the mean height. We obtain values of 16.05(2) \(\mathrm{nm}\) for the no-reflow samples and 4.736(7) \(\mathrm{nm}\) for the samples fabricated using the reflow method. Absolute area roughness values depend strongly on the cantilever tip and measurement mode, but the relative roughness can be estimated. Here, the area roughness is reduced by a factor of more than three when employing the reflow method.
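As a concrete reading of Eq. (1), the following numpy sketch evaluates the RMS area roughness of a height map; the two synthetic scans stand in for the measured AFM data and merely illustrate the scale of the reported values.

```python
import numpy as np

def rms_area_roughness(height_map_nm):
    """Root-mean-square area roughness of a height map, Eq. (1):
    sigma_RMS = sqrt( sum (z - mean(z))**2 / (N - 1) )."""
    z = np.asarray(height_map_nm, dtype=float).ravel()
    return np.sqrt(np.sum((z - z.mean()) ** 2) / (z.size - 1))

# Synthetic stand-ins for the two 8 um AFM scans (illustrative only)
rng = np.random.default_rng(0)
no_reflow = rng.normal(scale=16.0, size=(256, 256))  # heights in nm
reflow = rng.normal(scale=4.7, size=(256, 256))

print(f"sigma_RMS, no reflow: {rms_area_roughness(no_reflow):.2f} nm")
print(f"sigma_RMS, reflow:    {rms_area_roughness(reflow):.2f} nm")
```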
Figure 6: Cutouts of the full AFM images, with low Fourier components omitted. The sample made without the reflow step shows clear vertical striations near the top of the waveguide and a honeycomb-like structure towards the substrate (top). Reflowing the resist smoothens the sidewall (bottom).

From the findings in the AFM studies on GaAs, we expect the scattering losses in the reflow BRW sample to be substantially lower than in previous samples. This is because we consider the sidewall roughness a variation in the waveguide width, and thus a grating coupler to the radiation modes [25, 26, 27, 28]. The decreased scattering losses are confirmed by measuring the transmission spectrum of the BRW in the telecom wavelength range, where the photons are produced during PDC. We couple a tunable telecom laser (Santec TSL-710) into the waveguide via a 100x microscope objective and collect the output light using an aspheric lens. A powermeter (Thorlabs S122C) measures the transmitted light. The facets of the waveguide have a reflectivity of 35(4)%, meaning the waveguide acts as a weak cavity [23]. As light from multiple reflections interferes at the output facet of the waveguide, the power meter detects interference fringes when the wavelength of the input light is scanned. In a single mode system, one could readily determine the losses in the waveguide from the visibility of these Fabry-Perot fringes. The BRW, however, supports higher order spatial modes including the Bragg mode. Due to the multimodal nature of the waveguide, the fringes exhibit a beat pattern that is more difficult to interpret. In order to modally resolve this transmissivity measurement, we Fourier-transform the transmission spectrum as detailed in Ref. [23].
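The principle of this modal Fourier analysis can be reproduced with a toy model. The sketch below simulates a single-mode weak Fabry-Perot cavity (up to constant prefactors) sampled on an evenly spaced optical-frequency grid and locates the strongest Fourier peak at the round-trip delay; the length, group index and loss value are illustrative assumptions, not the parameters of the measured BRWs.

```python
import numpy as np

c0 = 299_792_458.0        # speed of light in m/s
L = 2.0e-3                # assumed waveguide length: 2 mm
n_g = 3.4                 # assumed group index
R = 0.35                  # facet reflectivity
alpha = 100.0             # assumed loss, 0.1 mm^-1 expressed in m^-1

nu = np.linspace(193.0e12, 196.0e12, 200_000)        # optical frequency grid (Hz)
a = R * np.exp(-alpha * L)                            # round-trip amplitude factor
field = np.exp(-alpha * L / 2) / (1.0 - a * np.exp(2j * np.pi * nu * 2 * n_g * L / c0))
intensity = np.abs(field) ** 2                        # Fabry-Perot fringes vs. frequency

spectrum = np.abs(np.fft.rfft(intensity - intensity.mean()))
delay = np.fft.rfftfreq(nu.size, d=nu[1] - nu[0])     # conjugate variable in seconds

print(f"expected round-trip delay: {2 * n_g * L / c0:.3e} s")
print(f"strongest Fourier peak at: {delay[spectrum.argmax()]:.3e} s")
```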
Figure 7: Fourier transform of the transmission spectrum (inset) for TE polarized input light in the telecom wavelength range for the BRW fabricated without (top) and with reflow (bottom). For the latter, the peak height ratios of early peaks (red line) differ from those of later peaks (orange line). An early peak refers to a peak farther to the left in the plot and corresponds to light that has only survived one or two passes through the waveguide cavity. Light that contributes to a late peak farther to the right has passed through the waveguide four or five times.

The resulting Fourier spectra, shown in Fig. 7, can be interpreted as follows. Light in the waveguide experiences a time delay at every pass through the waveguide and at every reflection from the facets. This time delay corresponds to a translation in Fourier space and manifests as peaks at integer multiples of the resonator length for each mode. The Fourier spectra shown here feature a pattern of one strong peak and smaller side peaks that repeat after a distance of one resonator length. This corresponds to one or more strong modes at similar effective refractive index and weaker modes with different effective refractive indices. The ratio of heights \(\tilde{R}\) of subsequent peaks belonging to one mode thus contains information about the loss during one pass through the waveguide for that specific mode. For a known reflectivity of the facets \(R\) and resonator length \(L\), the loss coefficient can be readily calculated:
\[\alpha=-\frac{1}{L}\mathrm{ln}\left(\frac{\tilde{R}}{R}\right). \tag{2}\]
The optical length \(d\) is proportional to \(L\) via the factor \(\pi/n\), where \(n\) is the group refractive index. We determine the loss coefficient for each ratio of neighboring peak heights separately. Fig. 8 shows a comparison of the loss coefficients of two BRWs made with (orange) and without (grey) the reflow step but coming from the same wafer. The loss coefficient of the latter varies between \(\alpha_{\mathrm{no\,reflow}}=0.23\,(9)\) mm\({}^{-1}\) and \(\alpha_{\mathrm{no\,reflow}}=0.32\,(9)\) mm\({}^{-1}\) without any upward or downward trend. The loss coefficient of the BRW fabricated with reflow is higher than that of the no-reflow sample for the first two peaks in the Fourier spectrum. This is counterintuitive, as we would expect a smoother sidewall leading to lower loss coefficients. However, for later peaks, the loss coefficient drops below that of the BRW made without reflowed resist. It also follows a clear trend, decreasing down to a loss coefficient of \(\alpha_{\mathrm{reflow}}=0.08\,(6)\) mm\({}^{-1}\).
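Equation (2) amounts to a one-line computation once the peak-height ratios have been read off the Fourier spectrum. In the sketch below the sample length and the ratios are hypothetical numbers, chosen only to land in the range of the loss coefficients reported here.

```python
import math

def loss_coefficient(peak_ratio, facet_reflectivity, length_mm):
    """Eq. (2): alpha = -(1/L) * ln(peak_ratio / R), in 1/mm for L in mm."""
    return -math.log(peak_ratio / facet_reflectivity) / length_mm

R = 0.35           # measured facet reflectivity
L = 2.0            # assumed sample length in mm
for ratio in (0.20, 0.29, 0.30):      # hypothetical neighbouring-peak height ratios
    print(f"ratio {ratio:.2f} -> alpha = {loss_coefficient(ratio, R, L):.3f} mm^-1")
```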
In the waveguide manufactured without reflow, all but the fundamental telecom mode are strongly damped. Mode simulations show that higher-order modes exhibit higher field strengths closer to the waveguide surface and are thus more easily scattered by increased surface roughness. Thus, mainly the fundamental mode survives. When the sidewall roughness is reduced by reflowing the resist, another, higher-order mode with higher loss can also propagate within the waveguide. This mode has an effective refractive index difference to the fundamental mode of only about 0.01 at 1535 nm wavelength. This higher-loss mode dominates the early peak height ratios, causing high loss coefficients. As its contribution diminishes, the losses go down, revealing how much the reflow step improved the propagation of the fundamental mode. Overall, the losses for TE polarized light in the telecom wavelength range were reduced from between \(\alpha_{\mathrm{no\,reflow}}=0.23\,(9)\) mm\({}^{-1}\) and \(0.32\,(9)\) mm\({}^{-1}\) to \(\alpha_{\mathrm{reflow}}=0.08\,(6)\) mm\({}^{-1}\).
Reducing the losses on this scale has a significant effect on nonlinear conversion processes.
Figure 8: Loss coefficients \(\alpha\) derived from the Fourier transform of the transmission spectrum of two BRWs fabricated with reflow shown in orange and those fabricated without in grey. The bars represent the loss coefficients calculated from the ratios of neighboring peaks in the Fourier spectrum.
We test this by measuring the coincidence rates between the signal and idler photons produced in PDC. To do so, we pump the sample with a NIR laser (MSquared SolsTis 6W PSX-R) set to the phase-matching wavelength. A dichroic mirror placed at the output of the BRW filters out the remaining pump light. The down-converted photon pair is orthogonally polarized and can therefore be split up at a polarizing beam splitter and relayed to two superconducting nanowire single photon detectors (Single Quantum Eos 720 CS). We measure the coincidence rate between the two detectors for a pump power of 1 mW and normalize to the length of the waveguide sample. Table 1 lists the results for the BRWs fabricated using the recipe with and without reflow detailed above, and the previous recipe employing e-beam lithography and a metal mask for etching. The coincidence rate and therefore the conversion efficiency is highest in the sample fabricated using the reflow step. Both of the recipes employing FBMS optical lithography perform better than the recipe using e-beam lithography and metal mask deposition. By adding the reflow step to the recipe, the coincidence rate could be increased by around six-fold. Note that the measured coincidence rates depend on the coupling efficiencies in the setup. The values shown here were measured at similar conditions in the setup to allow a comparison. However, much higher coincidence rates of 89 (5) \(\,\mathrm{Hz}/\mu\mathrm{W}\) have been achieved in the meantime using the reflow-sample in an optimized setup [29]. Such an improvement in performance through careful optimization of the fabrication recipe means that BRWs are a reliable option where an integrable source for photon pairs is needed.
## 5 Conclusion and outlook
We optimized the fabrication recipe for BRWs to yield samples with lower optical loss and therefore higher photon pair production rates. By employing FBMS optical lithography, low pressure and low chlorine concentration etching, and resist reflow, we reduced the RMS area sidewall roughness in test GaAs sample waveguides from 16.05(2) nm to 4.736(7) nm. For our BRWs, the improved etch recipe results in a low optical loss coefficient for telecom wavelength light of \(\alpha_{\mathrm{reflow}}=0.08\,(6)\) mm\({}^{-1}\). The lower optical losses lead to higher coincidence rates between the signal and idler photons created in PDC. The rate of 8800 (300) \(\,\mathrm{(mW\cdot s\cdot mm)}^{-1}\) is around a six-fold increase compared to samples produced without the reflow step and a factor of around 15 better than previous samples. This shows that a fabrication recipe benefits from optimization of every manufacturing step. Resist reflow is a particularly important step in reducing the sidewall roughness of waveguides, which is one of the main factors contributing to optical losses in miniaturized optical devices. The improved fabrication and resulting low loss BRWs testify to the adequacy of the AlGaAs platform for quantum communication applications.
Funding. The authors acknowledge funding by the Uniqorn project (Horizon 2020 grant agreement no. 820474) and the BeyondC project (FWF project no. F7114).
Acknowledgments. We thank Felix Laimer and Elisabeth Gruber for help with atomic force microscopy and Markus Weiss for support and fruitful discussions in the cleanroom.
Author contributions. Conceptualization, H.T., R.J.C., S.F., G.W.; Formal analysis, H.T., M.W., S.F.; Methodology, H.T., M.W., R.J.C., S.F.; Investigation, H.T., M.W., B.N., A.S.; Resources, H.S., M.K., S.H., C.S.; Software, H.T., S.F.; Supervision, R.J.C., S.F., C.S., G.W.; Writing - original draft, H.T.; Writing - review & editing, All Authors; Funding acquisition, C.S., G.W.

Table 1: Comparison of coincidence counts measured per second per mm waveguide length at 1 mW external pump power for the three different fabrication recipes. The value for the previous recipe is taken from Ref. [30].

| Recipe | reflow | no reflow | previous |
| --- | --- | --- | --- |
| Coincidences \((\mathrm{mW\cdot s\cdot mm})^{-1}\) | 8800(300) | 1490(60) | 600(100) |
## Disclosures
The authors have nothing to disclose.
Data availability. Data underlying the results presented in this paper are available at 10.5281/zenodo.7702404.
|
2304.00712 | Taylor Polynomials of Rational Functions | A Taylor variety consists of all fixed order Taylor polynomials of rational
functions, where the number of variables and degrees of numerators and
denominators are fixed. In one variable, Taylor varieties are given by rank
constraints on Hankel matrices. Inversion of the natural parametrization is
known as Pad\'e approximation. We study the dimension and defining ideals of
Taylor varieties. Taylor hypersurfaces are interesting for projective geometry,
since their Hessians tend to vanish. In three and more variables, there exist
defective Taylor varieties whose dimension is smaller than the number of
parameters. We explain this with Fr\"oberg's Conjecture in commutative algebra. | Aldo Conca, Simone Naldi, Giorgio Ottaviani, Bernd Sturmfels | 2023-04-03T04:07:58Z | http://arxiv.org/abs/2304.00712v1 | # Taylor Polynomials of Rational Functions
###### Abstract
A Taylor variety consists of all fixed order Taylor polynomials of rational functions, where the number of variables and degrees of numerators and denominators are fixed. In one variable, Taylor varieties are given by rank constraints on Hankel matrices. Inversion of the natural parametrization is known as Pade approximation. We study the dimension and defining ideals of Taylor varieties. Taylor hypersurfaces are interesting for projective geometry, since their Hessians tend to vanish. In three and more variables, there exist defective Taylor varieties whose dimension is smaller than the number of parameters. We explain this with Froberg's Conjecture in commutative algebra.
## 1 Introduction
Given two polynomials \(P\) and \(Q\) whose constant term is \(1\), the rational function \(P/Q\) has a Taylor series expansion with constant term \(1\). Truncating that series at terms of degree \(m\), we obtain the \(m\)th Taylor polynomial of \(P/Q\). Its coefficients are polynomials in the coefficients of \(P\) and \(Q\). For example, consider the fifth Taylor polynomial of two univariate quadrics:
\[\begin{array}{rcl}\frac{1+p_{1}x+p_{2}x^{2}}{1+q_{1}x+q_{2}x^{2}}&=&\begin{array} []{l}1+(p_{1}-q_{1})x-(p_{1}q_{1}-q_{1}^{2}-p_{2}+q_{2})x^{2}+(p_{1}q_{1}^{2}- q_{1}^{3}-p_{1}q_{2}-p_{2}q_{1}+2q_{1}q_{2})x^{3}\\ -\,(p_{1}q_{1}^{3}-q_{1}^{4}-2p_{1}q_{1}q_{2}-p_{2}q_{1}^{2}+3q_{1}^{2}q_{2}+p_ {2}q_{2}-q_{2}^{2})x^{4}+(p_{1}q_{1}^{4}-q_{1}^{5}\\ -3p_{1}q_{1}^{2}q_{2}-p_{2}q_{1}^{3}+4q_{1}^{3}q_{2}+p_{1}q_{2}^{2}+2p_{2}q_{1} q_{2}-3q_{1}q_{2}^{2})x^{5}\,+\,\ldots\end{array} \tag{1}\]
This rational function has four parameters \(p_{1},p_{2},q_{1},q_{2}\), but the quintic has five coefficients:
\[1\,+\,c_{1}x\,+\,c_{2}x^{2}\,+\,c_{3}x^{3}\,+\,c_{4}x^{4}\,+\,c_{5}x^{5}\,+\, \cdots\,. \tag{2}\]
Therefore, if (2) is a Taylor polynomial (1), then its coefficients \(c_{i}\) must satisfy a constraint.
To find this constraint, we equate (2) with the right hand side of (1), and we extract the coefficients of all \(x\)-monomials. This yields a system of five polynomial equations. From that system we eliminate the four unknowns \(p_{1},p_{2},q_{1},q_{2}\). This process leads to the cubic equation
\[\det\begin{bmatrix}c_{5}&c_{4}&c_{3}\\ c_{4}&c_{3}&c_{2}\\ c_{3}&c_{2}&c_{1}\end{bmatrix}\ =\ c_{1}c_{3}c_{5}-c_{1}c_{4}^{2}-c_{2}^{2}c_{5}+2 c_{2}c_{3}c_{4}-c_{3}^{3}\quad=\quad 0. \tag{3}\]
This is the condition for a quintic to be a Taylor polynomial for the ratio of two quadrics.
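This elimination is easy to verify in a computer algebra system. The following sympy sketch expands the series (1) symbolically and confirms that the determinant (3) vanishes identically in the parameters \(p_{1},p_{2},q_{1},q_{2}\).

```python
import sympy as sp

x, p1, p2, q1, q2 = sp.symbols('x p1 p2 q1 q2')
P = 1 + p1*x + p2*x**2
Q = 1 + q1*x + q2*x**2

# Taylor coefficients c_0,...,c_5 of P/Q at x = 0, as polynomials in p1, p2, q1, q2
T = sp.expand(sp.series(P / Q, x, 0, 6).removeO())
c = [T.coeff(x, k) for k in range(6)]

# The Hankel determinant (3) vanishes identically on the image of the parametrization
H = sp.Matrix([[c[5], c[4], c[3]],
               [c[4], c[3], c[2]],
               [c[3], c[2], c[1]]])
print(sp.expand(H.det()))   # prints 0
```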
For a geometric view, note that (1) defines a polynomial map from the 4-space with coordinates \((p_{1},p_{2},q_{1},q_{2})\) into the 5-space with coordinates \((c_{1},c_{2},c_{3},c_{4},c_{5})\). The closure of
the image of this map is called Taylor variety (see Definition 1.1). For the example in (1), the Taylor variety is the hypersurface in projective 4-space \(\mathbb{P}^{4}\) defined by the equation (3). In this article we study such algebraic varieties of truncated rational series in \(n\geq 1\) variables.
The general case concerns a rational function \(P/Q\) where \(P\) and \(Q\) are polynomials in \(x=(x_{1},\ldots,x_{n})\) that satisfy \(P(0)=Q(0)=1\). We assume that \(P\) has degree \(\leq d\) and \(Q\) has degree \(\leq e\). We consider the Taylor series of this rational function, expanded up to order \(m\):
\[\frac{P(x)}{Q(x)}\quad=\quad\sum_{|\gamma|\leq m}c_{\gamma}\,x^{\gamma}\qquad+ \quad\text{terms of order}\,\geq m+1. \tag{4}\]
Here \(x^{\gamma}\) denotes the monomial \(x_{1}^{\gamma_{1}}\cdots x_{n}^{\gamma_{n}}\) and \(|\gamma|=\gamma_{1}+\cdots+\gamma_{n}\) is the total degree of \(x^{\gamma}\).
The main point of this paper is the characterization of all polynomials \(T=\sum c_{\gamma}x^{\gamma}\) that admit such an approximation. The numerator \(P\) and the denominator \(Q\) in (4) have degrees \(d\) and \(e\) respectively. We call (4) a _Pade approximation_ of type \((d,e)\). Such approximations of analytic functions originated in work of Hermite [11] and Pade [15]. This topic belongs to numerical analysis, where it is studied mostly in the univariate case [2]. The computation of a single Pade approximant is an instance of a block Toeplitz linear algebra problem [4]. Pade approximation is also a classical question of computer algebra; see _e.g._[3]. The Pade approximation problem for \(n\geq 2\) can be interpreted as a problem of computing syzygies [12].
If the number of parameters (_i.e._ the coefficients of \(P\) and \(Q\)) is small compared to the approximation order \(m\), and if \(n=1\), then the rational function \(P/Q\) is uniquely determined by its Taylor polynomial \(T\). This is noted _e.g._ in [2, Section 1.1]. For us, this means that the parametrization \((P,Q)\mapsto T\) is an injective map. This holds for \(n=1\) but fails for \(n\geq 3\). We shall see that the fibers of the map \((P,Q)\mapsto T\) can be positive-dimensional, _i.e._ the uniqueness of the Pade approximation breaks down. The geometric study in this article thus represents a foundational contribution to the theory of multivariate Pade approximations.
All students of calculus are familiar with Taylor series expansions. We therefore chose the name Taylor instead of the name Pade for the geometric object we shall investigate.
**Definition 1.1**.: The _Taylor variety_ \(\mathcal{T}^{n}_{d,e,m}\) is defined as the closure in \(\mathbb{P}^{\binom{n+m}{n}-1}\) of the set of Taylor polynomials of degree \(\leq m\) of rational functions (4) of degree \((d,e)\) in \(n\) variables.

The projective space \(\mathbb{P}^{\binom{n+m}{n}-1}\) comprises polynomials in \(n\) variables of degree \(\leq m\), up to scaling. Polynomials with \(c_{0}=1\) form an affine open chart \(\mathbb{C}^{\binom{n+m}{n}-1}\). We here work over \(\mathbb{C}\), but our theory extends to all fields. Our computations are done over the rational numbers \(\mathbb{Q}\).
The Taylor variety \(\mathcal{T}^{n}_{d,e,m}\) is irreducible since it arises from the image of a polynomial map. A natural first question is: what is its dimension? Let's start by counting parameters. The polynomial \(P\) has \({n+d\choose n}-1\) free coefficients, and the polynomial \(Q\) has \({n+e\choose n}-1\) free coefficients. Since the dimension can never increase under a polynomial map, we conclude
\[\dim\bigl{(}\mathcal{T}^{n}_{d,e,m}\bigr{)}\ \leq\ \min\bigl{\{}\,{d+n\choose n}+{e+n \choose n}-2\,,\,{m+n\choose n}-1\,\bigr{\}}. \tag{5}\]
The quantity on the right hand side is the _expected dimension_ of the Taylor variety \(\mathcal{T}^{n}_{d,e,m}\). In our example (1), which is the case \(n=1,d=e=2,m=5\), the expected dimension is 4. And, indeed, \(\mathcal{T}^{1}_{2,2,5}\) is the cubic hypersurface in \(\mathbb{P}^{5}\) defined by (3), so its dimension equals 4.
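The right hand side of (5) is elementary to evaluate. A short helper like the one below suffices for tabulating expected dimensions; it already flags the case \(n=3\), \(d=e=2\), \(m=3\), whose actual dimension turns out to be one less than expected (see Proposition 4.1).

```python
from math import comb

def expected_dim(n, d, e, m):
    """Right hand side of (5): the expected dimension of the Taylor variety."""
    return min(comb(d + n, n) + comb(e + n, n) - 2, comb(m + n, n) - 1)

print(expected_dim(1, 2, 2, 5))   # 4: the cubic hypersurface (3) in P^5
print(expected_dim(3, 2, 2, 3))   # 18 expected; the true dimension is 17 (Proposition 4.1)
```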
We now discuss the organization of this paper and we summarize our results. Section 2 resolves the univariate case (\(n=1\)). Here the dimension equals the expected dimension. The variety \(\mathcal{T}^{1}_{d,e,m}\) lives in \(\mathbb{P}^{m}\) and it has dimension \(d+e\), provided \(d+e<m\). Its prime ideal is generated by the maximal minors of a Hankel matrix with \(m-d\) rows and \(e+1\) columns (Theorem 2.3). Computing the kernel of this matrix is a key step for Pade approximation.
In Section 3 we turn to \(n\geq 2\). We introduce the Pade matrix, which has a block Hankel structure. Its ideal of maximal minors is generally not radical and can have multiple irreducible components. Among them is the Taylor variety \(\mathcal{T}^{n}_{d,e,m}\). In Theorem 3.4 we identify its prime ideal \(\mathcal{I}^{n}_{d,e,m}\). Computationally-minded readers can jump to Example 3.7 right now.
Our key finding is that Taylor varieties can be _defective_, _i.e._ the inequality in (5) is strict. The smallest instance (\(n=3,d=e=2,m=3\)) is worked out in detail in Proposition 4.1.
In Section 4 we focus on Taylor varieties that are defined by a single polynomial, so they have codimension one in their ambient space. Some of these _Taylor hypersurfaces_ exhibit a property that is of interest in projective geometry, namely their Hessian vanishes identically.
In Section 5 we derive a general formula for the dimension of the Taylor variety \(\mathcal{T}^{n}_{d,e,m}\). This enables the computations, reported in Table 1, which culminate in Conjecture 5.5.
In Section 6 we recast our dimension formula in terms of Hilbert functions of ideals of generic forms. This yields a link to Froberg's Conjecture, which is a longstanding open problem in commutative algebra. In Theorem 6.2 and in Corollary 6.4, this connection is used to prove Conjecture 5.5 in some special cases. We are grateful to Christian Krattenthaler for suggesting the proof of Theorem 6.2 and for allowing us to include it in our paper.
## 2 One Variable
We consider polynomials in one variable \(x\) of the form \(T=1+c_{1}x+c_{2}x^{2}+\cdots+c_{m}x^{m}\). The set of these polynomials is identified with the vector space \(\mathbb{C}^{m}\) with coordinates \((c_{1},c_{2},\ldots,c_{m})\). Similarly, we identify \(\mathbb{C}^{d}\) and \(\mathbb{C}^{e}\) respectively with the spaces of polynomials \(P\) of degree \(\leq d\) and \(Q\) of degree \(\leq e\) such that \(P(0)=Q(0)=1\). We are interested in the polynomial map
\[\psi\,:\,\mathbb{C}^{d}\times\mathbb{C}^{e}\,\to\,\mathbb{C}^{m}\,,\,\,(P,Q) \,\mapsto\,\,\text{the order $m$ Taylor polynomial of }\,P/Q. \tag{6}\]
We fix the projective space \(\mathbb{P}^{m}\) with coordinates \((c_{0}:c_{1}:\cdots:c_{m})\). The Taylor variety \(\mathcal{T}^{1}_{d,e,m}\) is the closure in \(\mathbb{P}^{m}\) of the image of \(\psi\). In words, \(\mathcal{T}^{1}_{d,e,m}\) is the smallest projective variety containing all polynomials \(T\) of degree \(m\) whose Pade approximation of type \((d,e)\) is exact. We first exclude the trivial case when the Taylor variety fills its ambient projective space.
**Lemma 2.1**.: _Assume \(d+e\geq m\). Then \(\mathcal{T}^{1}_{d,e,m}=\mathbb{P}^{m}\)._
Proof.: We assume \(d+e=m\) and claim that the map \(\psi\) in (6) is dominant. This covers the case \(d+e\geq m\) since \(d\) and \(e\) are upper bounds on the degrees. We note that the product \(QT=(1+\sum_{i=1}^{e}q_{i}x^{i})(\sum_{i=0}^{m}c_{i}x^{i})\) is a polynomial of degree \(\leq d\) modulo \(x^{m+1}\) if and only if
\[\begin{bmatrix}c_{m-1}&c_{m-2}&\cdots&c_{d}\\ c_{m-2}&c_{m-3}&\cdots&c_{d-1}\\ \vdots&\vdots&\ddots&\vdots\\ c_{d}&c_{d-1}&\cdots&c_{d-e+1}\end{bmatrix}\cdot\begin{bmatrix}q_{1}\\ q_{2}\\ \vdots\\ q_{e}\end{bmatrix}\ \ =\ \ -\begin{bmatrix}c_{m}\\ c_{m-1}\\ \vdots\\ c_{d+1}\end{bmatrix}, \tag{7}\]
where \(c_{i}=0\) if \(i<0\). Let \(\mathcal{U}\subset\mathbb{P}^{m}\) be the non-empty Zariski open set of all polynomials \(T\) such that the matrix on the left side of (7) is non-singular. Every \(T\in\mathcal{U}\) has a unique exact Pade approximation \(P/Q\) of type \((d,e)\). This shows that \(\mathcal{U}\subseteq\psi(\mathbb{C}^{d}\times\mathbb{C}^{e})\), as claimed.
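For readers who wish to experiment, here is a small sympy sketch of the linear system (7) for \(d=e=2\) and \(m=4\), applied to the Taylor polynomial of \(\exp(x)\). It recovers the classical \([2/2]\) Pade approximant and checks that the approximation is exact to order \(m\).

```python
import sympy as sp

x = sp.symbols('x')
d, e = 2, 2
m = d + e

T = sp.series(sp.exp(x), x, 0, m + 1).removeO()       # Taylor polynomial of order m
c = [T.coeff(x, k) for k in range(m + 1)]

# Hankel system (7): the coefficients of x^(d+1),...,x^m in Q*T must vanish
A = sp.Matrix(e, e, lambda i, j: c[m - 1 - i - j] if m - 1 - i - j >= 0 else 0)
b = -sp.Matrix([c[m - i] for i in range(e)])
q = A.solve(b)                                        # (q_1, ..., q_e)

Q = 1 + sum(q[k] * x**(k + 1) for k in range(e))
P = (sp.expand(Q * T) + sp.O(x**(d + 1))).removeO()   # numerator: Q*T truncated at degree d

print('Q =', Q)                                       # 1 - x/2 + x**2/12
print('P =', P)                                       # 1 + x/2 + x**2/12
print(sp.series(P / Q - T, x, 0, m + 1))              # O(x**5): exact to order m
```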
From now on we assume \(d+e<m\). From the proof of Lemma 2.1, one deduces that the Pade approximation is unique when \(T\in\mathcal{T}^{1}_{d,e,m}\) is generic. The map \(\psi\) in (6) is birational onto its image. In particular, the Taylor variety \(\mathcal{T}^{1}_{d,e,m}\) has the expected dimension \(d+e\).
Fix the set of monomials \(M_{d+1,m}=\{x^{d+1},\ldots,x^{m}\}\). Multiplication by a polynomial \(T=1+c_{1}x+\cdots+c_{m}x^{m}\) defines a linear map from \(\mathbb{C}[x]_{\leq e}\) to \(\mathbb{C}[x]_{\leq e+m}\). Composing this map with the projection onto the linear span of \(M_{d+1,m}\) in the polynomial ring \(\mathbb{C}[x]\) yields
\[\begin{array}{ccccc}\varphi_{T}:&\mathbb{C}[x]_{\leq e}&\to&\mathbb{C}[x]_{ \leq e+m}&\to&\mathbb{C}\{M_{d+1,m}\},\\ &Q&\mapsto&QT&\mapsto&QT\,\text{ restricted to }M_{d+1,m}.\end{array} \tag{8}\]
We fix the monomial bases for both the domain and the image. For the domain we order that monomial basis by increasing degree, and for the image we order it by decreasing degree. With this convention, the \((m-d)\times(e+1)\) matrix that represents the linear map \(\varphi_{T}\) equals
\[P_{T}\ =\ \begin{bmatrix}c_{m}&c_{m-1}&\cdots&c_{m-e}\\ c_{m-1}&c_{m-2}&\cdots&c_{m-e-1}\\ \vdots&\vdots&\ddots&\vdots\\ c_{d+2}&c_{d+1}&\cdots&c_{d-e+2}\\ c_{d+1}&c_{d}&\cdots&c_{d-e+1}\end{bmatrix}. \tag{9}\]
Here \(d-e+1\) is allowed to be negative, and we set \(c_{i}=0\) whenever \(i<0\). By our assumption, the number \(m-d\) of rows of \(P_{T}\) is greater or equal to the number \(e+1\) of columns of \(P_{T}\).
**Example 2.2** (\(d=0,e=m-1\)).: This extreme case is the variety of reciprocals of polynomials, expanded up to one order beyond their degree. Here the matrix \(P_{T}\) is square, and the last row and the last column each have only two non-zero entries. For instance, for \(m=4\) we have
\[P_{T}\ =\ \begin{bmatrix}c_{4}&c_{3}&c_{2}&c_{1}\\ c_{3}&c_{2}&c_{1}&c_{0}\\ c_{2}&c_{1}&c_{0}&0\\ c_{1}&c_{0}&0&0\end{bmatrix}.\]
The determinant of this \(m\times m\) matrix is irreducible. Our next result generalizes this.
We now turn to some commutative algebra in the polynomial ring \(\mathbb{C}[c]=\mathbb{C}[c_{0},c_{1},\ldots,c_{m}]\). For a polynomial matrix \(X\) and \(s\in\mathbb{N}\) we will denote by \(I_{s}(X)\) the ideal generated by all \(s\times s\) minors of \(X\), and we set \(I_{\max}(X)=I_{r}(X)\) when \(r\) denotes the rank of \(X\).
**Theorem 2.3**.: _The ideal \(I_{e+1}(P_{T})\) generated by the \((e+1)\times(e+1)\) minors of the matrix in (9) is prime in \(\mathbb{C}[c]\). It defines an irreducible projective variety of dimension \(d+e\) in \(\mathbb{P}^{m}\)._
Proof.: A generic Hankel matrix is \(1\)-generic in the sense of Eisenbud [7, 8]. Indeed, in [7, Theorem 1], Eisenbud proves a beautiful result which states that the ideal of maximal minors of a \(1\)-generic matrix of any format \(a\times b\) (with \(a\geq b\)) is prime of the expected codimension \(a-b+1\), and the same statement is true for any linear section of codimension \(\leq b-2\).
If \(e\leq d+1\) then \(P_{T}\) is a generic Hankel matrix and our conclusion follows immediately. Suppose now that \(e>d+1\). The \((m-d)\times(e+1)\) matrix \(P_{T}\) is obtained from the generic Hankel matrix by setting to \(0\) at most \(e-1\) of the variables in that matrix. Hence the result mentioned above applies, and we conclude that \(I_{e+1}(P_{T})\) is prime of codimension \(m-d-e\). This particular application of Eisenbud's result on \(1\)-generic matrices to special coordinate sections of generic Hankel matrices appears in more explicit form in [6, Proposition 2.4].
**Corollary 2.4**.: _The Taylor variety \(\mathcal{T}^{1}_{d,e,m}\subset\mathbb{P}^{m}\) is irreducible of dimension \(\min\{d+e,m\}\). If \(d+e<m\), then \(\mathcal{T}^{1}_{d,e,m}=\{\,T\in\mathbb{P}^{m}\;:\;\operatorname{rank}(P_{T}) \leq e\,\}\) and its prime ideal equals \(I_{e+1}(P_{T})\)._
Proof.: The variety \(\mathcal{T}^{1}_{d,e,m}\) is irreducible, as it is the closure of the image of a polynomial map. If \(d+e\geq m\), then \(\mathcal{T}^{1}_{d,e,m}=\mathbb{P}^{m}\) by Lemma 2.1, and we are done. We thus assume \(d+e<m\).
Theorem 2.3 tells us that \(V(I_{e+1}(P_{T}))\) is irreducible. We claim that it equals \(\mathcal{T}^{1}_{d,e,m}\). If \(T\in\psi(\mathbb{C}^{d}\times\mathbb{C}^{e})\) then there exists \(Q\in\mathbb{C}[x]_{\leq e}\) with \(Q(0)=1\) and the terms of \(QT\) of degree from \(d+1\) to \(m\) vanish. In matrix notation, \(P_{T}\,(1,q_{1},\ldots,q_{e})^{t}=\mathbf{0}^{t}\). Hence the \((e+1)\times(e+1)\) minors of \(P_{T}\) are zero. Therefore, \(V(I_{e+1}(P_{T}))\) contains \(\psi(\mathbb{C}^{d}\times\mathbb{C}^{e})\) and its closure \(\mathcal{T}^{1}_{d,e,m}\).
For the converse, we will show that \(\mathcal{T}^{1}_{d,e,m}\) contains an open subset of \(V(I_{e+1}(P_{T}))\). Let \(\mathcal{C}\subset V(I_{e+1}(P_{T}))\) be the closed set of all \(T\) such that the last \(e\) columns of \(P_{T}\) are linearly dependent. Then \(\mathcal{V}=V(I_{e+1}(P_{T}))\backslash\mathcal{C}\) is non-empty and open in \(V(I_{e+1}(P_{T}))\). If \(T\in\mathcal{V}\) then the kernel of \(P_{T}\) has a vector with first coordinate \(1\). Hence \(T\in\psi(\mathbb{C}^{d}\times\mathbb{C}^{e})\), and \(\mathcal{V}\subset\mathcal{T}^{1}_{d,e,m}\). We conclude that \(V(I_{e+1}(P_{T}))=\mathcal{T}^{1}_{d,e,m}\), and hence \(\,I(\mathcal{T}^{1}_{d,e,m})=\sqrt{I_{e+1}(P_{T})}=I_{e+1}(P_{T})\).
**Example 2.5** (\(e=d+1\)).: The degree of the numerator is one less than that of the denominator. Here the Taylor variety \(\mathcal{T}^{1}_{d,d+1,m}\) is a secant variety of the rational normal curve, and
\[P_{T}\;=\;\begin{bmatrix}c_{m}&c_{m-1}&\cdots&c_{m-d-1}\\ c_{m-1}&c_{m-2}&\cdots&c_{m-d-2}\\ \vdots&\vdots&\ddots&\vdots\\ c_{d+1}&c_{d}&\cdots&c_{0}\end{bmatrix}\]
is the catalecticant in degrees \((m-d-1,d+1)\) of the binary form \(f=\sum_{i=0}^{m}\binom{m}{i}c_{i}X^{i}Y^{m-i}\). In symbols, \(\mathcal{T}^{1}_{d,d+1,m}=\sigma_{d+1}(v_{1,m}(\mathbb{P}^{1}))\,=\,\{T:\operatorname{rank}(P_{T})\leq d+1\}\). This variety comprises binary forms \(f\) of degree \(m\) that are sums of \(d+1\) \(m\)-th powers of linear forms in \(X\) and \(Y\).
When \(m=2d+2\), we get a hypersurface. For instance, for \(m=4,d=1\), the hypersurface \(\mathcal{T}^{1}_{1,2,4}\subset\mathbb{P}^{4}\) is defined by the \(3\times 3\) Hankel matrix. It comprises quartics \(f=c_{0}Y^{4}+4c_{1}XY^{3}+6c_{2}X^{2}Y^{2}+4c_{3}X^{3}Y+c_{4}X^{4}\) that are sums of at most two fourth powers of linear forms.
We conclude by summarizing the four cases of what the Taylor variety for \(n=1\) can be:
* If \(e+d\geq m\) then \(\mathcal{T}^{1}_{d,e,m}\) is equal to \(\mathbb{P}^{m}\).
* If \(e+d<m\) and \(e=d+1\) then \(\,\mathcal{T}^{1}_{d,d+1,m}=\sigma_{d+1}(v_{1,m}(\mathbb{P}^{1}))\subset\mathbb{P} ^{m}\,\) is the \((d+1)\)-st secant variety to the rational normal curve of degree \(m\).
* If \(e+d<m\) and \(e<d+1\), then our Taylor variety is a cone with apex \(\mathbb{P}^{d-e}\), namely \[\mathcal{T}^{1}_{d,e,m}\ =\ \mathcal{T}^{1}_{e-1,e,D}\,\star\,\mathbb{P}^{d-e} \ =\ \sigma_{e}(v_{1,D}(\mathbb{P}^{1}))\,\star\,\mathbb{P}^{d-e}\ \subset\ \mathbb{P}^{m}\] with \(D=m-d+e-1\). This is like the last case, with coordinates \(c_{0},\ldots,c_{d-e}\) missing.
* If \(e+d<m\) and \(e>d+1\), then the Taylor variety is a linear section of the secant variety above. Namely, setting \(D^{\prime}=m+d-e+1\), we have \(\mathcal{T}^{1}_{d,e,m}=\sigma_{e}(v_{1,D^{\prime}}(\mathbb{P}^{1}))\cap H\) where \(H\subset\mathbb{P}^{D^{\prime}}\) is the linear space defined by the vanishing of \(e-d-1\) coordinates.
We conclude that the Taylor variety for \(n=1\) is a classical object that is well-understood. In the next sections we shall see that the situation is different and more interesting for \(n\geq 2\).
## 3 Pade Matrix
The Taylor variety \(\mathcal{T}^{n}_{d,e,m}\subset\mathbb{P}^{\binom{n+m}{n}-1}\) was defined by the parametrization in (4). Section 2 offered an implicit representation for \(n=1\). In what follows, we generalize this to \(n\geq 2\). Let \(\mathbb{C}[x]\) denote the polynomial ring in \(n\) variables \(x=(x_{1},\ldots,x_{n})\). Fix integers \(d,e,m\geq 0\). Write \(\mathbb{C}[x]_{\leq e}\) for the \(\binom{n+e}{n}\)-dimensional space of polynomials of degree at most \(e\), and \(M_{d+1,m}\) for the set of monomials in \(\mathbb{C}[x]_{\leq m}\) but not in \(\mathbb{C}[x]_{\leq d}\). We have \(|M_{d+1,m}|=\binom{n+m}{n}-\binom{n+d}{n}\).
Consider a polynomial \(T=\sum_{|\gamma|\leq m}c_{\gamma}x^{\gamma}\) in \(\mathbb{C}[x]_{\leq m}\) with \(T(0)=c_{0}=1\). Let \(\varphi_{T}\) denote the \(\mathbb{C}\)-linear map defined by the same formula as in (8). But that formula is now meant for a polynomial ring \(\mathbb{C}[x]\) in \(n\) variables \(x=(x_{1},\ldots,x_{n})\). The _Pade matrix_\(P_{T}\) represents \(\varphi_{T}\) with respect to the monomial bases. The rows of \(P_{T}\) are indexed by \(M_{d+1,m}\), ordered decreasingly by total degree, and the columns of \(P_{T}\) are indexed by \(M_{0,e}\), ordered increasingly by total degree. The entry of \(P_{T}\) in row \(x^{\alpha}\in M_{d+1,m}\) and column \(x^{\beta}\in M_{0,e}\) equals \(c_{\alpha-\beta}\) if \(\beta\leq\alpha\) and \(0\) otherwise. The Pade matrix has format \((\binom{m+n}{n}-\binom{d+n}{n})\times\binom{e+n}{n}\), and it has a block Hankel structure. That structure looks like (9) but now the blocks have different sizes.
**Example 3.1** (\(n=2,d=e=2,m=5\)).: Just like in (1), let us consider the fifth Taylor polynomial \(T\) of two quadrics, but now in two variables. The Pade matrix has the structure
\[P_{T}\ =\ \begin{bmatrix}C_{5}&C^{\prime}_{4}&C^{\prime\prime}_{3}\\ C_{4}&C^{\prime}_{3}&C^{\prime}_{2}\\ C_{3}&C_{2}&C_{1}\end{bmatrix}. \tag{10}\]
This looks like the matrix in (3), but \(P_{T}\) has \(15=6+5+4\) rows and \(6=1+2+3\) columns. Each block with index \(i\) represents multiplication with the degree \(i\) component of \(T\), _e.g._ the \(4\times 2\) matrix \(C_{2}:\mathbb{C}[x]_{1}\to\mathbb{C}[x]_{3}\) and the \(5\times 3\) matrix \(C^{\prime}_{2}:\mathbb{C}[x]_{2}\to\mathbb{C}[x]_{4}\) are multiplication with the quadratic part of \(T\). The column \((C_{5},C_{4},C_{3})^{t}\) represents \(\mathbb{C}\to\mathbb{C}\{M_{3,5}\},c\mapsto cT\). \(\blacksquare\)
We now describe the block structure in general, since this will be important later on. Let \(T_{i}\) denote the homogeneous component in degree \(i\) of the inhomogeneous polynomial \(T\in\mathbb{C}[x]_{\leq m}\). For every degree \(j\), multiplication by \(T_{i}\) defines a \(\mathbb{C}\)-linear map \(\mathbb{C}[x]_{j}\to\mathbb{C}[x]_{i+j}\). Let \(C_{i}\) denote the matrix which represents the linear map \(T_{i}\) with respect to the monomial bases. From now on, we do not record the index \(j\), but we assume that \(j\) is clear from the context. The Pade matrix \(P_{T}\) is the aggregate of these blocks. Its block structure is independent of \(n\). With this convention, the matrix \(P_{T}\) in (10) now has \(C^{\prime}_{4}\mapsto C_{4}\), \(C^{\prime}_{3},C^{\prime\prime}_{3}\mapsto C_{3}\) and \(C^{\prime}_{2}\mapsto C_{2}\).
**Example 3.2** (\(d=3,e=4,m=5\)).: For the fifth Taylor polynomial of cubic over quartic,
\[P_{T}\ \ =\ \ \begin{bmatrix}C_{5}&C_{4}&C_{3}&C_{2}&C_{1}\\ C_{4}&C_{3}&C_{2}&C_{1}&C_{0}\end{bmatrix}. \tag{11}\]
The number of rows is \(\binom{n+4}{5}+\binom{n+3}{4}\), and the number of columns is \(\,1+n+\binom{n+1}{2}+\binom{n+2}{3}+\binom{n+3}{4}\). This Pade matrix has a distinguished column vector in its kernel, written schematically as
\[\left[\begin{array}{cccc}0&0&T_{2}T_{0}-T_{1}^{2}&T_{2}T_{1}-T_{3}T_{0}&T_{ 3}T_{1}-T_{2}^{2}\end{array}\right]^{t}. \tag{12}\]
The five entries in (12) are the coefficient vectors of the homogeneous components of the polynomial \(\,(T_{2}T_{0}-T_{1}^{2})+(T_{2}T_{1}-T_{3}T_{0})+(T_{3}T_{1}-T_{2}^{2})\). Do check that this is in the kernel of \(\varphi_{T}\). Our notation alludes to Cramer's rule for the \(2\times 3\) submatrix on the right of \(P_{T}\). \(\blacksquare\)
Let \(\mathcal{I}^{n}_{d,e,m}\) denote the homogeneous prime ideal that defines the irreducible variety \(\mathcal{T}^{n}_{d,e,m}\). This is an ideal in the polynomial ring \(\mathbb{C}[c]=\mathbb{C}\big{[}c_{\gamma}:|\gamma|\leq m\big{]}\). Our goal in this section is to determine the ideal \(\mathcal{I}^{n}_{d,e,m}\) from the Pade matrix \(P_{T}\). We begin with some examples.
**Example 3.3** (\(n=2,m=3\)).: Here the ambient space is \(\mathbb{P}^{9}\) with coordinates \(c_{00},c_{01},\ldots,c_{30}\). The first two cases to consider are \((d,e)=(2,1)\) and \((d,e)=(1,2)\). The Pade matrices are
\[P_{T}=\begin{bmatrix}C_{3}&C_{2}\end{bmatrix}=\begin{bmatrix}c_{30}&0&c_{20} \\ c_{21}&c_{20}&c_{11}\\ c_{12}&c_{11}&c_{02}\\ c_{03}&c_{02}&0\end{bmatrix}\ \ \text{and}\ \ \ P_{T}=\begin{bmatrix}C_{3}&C_{2}&C_{1}\\ C_{2}&C_{1}&C_{0}\end{bmatrix}=\begin{bmatrix}c_{30}&0&c_{20}&0&0&c_{10}\\ c_{21}&c_{20}&c_{11}&0&c_{10}&c_{01}\\ c_{12}&c_{11}&c_{02}&c_{10}&c_{01}&0\\ c_{03}&c_{02}&0&c_{01}&0&0&0\\ c_{20}&0&c_{10}&0&0&c_{00}\\ c_{11}&c_{10}&c_{01}&0&c_{00}&0\\ c_{02}&c_{01}&0&c_{00}&0\end{bmatrix}.\]
In both cases, the prime ideal \(\mathcal{I}^{2}_{d,e,3}\) is generated by the maximal minors of \(P_{T}\), and it is Cohen-Macaulay of codimension \(2\). We conclude that \(\mathcal{T}^{2}_{2,1,3}\) has degree \(6\) and its ideal \(\mathcal{I}^{2}_{2,1,3}\) is generated by four cubics, while \(\mathcal{T}^{2}_{1,2,3}\) has degree \(21\) and \(\mathcal{I}^{2}_{1,2,3}\) is generated by seven sextics.
The case \((d,e)=(1,1)\) is more interesting. The \(4\)-dimensional Taylor variety \(\mathcal{T}^{2}_{1,1,3}\) represents Taylor cubics for ratios of two bivariate linear polynomials. Its Pade matrix equals
\[P_{T}\ =\ \begin{bmatrix}C_{3}&C_{2}\\ C_{2}&C_{1}\end{bmatrix}\ =\ \begin{bmatrix}c_{30}&0&c_{20}\\ c_{21}&c_{20}&c_{11}\\ c_{12}&c_{11}&c_{02}\\ c_{03}&c_{02}&0\\ c_{20}&0&c_{10}\\ c_{11}&c_{10}&c_{01}\\ c_{02}&c_{01}&0\end{bmatrix}.\]
The ideal of maximal minors of \(P_{T}\) is Cohen-Macaulay of expected codimension \(5\) and degree \(21\). But this ideal is not prime. It is the intersection of the prime ideal \(\mathcal{I}_{1,1,3}^{2}\) with a primary ideal of degree \(9\) whose radical is \(\langle c_{20},c_{11},c_{02},c_{10},c_{01}\rangle\). Hence \(\mathcal{T}_{1,1,3}^{2}\) has degree \(12\) in \(\mathbb{P}^{9}\). The ideal \(\mathcal{I}_{1,1,3}^{2}\) is not Cohen-Macaulay. It is generated by \(5\) quadrics, \(16\) cubics and \(1\) quartic. \(\blacksquare\)
Let \(q=(q_{\beta}:\beta\in M_{0,e})\) be the column vector of coefficients of \(Q\). Then \(S=\mathbb{C}[c,q]\) is a polynomial ring in \(\binom{n+m}{n}+\binom{n+e}{n}\) unknowns. Let \(J\subset S\) be the ideal generated by the entries of the column vector \(P_{T}\cdot q\). The variety \(V(J)\) consists of pairs of polynomials \((T,Q)\) such that the product \(TQ\) contains no monomials in \(M_{d+1,m}\). The ideal saturation \((J:q_{0}^{\infty})\) describes the closure of the pairs \((T,Q)\) such that \(Q(0)\neq 0\) and \(TQ\) has no monomials in \(M_{d+1,m}\). The projection of this variety \(V\big{(}(J:q_{0}^{\infty})\big{)}\) onto the \(T\)-coordinates is the Taylor variety \(\mathcal{T}_{d,e,m}^{n}\). However, more is true: the projection gives the prime ideal that defines \(\mathcal{T}_{d,e,m}^{n}\).
**Theorem 3.4**.: _The elimination ideal \((J:q_{0}^{\infty})\cap\mathbb{C}[c]\) coincides with the prime ideal \(\mathcal{I}_{d,e,m}^{n}\)._
Proof.: We already know that \((J:q_{0}^{\infty})\cap\mathbb{C}[c]\) defines \(\mathcal{T}_{d,e,m}^{n}\) as a set. Since elimination ideals of prime ideals are prime, it suffices to prove that \((J:q_{0}^{\infty})\) is prime. Since \((J:q_{0}^{\infty})\) is the contraction to \(S\) of the localization \(JS_{q_{0}}\), we may as well prove that \(JS_{q_{0}}\) is prime in \(S_{q_{0}}\). The generators of \(J\) correspond to the rows of the Pade matrix \(P_{T}\), so they are indexed by \(\alpha\in M_{d+1,m}\). More precisely, for each \(\alpha\in M_{d+1,m}\) we have a generator of \(J\) of the form
\[q_{0}c_{\alpha}+\sum q_{\beta}c_{\gamma}\qquad\text{ where the sum is over }\{(\beta,\gamma)\in M_{1,e}\times M_{0,m}\,:\,\beta+\gamma=\alpha\}. \tag{13}\]
In the quotient ring \(S_{q_{0}}/JS_{q_{0}}\) we may use (13) to write \(c_{\alpha}\) as a \(\mathbb{C}[q]\)-linear combination of \(c_{\gamma}\) with \(|\gamma|<|\alpha|\). Therefore \(S_{q_{0}}/JS_{q_{0}}\) is isomorphic to the polynomial ring \(\mathbb{C}[c_{\alpha}:\alpha\in M_{0,d}][q]\) localized at \(q_{0}\). Since this localization is a domain, we conclude that \(JS_{q_{0}}\) is a prime ideal.
In our situation, the saturation can be carried out by computing with non-homogeneous ideals, as follows. Let \(q^{\prime}\) be the vector obtained from \(q\) by setting the first coordinate \(q_{0}\) to \(1\).
**Corollary 3.5**.: _The prime ideal of the Taylor variety \(\mathcal{T}_{d,e,m}^{n}\) can be obtained as follows:_
\[\mathcal{I}_{d,e,m}^{n}\ =\ \big{\langle}P_{T}\cdot q^{\prime}\big{\rangle}\ \cap\ \mathbb{C}[c]. \tag{14}\]
Proof.: Both ideals involve only \(c\)-variables, and they are homogeneous in these variables. We must therefore show that a homogeneous polynomial \(f(c)\) lies in \(\mathcal{I}_{d,e,m}^{n}\) if and only if \(f(c)\) is in the right hand side of (14). We write \(q^{\prime}=[1,q^{\prime\prime}]^{t}\). Suppose \(f(c)\) lies in \(\mathcal{I}_{d,e,m}^{n}\). By Theorem 3.4, there exists \(\nu\in\mathbb{N}\) and a polynomial vector \(g(c,q)\) such that \(f(c)q_{0}^{\nu}=g(c,q)P_{T}(c)q\). Setting \(q_{0}=1\) in this identity, we see that \(f(c)=g(c,1,q^{\prime\prime})P_{T}(c)q^{\prime}\) lies in \(\big{\langle}P_{T}\cdot q^{\prime}\big{\rangle}\ \cap\ \mathbb{C}[c]\).
Conversely, let \(f(c)\) be in \(\big{\langle}P_{T}\cdot q^{\prime}\big{\rangle}\ \cap\ \mathbb{C}[c]\). There is a row vector \(h(c,q^{\prime})\) such that \(f(c)=h(c,q^{\prime})P_{T}(c)q^{\prime}\). We divide each variable in \(q^{\prime}\) by \(q_{0}\), and obtain \(f(c)=h(c,q^{\prime}/q_{0})P_{T}(c)(q^{\prime}/q_{0})\). By clearing denominators, we obtain an identity \(f(c)q_{0}^{\nu}=g(c,q)P_{T}(c)q\) for some \(\nu\in\mathbb{N}\), where \(g\) is the homogenization of \(h\). This shows that \(f(c)\) lies in \(\mathcal{I}_{d,e,m}^{n}\), by Theorem 3.4.
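As a small illustration of the elimination in (14), consider the univariate example from the Introduction (\(n=1\), \(d=e=2\), \(m=5\)). In the sympy sketch below, a lexicographic Groebner basis with the \(q\)-variables placed first eliminates \(q_{1},q_{2}\) from the entries of \(P_{T}\cdot q^{\prime}\) and recovers the Hankel cubic (3); for larger cases one would use a dedicated computer algebra system, and this sketch only demonstrates the principle.

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2')
c1, c2, c3, c4, c5 = sp.symbols('c1 c2 c3 c4 c5')

# Entries of P_T * q' for n = 1, d = e = 2, m = 5:
# the coefficients of x^3, x^4, x^5 in Q*T with Q = 1 + q1*x + q2*x^2
gens = [c3 + q1*c2 + q2*c1,
        c4 + q1*c3 + q2*c2,
        c5 + q1*c4 + q2*c3]

# Lexicographic Groebner basis with q1, q2 eliminated first
G = sp.groebner(gens, q1, q2, c1, c2, c3, c4, c5, order='lex')
elimination = [g for g in G.exprs if not g.has(q1, q2)]
print(elimination)   # expected: the Hankel cubic (3), up to sign and scaling
```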
Theorem 3.4 and Corollary 3.5 show us how to get \(\mathcal{I}_{d,e,m}^{n}\) in a computer algebra system. In light of Example 3.3, one suspects that \(\mathcal{I}_{d,e,m}^{n}\) can be obtained from the ideal \(I_{\max}(P_{T})\) of maximal non-vanishing minors of \(P_{T}\) by saturation. This is presently only a conjecture. To state it precisely, let \(\hat{P}_{T}\) be the matrix obtained from \(P_{T}\) by deleting the leftmost column. The corresponding linear map \(\hat{\varphi}_{T}\) is the restriction of \(\varphi_{T}\) to the subspace \(\{Q\in\mathbb{C}[x]_{\leq e}:Q(0)=0\}\).
**Conjecture 3.6**.: The prime ideal of the variety \({\cal T}^{n}_{d,e,m}\) in the polynomial ring \(\mathbb{C}[c]\) satisfies
\[{\cal I}^{n}_{d,e,m}\ =\ \big{(}\,I_{\rm max}(P_{T})\,:\,I_{\rm max}(\hat{P}_{T})^{ \infty}\,\big{)}. \tag{15}\]
This is the ideal of maximal non-vanishing minors of the Pade matrix \(P_{T}\), saturated by the ideal of maximal non-vanishing minors of the reduced Pade matrix \(\hat{P}_{T}\). See also Theorem 5.1.
We close this section with an illustration of (15) that sets the stage for what is to come.
**Example 3.7** (\(n=2,d=1,e=2,m=4\)).: We consider rational functions in two variables \(x\) and \(y\) that are given as the ratio of a linear polynomial and a quadratic polynomial:
\[\frac{P(x,y)}{Q(x,y)}\ =\ \frac{p_{00}+p_{10}x+p_{01}y}{q_{00}+q_{10}x+q_{01}y+ q_{20}x^{2}+q_{11}xy+q_{02}y^{2}}\,.\]
Our aim is to characterize the quartic Taylor polynomials arising from such rational functions:
\[\begin{array}{rcl}T(x,y)&=&c_{00}\,+\,c_{10}x+c_{01}y\,+\,c_{20}x^{2}+c_{11}xy+c_{02}y^{2}+c_{30}x^{3}+c_{21}x^{2}y+c_{12}xy^{2}\\ &&+\,c_{03}y^{3}\,+\,c_{40}x^{4}+c_{31}x^{3}y+c_{22}x^{2}y^{2}+c_{13}xy^{3}+c_{04}y^{4}.\end{array}\]
The passage from \((P,Q)\) to \(T\) defines a map \(\,\mathbb{P}^{2}\times\mathbb{P}^{5}\dashrightarrow\mathbb{P}^{14}\) that is birational onto its image, which is the \(7\)-dimensional Taylor variety \({\cal T}^{2}_{1,2,4}\) in \(\mathbb{P}^{14}\). We are interested in its ideal \({\cal I}^{2}_{1,2,4}\).
The Pade matrix for this problem has format \(12\times 6\). It equals
\[P_{T}\ =\ \begin{bmatrix}C_{4}&C_{3}&C_{2}\\ C_{3}&C_{2}&C_{1}\\ C_{2}&C_{1}&C_{0}\end{bmatrix}\ =\ \begin{bmatrix}c_{40}&c_{31}&c_{22}&c_{13}&c_{04}&c_{ 30}&c_{21}&c_{12}&c_{03}&c_{20}&c_{11}&c_{02}\\ c_{30}&c_{21}&c_{12}&c_{03}&0&c_{20}&c_{11}&c_{02}&0&c_{10}&c_{01}&0\\ 0&c_{30}&c_{21}&c_{12}&c_{03}&0&c_{20}&c_{11}&c_{02}&0&c_{10}&c_{01}\\ c_{20}&c_{11}&c_{02}&0&0&c_{10}&c_{01}&0&0&c_{00}&0\\ 0&c_{20}&c_{11}&c_{02}&0&0&c_{10}&c_{01}&0&0&c_{00}&0\\ 0&0&c_{20}&c_{11}&c_{02}&0&0&c_{10}&c_{01}&0&0&c_{00}\end{bmatrix}^{t}.\]
This matrix has \(924\) maximal minors of which \(896\) are linearly independent. Thus, the determinantal ideal \(I_{\rm max}(P_{T})\) is generated by \(896\) sextics in the \(15\) unknowns \(c_{ij}\). This ideal has three associated primes. One of them is the desired prime ideal \({\cal I}^{2}_{1,2,4}\) of dimension \(7\).
The other two are nonreduced extraneous components. The first one has dimension \(8\), which exceeds \(\dim({\cal T}^{2}_{1,2,4})=7\). It is the linear subspace \(\mathbb{P}^{8}\) defined by \(\langle c_{00},c_{10},c_{01},c_{20},c_{11},c_{02}\rangle\). This arises from the \(3\times 3\) minors in the last three columns of \(P_{T}\). The other extraneous component comes from the Veronese surface \({\cal T}^{2}_{0,1,3}\subset\mathbb{P}^{9}\) given by the third Taylor polynomial of \(\,1/(q_{00}+q_{10}x+q_{01}y)\). It is the join of the surface \({\cal T}^{2}_{0,1,3}\) with the \(\mathbb{P}^{4}\) of binary quartics \(c_{40}x^{4}+c_{31}x^{3}y+c_{22}x^{2}y^{2}+c_{13}xy^{3}+c_{04}y^{4}\). This join is a variety of dimension \(7\) and degree \(9\).
We can use Corollary 3.5 to compute the ideal \({\cal I}^{2}_{1,2,4}\), but this is a challenging computation. We find that \({\cal I}^{2}_{1,2,4}\) has at least \(1392\) minimal generators, namely seven cubics and respectively \(365,754,266\) polynomials in degrees \(6,7,8\). We do not know whether \({\cal I}^{2}_{1,2,4}\) requires additional generators. Numerical computations show that the Taylor variety \({\cal T}^{2}_{1,2,4}\) has degree \(326\).
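As a small illustration of how one works with these objects in practice, the following sympy sketch assembles the Pade matrix of Example 3.7. It assumes only the entrywise rule visible in the display above: the entry in the row of a monomial \(\mu\) of degree \(2,3,4\) and the column of a monomial \(q\) of degree \(\leq 2\) is \(c_{\mu-q}\) when the exponent difference is nonnegative, and \(0\) otherwise.

```python
from math import comb
import sympy as sp

d, e, m = 1, 2, 4                       # Example 3.7, in n = 2 variables

def monomials(deg):                     # exponent vectors (i, j) with i + j = deg
    return [(deg - j, j) for j in range(deg + 1)]

c = {mu: sp.Symbol(f"c{mu[0]}{mu[1]}") for k in range(m + 1) for mu in monomials(k)}

rows = [mu for k in range(d + 1, m + 1) for mu in monomials(k)]   # degrees 2, 3, 4
cols = [q for k in range(e + 1) for q in monomials(k)]            # degrees 0, 1, 2

def entry(mu, q):                       # assumed rule: c_{mu-q} if mu >= q, else 0
    diff = (mu[0] - q[0], mu[1] - q[1])
    return c[diff] if min(diff) >= 0 else sp.Integer(0)

P_T = sp.Matrix([[entry(mu, q) for q in cols] for mu in rows])
print(P_T.shape)                        # (12, 6), the format stated above
print(comb(12, 6))                      # 924 maximal minors
```

From here one can pass the \(6\times 6\) minors to the ideal computations discussed above, although, as noted, those computations are challenging.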
## 4 Hypersurfaces
We here study Taylor varieties that are hypersurfaces in their ambient projective space. We start out with a pair of Taylor varieties that are _defective_, _i.e._ the inequality (5) is strict, thus setting the stage for our study of the dimensions of Taylor varieties in Section 5. Thereafter, we turn to the main topic in Section 4, namely Hessians of Taylor hypersurfaces. We show that many of them vanish identically, thereby contributing to a circle of ideas that has a long history in algebraic geometry, going back to Gordan and Noether in the 19th century.
We begin with defective instances. By "expected" we will mean that equality holds in (5).
**Proposition 4.1**.: _For \(n=3\), there exists a Taylor variety, namely \(\mathcal{T}^{3}_{2,2,3}\), that is expected to be a hypersurface but has codimension two, and there also exists a Taylor variety, namely \(\mathcal{T}^{3}_{8,5,9}\), that is expected to fill its ambient projective space but turns out to be a hypersurface._
Proof.: For the proof we present explicit instances with \(n=3\), and we verify the asserted properties. Our first example has parameters \(d=e=2\) and \(m=3\). The Taylor variety \(\mathcal{T}^{3}_{2,2,3}\) has expected dimension \(18\) and it lives in the \(\mathbb{P}^{19}\) of cubic surfaces. Its Pade matrix equals
\[P_{T}\ =\ \begin{bmatrix}C_{3}&C_{2}&C_{1}\end{bmatrix}\ =\ \begin{bmatrix}c_{300}&0&0&c_{200}&0&0&0&0&0&c_{100}\\ c_{210}&0&c_{200}&c_{110}&0&0&0&0&c_{100}&c_{010}\\ c_{201}&c_{200}&0&c_{101}&0&0&0&c_{100}&0&c_{001}\\ c_{120}&0&c_{110}&c_{020}&0&0&c_{100}&0&c_{010}&0\\ c_{111}&c_{110}&c_{101}&c_{011}&0&c_{100}&0&c_{010}&c_{001}&0\\ c_{102}&c_{101}&0&c_{002}&c_{100}&0&0&c_{001}&0&0\\ c_{030}&0&c_{020}&0&0&0&c_{010}&0&0&0\\ c_{021}&c_{020}&c_{011}&0&0&c_{010}&c_{001}&0&0&0\\ c_{012}&c_{011}&c_{002}&0&c_{010}&c_{001}&0&0&0&0\\ c_{003}&c_{002}&0&0&c_{001}&0&0&0&0&0\end{bmatrix}. \tag{16}\]
The matrix is square of format \(10\times 10\), so we expect its determinant to define a hypersurface. But the matrix has rank \(9\), so \(f=\det(P_{T})\) is zero. The kernel of \(P_{T}\) is generated by the vector
\[\begin{bmatrix}0&-C_{1}&C_{2}\end{bmatrix}^{t}\ =\ \begin{bmatrix}0\,,\,-c_{001},-c_{010},- c_{100},\,\,c_{002},\,c_{011},\,c_{020},\,c_{101},\,c_{110},\,c_{200}\end{bmatrix}^{t}. \tag{17}\]
This kernel is explained by Theorem 6.1. Note that its first coordinate is zero. The existence of a kernel vector with nonzero first coordinate imposes a codimension \(2\) constraint on \(P_{T}\). The variety \(\mathcal{T}^{3}_{2,2,3}\) has dimension \(17\) and degree \(35\) in \(\mathbb{P}^{19}\). Its ideal \(\mathcal{I}^{3}_{2,2,3}\) can be computed via Theorem 3.4, and it also verifies Conjecture 3.6. Here \(I_{\max}(P_{T})\) is the ideal of \(9\times 9\) minors of the \(10\times 10\) matrix \(P_{T}\), which has \(81\) minimal generators, and \(I_{\max}(\hat{P}_{T})\) is the ideal of \(8\times 8\) minors of the \(10\times 9\) matrix \(\hat{P}_{T}\), which has \(315\) minimal generators. The prime ideal \(\mathcal{I}^{3}_{2,2,3}\) is generated by ten octics. It is not Cohen-Macaulay, but has Betti sequence \(10,10,1\).
We next prove the second assertion by exhibiting an unexpected Taylor hypersurface. Let \(d=8,e=5\) and \(m=9\). The two numbers on the right hand side of (5) are both \(219\), so we expect \(\mathcal{T}^{3}_{8,5,9}\) to precisely fill \(\mathbb{P}^{219}\). The Pade matrix \(P_{T}\) has \(55\) rows and \(56\) columns, and its generic rank is \(55\). The kernel of \(P_{T}\) is spanned by a vector whose first \(20\) coordinates are zero and whose last \(36\) coordinates are linear in the \(c_{ij}\). In particular, the
determinant of the \(55\times 55\) matrix \(\hat{P}_{T}\) is zero. Among the remaining \(55\) maximal minors of \(P_{T}\), the first \(19\) are zero and the other \(36\) are non-zero. The latter share a common factor \(f\), which is an irreducible polynomial of degree \(54\). Each such maximal minor equals \(f\) times a linear form, namely the corresponding coordinate in the kernel vector. In conclusion, the Taylor hypersurface \(\mathcal{T}^{3}_{8,5,9}\) has degree \(54\) in \(\mathbb{P}^{219}\), and it is defined by the equation \(f=0\).
We obtain a second unexpected hypersurface by swapping \(d\) and \(e\). This operation is the duality of reciprocal pairs, to be explained in Proposition 5.2. We now take \(d=5,e=8\) and \(m=9\), where the Pade matrix has \(164\) rows and \(165\) columns. Again, the inequality in (5) is strict. We have \(\dim(\mathcal{T}^{3}_{5,8,9})=218\), while the expected dimension is \(\min\{219,219\}=219\).
For the rest of this section we assume that \(n,d,e,m\) are parameters for which the matrix \(P_{T}\) is square and non-singular. We abbreviate \(f=\det(P_{T})\), we write \(H_{f}\) for the _Hessian matrix_ of \(f\), and \(h_{f}=\det(H_{f})\) for the _Hessian_. Already for \(n=1\), these Hessians are special.
**Example 4.2** (Hankel cubic revisited).: Let \(n=1,d=1,e=2\) and \(m=4\). Following Example 2.5, the Taylor variety \(\mathcal{T}^{1}_{1,2,4}\) is the cubic threefold in \(\mathbb{P}^{4}\) with defining polynomial
\[f\ =\ \det\begin{bmatrix}c_{4}&c_{3}&c_{2}\\ c_{3}&c_{2}&c_{1}\\ c_{2}&c_{1}&c_{0}\end{bmatrix}.\]
The Hessian of \(f\) is the quintic \(h_{f}=-8(c_{0}c_{4}-4c_{1}c_{3}+3c_{2}^{2})f\). Hence the Hessian vanishes on the hypersurface \(V(f)=\mathcal{T}^{1}_{1,2,4}\). By [18, Proposition 7.2.3], this shows that this cubic has zero Gaussian curvature, a noteworthy property in differential geometry. Note that \(\mathcal{T}^{1}_{d,2,d+3}\) has the same Pade matrix as \(\mathcal{T}^{1}_{1,2,4}\), for all \(d\geq 1\), so it also has zero Gaussian curvature.
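The stated factorization of the Hessian is easy to check in a computer algebra system. Here is a minimal sympy sketch (a verification aid only): it builds \(f\), computes the Hessian \(h_{f}\), and tests the identity \(h_{f}=-8(c_{0}c_{4}-4c_{1}c_{3}+3c_{2}^{2})f\).

```python
from sympy import Matrix, expand, factor, hessian, symbols

c0, c1, c2, c3, c4 = symbols("c0:5")
f = Matrix([[c4, c3, c2],
            [c3, c2, c1],
            [c2, c1, c0]]).det()                      # the Hankel cubic of Example 4.2
h = hessian(f, (c0, c1, c2, c3, c4)).det()            # its Hessian, a quintic
print(expand(h + 8*(c0*c4 - 4*c1*c3 + 3*c2**2)*f))    # 0 if the stated identity holds
print(factor(h))                                      # displays the factorization
```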
The property in Example 4.2 is shared by many Taylor varieties for \(n=1\). We conjecture that \(\mathcal{T}^{1}_{d,e,d+e+1}\) has zero Gaussian curvature for every \((d,e)\geq(1,2)\), see also [6, Conjecture 3.3]. Here we may assume \(e\geq d+1\), because the varieties \(\mathcal{T}^{1}_{d,d+1,2d+2}\) (\(e=d+1\)) and \(\mathcal{T}^{1}_{d+t,d+1,2(d+t)+2}\) have the same Pade matrix up to renaming variables, for every \(t\geq 0\). The case \(d=0\) for \(n=1\) is called _sub-Hankel_ in [5]: here, by [5, Theorem 4.4(iii)], the Hessian \(h_{f}\) is a multiple of a power of \(c_{0}\), so the Hessian of \(\mathcal{T}^{1}_{0,e,e+1}\) vanishes only at infinity. It is shown in [6, Theorem 3.1] that the Hessian of \(\mathcal{T}^{1}_{d,e,d+e+1}\) does not vanish identically.
We now turn to the case \(n\geq 2\). Here the situation is quite different: Taylor hypersurfaces can have vanishing Hessian, _i.e._\(h_{f}\equiv 0\). This holds when \(m=d+1\), as shown in Theorem 4.4, but it holds in other cases as well. The minimal example of a Taylor hypersurface with vanishing Hessian is a cone over the cubic surface found in 1900 by Perazzo [17].
**Example 4.3** (\(n=2,d=1,e=1,m=2\)).: The Taylor variety \(\mathcal{T}^{2}_{1,1,2}\) is a cubic hypersurface. The Pade matrix \(P_{T}\) and the Hessian matrix of \(f=\det(P_{T})\) are
\[P_{T}\,=\,\begin{bmatrix}c_{20}&c_{10}&0\\ c_{11}&c_{01}&c_{10}\\ c_{02}&0&c_{01}\end{bmatrix}\quad\text{and}\quad H_{f}\,=\,\begin{bmatrix}2c_ {20}&-c_{11}&0&-c_{10}&2c_{01}\\ -c_{11}&2c_{02}&2c_{10}&-c_{01}&0\\ 0&2c_{10}&0&0&0\\ -c_{10}&-c_{01}&0&0&0\\ 2c_{01}&0&0&0&0\end{bmatrix}.\]
We see that the rank of \(H_{f}\) is \(4\). Hence \(h_{f}\) is the zero polynomial. The Taylor variety \({\cal T}^{2}_{1,1,2}\) is a cone over the _Perazzo variety_[17]. We note that \({\cal T}^{2}_{1,1,2}\) is the orbit closure of the action of a \(5\)-dimensional group, consisting of \(\mathrm{SL}(2)\) and the affine group \((\mathbb{C}^{2},+)\), which acts via
\[c_{02}\mapsto c_{02}+\alpha c_{01},\,c_{11}\mapsto c_{11}+\alpha c_{10}+\beta c _{01},\,c_{20}\mapsto c_{20}+\beta c_{10}\qquad\text{for}\;\;(\alpha,\beta)\in \mathbb{C}^{2}.\]
Such an action exists for a large class of Taylor hypersurfaces, as shown in Theorem 4.4. \(\blacksquare\)
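A similar quick check, again only a sketch based on the matrices displayed in Example 4.3, confirms that the Hessian matrix has generic rank \(4\), so its determinant vanishes identically.

```python
from sympy import Matrix, expand, hessian, symbols

c20, c11, c02, c10, c01 = symbols("c20 c11 c02 c10 c01")
f = Matrix([[c20, c10, 0],
            [c11, c01, c10],
            [c02, 0, c01]]).det()                     # the Perazzo-type cubic
H = hessian(f, (c20, c11, c02, c10, c01))
print(H.rank(), expand(H.det()))                      # 4 and 0
```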
**Theorem 4.4**.: _Fix \(n,d,e\in\mathbb{N}\) with \(n\geq 2\). Let \(w\) be the vector of coefficients \(c_{\gamma}\) seen in \(P_{T}\). Suppose \({\cal T}^{n}_{d,e,d+1}=V(f)\), where \(f=\det P_{T}\), and set \(f_{\gamma}=\frac{\partial f}{\partial c_{\gamma}}\). The image of the polar map \(w\mapsto(f_{\gamma}:c_{\gamma}\in w)\) is not dense in \(\mathbb{P}^{|w|-1}\), so the hypersurface \({\cal T}^{n}_{d,e,d+1}\) has vanishing Hessian._
Proof.: Let \(T=\sum_{|\gamma|\leq d+1}c_{\gamma}x^{\gamma}=T_{0}+T_{1}+\cdots+T_{d+1}\). The Pade matrix has the form
\[P_{T}\ =\ [C_{d+1}\ \ \ C_{d}\ \ \ C_{d-1}\ \cdots\ C_{d-e+1}],\]
where \(C_{j}:\mathbb{C}[x]_{d+1-j}\to\mathbb{C}[x]_{d+1}\) is multiplication by \(T_{j}\). The columns of \(P_{T}\) are labeled by the monomials in \(Q\), and these are sorted in ascending degree lexicographic term order.
For \(j=d-e+2,\ldots,d+1\), let \(\lambda^{j}=(\lambda^{j}_{\alpha})_{\alpha\in M_{d_{j},d_{j}}}\) be a vector of new variables whose entries are indexed by the monomials of degree \(d_{j}=j-(d-e+1)\) in \(n\) variables. We now replace the first column \(C^{1}_{j}\) of \(C_{j}\) with \(C^{1}_{j}+C_{d+1-e}\ [\ \lambda^{j}\ {\bf 0}\ ]^{T}\), where \({\bf 0}\) is a row vector of zeroes of length \(\binom{n+e-1}{e}-\binom{n+d_{j}-1}{d_{j}}\). This arises from the following affine action on the vector \(w\):
\[\begin{array}{rcl}c_{\gamma}&\mapsto&c_{\gamma}&\text{ for }|\gamma|=d+1-e,\\ c_{\gamma}&\mapsto&c_{\gamma}\,+\sum_{\alpha+\beta=\gamma}\lambda^{|\gamma|}_{ \alpha}c_{\beta}&\text{ for }|\gamma|>d+1-e.\end{array}\]
In this sum we have \(\alpha\geq 0\) and \(|\beta|=d-e+1\). The other columns of \(C_{j}\) are modified by this action, for every \(j\). By the structure of the block \(C_{j}\), this again corresponds to column operations where \(C^{i}_{j}\) is replaced by \(C^{i}_{j}+C_{d+1-e}\ [\ {\bf 0}_{1}\ \lambda^{j}\ {\bf 0}_{2}\ ]^{T}\), for zero row vectors \({\bf 0}_{1}\) and \({\bf 0}_{2}\).
Let \(f^{\prime}\in\mathbb{C}[x,\lambda]\) be the determinant of the matrix obtained by these elementary operations. We have \(f^{\prime}=f\). Taking derivatives with respect to \(\lambda^{j}\), where \(j=d-e+2,\ldots,d+1\), one gets that the partial derivatives \(f_{\gamma}\) of \(f=\det(P_{T})\) satisfy the following linear equations:
\[0\ =\ \frac{\partial f}{\partial\lambda^{j}_{\alpha}}\ =\ \frac{\partial f^{\prime}}{ \partial\lambda^{j}_{\alpha}}\ =\sum_{c_{\beta}\in C_{d-e+1}}c_{\beta}\cdot f_{\alpha+\beta}\quad \text{ for }d-e+2\leq j\leq d+1\text{ and }\alpha\in M_{d_{j},d_{j}}. \tag{18}\]
For \(d-e+2\leq j\leq d+1\), we define a Hankel matrix \(M_{j}\) as follows: the \((\alpha,\beta)\) entry of \(M_{j}\) is \(f_{\alpha+\beta}\) for \(c_{\beta}\in C_{d-e+1}\) and \(\alpha\) such that \(|\alpha+\beta|=j\). Next let \(M=[M_{d+1}\,M_{d}\,\cdots\,M_{d-e+2}]^{T}\) be the block-Hankel matrix obtained by stacking all matrices \(M_{j}\). By (18), the right kernel of \(M\) is nonzero, so the columns are linearly dependent. Note that \(M\) has exactly
\[\sum_{j=d-e+2}^{d+1}\binom{n+d_{j}-1}{d_{j}}\ =\ \sum_{j=d-e+2}^{d+1}\binom{n+j-d+e-2}{j -d+e-1}\ =\ \sum_{j=0}^{e-1}\binom{n+j}{n-1}\ =\ \binom{n+e}{n}-1\]
rows and \(\binom{n+d-e+1-1}{d-e+1}=\binom{n+d-e}{n-1}\) columns. Since \(\mathcal{T}^{n}_{d,e,d+1}\) is a hypersurface, (5) implies
\[\dim\mathcal{T}^{n}_{d,e,d+1}\ =\ \binom{d+1+n}{n}-2\ \leq\ \binom{d+n}{n}+\binom{e+n}{n}-2.\]
From this we obtain \(\binom{e+n}{n}\geq\binom{d+1+n}{n}-\binom{d+n}{n}=\binom{d+n}{n-1}\). In particular, we find that \(e\geq 1\), and hence \(\binom{e+n}{n}-1\geq\binom{d+n}{n-1}-1>\binom{d-e+n}{n-1}-1.\) We deduce that the number of rows of the block-Hankel matrix \(M\) is greater than or equal to the number of columns of \(M\).
Since \(M\) has a nonzero right kernel, its maximal minors must vanish. Thus the image of the polar map \(w\mapsto(f_{\gamma}:\gamma\in w)\) lies in the variety \(V(I_{\max}(M))\subset\mathbb{P}^{|w|-1}\). Since the Hessian matrix \(H_{f}\) is the Jacobian of the polar map, its determinant \(h_{f}\) vanishes identically.
**Example 4.5** (\(n=2,d=4,e=2,m=5\)).: This concerns quintic Taylor polynomials of bivariate rational functions of the type quartic divided by quadric. The Taylor variety \(\mathcal{T}^{2}_{4,2,5}\) is the sextic hypersurface in \(\mathbb{P}^{20}\) defined by the determinant \(f\) of the Pade matrix
\[P_{T}\ =\ \begin{bmatrix}C_{5}&C_{4}&C_{3}\end{bmatrix}\ =\ \begin{bmatrix}c_{50}&c_{40}&0&c_{30}&0&0\\ c_{41}&c_{31}&c_{40}&c_{21}&c_{30}&0\\ c_{32}&c_{22}&c_{31}&c_{12}&c_{21}&c_{30}\\ c_{23}&c_{13}&c_{22}&c_{03}&c_{12}&c_{21}\\ c_{14}&c_{04}&c_{13}&0&c_{03}&c_{12}\\ c_{05}&0&c_{04}&0&0&c_{03}\end{bmatrix} \tag{19}\]
Only \(15\) of the \(21\) unknowns \(c_{ij}\) appear in \(P_{T}\). The reduced \(15\times 15\) Hessian matrix \(H_{f}\) has generic corank two, and hence \(h_{f}\equiv 0\). To illustrate the proof of Theorem 4.4, we write
\[f\ =\ \det(P_{T})\ =\ f^{\prime}\ =\ \det\left[\,C_{5}+C_{3}\begin{bmatrix}\lambda^{5}_{20}\\ \lambda^{5}_{11}\\ \lambda^{5}_{02}\end{bmatrix}\ \ \ C_{4}+C_{3}\begin{bmatrix}\lambda^{4}_{10}&0\\ \lambda^{4}_{01}&\lambda^{4}_{10}\\ 0&\lambda^{4}_{01}\end{bmatrix}\ \ \ C_{3}\,\right].\]
By taking partial derivatives with respect to \(\lambda^{5}=(\lambda^{5}_{20},\lambda^{5}_{11},\lambda^{5}_{02})\) and \(\lambda^{4}=(\lambda^{4}_{10},\lambda^{4}_{01})\), we deduce from (18) that the maximal minors of the following block-Hankel matrix must vanish:
\[M\,:=\,\left[\frac{M_{5}}{M_{4}}\right]\ =\ \left[\begin{array}{cccc}f_{50}&f_{41}&f_ {32}&f_{23}\\ f_{41}&f_{32}&f_{23}&f_{14}\\ f_{32}&f_{23}&f_{14}&f_{05}\\ \hline f_{40}&f_{31}&f_{22}&f_{13}\\ f_{31}&f_{22}&f_{13}&f_{04}\end{array}\right]. \tag{20}\]
Note that the codimension of \(V(I_{\max}(M))\) in \(\mathbb{P}^{14}\) is equal to the generic corank of \(H_{f}\).
We believe that the last statement is always true in the cases covered by Theorem 4.4. To be precise, we conjecture that the generic corank of \(H_{f}\) equals \(\binom{n+e}{n}-\binom{n+d-e}{n-1}\). This is the expected codimension of the ideal of maximal minors of the block-Hankel matrix \(M\).
In Proposition 5.2 we shall present a general duality statement for Taylor varieties under swapping \(d\) and \(e\). That duality seems to be compatible with vanishing Hessians.
**Example 4.6** (\(n=2,d=2,e=4,m=5\)).: Applying duality to Example 4.5 leads us to study quintic Taylor polynomials for quadrics divided by quartics. The Taylor variety \(\mathcal{T}^{2}_{2,4,5}\) is a hypersurface of degree \(15\) in \(\mathbb{P}^{20}\). It is defined by the determinant of the \(15\times 15\) Pade matrix \(P_{T}\). All \(21\) coefficients \(c_{ij}\) of the quintic appear in \(P_{T}\), so the Hessian \(H_{f}\) is a \(21\times 21\) matrix. This Hessian has generic rank \(19\), so its determinant \(h_{f}\) vanishes to second order. \(\blacksquare\)
The study of hypersurfaces with vanishing Hessian has a long tradition. Hesse conjectured that all such hypersurfaces are cones, but Gordan and Noether found a counterexample. Perazzo [17] achieved substantial progress in the case of cubics. The topic continues to fascinate geometers until the present day. We refer to Chapter 7 in the prize-winning monograph [18], to the recent article [10], and to the references in these sources. Our study of Taylor polynomials led us naturally to the novel instances that are reported here. However, it seems to us that Theorem 4.4 is just the tip of an iceberg that remains to be explored.
## 5 Dimension
The Taylor variety \(\mathcal{T}^{n}_{d,e,m}\) is the closure of the image of the rational map given by (4), which is
\[\psi\,:\,\mathbb{P}^{\binom{d+n}{n}-1}\times\mathbb{P}^{\binom{e+n}{n}-1}\,\,\dashrightarrow\,\,\mathbb{P}^{\binom{n+m}{n}-1},\,\,(P,Q)\,\mapsto\,T. \tag{21}\]
The three spaces in (21) parametrize polynomials of degrees \(d,e,m\) in \(\mathbb{C}[x]=\mathbb{C}[x_{1},\ldots,x_{n}]\). We shall present a formula for the dimension of \(\mathcal{T}^{n}_{d,e,m}\). To this end, we revisit the multiplication map \(\hat{\varphi}_{T}\) from \(\{Q\in\mathbb{C}[x]_{\leq e}:Q(0)=0\}\) to \(\mathbb{C}\{M_{d+1,m}\}\) that is defined by (8). The matrix that represents the \(\mathbb{C}\)-linear map \(\hat{\varphi}_{T}\) is the _reduced Pade matrix_ \(\hat{P}_{T}\), which we obtain from \(P_{T}\) by deleting the first column. Thus \(\hat{P}_{T}\) has \(\binom{m+n}{n}-\binom{d+n}{n}\) rows and \(\binom{e+n}{n}-1\) columns.
**Theorem 5.1**.: _The dimension of the Taylor variety \(\mathcal{T}^{n}_{d,e,m}\) equals \(\binom{d+n}{n}-1\) plus the rank of the reduced Pade matrix \(\hat{P}_{T}\) at a generic point of \(\mathcal{T}^{n}_{d,e,m}\), provided this sum is less than \(\binom{n+m}{n}\)._
Proof.: The dimension of \(\mathcal{T}^{n}_{d,e,m}\) is \(\binom{d+n}{n}+\binom{e+n}{n}-2\) minus the dimension of the generic fiber of the map \(\psi\). Since \(\hat{P}_{T}\) has \(\binom{e+n}{n}-1\) columns, Theorem 5.1 is equivalent to the claim that this fiber dimension equals the dimension of \(\,\text{kernel}(\hat{P}_{T})\,\) at a generic point of \(\mathcal{T}^{n}_{d,e,m}\).
Fix a generic point \((P,Q)\) in the domain of (21) and set \(T=\psi(P,Q)\). We work in the affine chart where \(P(0)=Q(0)=1\). Let \((P^{\prime},Q^{\prime})\) be any point in the fiber \(\psi^{-1}(T)\). Since \(P^{\prime}(0)=Q^{\prime}(0)=1\), the constant term of \(Q-Q^{\prime}\) is zero, and the product \(T(Q-Q^{\prime})=P-P^{\prime}\) contains no monomials in \(M_{d+1,m}\). Therefore, \(Q-Q^{\prime}\) lies in the kernel of \(\hat{\varphi}_{T}\), so that \(Q^{\prime}\) is a point in the affine space \(Q+\text{kernel}(\hat{\varphi}_{T})\). Conversely, for any \(Q^{\prime}\) in \(Q+\text{kernel}(\hat{\varphi}_{T})\), there is a unique \(P^{\prime}\) such that \((P^{\prime},Q^{\prime})\in\psi^{-1}(T)\). Namely, define \(P^{\prime}\) to be the order \(m\) truncation of \(Q^{\prime}T\). We conclude that the fiber of \(\psi\) over \(T\) is birationally isomorphic to \(\text{kernel}(\hat{\varphi}_{T})\).
Here is one more useful fact, namely the reciprocal duality of Taylor varieties.
**Proposition 5.2**.: _The Taylor varieties \(\mathcal{T}^{n}_{d,e,m}\) and \(\mathcal{T}^{n}_{e,d,m}\) are birationally isomorphic. In particular, they have the same dimension inside their common ambient space \(\,\mathbb{P}^{\binom{n+m}{n}-1}\)._
Proof.: We obtain a rational map \(\mathbb{P}^{\binom{n+m}{n}-1}\dashrightarrow\mathbb{P}^{\binom{n+m}{n}-1},\,T \mapsto 1/T\) by taking reciprocals and truncating at order \(m\). This map is a morphism on the affine chart given by \(T(0)=1\). Clearly, this reciprocal map \(T\mapsto 1/T\) is an involution, so it is birational. Moreover, this involution takes \(\mathcal{T}_{d,e,m}^{n}\) onto \(\mathcal{T}_{e,d,m}^{n}\), as it sends \(P/Q\) to \(Q/P\). This completes the proof.
Theorem 5.1 tells us that \(\mathcal{T}_{d,e,m}^{n}\) is defective if and only if the reduced Pade matrix \(\hat{P}_{T}\) has lower than expected rank, and Proposition 5.2 implies that defective Taylor varieties come in pairs under swapping \(d\) and \(e\). The specific varieties seen in the proof of Proposition 4.1 serve to illustrate both results. In what follows, various families of defective Taylor varieties will be identified and explained. This will lead us to a famous problem in commutative algebra, known as Froberg's Conjecture. We begin by presenting a census of small defective cases.
**Proposition 5.3**.: _Table 1 lists all defective Taylor varieties for \(n\leq 5\) and \(m\leq 12-n\)._
Proof.: The \(34=7+14+13\) defective Taylor varieties for \(n=3,4,5\) that are seen in Table 1 were found by exhaustive computation. In each case, the dimension was computed as the rank of the Jacobian of the parametrization (21), and it was verified using Theorem 5.1. In particular, we found experimentally that no defective Taylor varieties exist for \(n=2\).
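The verification via Theorem 5.1 can be sketched as follows: evaluate the reduced Pade matrix at the Taylor coefficients of a random rational function \(P/Q\) (a random point of the variety, which yields the generic rank with high probability) and add \(\binom{d+n}{n}-1\). The entrywise rule \(P_{T}[\mu,q]=c_{\mu-q}\) from the displayed examples is assumed; this is an illustration, not the code used for Table 1.

```python
from math import comb
import random
import sympy as sp

def exps(n, k):                                   # exponent vectors of total degree k
    if n == 1:
        return [(k,)]
    return [(i,) + r for i in range(k, -1, -1) for r in exps(n - 1, k - i)]

def taylor_dimension(n, d, e, m, seed=0):
    random.seed(seed)
    x = sp.symbols(f"x1:{n + 1}")
    mono = lambda a: sp.Mul(*[v**p for v, p in zip(x, a)])
    P = sum(random.randint(1, 99) * mono(a) for k in range(d + 1) for a in exps(n, k))
    Q = 1 + sum(random.randint(1, 99) * mono(a) for k in range(1, e + 1) for a in exps(n, k))
    u = Q - 1                                     # 1/Q agrees with sum_k (-u)^k up to degree m
    T = sp.expand(P * sum((-u)**k for k in range(m + 1)))
    poly = sp.Poly(T, *x)
    coeff = dict(zip(poly.monoms(), poly.coeffs()))
    rows = [mu for k in range(d + 1, m + 1) for mu in exps(n, k)]
    cols = [q for k in range(e + 1) for q in exps(n, k)]          # constant term of Q first
    def entry(mu, q):
        diff = tuple(a - b for a, b in zip(mu, q))
        return coeff.get(diff, 0) if min(diff) >= 0 else 0
    P_hat = sp.Matrix([[entry(mu, q) for q in cols] for mu in rows])[:, 1:]
    return comb(d + n, n) - 1 + P_hat.rank()      # Theorem 5.1

print(taylor_dimension(3, 2, 2, 3))   # 17 with high probability (expected dimension is 18)
```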
Table 1 suggests that defectivity persists as the dimension grows: if \(\mathcal{T}_{d,e,m}^{n}\) is defective then so is \(\mathcal{T}_{d,e,m}^{n+1}\). We also see five cases of Taylor varieties where the number of parameters exceeds the ambient dimension. The smallest such example is \(\mathcal{T}^{4}_{4,4,5}\), discussed next.
**Example 5.4** (\(n=d=e=4,m=5\)).: The Pade matrix \(P_{T}=\begin{bmatrix}C_{5}&C_{4}&C_{3}&C_{2}&C_{1}\end{bmatrix}\) has \(56\) rows and \(70\) columns. Its rank is \(54\), so the defect is \(16\). Subtracting \(16\) from the expected dimension \(138\), we obtain \(\dim(\mathcal{T}_{4,4,5}^{4})=122\). This equals \(\binom{8}{4}-1+53\), as in Theorem 5.1. Thus, in the space \(\mathbb{P}^{125}\) of quartic fourfolds, our Taylor variety \(\mathcal{T}_{4,4,5}^{4}\) has codimension \(3\).
| \(n\) | \(d,e,m\) | dimensions |
| --- | --- | --- |
| 3 | 2,2,3 | 17,18,19 |
| 3 | 3,4,5 | 52,53,55 |
| 3 | 4,3,5 | 52,53,55 |
| 3 | 4,6,7 | 116,117,119 |
| 3 | 6,4,7 | 116,117,119 |
| 3 | 5,8,9 | 218,219,219 |
| 3 | 8,5,9 | 218,219,219 |
| 4 | 2,2,3 | 27,28,34 |
| 4 | 3,3,4 | 63,68,69 |
| 4 | 3,4,5 | 102,103,125 |
| 4 | 4,3,5 | 102,103,125 |
| 4 | 4,4,5 | 122,138,125 |
| 4 | 4,5,6 | 189,194,209 |
| 4 | 5,4,6 | 189,194,209 |
| 4 | 4,6,7 | 277,278,329 |
| 4 | 6,4,7 | 277,278,329 |
| 4 | 5,6,7 | 318,334,329 |
| 4 | 6,5,7 | 318,334,329 |
| 4 | 5,7,8 | 449,454,494 |
| 4 | 7,5,8 | 449,454,494 |
| 4 | 6,6,8 | 417,418,494 |
| 5 | 2,2,3 | 39,40,55 |
| 5 | 3,3,4 | 104,110,125 |
| 5 | 3,4,5 | 179,180,251 |
| 5 | 4,3,5 | 179,180,251 |
| 5 | 4,4,5 | 228,250,251 |
| 5 | 4,5,6 | 370,376,461 |
| 5 | 5,4,6 | 370,376,461 |
| 5 | 5,5,6 | 441,502,461 |
| 5 | 4,6,7 | 585,586,791 |
| 5 | 6,4,7 | 585,586,791 |
| 5 | 5,6,7 | 690,712,791 |
| 5 | 6,5,7 | 690,712,791 |
| 5 | 6,6,7 | 780,922,791 |

Table 1: Defective Taylor varieties: the third column lists the dimension and number of parameters of \(\mathcal{T}^{n}_{d,e,m}\), followed by the dimension \(\binom{n+m}{n}-1\) of the ambient projective space.
Based on our computational experiments, we now formulate a general conjecture.
**Conjecture 5.5**.: The following holds concerning the defectivity of Taylor varieties:
1. For \(n=2\), all Taylor varieties \(\mathcal{T}^{2}_{d,e,m}\) are non-defective.
2. For \(n=3\), there are only seven defective Taylor varieties, namely those listed in Table 1.
3. For \(n\geq 3\) fixed, there are only finitely many triples \((d,e,m)\) such that \(\mathcal{T}^{n}_{d,e,m}\) is defective.
## 6 Froberg's Conjecture
We now turn to the promised relationship with commutative algebra. This will lend support to Conjecture 5.5. We examine the case \(m=d+1\), which also appeared in equation (17).
**Theorem 6.1**.: _Consider the ideal generated by \(e\) general forms of degrees \(d,d-1,\ldots,d-e+1\) in \(n\) variables. The value of its Hilbert function in degree \(d+1\) is the codimension of \(\,\mathcal{T}^{n}_{d,e,d+1}\)._
Proof.: The ideal we are referring to is generated by the homogeneous components of \(T\):
\[I\,:=\,\langle\,T_{d}\,,\,T_{d-1}\,,\,T_{d-2},\,\ldots,\,T_{d-e+1}\,\rangle\ \ \ \subset\ \ \mathbb{C}[x]\,=\,\mathbb{C}[x_{1},\ldots,x_{n}].\]
We claim that \(\binom{n+m}{n}-1-\dim(\mathcal{T}^{n}_{d,e,d+1})\) equals the Hilbert function value \(\,\dim_{\mathbb{C}}\bigl{(}\mathbb{C}[x]_{d+1}/I_{d+1}\bigr{)}\).
In the case \(m=d+1\), the Pade matrix consists of only one row of blocks, namely it equals
\[\hat{P}_{T}\ =\ \bigl{[}C_{d}\ \ C_{d-1}\ \ \cdots\ \ C_{d-e+3}\ \ C_{d-e+2}\ \ C_{d-e+1}\bigr{]}\,.\]
The entries of \(\hat{P}_{T}\) are the coefficients of general forms \(T_{d},T_{d-1},\ldots,T_{d-e+1}\). We claim that the rank of this generic \(\hat{P}_{T}\) equals the rank of \(\hat{P}_{T}\) at a generic point of \(\mathcal{T}^{n}_{d,e,m}\). This holds because the polynomial \((T_{e}T_{d-e+1}+T_{d}+T_{d-1}+\cdots+T_{2}+T_{1}+1)(1-T_{e})\) has no terms in degree \(d+1\), which means that the left factor defines a point of \(\mathcal{T}^{n}_{d,e,d+1}\), even for general \(T\).
Combining the claim with Theorem 5.1, we conclude that the dimension of \(\mathcal{T}^{n}_{d,e,d+1}\) equals \(\binom{d+n}{n}-1+\mathrm{rank}(\hat{P}_{T})\) for general \(T\). Now, any vector in the image of \(\hat{P}_{T}\) corresponds to a homogeneous polynomial \(T_{d+1}\) that is in the ideal \(I\). In symbols, \(\mathrm{rank}(\hat{P}_{T})=\dim_{\mathbb{C}}(I_{d+1})\). Hence the codimension of the Taylor variety \(\mathcal{T}^{n}_{d,e,d+1}\) in its ambient space \(\mathbb{P}^{\binom{n+d+1}{n}-1}\) is equal to
\[\binom{n+d+1}{n}-1-\binom{d+n}{n}+1-\mathrm{rank}(\hat{P}_{T})\,=\,\binom{d+n }{n-1}-\mathrm{rank}(\hat{P}_{T})\,=\,\dim_{\mathbb{C}}(\mathbb{C}[x]_{d+1})- \dim_{\mathbb{C}}(I_{d+1}).\]
The right hand side is the value of the Hilbert function in the assertion.
The Hilbert function of an ideal of generic forms is a prominent thread in commutative algebra, initiated by Froberg's article [9]. In our case, the Hilbert series is believed to be
\[\left[\frac{\prod_{i=1}^{e}(1-t^{d+1-i})}{(1-t)^{n}}\right]_{>0}. \tag{22}\]
The operator \([\dots]_{>0}\) applied to a power series \(\sum_{i\geq 0}a_{i}t^{i}\) returns \(\sum_{i\geq 0}b_{i}t^{i}\) with \(b_{i}=a_{i}\) if \(a_{j}>0\) for all \(j\leq i\) and \(b_{i}=0\) otherwise. We are interested in the coefficient \(t^{d+1}\) in (22). _Froberg's Conjecture_ states that this coefficient equals the Hilbert function value in Theorem 6.1.
The conjecture has been proved for several special cases. For \(n=2\) it is due to Froberg [9, Section 3, page 129]. A simpler proof by Valla [19] rests on generic initial ideals. For \(n=3\) it was proved by Anick [1]. Pardue [16] showed that the conjecture is implied by the Moreno-Socias Conjecture about the revlex generic initial ideal of generic polynomials. We refer to recent papers by Nenashev [13] and Nicklasson [14] for the state of the art and many references. These known results allow for the derivation of special cases of Conjecture 5.5.
**Theorem 6.2**.: _Froberg's conjecture implies part (3) of Conjecture 5.5 when \(m=d+1\)._
The proof to be presented is based on suggestions of Christian Krattenthaler. We warmly thank him for allowing us to include his nice combinatorial arguments in this paper.
Proof.: Fix \(n\geq 2\). For any \(1\leq e\leq d\) we abbreviate \(W(d,e)=\binom{d+n}{n-1}-\binom{e+n}{n}+1\) and \(F(d,e)=(1-t)^{-n}\prod_{i=1}^{e}(1-t^{d+1-i})\). Assuming the correctness of Froberg's Conjecture, by Theorem 6.1, the codimension of the Taylor variety \(\mathcal{T}^{n}_{d,e,d+1}\) is
\[\alpha(d,e)\ :=\ \text{coeff. of $t^{d+1}$ in $[F(d,e)]_{>0}$.}\]
On the other hand, the expected codimension of \(\mathcal{T}^{n}_{d,e,d+1}\) equals
\[\beta(d,e)\ :=\ \max(0,W(d,e)).\]
In particular, the following inequality holds for all values of \(d\) and \(e\):
\[\beta(d,e)\ \leq\ \alpha(d,e). \tag{23}\]
We claim that \(\alpha(d,e)=\beta(d,e)\) holds, with only finitely many exceptional pairs \((d,e)\). Recall that \(n\) is fixed. We shall now prove this claim. Our argument proceeds in five steps:
1. If \(\beta(d,e)=0\) then \(\beta(d,e+1)=0\).
2. If \(\alpha(d,e)=0\) then \(\alpha(d,e+1)=0\).
3. If \(\beta(d,e)=\alpha(d,e)=0\) then \(\beta(d,e_{1})=\alpha(d,e_{1})=0\) for all \(e_{1}\geq e\).
4. If \(e<(d/2)+1\) then \(\beta(d,e)=\alpha(d,e)\).
5. There exists \(d_{0}\) such that \(\alpha(d,e)=\beta(d,e)\) for every \(d\geq d_{0}\) and for every \(e\leq d\).
Assertion (1) is obvious since \(W(d,e)\) is a decreasing function in \(e\). For (2), by assumption there is a \(c\leq d+1\) such that the coefficient of \(t^{c}\) in \(F(d,e)\) is \(\leq 0\). Let \(c\) be the smallest integer with that property. Then \(\,F(d,e)=u_{0}+u_{1}t+\dots+u_{c}t^{c}+\dots\,\) with \(u_{c}\leq 0\) and \(u_{i}>0\) for \(i=0,1,\dots,c-1\). Since \(F(d,e+1)=F(d,e)(1-t^{d+1-(e+1)})\), the coefficient of \(t^{c}\) in \(F(d,e+1)\) is \(u_{c}\) if \(d-e>c\) or \(u_{c}-u_{k}\) if \(d+1-(e+1)\leq c\) and \(k=c-d+e\). In both cases this value is \(\leq 0\) and so \(\alpha(d,e+1)=0\). Now (3) follows immediately from (1) and (2).
To prove (4), we first observe that the assumption \(e<(d/2)+1\) implies that
\[\prod_{i=1}^{e}(1-t^{d+1-i})\ =\ 1-\sum_{i=1}^{e}t^{d+1-i}+\ldots\ \text{ terms of degree}\ >d+1\]
Therefore, up to degree \(d+1\), our series \(F(d,e)\) coincides with the series
\[\left(\sum_{k\geq 0}\binom{n-1+k}{n-1}t^{k}\right)\left(1-\sum_{i=1}^{e}t^{d+1 -i}\right).\]
It follows that the coefficient of \(\,t^{d+1}\,\) in \(\,F(d,e)\,\) equals
\[\binom{n+d}{n-1}-\sum_{i=1}^{e}\binom{n-1+i}{n-1}\ =\ \binom{n+d}{n-1}-\binom{n+e}{n}+1 \ =\ W(d,e). \tag{24}\]
If \(W(d,e)\leq 0\) then both \(\beta(d,e)\) and \(\alpha(d,e)\) are zero. If \(W(d,e)>0\) then \(\beta(d,e)=W(d,e)\). Hence, by (23), we have \(\alpha(d,e)\geq\beta(d,e)>0\). In light of (24), we have \(\alpha(d,e)=W(d,e)\) as well. This concludes our proof of assertion (4).
Now we prove assertion (5). It follows from (3) and (4) that the existence of a positive integer \(e_{1}<(d/2)+1\) with \(W(d,e_{1})\leq 0\) implies that \(\beta(d,e)=\alpha(d,e)\) for all \(e\). Hence it suffices to show that there exists a \(d_{0}\) such that \(W(d,e_{1})\leq 0\) for all \(d\geq d_{0}\) for some \(e_{1}<(d/2)+1\). For \(d=2p\) even, set \(e_{1}=p\). This choice guarantees that, for degree reasons,
\[W(2p,p)\ =\ \binom{2p+n}{n-1}-\binom{p+n}{n}+1 \tag{25}\]
is eventually negative as a function of \(p\). For \(d=2p-1\) odd, we take \(e_{1}=p\) and observe that
\[W(2p-1,p)\ =\ \binom{2p-1+n}{n-1}-\binom{p+n}{n}+1 \tag{26}\]
is eventually negative, and assertion (5) follows. This concludes the proof of Theorem 6.2.
**Remark 6.3**.: The constant \(d_{0}\) in step (5) above depends on \(n\). For any given \(n\), it can be computed explicitly by looking at the roots of the polynomials (25) and (26). For instance, for \(n=4\) we find \(d_{0}=144\). Knowing \(d_{0}\) and assuming Froberg's Conjecture, we can quickly identify all pairs \((d,e)\) such that \(\,\mathcal{T}^{n}_{d,e,d+1}\) is defective. Namely, for all \((d,e)\) with \(e\leq d<d_{0}\), we check whether \(\alpha(d,e)\neq\beta(d,e)\). For \(n=4\), there are precisely \(57\) defective pairs \((d,e)\):
\[(2,2),(3,3),(4,3),(4,4),(5,4),(6,4),(6,5),(7,5),(8,5),(8,6),(9,6),(10,6),(10,7),(11,7),\] \[(11,8),(12,7),(12,8),(13,8),(13,9),(14,8),(14,9),(15,9),(15,10),( 16,9),(16,10),(17,10),\] \[(17,11),(18,10),(18,11),(19,11),(20,11),(20,12),(21,12),(22,13), (23,13),(24,13),\] \[(24,14),(25,14),(26,14),(26,15),(27,15),(28,15),(28,16),(29,16),(30,16),(30,17),\] \[(31,17),(32,17),(33,18),(34,18),(35,19),(36,19),(37,20),(38,20),( 40,21),(42,22).\]
Note that the first eight of these \(57\) pairs are listed in Table 1. For \(n=5\), a computation shows that there are \(431\) such exceptional pairs, with the largest one being \((d,e)=(132,67)\)
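The procedure of Remark 6.3 is straightforward to automate. The sketch below assumes the series (22) and the definitions of \(\alpha(d,e)\) and \(\beta(d,e)\) given in the proof of Theorem 6.2, computes both quantities with exact integer arithmetic, and lists the pairs where they differ. With \(n=4\) and \(d<144\) it should reproduce the \(57\) pairs above.

```python
from math import comb

def alpha(n, d, e):
    num = [0] * (d + 2)                 # prod_{i=1}^{e} (1 - t^(d+1-i)), truncated at t^(d+1)
    num[0] = 1
    for i in range(1, e + 1):
        k = d + 1 - i
        new = num[:]
        for j in range(d + 2 - k):
            new[j + k] -= num[j]
        num = new
    # multiply by (1-t)^(-n), then apply the operator [.]_{>0} and read off t^(d+1)
    F = [sum(num[j] * comb(n - 1 + k - j, n - 1) for j in range(k + 1)) for k in range(d + 2)]
    return F[d + 1] if all(a > 0 for a in F) else 0

def beta(n, d, e):
    return max(0, comb(d + n, n - 1) - comb(e + n, n) + 1)

n, d0 = 4, 144
print([(d, e) for d in range(1, d0) for e in range(1, d + 1) if alpha(n, d, e) != beta(n, d, e)])
```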
**Corollary 6.4**.: _Assume \(m=d+1\). Then parts (1) and (2) of Conjecture 5.5 are true._
Proof.: We know that Froberg's Conjecture holds for \(n=2\) and \(n=3\). Then we can proceed as in the proof of Theorem 6.2 and Remark 6.3. We find \(d_{0}=6\) for \(n=2\), and \(d_{0}=17\) for \(n=3\). The claim is established by listing all pairs \((d,e)\) with \(e\leq d<d_{0}\) that satisfy \(\alpha(d,e)\neq\beta(d,e)\).
In conclusion, the discussion above connects the study of Taylor varieties for \(m=d+1\) to Froberg's longstanding conjecture on ideals of general forms in \(\mathbb{C}[x]\). A future project will extend this to arbitrary parameters \(n,d,e,m\), with ideals to be replaced by modules. The modules arise from the Hankel matrices in Section 2 but now the entries are homogeneous polynomials \(T_{i}\) of degree \(i\) in \(\mathbb{C}[x]=\mathbb{C}[x_{1},\ldots,x_{n}]\). Explicitly, the _Hankel matrix_ is defined as
\[H_{T}\ :=\ \begin{bmatrix}T_{m}&T_{m-1}&\cdots&T_{m-e}\\ T_{m-1}&T_{m-2}&\cdots&T_{m-e-1}\\ \vdots&\vdots&\ddots&\vdots\\ T_{d+2}&T_{d+1}&\cdots&T_{d-e+2}\\ T_{d+1}&T_{d}&\cdots&T_{d-e+1}\end{bmatrix}. \tag{27}\]
The _Hankel module_ is the graded module generated by all columns of \(H_{T}\) except the first one. The Taylor variety can be characterized by the constraint that the first column lies in the Hankel module. This suggests an extension of Theorem 6.1 from ideals to modules. The study of Hankel modules and associated vector bundles will be the topic of a follow-up article.
|
2305.03132 | The Role of Global and Local Context in Named Entity Recognition | Pre-trained transformer-based models have recently shown great performance
when applied to Named Entity Recognition (NER). As the complexity of their
self-attention mechanism prevents them from processing long documents at once,
these models are usually applied in a sequential fashion. Such an approach
unfortunately only incorporates local context and prevents leveraging global
document context in long documents such as novels, which might hinder
performance. In this article, we explore the impact of global document context,
and its relationships with local context. We find that correctly retrieving
global document context has a greater impact on performance than only
leveraging local context, prompting for further research on how to better
retrieve that context. | Arthur Amalvy, Vincent Labatut, Richard Dufour | 2023-05-04T20:22:18Z | http://arxiv.org/abs/2305.03132v2 | # The Role of Global and Local Context in Named Entity Recognition
###### Abstract
Pre-trained transformer-based models have recently shown great performance when applied to Named Entity Recognition (NER). As the complexity of their self-attention mechanism prevents them from processing long documents at once, these models are usually applied in a sequential fashion. Such an approach unfortunately only incorporates local context and prevents leveraging global document context in long documents such as novels, which might hinder performance. In this article, we explore the impact of global document context, and its relationships with local context. We find that correctly retrieving global document context has a greater impact on performance than only leveraging local context, prompting for further research on how to better retrieve that context.
## 1 Introduction
Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP), and is often used as a building block for solving higher-level tasks. Recently, pre-trained transformer-based models such as BERT Devlin et al. (2019) or LUKE Yamada et al. (2020) showed great NER performance and have been able to push the state of the art further.
These models, however, have a relatively short range because of the quadratic complexity of self-attention in the number of input tokens: as an example, BERT Devlin et al. (2019) can only process spans of up to 512 tokens. For longer documents, texts are usually processed sequentially using a rolling window. Depending on the document, this local window may not always include all the context needed to perform inference, which may be present at the global document level. This leads to prediction errors Stanislawek et al. (2019): In NER, this often occurs when the type of an entity cannot be inferred from the local context. For instance, in the following sentence from the fantasy novel _Elantris_, one cannot decide if the entity Elantris is a person (PER) or a location (LOC) without prior knowledge:
_"Raoden stood, and as he did, his eyes fell on Elantris again."_
In the novel, this prior knowledge comes from the fact that a human reader can recall previous mentions of Elantris, even at a very long range. A sequentially applied vanilla transformer-based model, however, might make an error without a _neighboring_ sentence clearly establishing the status of Elantris as a city.
While some works propose to retrieve external knowledge to disambiguate entities Zhang et al. (2022); Wang et al. (2021), external resources are not always available. Furthermore, external retrieval might be more costly or less relevant than performing document-level context retrieval, provided the document contains the needed information, which depends on the type of document.
Therefore, we wish to explore the relevance of document-level context when performing NER. We place ourselves at the sentence level, and we distinguish and study two types of contexts:
* _local context_, consisting of surrounding sentences. This type of context can be used directly by vanilla transformer-based models, as their range lies beyond the simple sentence. Fully using surrounding context as in Devlin et al. (2019) is, however, computationally expensive.
* _global context_, consisting of all sentences available at the document level. To enhance NER prediction at the sentence level, we retrieve a few of these sentences and provide them as context for the model.
We seek to answer the following question: is local context sufficient when solving the NER task,
or would the model obtain better performance by retrieving global document context?
To answer this question, we conduct experiments on a literary NER dataset we improved from its original version (Dekker et al., 2019). We release the annotation process, data and code necessary to reproduce these experiments under a free license1.
Footnote 1: [https://github.com/CompNet/conivel/tree/ACL2023](https://github.com/CompNet/conivel/tree/ACL2023)
## 2 Related Work
### Sparse Transformers
Since the range problem of vanilla transformer-based models is due to the quadratic complexity of self-attention in the number of input tokens, several works on _sparse transformers_ proposed alternative attention mechanisms in hope of reducing this complexity (Zaheer et al., 2020; Wang et al., 2020; Kitaev et al., 2020; Tay et al., 2020, 2020; Beltagy et al., 2020; Choromanski et al., 2020; Katharopoulos et al., 2020; Child et al., 2019). While reducing self-attention complexity improves the effective range of transformers, these models still have issues processing very long documents (Tay et al., 2020).
### Context retrieval
Context retrieval in general has been widely leveraged for other NLP tasks, such as semantic parsing (Guo et al., 2019), question answering (Ding et al., 2020), event detection (Pouran Ben Veyseh et al., 2021), or machine translation (Xu et al., 2020).
In NER, context retrieval has mainly been used in an external fashion, for example by leveraging names lists and gazetteers (Seyler et al., 2018; Liu et al., 2019), knowledge bases (Luo et al., 2015) or search engines (Wang et al., 2021; Zhang et al., 2022). Meanwhile, we are interested in document-level context retrieval, which is comparatively seldom explored. While Luoma and Pyysalo (2020) study document-level context, their study is restricted to neighboring sentences, i.e. local context.
## 3 Method and Experiments
### Retrieval Heuristics
We wish to understand the role of both _local_ and _global_ contexts for the NER task. We split all documents in our dataset (described in Section 3.3) into sentences. We evaluate both local and global simple heuristics of sentence retrieval in terms of NER performance impact. We study the following _local_ heuristics:
* before: Retrieves the closest \(k\) sentences at the left of the input sentence.
* after: Same as before, but at the right of the input sentence.
* surrounding: Retrieves the closest \(\frac{k}{2}\) sentences on both sides of the input sentence.
And the following _global_ heuristics:
* random: Randomly retrieves a sentence from the whole document.
* samenoun: Randomly retrieves a sentence from the set of all sentences that have at least one common noun with the input sentence2. Intuitively, this heuristic will return sentences that contain entities of the input sentence, allowing for possible disambiguation. We use the NLTK library (Bird et al., 2009) to identify nouns. Footnote 2: If the set of sentences with a common noun is empty, the samenoun heuristic does not retrieve any sentence.
* bm25: Retrieves sentences that are similar to the input sentences according to BM25 (Robertson, 1994). Retrieving similar sentences has already been found to increase NER performance (Zhang et al., 2022; Wang et al., 2021).
It has to be noted that global heuristics can sometimes retrieve local context, as they are not restricted in which sentences they can retrieve at the document level. For all configurations, we concatenate the retrieved sentences to the input. During this concatenation step, we preserve the global order between sentences in the document.
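As an illustration, here is a minimal sketch of the two strongest global heuristics. It assumes the NLTK part-of-speech tagger and the rank_bm25 package, leaves out details such as excluding the input sentence itself, and is not the implementation from the repository linked above.

```python
import random
import nltk                       # requires nltk.download("averaged_perceptron_tagger")
from rank_bm25 import BM25Okapi

def nouns(tokens):
    return {tok for tok, tag in nltk.pos_tag(tokens) if tag.startswith("NN")}

def samenoun_retrieve(query_tokens, doc_sentences, k):
    """doc_sentences: one token list per sentence of the document."""
    candidates = [s for s in doc_sentences if nouns(s) & nouns(query_tokens)]
    return random.sample(candidates, min(k, len(candidates)))

def bm25_retrieve(query_tokens, doc_sentences, k):
    scores = BM25Okapi(doc_sentences).get_scores(query_tokens)
    ranked = sorted(range(len(doc_sentences)), key=lambda i: -scores[i])
    return [doc_sentences[i] for i in ranked[:k]]
```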
### Oracles
For each heuristic mentioned in Section 3.1, we also experiment with an _oracle_ version. The oracle version retrieves 16 sentences from the document using the underlying retrieval heuristic, and retain only those that enhance the NER predictions the most. We measure this enhancement by counting the difference in numbers of NER BIO tags errors made with and without the context. In essence, the oracle setup simulates a perfect re-ranker model, and allows us to study the maximum performance of such an approach.
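The oracle can be summarized by the following sketch, in which predict_tags is a hypothetical stand-in for the finetuned NER model of Section 3.4 applied to a sentence together with a retrieved context; candidates are scored one at a time.

```python
def tag_errors(pred, gold):
    return sum(p != g for p, g in zip(pred, gold))

def oracle_rerank(sentence, gold_tags, candidates, k, predict_tags):
    """candidates: e.g. the 16 sentences returned by a retrieval heuristic.
    predict_tags(sentence, context) -> predicted BIO tags (hypothetical model wrapper)."""
    base = tag_errors(predict_tags(sentence, context=[]), gold_tags)
    scored = [(base - tag_errors(predict_tags(sentence, context=[c]), gold_tags), c)
              for c in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)   # largest error reduction first
    return [c for gain, c in scored[:k]]
```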
### Dataset
To evaluate our heuristics, we use a corrected and improved version of the literary dataset of Dekker et al. (2019). This dataset is comprised of the first chapter of 40 novels in English, which we consider long enough for our experiments.
**Dataset corrections.** The original dataset suffers mainly from annotation issues. To fix them, we design an annotation guide inspired by CoNLL-2003 Tjong Kim Sang and De Meulder (2003) and apply it consistently using a semi-automated process:
1. We apply a set of simple rules to identify obvious errors3 (for example, non capitalized entities annotated as PER are often false positives). Depending on the estimated performance of each rule, we manually reviewed its choices before application. Footnote 3: See Appendix A.2 for details.
2. We manually review each difference between the predictions of a BERT Devlin et al. (2019) model finetuned on a slightly modified version of the CoNLL-2003 dataset Tjong Kim Sang and De Meulder (2003)4 and the existing annotations. Footnote 4: We modified the CoNLL-2003 dataset to include honorifics as part of PER entities to be consistent with our annotation guidelines.
3. We manually correct the remaining errors.
**Further annotations.** The original dataset only consists of PER entities. We go further and annotate LOC and ORG entities. The final dataset contains 4476 PER entities, 886 LOC entities and 201 ORG entities.
### NER Training
For all experiments, we use a pretrained BERT\({}_{\text{BASE}}\) Devlin et al. (2019) model, consisting of 110 million parameters, followed by a classification head at the token level to perform NER. We finetune BERT for 2 epochs with a learning rate of \(2\cdot 10^{-5}\) using the huggingface transformers library Wolf et al. (2020), starting from the bert-base-cased checkpoint.
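A minimal sketch of this setup with the transformers library is given below. Dataset loading, subword/label alignment and the remaining hyperparameters are omitted, and train_dataset is a placeholder for a tokenized training fold.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

labels = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC", "B-ORG", "I-ORG"]
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased",
                                                        num_labels=len(labels))
train_dataset = ...   # placeholder: tokenized sentences with aligned BIO label ids

args = TrainingArguments(output_dir="ner-checkpoints",
                         num_train_epochs=2,
                         learning_rate=2e-5)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
                  data_collator=DataCollatorForTokenClassification(tokenizer))
trainer.train()
```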
### NER evaluation
We perform cross-validation with 5 folds on our NER dataset. We evaluate NER performance using the default mode of the seqeval Nakayama (2018) python library to ensure results can be reproduced.
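For instance, the default-mode seqeval scores are obtained as follows (the tag sequences here are illustrative placeholders, not data from the corpus).

```python
from seqeval.metrics import classification_report, f1_score

gold = [["B-PER", "I-PER", "O", "B-LOC"], ["O", "B-ORG"]]
pred = [["B-PER", "I-PER", "O", "O"], ["O", "B-ORG"]]
print(f1_score(gold, pred))                 # default (CoNLL-style) mode
print(classification_report(gold, pred))
```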
## 4 Results
### Retrieval heuristics
The NER performance for retrieval heuristics can be seen in Figure 1. The samenoun heuristic performs the best among global heuristics, whereas the surrounding heuristic is the best for local heuristics. While the top results obtained with both heuristics are quite similar, we consider global heuristics as naive retrieval baselines: they could be improved upon by more complex approaches, which might enhance performance even more.
Interestingly, the performance of both the before and bm25 heuristics decreases sharply after four sentences, and even drops below the no retrieval baseline. For both heuristics, this might be due to retrieving irrelevant sentences after a while. The bm25 heuristic is limited by the similar sentences present in the document: if there are not enough of them, the heuristic will retrieve unrelated ones. Meanwhile, the case of the before heuristic seems more puzzling, and could be indicative of a specific entity mention pattern that might warrant further investigation.
### Oracle versions
NER results with the oracle versions of retrieval heuristics can be found in Figure 2.
It is worth noting that the performance of the oracle versions of the heuristics always peaks when retrieving a single sentence. This might indicate that a single sentence is usually sufficient to resolve entity type ambiguities, but it might also be a result of the oracle ranking sentences individually, thereby not taking into account their possible combinations.
Global heuristics perform better than local ones overall, with the oracle version of the random heuristic even performing better than both the before and after heuristics. These results tend to highlight the benefits of using global document context, provided it can be retrieved accurately.
**Retrieved sentences.** To better understand which sentences are useful for predictions when performing global retrieval, we plot in Figure 3 the distribution of the distance between sentences and their retrieved contexts for the oracle versions of heuristics samenoun and bm25. We find that 8% and 16% of retrieved sentences (for samenoun and bm25, respectively) fall within 6 sentences of their input sentence, while the others are
further away, highlighting the need for long-range retrieval.
**Local context importance.** To see whether or not local context is an important component of NER performance, we perform an experiment where we restrict the oracle version of the bm25 heuristic from retrieving local surrounding context. Results can be found in Figure 4. NER performance remains about the same without local context, which tends to show that local context is not strictly necessary for performance.
Figure 4: Mean F1 score versus number of retrieved sentences across 3 runs for the oracle version of the bm25 heuristic, and the same heuristic restricted to distant context.
## 5 Conclusion and Future Work
In this article, we explored the role of local and global context in Named Entity Recognition. Our results tend to show that, for literary texts, retrieving global document context is more effective at enhancing NER performance than retrieving only local context, even when using relatively simple retrieval heuristics. We also showed that a re-ranker model using simple document-level retrieval heuristics could obtain significant NER performance improvements. Overall, our work prompts for further research in how to accurately retrieve global context for NER.
## 6 Limitations
We acknowledge the following limitations of our work:
* While the oracle selects a sentence according to the benefits it provides when performing NER, it does not consider the interactions between selected sentences. This may lead to lowered performances when the several sentences are retrieved at once.
* The retrieval heuristics considered are naive on purpose, as the focus of this work is not performance. Stronger retrieval heuristics may achieve better results than presented in this article.
* The studied documents only consist in the first chapter of a set of novels. Using complete novel would increase the number of possible information to retrieve for the presented global heuristics.
|
2304.14767 | Dissecting Recall of Factual Associations in Auto-Regressive Language
Models | Transformer-based language models (LMs) are known to capture factual
knowledge in their parameters. While previous work looked into where factual
associations are stored, only little is known about how they are retrieved
internally during inference. We investigate this question through the lens of
information flow. Given a subject-relation query, we study how the model
aggregates information about the subject and relation to predict the correct
attribute. With interventions on attention edges, we first identify two
critical points where information propagates to the prediction: one from the
relation positions followed by another from the subject positions. Next, by
analyzing the information at these points, we unveil a three-step internal
mechanism for attribute extraction. First, the representation at the
last-subject position goes through an enrichment process, driven by the early
MLP sublayers, to encode many subject-related attributes. Second, information
from the relation propagates to the prediction. Third, the prediction
representation "queries" the enriched subject to extract the attribute. Perhaps
surprisingly, this extraction is typically done via attention heads, which
often encode subject-attribute mappings in their parameters. Overall, our
findings introduce a comprehensive view of how factual associations are stored
and extracted internally in LMs, facilitating future research on knowledge
localization and editing. | Mor Geva, Jasmijn Bastings, Katja Filippova, Amir Globerson | 2023-04-28T11:26:17Z | http://arxiv.org/abs/2304.14767v3 | # Dissecting Recall of Factual Associations in Auto-Regressive Language Models
###### Abstract
Transformer-based language models (LMs) are known to capture factual knowledge in their parameters. While previous work looked into _where_ factual associations are stored, only little is known about _how_ they are retrieved internally during inference. We investigate this question through the lens of information flow. Given a subject-relation query, we study how the model aggregates information about the subject and relation to predict the correct attribute. With interventions on attention edges, we first identify two critical points where information propagates to the prediction: one from the relation positions followed by another from the subject positions. Next, by analyzing the information at these points, we unveil a three-step internal mechanism for attribute extraction. First, the representation at the last-subject position goes through an enrichment process, driven by the early MLP sublayers, to encode many subject-related attributes. Second, information from the relation propagates to the prediction. Third, the prediction representation "queries" the enriched subject to extract the attribute. Perhaps surprisingly, this extraction is typically done via attention heads, which often encode subject-attribute mappings in their parameters. Overall, our findings introduce a comprehensive view of how factual associations are stored and extracted internally in LMs, facilitating future research on knowledge localization and editing.
## 1 Introduction
Transformer-based language models (LMs) capture vast amounts of factual knowledge Roberts et al. (2020); Jiang et al. (2020), which they encode in their parameters and recall during inference Petroni et al. (2019); Cohen et al. (2023). While recent works focused on identifying _where_ factual knowledge is encoded in the network Meng et al. (2022); Dai et al. (2022); Wallat et al. (2020), it remains unclear _how_ this knowledge is extracted from the model parameters during inference.
In this work, we investigate this question through the lens of information flow, across layers and input positions Elhage et al. (2021). We focus on a basic information extraction setting, where a subject and a relation are given in a sentence (e.g. _"Beats Music is owned by"_), and the next token is the corresponding attribute (i.e. _"Apple"_). We restrict our analysis to cases where the model predicts the correct attribute as the next token, and set out to understand how internal representations evolve across the layers to produce the output.
Focusing on modern auto-regressive decoder-only LMs, such an extraction process could be implemented in many different ways. Informally, the model needs to "merge" the subject and relation in order to be able to extract the right attribute, and this merger can be conducted at different layers and positions. Moreover, the attribute extraction itself could be performed by either or both of the multi-head self-attention (MHSA) and MLP sublayers.
Figure 1: Illustration of our findings: given subject-relation query, a subject representation is constructed via attributes' enrichment from MLP sublayers (A), while the relation propagates to the prediction (B). The attribute is then extracted by the MHSA sublayers (C).
To investigate this, we take a reverse-engineering approach, inspired by common genetic analysis methods Griffiths et al. (2005); Tymms and Kola (2008) and the recent work by Wang et al. (2022). Namely, we artificially block, or "knock out", specific parts in the computation to observe their importance during inference. To implement this approach in LLMs, we intervene on the MHSA sublayers by blocking the last position from attending to other positions at specific layers. We identify two consecutive critical points in the computation, where representations of the relation and then the subject are incorporated into the last position, systematically in this order.
Next, to identify where attribute extraction occurs, we analyze the information that propagates at these critical points and the representation construction process that precedes them. This is done through additional interventions to the MHSA and MLP sublayers and projections to the vocabulary Dar et al. (2022); Geva et al. (2022); Nostalgebraist (2020). We discover an internal mechanism for attribute extraction that relies on two key components. First, a _subject enrichment process_, through which the model constructs a representation at the last subject-position that encodes many subject-related attributes. Moreover, we find that out of the three sources that build a representation (i.e., the MHSA and MLP sublayers and the input token embeddings Mickus et al. (2022)), the early MLP sublayers are the primary source for subject enrichment.
The second component is an _attribute extraction operation_ carried out by the upper MHSA sublayers. For a successful extraction, these sublayers rely on information both from the subject representation and the last position. Moreover, extraction is performed by attention heads, and our analysis shows that these heads often encode subject-attribute mappings in their parameters. We observed this extraction behavior in \(\sim\)70% of the predictions.
Our analysis provides a significantly improved understanding of the way factual predictions are formed. The mechanism we uncover can be intuitively described as the following three key steps (Fig. 1). First, information about the subject is enriched in the last subject token, across early layers of the model. Second, the relation is passed to the last token. Third, the last token uses the relation to extract the corresponding attribute from the subject representation, and this is done via attention head parameters. Unlike prior works on factual knowledge representation, which focus on mid-layer MLPs as the locus of information (e.g. Meng et al. (2022)), our work highlights the key role of lower MLP sublayers and of the MHSA parameters. More generally, we make a substantial step towards increasing model transparency, introducing new research directions for knowledge localization and model editing.
## 2 Background and Notation
We start by providing a detailed description of the transformer inference pass, focusing on auto-regressive decoder-only LMs. For brevity, bias terms and layer normalization Ba et al. (2016) are omitted, as they are nonessential for our analysis.
A transformer-based LM first converts an input text to a sequence \(t_{1},...t_{N}\) of \(N\) tokens. Each token \(t_{i}\) is then embedded as a vector \(\mathbf{x}_{i}^{0}\in\mathbb{R}^{d}\) using an embedding matrix \(E\in\mathbb{R}^{|\mathcal{V}|\times d}\), over a vocabulary \(\mathcal{V}\). The input embeddings are then transformed through a sequence of \(L\) transformer layers, each composed of a multi-head self-attention (MHSA) sublayer followed by an MLP sublayer Vaswani et al. (2017). Formally, the representation \(\mathbf{x}_{i}^{\ell}\) of token \(i\) at layer \(\ell\) is obtained by:
\[\mathbf{x}_{i}^{\ell}=\mathbf{x}_{i}^{\ell-1}+\mathbf{a}_{i}^{\ell}+\mathbf{m }_{i}^{\ell} \tag{1}\]
where \(\mathbf{a}_{i}^{\ell}\) and \(\mathbf{m}_{i}^{\ell}\) are the outputs from the \(\ell\)-th MHSA and MLP sublayers (see below), respectively. An output probability distribution is obtained from the final layer representations via a prediction head \(\delta\):
\[\mathbf{p}_{i}=\text{softmax}\big{(}\delta(\mathbf{x}_{i}^{L})\big{)}, \tag{2}\]
that projects the representation to the embedding space, either through projection to embedding matrix (i.e., \(E\mathbf{x}_{i}^{L}\)) or by using a trained linear layer (i.e., \(W\mathbf{x}_{i}^{L}+\mathbf{u}\) for \(W\in\mathbb{R}^{|\mathcal{V}|\times d},\mathbf{u}\in\mathbb{R}^{|\mathcal{V}|}\)).
**MHSA Sublayers.** The MHSA sublayers compute _global_ updates that aggregate information from all the representations at the previous layer. The \(\ell\)-th MHSA sublayer is defined using four parameter matrices: three projection matrices \(W_{Q}^{\ell},W_{K}^{\ell},W_{V}^{\ell}\in\mathbb{R}^{d\times d}\) and an output matrix \(W_{O}^{\ell}\in\mathbb{R}^{d\times d}\). Following Elhage et al. (2021); Dar et al. (2022), the columns of each projection matrix and the rows of the output matrix can be split
into \(H\) equal parts, corresponding to the number of attention heads \(W_{Q}^{\ell,j},W_{K}^{\ell,j},W_{V}^{\ell,j}\in\mathbb{R}^{d\times\frac{d}{H}}\) and \(W_{O}^{\ell,j}\in\mathbb{R}^{\frac{d}{H}\times d}\) for \(j\in[1,H]\). This allows describing the MHSA output as a sum of matrices, each induced by a single attention head:
\[\mathbf{a}_{i}^{\ell} =\sum_{j=1}^{H}A^{\ell,j}\Big{(}X^{\ell-1}W_{V}^{\ell,j}\Big{)}W_{ O}^{\ell,j} \tag{3}\] \[:=\sum_{j=1}^{H}A^{\ell,j}\Big{(}X^{\ell-1}W_{VO}^{\ell,j}\Big{)}\] (4) \[A^{\ell,j} =\gamma\Bigg{(}\frac{\Big{(}X^{\ell-1}W_{Q}^{\ell,j}\Big{)}\Big{(} X^{\ell-1}W_{K}^{\ell,j}\Big{)}^{T}}{\sqrt{d/H}}+M^{\ell,j}\Bigg{)} \tag{5}\]
where \(\gamma\) is a row-wise softmax normalization, \(A^{\ell,j}\in\mathbb{R}^{N\times N}\) encodes the weights computed by the \(j\)-th attention head at layer \(\ell\), and \(M^{\ell,j}\) is a mask for \(A^{\ell,j}\). In auto-regressive LMs, \(A^{\ell,j}\) is masked to a lower triangular matrix, as each position can only attend to preceding positions (i.e. \(M_{rc}^{\ell,j}=-\infty\ \forall c>r\)). Importantly, the cell \(A_{rc}^{\ell,j}\) can be viewed as a weighted edge from the \(r\)-th to the \(c\)-th hidden representations at layer \(\ell-1\).
**MLP Sublayers.** Every MLP sublayer computes a _local_ update for each representation:
\[\mathbf{m}_{i}^{\ell}=W_{F}^{\ell}\ \sigma\Big{(}W_{I}^{\ell}\big{(}\mathbf{a}_{i }^{\ell}+\mathbf{x}_{i}^{\ell-1}\big{)}\Big{)} \tag{6}\]
where \(W_{I}^{\ell}\in\mathbb{R}^{d_{i}\times d}\) and \(W_{F}^{\ell}\in\mathbb{R}^{d\times d_{i}}\) are parameter matrices with inner-dimension \(d_{i}\), and \(\sigma\) is a nonlinear activation function. Recent works showed that transformer MLP sublayers can be cast as key-value memories (Geva et al., 2021) that store factual knowledge (Dai et al., 2022; Meng et al., 2022).
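To make the notation above concrete, the following minimal NumPy sketch (ours, not the models' actual implementation) traces Eqs. (1)-(6) for a single layer; the parameter shapes follow a row-vector convention, the head slices of \(W_{Q},W_{K},W_{V},W_{O}\) are taken along the contiguous blocks described above, and ReLU stands in for the activation \(\sigma\):

```python
import numpy as np

def transformer_layer(X, Wq, Wk, Wv, Wo, Wi, Wf, H):
    """Minimal sketch of Eqs. (1)-(6) for one decoder layer (row-vector convention)."""
    N, d = X.shape
    dh = d // H
    mask = np.triu(np.full((N, N), -np.inf), k=1)   # causal mask M: no attending forward
    a = np.zeros_like(X)
    for j in range(H):                              # per-head decomposition, Eqs. (3)-(5)
        Q = X @ Wq[:, j * dh:(j + 1) * dh]
        K = X @ Wk[:, j * dh:(j + 1) * dh]
        V = X @ Wv[:, j * dh:(j + 1) * dh]
        S = Q @ K.T / np.sqrt(dh) + mask
        A = np.exp(S - S.max(axis=-1, keepdims=True))
        A = A / A.sum(axis=-1, keepdims=True)       # row-wise softmax (gamma)
        a += A @ V @ Wo[j * dh:(j + 1) * dh, :]
    h = X + a                                       # residual stream after the MHSA update
    m = np.maximum(0.0, h @ Wi) @ Wf                # MLP update, Eq. (6); ReLU stands in for sigma
    return h + m                                    # Eq. (1): x^l = x^{l-1} + a^l + m^l
```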
## 3 Experimental Setup
We focus on the task of factual open-domain questions, where a model needs to predict an attribute \(a\) of a given subject-relation pair \((s,r)\). A triplet \((s,r,a)\) is typically expressed in a question-answering format (e.g. _"What instrument did Elvis Presley play?"_) or as a fill-in-the-blank query (e.g. _"Elvis Presley played the -"_). While LMs often succeed at predicting the correct attribute for such queries (Roberts et al., 2020; Petroni et al., 2019), it is unknown how attributes are extracted internally.
For a factual query \(q\) that expresses the subject \(s\) and relation \(r\) of a triplet \((s,r,a)\), let \(t=(t_{1},...,t_{N})\) be the representation of \(q\) as a sequence of tokens, based on some LM. We refer by the _subject tokens_ to the sub-sequence of \(t\) that corresponds to \(s\), and by the _subject positions_ to the positions of the subject tokens in \(t\). The non-subject tokens in \(q\) express the relation \(r\).
**Data.** We use queries from CounterFact (Meng et al., 2022). For a given model, we extract a random sample of queries for which the model predicts the correct attribute. In the rest of the paper, we refer to the token predicted by the model for a given query \(q\) as the attribute \(a\), even though it could be a sub-word and thus only the prefix of the attribute name (e.g. Wash for "Washington").
**Models.** We analyze two auto-regressive decoder-only GPT LMs with different layouts: GPT-2 (Radford et al., 2019) (\(L=48\), 1.5B parameters) and GPT-J (Wang and Komatsuzaki, 2021) (\(L=28\), 6B parameters). Both models use a vocabulary with \(\sim\)50K tokens. Also, GPT-J employs parallel MHSA and MLP sublayers, where the output of the \(\ell\)-th MLP sublayer for the \(i\)-th representation depends on \(\mathbf{x}_{i}^{\ell-1}\) rather than on \(\mathbf{a}_{i}^{\ell}+\mathbf{x}_{i}^{\ell-1}\) (see Eq. 6). We follow the procedure by Meng et al. (2022) to create a data sample for each model, resulting in 1,209 queries for GPT-2 and 1,199 for GPT-J.
## 4 Overview: Experiments & Findings
We start by introducing our attention blocking method and apply it to identify critical information flow points in factual predictions (§5) - one from the relation, followed by another from the subject. Then, we analyze the evolution of the subject representation in the layers preceding this critical point (§6), and find that it goes through an enrichment process driven by the MLP sublayers, to encode many subject-related attributes. Last, we investigate how and where the right attribute is extracted from this representation (§7), and discover that this is typically done by the upper MHSA sublayers, via attention heads that often encode a subject-attribute mapping in their parameters.
## 5 Localizing Information Flow via Attention Knockout
For a successful attribute prediction, a model should process the input subject and relation such that the attribute can be read from the last position. We investigate how this process is done internally by "knocking out" parts of the computation and
measuring the effect on the prediction. To this end, we propose a fine-grained intervention on the MHSA sublayers, as they are the only module that communicates information between positions, and thus, any critical information must be transferred by them. We show that factual predictions are built in stages where critical information propagates to the prediction at specific layers during inference.
**Method: Attention Knockout.** Intuitively, critical attention edges are those that, when blocked, result in severe degradation in prediction quality. Therefore, we test whether critical information propagates between two hidden representations at a specific layer, by zeroing-out all the attention edges between them. Formally, let \(r,c\in[1,N]\) such that \(c\leq r\) be two positions, we block \(\mathbf{x}_{r}^{\ell}\) from attending to \(\mathbf{x}_{c}^{\ell}\) at a layer \(\ell<L\) by updating the attention weights to that layer (Eq. 5):
\[M_{rc}^{\ell+1,j}=-\infty\ \ \forall j\in[1,H] \tag{7}\]
Effectively, this restricts the source position from obtaining information from the target position, at that particular layer. Notably, this is different from causal tracing (Meng et al., 2022), which checks what hidden representations restore the original prediction when given perturbed input tokens; we test where critical information _propagates_ rather than where it is _located_ during inference.
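As a schematic illustration (ours), the knockout in Eq. (7) amounts to forcing additional \(-\infty\) entries into the attention mask of every head at the chosen layer before the softmax; the function name and array layout below are hypothetical, and in practice such an intervention is applied through hooks on the model's attention computation:

```python
import numpy as np

def attention_knockout(scores, source, targets):
    """Block `source` from attending to `targets` in all heads of one layer (Eq. (7)).

    `scores` holds the pre-softmax attention scores of the layer, shape (H, N, N);
    we assume at least one unblocked target remains in the source row."""
    scores = scores.copy()
    scores[:, source, targets] = -np.inf            # M^{l+1,j}_{rc} = -inf for all heads j
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)   # blocked targets get weight 0
```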
**Experiment.** We use Attention Knockout to test whether and, if so, where information from the subject and relation positions directly propagates to the last position. Let \(\mathcal{S},\mathcal{R}\subset[1,N)\) be the subject and non-subject positions for a given input. For each layer \(\ell\), we block the attention edges from the last position to each of \(\mathcal{S}\), \(\mathcal{R}\) and the last (\(N\)-th) position, for a window of \(k\) layers around the \(\ell\)-th layer, and measure the change in prediction probability. We set \(k=9\) (5) for GPT-2 (GPT-J).
**Results.** Fig. 2 shows the results. For both GPT-2 and GPT-J, blocking attention to the subject tokens (solid green lines) in the middle-upper layers causes a dramatic decrease in the prediction probability of up to 60%. This suggests that critical information from the subject positions moves directly to the last position at these layers. Moreover, another substantial decrease of 35%-45% is observed for the non-subject positions (dashed purple lines). Importantly, critical information from non-subject positions precedes the propagation of critical information from the subject positions, a trend we observe for different subject-relation orders (§A.1). Example interventions are provided in §H.
Overall, this shows that there are specific disjointed stages in the computation with peaks of critical information propagating directly to the prediction from different positions. In the next section, we investigate the critical information that propagates from the subject positions to the prediction.
## 6 Intermediate Subject Representations
In the previous section we saw that critical subject information is passed to the last position in the upper layers. Here, we analyze what this information is. Namely, what information is contained in the subject representation at the point of transfer, and how does this information evolve across layers. To do this, we map hidden representations to vocabulary tokens through projection. Our results indicate that the subject representation contains a wealth of information about the subject at the point where it gets transferred to the last position.
### Inspection of Subject Representations
Figure 2: Relative change in the prediction probability when intervening on attention edges to the last position, for 9 layers in GPT-2 and 5 in GPT-J.

**Motivating Observation.** To analyze what is encoded in a representation \(\mathbf{h}_{t}^{\ell}\), we cast it as a distribution \(\mathbf{p}_{t}^{\ell}\) over the vocabulary (Geva et al., 2022; Nostalgebraist, 2020), using the same projection applied to final-layer representations (Eq. 2). Then, we inspect the \(k\) tokens with the highest probabilities in \(\mathbf{p}_{t}^{\ell}\). Examining these projections, we observed that they are informative and often encode several subject attributes (Tab. 1 and §D). Therefore, we turn to evaluate quantitatively the extent to which the representation of a subject encodes tokens that are semantically related to it.
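The projection used here can be sketched in a few lines (a simplified sketch of ours; the hidden vector `h`, the embedding matrix `E`, and the token list `vocab` are assumed inputs, and we project with \(E\) as in Eq. (2) rather than with a trained head):

```python
import numpy as np

def top_k_tokens(h, E, vocab, k=50):
    """Read a hidden representation h (dimension d) in the vocabulary space:
    project with the embedding matrix E (|V| x d), apply softmax, return the
    k highest-probability tokens."""
    logits = E @ h
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()
    top = np.argsort(-probs)[:k]
    return [vocab[i] for i in top]
```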
**Evaluation Metric: Attributes Rate.** Semantic relatedness is hard to measure based on human judgment, as ratings are typically of low agreement, especially between words of various parts of speech Zesch and Gurevych (2010); Feng et al. (2017). Hence, we propose an automatic approximation of the subject-attribute relatedness, which is the rate of the predicted attributes in a given set of tokens known to be highly related to the subject. For a given subject \(s\), we first create a set \(\mathcal{A}_{s}\) of candidate attributes, by retrieving paragraphs about \(s\) from Wikipedia using BM25 Robertson et al. (1995), tokenizing each paragraph, and removing common words and sub-words. The set \(\mathcal{A}_{s}\) consists of non-common tokens that were mentioned in the context of \(s\), and are thus likely to be its attributes. Further details on the construction of these sets are provided in §C. We define the _attributes rate_ for a subject \(s\) in a set of tokens \(\mathcal{T}\) as the portion of tokens in \(\mathcal{T}\) that appear in \(\mathcal{A}_{s}\).
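Given a candidate set \(\mathcal{A}_{s}\), the metric itself reduces to a set-membership count; the sketch below (ours) assumes \(\mathcal{A}_{s}\) has already been built and lower-cased, and the exact token normalization is our assumption rather than a detail taken from this paper:

```python
def attributes_rate(tokens, candidate_attributes):
    """Fraction of the given tokens that belong to the candidate attribute set A_s."""
    normalized = [t.strip().lower() for t in tokens]   # naive normalization (our assumption)
    return sum(t in candidate_attributes for t in normalized) / len(normalized)
```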
**Experiment.** We measure the attributes rate in the top \(k=50\) tokens by the _subject representation_, that is, the representation at the last-subject position, in each layer. We focus on this position as it is the only subject position that attends to all the subject positions, and thus it is likely to be the most critical (we validate this empirically in §A.3). We compare with the rate at other positions: the first subject position, the position after the subject, and the last input position.
**Results.** Fig. 3 shows the results for GPT-2 and GPT-J. In both models, the attributes rate at the last-subject position is increasing throughout the layers, and is substantially higher than at other positions in the intermediate-upper layers, reaching close to 50%. This suggests that, during inference, the model constructs attribute-rich subject representations at the last subject-position. In addition, critical information from these representations propagates to the prediction, as this range of layers corresponds to the peak of critical information observed by blocking the attention edges to the prediction (§5).
We have seen that the representation of the subject encodes many terms related to it. A natural question that arises is where these terms are extracted from to enrich that representation. In principle, there are three potential sources Mickus et al. (2022), which we turn to analyze in the next sections: the static embeddings of the subject tokens (§6.2) and the parameters of the MHSA and MLP sublayers (§6.3).
### Attribute Rate in Token Embeddings
We test whether attributes are already encoded in the static embeddings of the subject tokens, by measuring the attributes rate, as in §6.1. Concretely, let \(t_{1},...,t_{|s|}\) be the tokens representing a subject \(s\) (e.g. Piet, ro, Men, nea for "Pietro Mennea"), and denote by \(\bar{\mathbf{e}}:=\frac{1}{|s|}\sum_{i=1}^{|s|}\mathbf{e}_{t_{i}}\) their mean embedding vector, where \(\mathbf{e}_{t_{i}}\) is the embedding of \(t_{i}\). We compute the attributes rate in the top \(k=50\) tokens by each of \(\mathbf{e}_{t_{i}}\) and by \(\bar{\mathbf{e}}\). We find that the highest attributes rate across the subject's token embeddings is 19.3 on average for GPT-2 and 28.6 in GPT-J, and the average rate by the mean subject embedding is 4.1 in GPT-2 and 11.5 in GPT-J. These rates are considerably lower than the rates by the subject representations at higher layers (Fig. 3). _This suggests that while static subject-token embeddings encode some factual associations, other model components are needed for extraction of subject-related attributes._

| | **Subject** | **Example top-scoring tokens by the subject representation** |
| --- | --- | --- |
| GPT-2 | Iron Man | 3, 2, Marvel, Ultron, Aveenger, comics, suit, armor, Tony, Mark, Stark, 2020 |
| GPT-2 | Sukarno | Indonesia, Buddhist, Thailand, government, Museum, Palace, Bangkok, Jakarta |
| GPT-2 | Roman Republic | Rome, Augustus, circa, conquered, fame, Antiqu, Greece, Athens, AD, Caesar |
| GPT-J | Ferruccio Busoni | music, wrote, piano, composition, International, plays, manuscript, violin |
| GPT-J | Joe Montana | career, National, football, NFL, Award, retired, quarterback, throws, Field |
| GPT-J | Chromecast | device, Audio, video, Wireless, HDMI, USB, Google, Android, technology, 2016 |

Table 1: Example tokens by subject representations of GPT-2 (\(\ell=40\)) and GPT-J (\(\ell=22\)).

Figure 3: Attributes rate at different positions across layers, in GPT-2 and GPT-J.
### Subject Representation Enrichment
We next assess how different sublayers contribute to the construction of subject representations through causal interventions.
**Method: Sublayer Knockout.** To understand which part of the transformer layer "adds" the information about attributes to the representation, we simply zero-out the two key additive elements: the MHSA and MLP sublayers. Concretely, we zero-out updates to the last subject position from each MHSA and MLP sublayer, for 10 consecutive layers. Formally, when intervening on the MHSA (MLP) sublayer at layer \(\ell\), we set \(\mathbf{a}_{i}^{\ell^{\prime}}=\mathbf{0}\) (\(\mathbf{m}_{i}^{\ell^{\prime}}=\mathbf{0}\)) for \(\ell^{\prime}=\ell,...,\min\{\ell+9,L\}\) (see Eq. 1). For each intervention, we measure the effect on the attributes rate in the subject representation at some specific layer \(\bar{\ell}\), where attribute rate is high.
**Results.** Results with respect to layer \(\bar{\ell}=40\) in GPT-2 are shown in Fig. 4: canceling the early MLP sublayers has a destructive effect on the subject representation's attributes rate, decreasing it by \(\sim\)88% on average. In contrast, canceling the MHSA sublayers has a much smaller effect of <30% decrease in the attributes rate, suggesting that the MLP sublayers play a major role in creating subject representations. Results for GPT-J show similar trends and are provided in §E. We further analyze this by inspecting the MLP updates, showing they promote subject-related concepts (§F).
Notably, these findings are consistent with the view of MLP sublayers as key-value memories (Geva et al., 2021; Dai et al., 2022) and extend recent observations (Meng et al., 2022; Wallat et al., 2020) that factual associations are stored in intermediate layers, showing that they are spread across the early MLP sublayers as well.
## 7 Attribute Extraction via Attention
The previous section showed that the subject representation is enriched with information throughout the early-middle layers. But recall that in our prediction task, only one specific attribute is sought. How is this particular attribute extracted and at which point? We show that (a) attribute extraction is typically carried out by the MHSA sublayers (§7.1) when the last position attends to the subject, (b) the extraction is non-trivial as it reduces the attribute's rank by the subject representation considerably (§7.2) and it depends on the subject enrichment process (§7.3), and (c) the relevant subject-attribute mappings are often stored in the MHSA parameters (§7.4). This is in contrast to the commonly held belief that MLPs hold such information.
### Attribute Extraction to the Last Position
Recall that the critical information flow from the subject representation (§5) occurs when this representation encodes many terms related to the subject, and after the critical information flow from the relation positions. Thus, the last-position representation at this point can be viewed as a query of the relation to the subject representation. We therefore hypothesize that the critical information that flows at this point is the attribute itself.
**Experiment: Extraction Rate.** To test this hypothesis, we inspect the MHSA updates to the last position in the vocabulary, and check whether the top-token by each update matches the attribute predicted at the final layer. Formally, let
\[t^{*}:=\arg\max(\mathbf{p}_{N}^{L})\ \ ;\ \ t^{\prime}:=\arg\max(E\mathbf{a}_{N} ^{\ell})\]
be the token predicted by the model and the top-token by the \(\ell\)-th MHSA update to the last position (i.e., \(\mathbf{a}_{N}^{\ell}\)). We check the agreement between \(t^{*}\) and \(t^{\prime}\) for every \(\ell\in[1,L]\), and refer to agreement cases (i.e. when \(t^{\prime}=t^{*}\)) as _extraction events_, as the attribute is being extracted by the MHSA. Similarly, we conduct the experiment while blocking the last position from attending to different positions (using attention knockout, see §5), and also apply it to the MLP updates to the last position.
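The agreement test can be sketched as follows (ours; we pass the final distribution \(\mathbf{p}_{N}^{L}\) and the update \(\mathbf{a}_{N}^{\ell}\) explicitly, and read the update in the vocabulary via the embedding matrix \(E\), as in the definition of \(t^{\prime}\)):

```python
import numpy as np

def is_extraction_event(a_update, p_final, E):
    """Check whether the top token promoted by an MHSA update a_N^l to the last
    position agrees with the model's final prediction t* = argmax p_N^L."""
    t_star = int(np.argmax(p_final))       # t*
    t_prime = int(np.argmax(E @ a_update)) # t'
    return t_prime == t_star
```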
Figure 4: The attributes rate of the subject representation with and without canceling updates from the MLP and MHSA sublayers in GPT-2.
**Results.** Fig. 5 shows the extraction rate for the MHSA and MLP updates across layers in GPT-2, and Tab. 2 provides per-example extraction statistics (similar results for GPT-J are in §E). When attending to all the input positions, the upper MHSA sublayers promote the attribute to the prediction (Fig. 5), with 68.2% of the examples exhibiting agreement events (Tab. 2). The layers at which extraction happens coincide with those where critical subject information propagates to the last position (Fig. 2), which further explains _why_ this information is critical for the prediction.
Considering the knockout results in Tab. 2, attribute extraction is dramatically suppressed when blocking the attention to the subject positions (30.2%) or non-subject positions (31.5%). Moreover, this suppression is alleviated when allowing the last position to attend to itself and to the subject representation (44.4%), overall suggesting that critical information is centered at these positions.
Last, the extraction rate by the MLP sublayers is substantially lower (31.3%) than by the MHSA. Further analysis shows that for 17.4% of these examples, extraction by the MLP was preceded by an extraction by the MHSA, and for another 10.2% no extraction was made by the MHSA sublayers. _This suggests that both the MHSA and MLP implement attribute extraction, but MHSA is the prominent mechanism for factual queries._
### Extraction Significance
A possible scenario is that the attribute is already located at the top of the projection by the subject representation, and the MHSA merely propagates it "as-is" rather than extracting it. We show that this is not the case, by comparing the attribute's rank in the subject representation and in the MHSA update. For every extraction event with a subject representation \(\mathbf{h}_{s}^{\ell}\), we check the attribute's rank in \(\delta(\mathbf{h}_{s}^{\ell})\), which indicates how prominent the extraction by the MHSA is (recall that, at an extraction event, the attribute's rank by the MHSA output is 1). We observe an average attribute's rank of 999.5, which shows that the extraction operation promotes the specific attribute over many other candidate tokens.
### Importance of Subject Enrichment
An important question is whether the subject representation enrichment is required for attribute extraction by the MHSA. Arguably, the attribute could have been encoded in early-layer representations or extracted from non-subject representations.
**Experiment.** We test this by "patching" early-layer representations at the subject positions and measuring the effect on the extraction rate. For each layer \(\ell=0,1,5,10,20\), with \(0\) being the embeddings, we take the representations at the subject positions and feed them as input to the MHSA at any succeeding layer \(\ell^{\prime}>\ell\). This simulates the operation of the MHSA sublayer at different stages of the enrichment process. Similarly, we patch the representations of the last position and of the other non-subject positions.
**Results.** Results for GPT-2 are shown in Fig. 6 (and for GPT-J in §E). Patching early subject representations decreases the extraction rate by up to 50%, which stresses the importance of attributes enrichment for attribute recall. In contrast, patching of non-subject representations has a weaker effect, which implies that they are "ready" very early in the computation. These observations are further supported by a gradient-based feature attribution analysis (§B), which shows the influence of the early subject representations on the prediction.

| | **Extraction rate** | **# of extracting layers** |
| --- | --- | --- |
| MHSA | 68.2 | 2.15 |
| - all but subj. last + last | 44.4 | 1.03 |
| - all non-subj. but last | 42.1 | 1.04 |
| - last | 39.4 | 0.97 |
| - subj. last | 37.7 | 0.82 |
| - all but last | 32.9 | 0.55 |
| - subj. last + last | 32.8 | 0.7 |
| - non-subj. | 31.5 | 0.71 |
| - subj. | 30.2 | 0.51 |
| - all but subj. last | 23.5 | 0.44 |
| - all but first | 0 | 0.01 |
| MLP | 31.3 | 0.38 |

Table 2: Per-example extraction statistics across layers, for the MHSA and MLP sublayers, and MHSA with interventions on positions: (non-)subj. for (non-)subject positions, last (first) for the last (first) input position.

Figure 5: Attribute extraction rate across layers, for the MHSA and MLP sublayers in GPT-2.

Figure 6: Extraction rate when patching representations from early layers at different positions in GPT-2.
Notably, for all the positions, a major increase in extraction rate is obtained in the first layer (e.g. \(0.05\to 0.59\) for non-subject positions), suggesting that the major overhead is done by the first layer.
### "Knowledge" Attention Heads
We further investigate how the attribute is extracted by the MHSA, by inspecting the attention heads' parameters in the embedding space and analyzing the mappings they encode for input subjects, using the interpretation by Dar et al. (2022).
**Analysis.** To get the top mappings for a token \(t\) by the \(j\)-th head at layer \(\ell\), we inspect the matrix \(W_{VO}^{\ell,j}\) in the embedding space with
\[G^{\ell,j}:=EW_{VO}^{\ell,j}E^{T}\in\mathbb{R}^{|\mathcal{V}|\times|\mathcal{V}|}, \tag{8}\]
by taking the \(k\) tokens with the highest values in the \(t\)-th row of \(G^{\ell,j}\). Notably, this is an approximation of the head's operation, which is applied to contextualized subject representations rather than to token embeddings. For every extraction event with a subject \(s\) and an attribute \(a\), we then check if \(a\) appears in the top-10 tokens for any of \(s\)'s tokens.
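Reading off one row of \(G^{\ell,j}\) avoids materializing the full \(|\mathcal{V}|\times|\mathcal{V}|\) matrix; below is a small sketch (ours, with hypothetical argument names) under the assumption that \(W_{VO}^{\ell,j}\) is given by its factors \(W_{V}^{\ell,j}\) and \(W_{O}^{\ell,j}\):

```python
import numpy as np

def head_top_mappings(E, W_V, W_O, token_id, k=10):
    """Top-k tokens that the head's W_VO = W_V W_O matrix maps `token_id` to in the
    vocabulary space, i.e. one row of G = E W_VO E^T from Eq. (8)."""
    row = E[token_id] @ W_V @ W_O @ E.T     # the token_id-th row of G, shape (|V|,)
    return list(np.argsort(-row)[:k])
```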
**Results.** We find that for 30.2% (39.3%) of the extraction events in GPT-2 (GPT-J), there is a head that encodes the subject-attribute mapping in its parameters (see examples in §G). Moreover, these specific mappings are spread over 150 attention heads in GPT-2, mostly in the upper layers (24-45). Interestingly, further analysis of the frequent heads shows they encode hundreds of such mappings, acting as "knowledge hubs" during inference (§G). _Overall, this suggests that factual associations are encoded in the MHSA parameters._
## 8 Related Work
Recently, there has been a growing interest in knowledge tracing in LMs. A prominent thread focused on locating layers Meng et al. (2022); Wallat et al. (2020) and neurons Dai et al. (2022) that store factual information, which often informs model editing approaches De Cao et al. (2021); Mitchell et al. (2022); Meng et al. (2022). Notably, Hase et al. (2023) showed that it is possible to change an encoded fact by editing weights in a different location from where methods suggest this fact is stored, which highlights how little we understand about how factual associations are built. Our work is motivated by this discrepancy and focuses on understanding the internal recall process of factual associations.
Our analysis also relates to studies of the prediction process in LMs Voita et al. (2019); Tenney et al. (2019). Specifically, Haviv et al. (2022) used fine-grained interventions to show that early MLP sublayers are crucial for memorized predictions. Also, Hernandez et al. (2023) introduced a method for editing knowledge encoded in hidden representations. More broadly, our approach relates to studies of how LMs organize information internally Reif et al. (2019); Hewitt and Manning (2019).
Mechanistic interpretability Olah (2022); Nanda et al. (2023) is an emerging research area. Recent works used projections to the vocabulary Dar et al. (2022); Geva et al. (2022); Ram et al. (2022) and interventions in the transformer computation Wang et al. (2022); Haviv et al. (2022) to study the inner-workings of LMs. A concurrent work by Mohebbi et al. (2023) studied contextualization in LMs by zeroing-out MHSA values, a method that effectively results in the same blocking effect as our _knockout_ method. In our work, we leverage such methods to investigate factual predictions.
## 9 Conclusion and Discussion
We carefully analyze the inner recall process of factual associations in auto-regressive transformer-based LMs, unveiling a core attribute extraction mechanism they implement internally. Our experiments show that factual associations are stored already in the lower layers in the network, and extracted eminently by the MLP sublayers during inference, to form attribute-rich subject representations. Upon a given subject-relation query, the correct attribute is extracted from these representations prominently through the MHSA sublayers, which often encode subject-attribute mappings in their parameters. These findings open new research directions for knowledge localization and model editing, which have primarily focused on modifying parameters indiscriminately in the network or targeted the intermediate MLP sublayers.
## Limitations
Some of our experiments rely on interpreting intermediate layer representations and parameters through projection to the vocabulary space. While this approach has been used widely in recent works (Geva et al., 2022, 2022; Ram et al., 2022; Nostalgebraist, 2020), it only provides an approximation of the information encoded in these vectors, especially in early layers. In principle, this could have been an explanation for the increasing attributes rate in Fig. 3. However, this clear trend is unlikely to be explained only by this, given the low attribute rate at the embedding layer and the increase observed in the last few layers where approximation is better (Geva et al., 2021).
Another limitation is that our attention knockout intervention method does not account for "information leakage" across positions. Namely, if we block attention edges between two positions at a specific layer, it is still possible that information was passed across these positions in earlier layers. For this reason, we block a range of layers rather than a single layer, which alleviates the possibility of such leakage. Moreover, our primary goal in this work was to identify critical attention edges, which are still critical even if such leakage occurs.
## Acknowledgements
We thank Asma Ghandeharioun for constructive feedback on this work.
|
2307.12646 | A 1.5-approximation algorithm for activating 2 disjoint $st$-paths | In the $Activation$ $k$ $Disjoint$ $st$-$Paths$ ($Activation$ $k$-$DP$)
problem we are given a graph $G=(V,E)$ with activation costs
$\{c_{uv}^u,c_{uv}^v\}$ for every edge $uv \in E$, a source-sink pair $s,t \in
V$, and an integer $k$. The goal is to compute an edge set $F \subseteq E$ of
$k$ internally node disjoint $st$-paths of minimum activation cost
$\displaystyle \sum_{v \in V}\max_{uv \in E}c_{uv}^v$. The problem admits an
easy $2$-approximation algorithm. Alqahtani and Erlebach [CIAC, pages 1-12,
2013] claimed that Activation 2-DP admits a $1.5$-approximation algorithm.
Their proof has an error, and we will show that the approximation ratio of
their algorithm is at least $2$. We will then give a different algorithm with
approximation ratio $1.5$. | Zeev Nutov, Dawod Kahba | 2023-07-24T09:38:44Z | http://arxiv.org/abs/2307.12646v1 | # A 1.5-approximation algorithm for activating 2 disjoint \(st\)-paths
###### Abstract
In the Activation \(k\)Disjoint \(st\)-Paths (Activation \(k\)-DP) problem we are given a graph \(G=(V,E)\) with activation costs \(\{c^{u}_{uv},c^{v}_{uv}\}\) for every edge \(uv\in E\), a source-sink pair \(s,t\in V\), and an integer \(k\). The goal is to compute an edge set \(F\subseteq E\) of \(k\) internally node disjoint \(st\)-paths of minimum activation cost \(\sum_{v\in V}\max_{uv\in E}c^{v}_{uv}\). The problem admits an easy 2-approximation algorithm. Alqahtani & Erlebach [1] claimed that Activation 2-DP admits a 1.5-approximation algorithm. The proof of [1] has an error, and we will show that the approximation ratio of their algorithm is at least 2. We will then give a different algorithm with approximation ratio 1.5.
Keywords: disjoint \(st\)-paths, activation problem, minimum power
## 1 Introduction
In network design problems one seeks a cheap subgraph that satisfies a prescribed property. A traditional setting is when each edge or node has a cost, and we want to minimize the cost of the subgraph. This setting does not capture many wireless networks scenarios, where a communication between two nodes depends on our "investment" in these nodes - like transmission energy and different types of equipment, and the cost incurred is a sum of these "investments". This motivates the type of problems we study here.
More formally, in **activation network design problems** we are given an undirected (multi-)graph \(G=(V,E)\) where every edge \(e=uv\in E\) has two (non-negative) **activation costs**\(\{c^{u}_{e},c^{v}_{e}\}\); here \(e=uv\in E\) means that the edge \(e\) has ends \(u,v\) and belongs to \(E\). An edge \(e=uv\in E\) is **activated by a level assignment**\(\{l_{v}:v\in V\}\) to the nodes if \(l_{u}\geq c^{u}_{e}\) and \(l_{v}\geq c^{v}_{e}\). The goal is to find a level assignment of minimum value \(l(V)=\sum_{v\in V}l_{v}\), such that the activated edge set \(F=\{e=uv\in E:c^{u}_{e}\leq l_{u},c^{v}_{e}\leq l_{v}\}\) satisfies a prescribed property. Equivalently, the minimum value level assignment that activates an edge set \(F\subseteq E\) is given by \(\ell_{F}(v)=\max\{c^{v}_{e}:e\in\delta_{F}(v)\}\); here \(\delta_{F}(v)\) denotes the set of edges in \(F\) incident to \(v\), and a maximum taken over an empty set is assumed to be zero. We seek an edge set \(F\subseteq E\) that satisfies the given property and minimizes \(\ell_{F}(V)=\sum_{v\in V}\ell_{F}(v)\). Note that while we use \(l_{v}\) to denote a level assignment to a node \(v\), we use a slightly different notation \(\ell_{F}(v)\) for the function that evaluates the optimal assignment that activates a given edge set \(F\).
Two types of activation costs were extensively studied in the literature, see a survey [14].
* **Node weights**. For all \(v\in V\), \(c^{v}_{e}\) are identical for all edges incident to \(v\). This is equivalent to having node weights \(w_{v}\) for all \(v\in V\). The goal is to find a node subset \(V^{\prime}\subseteq V\) of minimum total weight \(w(V^{\prime})=\sum_{v\in V^{\prime}}w_{v}\) such that the subgraph induced by \(V^{\prime}\) satisfies the given property.
* **Power costs**: For all \(e=uv\in E\), \(c_{e}^{u}=c_{e}^{v}\). This is equivalent to having "power costs" \(c_{e}=c_{e}^{u}=c_{e}^{v}\) for all \(e=uv\in E\). The goal is to find an edge subset \(F\subseteq E\) of minimum total power \(\sum_{v\in V}\max\{c_{e}:e\in\delta_{F}(v)\}\) that satisfies the given property.
Node weighted problems include many fundamental problems such as Set Cover, Node-Weighted Steiner Tree, and Connected Dominating Set c.f. [18, 8, 5]. Min-power problems were studied already in the 90's, c.f. [19, 21, 17, 7], followed by many more. They were also widely studied in directed graphs, usually under the assumption that to activate an edge one needs to assign power only to its tail, while heads are assigned power zero, c.f. [7, 20, 12, 6, 14]. The undirected case has an additional requirement - we want the network to be bidirected, to allow a bidirectional communication. The general activation setting was first suggested by Panigrahi [16] in 2011. Here we use a simpler but less general setting suggested in [9], which is equivalent to that of Panigrahi [16] for problems in which inclusion minimal feasible solutions have no parallel edges.
In the traditional edge-costs scenario, a fundamental problem in network design is the Shortest \(st\)-Path problem. A natural generalization and the simplest high connectivity network design problem is finding a set of \(k\) disjoint \(st\)-paths of minimum edge cost. Here the paths may be edge disjoint - the \(k\)Edge Disjoint \(st\)-Paths problem, or internally (node) disjoint - the \(k\)Disjoint \(st\)-Paths problem. Both problems can be reduced to the Min-Cost \(k\)-Flow problem, which has a polynomial time algorithm.
Similarly, one of the most fundamental problems in the activation setting is the Activation \(st\)-Path problem. For the min-power version, a linear time reduction to the ordinary Shortest \(st\)-Path problem is given by Althaus et al. [3]. Lando and Nutov [10] suggested a more general (but less efficient) "levels reduction" that converts several power problems into problems with node costs; this method extends also to the activation setting, see [14]. A fundamental generalization is activating a set of \(k\) internally disjoint or edge disjoint \(st\)-paths. Formally, the internally disjoint \(st\)-paths version is as follows.
**Activation \(k\) Disjoint \(st\)-Paths (Activation \(k\)-DP)**

_Input:_ A multi-graph \(G=(V,E)\) with activation costs \(\{c_{e}^{u},c_{e}^{v}\}\) for each \(uv\)-edge \(e\in E\), \(s,t\in V\), and an integer \(k\).

_Output:_ An edge set \(F\subseteq E\) of \(k\) internally disjoint \(st\)-paths of minimum activation cost.
Activation \(k\)-DP admits an easy approximation ratio \(2\), c.f. [14, Corollary 15.4] and is polynomially solvable on bounded treewidth graphs [2]. Node-Weighted Activation \(k\)-DP admits a polynomial time algorithm, by a reduction to the ordinary Min-Cost \(k\)-DP. However, the complexity status of Min-Power \(k\)-DP is open even for unit power costs - it is not known whether the problem is in P or is NPC; this is so even for \(k=2\).
In the augmentation version of the problem Activation \(k\)-DP Augmentation, we are also given a subgraph \(G_{0}=(V,E_{0})\) of \(G\) of activation cost zero that already contains \(k-1\) disjoint \(st\)-paths, and seek an augmenting edge set \(F\subseteq E\setminus E_{0}\) such that \(G_{0}\cup F\) contains \(k\) disjoint \(st\)-paths. The following lemma was implicitly proved in [1].
**Lemma 1** ([1]).: _If Activation \(2\)-DP Augmentation admits a polynomial time algorithm then Activation \(2\)-DP admits approximation ratio \(1.5\)._
The justification of Lemma 1 is as follows. We may assume that \(c_{e}^{s}=0\) and \(c_{e}^{t}=0\) for every edge \(e\) incident to \(s\) or to \(t\), respectively. For this, we "guess" the values \(l_{s}=\ell_{F^{*}}(s)\) and \(l_{t}=\ell_{F^{*}}(t)\) of some optimal solution \(F^{*}\) at \(s\) and \(t\), respectively; there are at most \(\deg_{G}(s)\cdot\deg_{G}(t)\) choices, so we can try all choices and return the best outcome. Then for
every edge \(e=sv\in E\), remove \(e\) if \(c_{e}^{s}>l_{s}\) and set \(c_{e}^{s}=0\) otherwise, and apply a similar operation on edges incident to \(t\). One can see that the new instance is equivalent to the original one. Since the activation cost incurred at \(s\) and \(t\) is now zero, the cheaper among the two disjoint \(st\)-paths of \(F^{*}\) has activation cost at most \(\frac{1}{2}\mathsf{opt}\), where \(\mathsf{opt}\) is the optimal solution value (to the modified problem). Thus if we compute an optimal \(st\)-path \(P\) and an optimal augmenting edge set for \(P\), the overall activation cost will be \(\frac{3}{2}\mathsf{opt}\).
When the paths are required to be only edge disjoint we get the Activation \(k\)-EDP problem. This problem admits an easy ratio \(2k\). Lando & Nutov [10] improved the approximation ratio to \(k\) by showing that Min-Power \(k\)-EDP Augmentation (the augmentation version of Min-Power \(k\)-EDP) admits a polynomial time algorithm. This algorithm extends to the activation case, see [14]. For simple graphs, Min-Power \(k\)-EDP admits ratio \(O(\sqrt{k})\)[15]. On the other hand [13] shows that ratio \(\rho\) for Min-Power or Node-Weighted \(k\)-EDP with unit costs/weights implies ratio \(\frac{1}{2\rho^{2}}\) for the Densest \(\ell\)-Subgraph problem, that currently has best known ratio \(O(n^{-(1/4+\epsilon)})\)[4] and approximation threshold \(\Omega\left(n^{-1/poly(\log\log n)}\right)\)[11].
Based on an idea of Srinivas & Modiano [20], Alqahtani & Erlebach [1] showed that Activation \(2\)-EDP is not harder to approximate than Activation \(2\)-DP.
**Lemma 2** (Alqahtani & Erlebach [1]).: _If Activation \(2\)-DP admits approximation ratio \(\rho\) then so does Activation \(2\)-EDP._
Alqahtani & Erlebach [1] claimed that Activation \(2\)-DP Augmentation admits a polynomial time algorithm, and thus (by Lemmas 1 and 2) both Activation \(2\)-DP and Activation \(2\)-EDP admit approximation ratio \(1.5\). In the next section we will give an example showing that the approximation ratio of the [1] algorithm for Activation \(2\)-DP Augmentation is not better than \(2\). Then we will give a different polynomial algorithm for Activation \(2\)-DP Augmentation that is based on dynamic programming. Thus combining with Lemmas 1 and 2 we have the following.
**Theorem 3**.: _Activation \(2\)-DP Augmentation admits a polynomial time algorithm. Thus both Activation \(2\)-DP and Activation \(2\)-EDP admit approximation ratio \(1.5\)._
## 2 A bad example for the Alqahtani-Erlebach Algorithm
To illustrate the idea of the [1] algorithm, let us first describe a known algorithm for a particular case of the Min-Cost \(2\)-EDP Augmentation problem, where we seek to augment a Hamiltonian \(st\)-path \(P\) of cost \(0\) by a min-cost edge set \(F\) such that \(P\cup F\) contains \(2\) edge disjoint \(st\)-paths. The algorithm reduces this problem to the ordinary Min-Cost \(st\)-Path problem as follows, see Fig. 1(a,b).
```
1 Construct an edge-weighted digraph \(D_{P}\) by directing \(P\) "backward" from \(t\) to \(s\), and directing every edge not in \(P\) "forward" - from predecessor to successor in \(P\).
2 Compute a shortest \(st\)-path \(P^{\prime}\) in \(D_{P}\).
3 Return the subset \(F\) of \(E\) that corresponds to the edges of \(P^{\prime}\setminus P\).
```
**Algorithm 1** Hamiltonian Min-Cost \(2\)-EDP Augmentation(\((V,E),c,P,\{s,t\}\))
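A possible implementation of this reduction is sketched below (ours; the input encoding is an assumption: the path is taken as \(0-1-\cdots-n\) with \(s=0\), \(t=n\), and `cost` maps each non-path edge \((u,v)\), \(u<v\), to its cost). The shortest-path step is carried out with a plain Dijkstra over the auxiliary digraph \(D_{P}\):

```python
import heapq

def hamiltonian_2edp_augment(n, cost):
    """Sketch of Algorithm 1.  The zero-cost Hamiltonian st-path is P = 0-1-...-n
    (s = 0, t = n); `cost` maps a non-path edge (u, v), u < v, to its cost
    (assumed nonnegative).  Returns a min-cost edge set F such that P plus F
    contains two edge-disjoint st-paths."""
    # Step 1: build D_P -- path edges directed backward with cost 0,
    # non-path edges directed forward with their own cost.
    adj = {v: [] for v in range(n + 1)}
    for v in range(1, n + 1):
        adj[v].append((v - 1, 0, None))
    for (u, v), c in cost.items():
        adj[u].append((v, c, (u, v)))
    # Step 2: Dijkstra from s = 0 to t = n, remembering the non-path edges used.
    dist, parent, heap = {0: 0}, {}, [(0, 0)]
    while heap:
        d, x = heapq.heappop(heap)
        if d > dist.get(x, float("inf")):
            continue
        for y, c, label in adj[x]:
            if d + c < dist.get(y, float("inf")):
                dist[y], parent[y] = d + c, (x, label)
                heapq.heappush(heap, (d + c, y))
    # Step 3: collect the non-path edges on the shortest 0 -> n path.
    F, x = [], n
    while x != 0:
        x, label = parent[x]
        if label is not None:
            F.append(label)
    return F
```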
A slight modification of this algorithm works for Activation \(2\)-EDP Augmentation. For \(v\in V\) let \(L_{v}=\{c_{vu}^{v}:vu\in E\}\) be the set of possible **levels** at \(v\). Apply the reduction in Algorithm 1, and then apply a step which we call **Levels Splitting**: for every pair \((v,l)\) where \(v\in V\) and \(l\in L_{v}\) we add a node \(v_{l}\) of weight \(l\), and put an edge from \(u_{l_{i}}\) to \(v_{l_{j}}\) if there is an edge \(e=uv\) in \(D_{P}\) with \(c_{e}^{u}\leq l_{i}\) and \(c_{e}^{v}\leq l_{j}\). The reduction here is to the
Node-Weighted \(st\)-Path problem. The latter problem can be easily reduced to the ordinary Min-Cost \(st\)-Path problem by a step which we call **In-Out Splitting**: Replace each node \(v\in V\setminus\{s,t\}\) by two nodes \(v^{\mathsf{in}},v^{\mathsf{out}}\) connected by the edge \(v^{\mathsf{in}}v^{\mathsf{out}}\), and redirect every edge that enters \(v\) to enter \(v^{\mathsf{in}}\) and every edge that leaves \(v\) to leave \(v^{\mathsf{out}}\), where we assume that \(s^{\mathsf{in}}=s^{\mathsf{out}}=s\) and \(t^{\mathsf{in}}=t^{\mathsf{out}}=t\). In this reduction the cost/weight of each edge \(v^{\mathsf{in}}v^{\mathsf{out}}\) is the weight \(w_{v}\) of \(v\), see Fig. 1(a,b,c). This is a particular case of the "Levels Reduction" of [10].
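The In-Out Splitting step itself is a simple graph transformation; a minimal sketch (ours, with hypothetical names) that takes a node-weighted digraph and returns the arcs of the equivalent edge-cost instance:

```python
def split_in_out(nodes, arcs, node_weight, s, t):
    """In-Out Splitting: reduce Node-Weighted st-Path to ordinary Min-Cost st-Path.
    Each node v other than s, t becomes v_in -> v_out with cost w_v; every original
    arc gets cost 0 and is redirected to the in/out copies accordingly."""
    def head(v):  # arcs entering v now enter v_in (s and t stay single nodes)
        return v if v in (s, t) else (v, "in")
    def tail(v):  # arcs leaving v now leave v_out
        return v if v in (s, t) else (v, "out")
    new_arcs = [((v, "in"), (v, "out"), node_weight[v])
                for v in nodes if v not in (s, t)]
    new_arcs += [(tail(u), head(v), 0) for (u, v) in arcs]
    return new_arcs
```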
One can also solve the version when we have ordinary edge costs and require that \(P\cup F\) contains 2 internally disjoint \(st\)-paths. For that, apply a standard reduction that converts edge connectivity problems into node connectivity ones, as follows
* After step 1 of Algorithm 1, add the In-Out Splitting step, where here the the cost of each edge \(v^{\mathsf{in}}v^{\mathsf{out}}\) is 0.
* Replace every edge \(u^{\mathsf{out}}v^{\mathsf{in}}\notin P\) by the edge \(u^{\mathsf{in}}v^{\mathsf{out}}\).
See Fig. 1(d), where after applying this reduction we switched the names \(v^{\mathsf{in}}\) and \(v^{\mathsf{out}}\), to be consistent with the [1] algorithm.
The algorithm of [1] attempts to combine the latter reduction with the Levels Reduction in a sophisticated way. In the case when \(G\) has a zero cost Hamiltonian \(st\)-path \(P\), \(st\notin E\), \(L=\{0,1\}\), \(L_{s}=L_{t}=\{0\}\), and \(c^{u}_{uv}=1\) for all \(e=uv\in E\setminus E(P)\) and \(u\in V\setminus\{s,t\}\), the [1] algorithm reduces to the following, see Fig. 1(a,e).
1. Construct an edge-weighted directed graph \(D_{P}\) with nodes \(s=s_{0}^{\mathsf{out}},t=t_{0}^{\mathsf{in}}\) and 4 nodes \(\{v_{0}^{\mathsf{in}},v_{0}^{\mathsf{out}},v_{1}^{\mathsf{in}},v_{1}^{\mathsf{out}}\}\) for every \(v\in V\setminus\{s,t\}\). The edges of \(D_{P}\) and their weights are:
   * For \(v\in V\setminus\{s,t\}\): \(w(v_{a}^{\mathsf{out}}v_{a}^{\mathsf{in}})=0\), \(a\in\{0,1\}\).
   * For \(uv\in P\): \(w(v_{b}^{\mathsf{in}}u_{a}^{\mathsf{out}})=a\), \(a,b\in\{0,1\}\).
   * For \(uv\notin P\): \(w(u_{a}^{\mathsf{out}}v_{b}^{\mathsf{in}})=b\), \(a,b\in\{0,1\}\), \(u_{a}v_{b}\in E\).
2. Compute a cheapest \(st\)-path \(P^{\prime}\) in \(D_{P}\) and return the subset of \(E\) that corresponds to \(P^{\prime}\).
Figure 1: Augmenting a Hamiltonian \(st\)-path to two edge/internally disjoint \(st\)-paths. Black edges have cost 0, blue and red edges have cost 1. (a) Problem instance. (b) Reducing Min-Cost 2-EDP Augmentation to Min-Cost \(st\)-Path. (c) Levels splitting, assuming that the activation costs of the blue edges in (a) are 1 at \(x,v,y\) and 0 at \(s,t\). (d) Reducing Min-Cost 2-DP Augmentation to Min-Cost \(st\)-Path. (e) The reduction of [1].
Here for \(e=uv\in E\) we write \(u_{a}v_{b}\in E\) meaning that \(c_{e}^{u}\leq a\) and \(c_{e}^{v}\leq b\), namely, that \(uv\) can be activated by assigning \(a\) units to \(u\) and \(b\) units to \(v\). In the example in Fig. 1(d), the weight of \(P^{\prime}\) is 2 while the optimal solution value is 3. Still, in this example the [1] algorithm computes an optimal solution. We give a more complicated example, which shows that the approximation ratio of this algorithm is no better than 2. Consider the graph in Fig. 2(a), with the initial \(st\)-path
\[s-u-v-x-y-z-p-q-t\.\]
The optimal solution \(\{sx,xz,zt,uy,yq\}\) (the blue edges and the \(uy,yq\) edges) has value 2 (level assignment \(l_{x}=l_{z}=1\) and 0 otherwise); the \(s_{0}^{\mathsf{out}}t_{0}^{\mathsf{in}}\)-path in \(D_{P}\) of weight 4 that corresponds to this solution is (see Fig. 2(b)):
\[s_{0}^{\mathsf{out}}\to x_{1}^{\mathsf{in}}\to v_{0}^{\mathsf{out}}\to v_{0}^{\mathsf{in}}\to u_{0}^{\mathsf{out}}\to y_{0}^{\mathsf{in}}\to x_{1}^{\mathsf{out}}\to z_{1}^{\mathsf{in}}\to y_{0}^{\mathsf{out}}\to q_{0}^{\mathsf{in}}\to p_{0}^{\mathsf{out}}\to p_{0}^{\mathsf{in}}\to z_{1}^{\mathsf{out}}\to t_{0}^{\mathsf{in}}\.\]
The solution \(\{sv,uy,xz,yq,pt\}\) (the red edges and the \(uy,yq,xz\) edges) has value 4 (level assignment \(l_{v}=l_{x}=l_{z}=l_{p}=1\) and 0 otherwise); the \(s_{0}^{\mathsf{out}}t_{0}^{\mathsf{in}}\)-path in \(D_{P}\) of weight 4 that corresponds to this solution is (see Fig. 2(c)):
\[s_{0}^{\mathsf{out}}\!\to v_{1}^{in}\to u_{0}^{\mathsf{out}}\to y_{0}^{ \mathsf{in}}\!\to x_{1}^{\mathsf{out}}\!\to z_{1}^{\mathsf{in}}\to y_{0}^{ \mathsf{out}}\to q_{0}^{\mathsf{in}}\!\to p_{1}^{\mathsf{out}}\to t_{0}^{ \mathsf{in}}\.\]
So in \(D_{P}\), both paths have the same weight 4, but one path gives a solution of value 2 while the other of value 4.
## 3 Proof of Theorem 3
In this section we will prove Theorem 3 - that Activation 2-DP Augmentation admits a polynomial time algorithm.
Figure 2: Illustration to the [1] Algorithm. Black edges have weight/thresholds 0. (a) The input graph; colored edges have thresholds 0 at \(s,t\) and 1 otherwise. (b) The edge weighted directed graph \(D_{P},w\) constructed in the AE reduction and the path (shown by dashed lines) in \(D_{P}\) of weight 4 that corresponds to the optimal solution \(\{sx,xz,zt,uy,yq\}\). (c) The path (shown by dashed lines) in \(D_{P}\) of weight 4 that corresponds to the solution \(\{sv,uy,xz,yq,pt\}\).
Recall that for \(F\subseteq E\) and \(v\in V\) we denote by \(\ell_{F}(v)=\max_{e\in\delta_{F}(v)}c_{e}^{v}\) the activation cost incurred by \(F\) at \(v\), and that for \(S\subseteq V\) the activation cost incurred by \(F\) at nodes in \(S\) is
\[\ell_{F}(S)=\sum_{v\in S}\ell_{F}(v)=\sum_{v\in S}\max_{e\in\delta_{F}(v)}c_{e}^ {v}\.\]
For the proof of Theorem 3 it would be convenient to consider a more general problem where each edge \(e=uv\in E\) has three costs \(c_{e}^{u},c(e),c_{e}^{v}\), where \(c_{e}^{u},c_{e}^{v}\) are the activation costs of \(e\) and \(c(e)\) is the ordinary "middle" cost of \(e\). We now describe a method to convert an Activation 2-DP Augmentation instance into an equivalent instance in which \(P\) is a Hamiltonian path but every edge has three costs as above. We call this problem 3-Cost Hamiltonian Activation 2-DP Augmentation.
Let \(\mathcal{I}=(G=(V,E),c,s,t,P)\) be an Activation 2-DP Augmentation instance. Let us say that a \(uv\)-path \(Q\) in \(G\) is an **attachment path** if \(u,v\in P\) but \(Q\) has no internal node in \(P\). Note that any inclusion minimal edge set that contains 2 internally disjoint \(st\)-paths is a cycle. This implies that if \(F\) is an inclusion minimal solution to Activation 2-DP Augmentation then \(\deg_{F}(v)\in\{0,2\}\) for every node \(v\in V\setminus V(P)\), hence \(F\) partitions into attachment paths. This enables us to apply a preprocessing similar to metric completion, and to construct an equivalent 3-Cost Hamiltonian Activation 2-DP Augmentation instance \(\hat{\mathcal{I}}=(\hat{G}=(\hat{V},\hat{E}),\hat{c},s,t,P)\). For this, for every \(u,v\in P\) and \((l_{u},l_{v})\in L_{u}\times L_{v}\) do the following.
1. Among all attachment \(uv\)-paths that have activation costs \(l_{u}\) at \(u\) and \(l_{v}\) at \(v\) (if any), compute the cheapest one \(Q(l_{u},l_{v})\).
2. If \(Q(l_{u},l_{v})\) exists, add a new edge \(e=uv\) with activation costs \(\hat{c}_{e}^{u}=l_{u},\hat{c}_{e}^{v}=l_{v}\), and ordinary cost \(\hat{c}_{e}=\ell_{Q(l_{u},l_{v})}(V)-(l_{u}+l_{v})\) being the activation cost of \(Q(l_{u},l_{v})\) on internal nodes of \(Q(l_{u},l_{v})\).
After that, remove all nodes in \(V\setminus V(P)\). Now \(P\) is a Hamiltonian path, and we get a 3-Cost Hamiltonian Activation 2-DP Augmentation instance \(\hat{\mathcal{I}}=(\hat{G},\hat{c},s,t,P)\). It is easy to see that the instance \(\hat{\mathcal{I}}\) can be constructed in polynomial time. Note that the instance \(\hat{\mathcal{I}}\) may have many parallel edges, but this is allowed, also in the original instance \(\mathcal{I}\).
Now consider some feasible solution \(F\) to \(\mathcal{I}\). Replacing every attachment path contained in \(F\) by a single edge as in step 2 above gives a feasible solution \(\hat{F}\) to \(\hat{\mathcal{I}}\) of value at most that of \(F\). Conversely, if \(\hat{F}\) is a feasible \(\hat{\mathcal{I}}\) solution, then replacing every edge in \(\hat{F}\) by an appropriate path gives a feasible solution \(F\) to \(\mathcal{I}\) of value at most that of \(\hat{F}\). Consequently, the new instance is equivalent to the original instance in the sense that every feasible solution to one of the instances can be converted to a feasible solution to the other instance of no greater value. We summarize this as follows.
**Corollary 4**.: _If \(3\)-Cost Hamiltonian Activation 2-DP Augmentation admits a polynomial time algorithm then so does Activation 2-DP Augmentation._
So from now on our problem is 3-Cost Hamiltonian Activation 2-DP Augmentation. Let us denote by \(\tau(F)\) the sum of the ordinary and the activation cost of \(F\subseteq E\), namely
\[\tau(F)=\ell_{F}(V)+c(F)=\sum_{v\in V}\max_{e\in\delta_{F}(v)}c_{e}^{v}+\sum_ {e\in F}c(e)\.\]
Let \(\mathsf{opt}\) denote an optimal solution value for an instance of this problem. We will assume that \(V=\{0,1,\ldots,n\}\) and that \(P=0-1-\cdots-n\) is a (Hamiltonian) \((0,n)\)-path, and view each edge \(ij\notin P\) as a directed edge \((i,j)\) where \(i<j\). Our goal is to find an edge set \(F\subset E\) such that \(P\cup F\) contains 2 internally disjoint \((0,n)\)-paths and such that \(\tau(F)\) is minimal.
**Definition 5**.: _For \(0\leq i<j<n\) let \(\mathcal{F}_{i,j}\) denote the family of all edge sets \(F\subseteq E\) that satisfy the following two conditions._
1. \(F\) _is an inclusion minimal edge set such that_ \(P\cup F\) _contains_ \(2\) _internally disjoint_ \((j-1,n)\)_-paths._
2. _No edge in_ \(F\) _has an end strictly preceding_ \(i\)_, namely, if_ \((x,y)\in F\) _then_ \(x\geq i\)_._
We will need the following (essentially known) "recursive" property of the sets in \(\mathcal{F}_{i,j}\).
**Lemma 6**.: \(F\in\mathcal{F}_{i,j}\) _if and only if there exists \(i\leq x<j\) such that exactly one of the following holds, see Fig. 3(a)._
1. \(F=\{(x,n)\}\)_._
2. \(F=\{(x,y)\}\cup F^{\prime}\) _for some_ \((x,y)\in E\) _with_ \(i\leq x<j<y<n\) _and_ \(F^{\prime}\in\mathcal{F}_{j,y}\)_._
Proof.: It is easy to see that if \(|F|=1\) then (i) must hold. Assume that \(|F|\geq 2\). There is \((x,y)\in F\) with \(x<j<y\) as otherwise \((P\cup F)\setminus\{j\}\) has no \((j-1,n)\)-path. Let \(F^{\prime}=F\setminus\{(x,y)\}\). Then \(P\cup F^{\prime}\) contains \(2\) internally disjoint \((y-1,n)\)-paths, as otherwise \((P\cup F)\setminus\{y\}\) has no \((j-1,n)\)-path. Let \(x^{\prime}\) be the lowest end of an edge in \(F^{\prime}\), let \((x^{\prime},y^{\prime})\in F^{\prime}\), and let \(F^{\prime\prime}=F\setminus\{(x^{\prime},y^{\prime})\}\). If \(x^{\prime}<j\) then \(F^{\prime}\in\mathcal{F}_{i,j}\) (if \(y^{\prime}\geq y\)) or \(F^{\prime\prime}\in\mathcal{F}_{i,j}\) (if \(y^{\prime}\leq y\)), contradicting the minimality of \(F\). If \(x^{\prime}\geq y\) then \((P\cup F)\setminus\{y\}\) has no \((j-1,n)\)-path, contradicting \(F\in\mathcal{F}_{i,j}\). Hence \(j\leq x^{\prime}<y\). This implies that \(F^{\prime}\in\mathcal{F}_{j,y}\), hence (ii) holds.
We note that Lemma 6 has the following (essentially known) consequence. Consider an inclusion minimal feasible solution \(F\) to our problem. Then the edges in \(F\) have an order \((v_{0},v_{2}),(v_{1},v_{4}),(v_{3},v_{6}),\ldots,(v_{2q-1},v_{2q+1})\) such that (see Fig. 3(b))
\[0=v_{0}<v_{1}<v_{2}\leq v_{3}<v_{4}\leq v_{5}<\cdots\leq v_{2q-1}<v_{2q}<v_{2 q+1}=n\.\]
Note that in this node sequence some nodes may be identical (e.g., we may have \(v_{2}=v_{3}\)), while the others are required to be distinct (e.g., \(v_{0}<v_{1}<v_{2}\)).
For \((l_{i},l_{j})\in L_{i}\times L_{j}\) let \(E(l_{i},l_{j})=\{e\in E:\ell_{e}(i)\leq l_{i},\ \ell_{e}(j)\leq l_{j}\}\). Let \(0\leq i<j<n\). For \(F\subseteq E\) the \((l_{i},l_{j})\)-**forced cost of \(F\)** is defined by
\[\alpha_{F}(l_{i},l_{j})=\left\{\begin{array}{ll}c(F)+\ell_{F}(V\setminus\{ i,j\})+l_{i}+l_{j}&\mbox{ if }\;F\subseteq E(l_{i},l_{j})\\ \infty&\mbox{otherwise}\end{array}\right.\]
Namely, assuming \(F\subseteq E(l_{i},l_{j})\) we pay \(\ell_{F}(v)\) at every node \(v\in V\setminus\{i,j\}\), and in addition we "forcefully" pay \(l_{i}\) at \(i\) and \(l_{j}\) at \(j\). Note that
\[\alpha_{F}(l_{i},l_{j})=c(F)+\ell_{F}(V\setminus\{i,j\})+l_{i}+l_{j}=\tau(F)+ (l_{i}-\ell_{F}(i))+(l_{j}-\ell_{F}(j))\.\]
This implies the following.
Figure 3: Illustration to Lemma 6. Three dots between nodes indicate that these nodes are distinct, while if there are only two dots then the nodes may coincide.
**Corollary 7**.: \(\alpha_{F}(l_{i},l_{j})\geq\tau(F)\) _and an equality holds if and only if \(\ell_{F}(i)=l_{i}\) and \(\ell_{F}(j)=l_{j}\)._
Let \(f(l_{i},l_{j})\) denote the minimal \((l_{i},l_{j})\)-forced cost of an edge set \(F\in\mathcal{F}_{i,j}\), namely
\[f(l_{i},l_{j})=\min_{F\in\mathcal{F}_{i,j}}\alpha_{F}(l_{i},l_{j}). \tag{1}\]
The number of possible values of \(f(l_{i},l_{j})\) is \(O(|V|^{2}|L|^{2})\) - there are \(O(|V|^{2})\) choices of \(i,j\) and at most \(|L|^{2}\) choices of \(l_{i},l_{j}\). We will show how to compute all these values in polynomial time using dynamic programming. Specifically, we will get a recursive formula that enables us to compute each value either directly, or using previously computed values.
The next lemma shows how the function \(f(l_{i},l_{j})\) is related to our problem.
**Lemma 8**.: \(\mathsf{opt}=\min_{l_{0},l_{1}\in L}f(l_{0},l_{1})\)_._
Proof.: From the definition of \(\mathcal{F}_{i,j}\) it follows that \(\mathcal{F}_{0,1}\) is the set of all inclusion minimal feasible solutions. Let \(F^{*}\) be an inclusion minimal optimal solution and let \(\hat{F}\) be the minimizer of (1). Then for any \(l_{0},l_{1}\) we have
\[f(l_{0},l_{1})=\alpha_{\hat{F}}(l_{0},l_{1})\geq\tau(\hat{F})\geq\tau(F^{*})= \mathsf{opt}\.\]
Consequently, \(\min_{l_{0},l_{1}\in L}f(l_{0},l_{1})\geq\mathsf{opt}\). On the other hand for \(l_{0}=\ell_{F^{*}}(0)\) and \(l_{1}=\ell_{F^{*}}(1)\) we have
\[f(l_{0},l_{1})=\alpha_{F^{*}}(l_{0},l_{1})=\tau(F^{*})=\mathsf{opt}\,\]
by Corollary 7. This implies \(\min_{l_{0},l_{1}\in L}f(l_{0},l_{1})\leq\mathsf{opt}\), concluding the proof.
When \(F=\{e\}\) is a single edge we will use the abbreviated notation \(\alpha_{e}(l_{i},l_{j})=\alpha_{\{e\}}(l_{i},l_{j})\). For \((l_{i},l_{y})\in L_{i}\times L_{y}\) and an edge \(e=(x,y)\in E(l_{i},l_{y})\) with \(i\leq x<y\) let \(\beta_{e}(l_{i},l_{y})\) be defined by
\[\beta_{e}(l_{i},l_{y})=\left\{\begin{array}{ll}0&\mbox{if $x=i$}\\ \ell_{e}(x)&\mbox{if $x>i$}\end{array}\right.\]
We now define two functions that reflect the two different scenarios in Lemma 6.
\[g(l_{i},l_{j}) = \min_{e}\{\alpha_{e}(l_{i},l_{j}):e=(x,n)\in E,i\leq x<j\}\] \[h(l_{i},l_{j}) = \min_{l_{y},e}\{c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+f(l_{j},l_{y}): e=(x,y)\in E(l_{i},l_{y}),i\leq x<j<y\}\]
It is not hard to see that the function \(g(l_{i},l_{j})\) is the minimal \((l_{i},l_{j})\)-forced cost of a singleton set \(\{e\}\) with \(e=(x,n)\) such that \(\{e\}\in\mathcal{F}_{i,j}\). Thus if there exists a minimizer of (1) that is a single edge, then \(f(l_{i},l_{j})=g(l_{i},l_{j})\).
We will show that the function \(h(l_{i},l_{j})\) is the minimal \((l_{i},l_{j})\)-forced cost of a non-singleton set \(F\in\mathcal{F}_{i,j}\); note that by Lemma 6 any such \(F\) is the union of some single edge \((x,y)\in F\) with \(i\leq x<j<y<n\) and a set \(F^{\prime}\in\mathcal{F}_{j,y}\). We need the following lemma.
**Lemma 9**.: _Let \(F\in\mathcal{F}_{i,j}\) such that \(F\subseteq E(l_{i},l_{j})\) and \(|F|\geq 2\). Let \(e=(x,y)\in F\) be the first edge of \(F\) as in Lemma 6, let \(F^{\prime}=F\setminus\{e\}\), and let \(l_{y}=\ell_{F}(y)\). Then_
\[\alpha_{F}(l_{i},l_{j})=c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+\alpha_{F^{\prime}} (l_{j},l_{y})\]
Proof.: One can verify that for \(l_{y}=\ell_{F}(y)\) we have \(F^{\prime}\subseteq E(l_{j},l_{y})\) and the following holds:
\[\ell_{F}(V\setminus\{i,j\})=\beta_{e}(l_{i},l_{y})+\ell_{F^{\prime}}(V\setminus \{j,y\})+l_{y}\.\]
From this and using that \(c(F)=c(e)+c(F^{\prime})\) we get
\[\alpha_{F}(l_{i},l_{j}) = c(F)+\ell_{F}(V\setminus\{i,j\})+l_{i}+l_{j}\] \[= c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+c(F^{\prime})+\ell_{F^{\prime} }(V\setminus\{j,y\})+l_{y}+l_{j}\] \[= c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+\alpha_{F^{\prime}}(l_{j},l_{ y})\,\]
as required.
**Lemma 10**.: _Among all non-singleton sets in \(\mathcal{F}_{i,j}\) let \(F^{*}\) have minimal \((l_{i},l_{j})\)-forced cost. Then \(h(l_{i},l_{j})=\alpha_{F^{*}}(l_{i},l_{j})\)._
Proof.: We show that \(\alpha_{F^{*}}(l_{i},l_{j})\geq h(l_{i},l_{j})\). Let \(e=(x,y)\in F^{*}\) be the first edge of \(F^{*}\) as in Lemma 6(ii), let \(F^{\prime}=F^{*}\setminus\{e\}\), and let \(l_{y}=\ell_{F^{*}}(y)\). Then \(e\in E(l_{i},l_{y})\), hence by Lemma 9
\[\alpha_{F^{*}}(l_{i},l_{j})=c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+\alpha_{F^{ \prime}}(l_{j},l_{y})\geq h(l_{i},l_{j})\.\]
The inequality holds since in the definition of \(h(l_{i},l_{j})\) we minimize over \(e=(x,y)\) and \(l_{y}\).
We show that \(\alpha_{F^{*}}(l_{i},l_{j})\leq h(l_{i},l_{j})\). Let \(e=(x,y)\in E\) and \(l_{y}\) be the parameters for which the minimum in the definition of \(h(l_{i},l_{j})\) is attained, and let \(F^{\prime}\in\mathcal{F}_{j,y}\) such that \(f(l_{j},l_{y})=\alpha_{F^{\prime}}(l_{j},l_{y})\). Let \(F=F^{\prime}\cup\{e\}\) and note that \(F\in\mathcal{F}_{i,j}\) (by Lemma 6) and that \(l_{y}=\ell_{F}(y)\) (by the definition of \(h(l_{i},l_{j})\)). Consequently,
\[\alpha_{F^{*}}(l_{i},l_{j})\leq\alpha_{F}(l_{i},l_{j})=c(e)+l_{i}+\beta_{e}(l _{i},l_{y})+f(l_{j},l_{y})=h(l_{i},l_{j})\]
The inequality holds since \(F^{*}\) has minimal \((l_{i},l_{j})\)-forced cost.
We showed that \(\alpha_{F^{*}}(l_{i},l_{j})\geq h(l_{i},l_{j})\) and that \(\alpha_{F^{*}}(l_{i},l_{j})\leq h(l_{i},l_{j})\), hence the proof is complete.
Let \(F^{*}\) be the minimizer of (1). From Lemma 10 we have:
* If \(|F^{*}|=1\) then \(f(l_{i},l_{j})=g(l_{i},l_{j})\).
* If \(|F^{*}|\geq 2\) then \(f(l_{i},l_{j})=h(l_{i},l_{j})\).
Therefore
\[f(l_{i},l_{j})=\min\{g(l_{i},l_{j}),h(l_{i},l_{j})\} \tag{2}\]
Note that the quantities \(g(l_{i},l_{j})\) can be computed directly in polynomial time. The recurrence in (2) enables us to compute the values of \(f(l_{i},l_{j})\), for all \(0\leq i<j\leq n-1\) and \((l_{i},l_{j})\in L_{i}\times L_{j}\), in polynomial time. The number of such values is \(O(n^{2}|L|^{2})\), concluding the proof of Theorem 3.
Let us illustrate the recursion by showing how the values of \(f(l_{i},l_{j})\) are computed for \(j=n-1,n-2\). Recall that the values of the function \(g\) are computed directly, without recursion, and that
\[h(l_{i},l_{j})=\min_{l_{y},e}\{c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+f(l_{j},l_{y}):e=(x,y)\in E(l_{i},l_{y}),i\leq x<j<y\}\]
For \(j=n-1\) we have \(h(l_{i},l_{j})=\infty\), and thus:
\[f(l_{i},l_{n-1})=g(l_{i},l_{n-1})\]
For \(j=n-2\), the only possible value of \(y\) is \(y=n-1\). For every \(i\leq x<y=n-1\) and \(e=(x,y)\in E\) we compute directly (without recursion) the values \(\beta_{e}(l_{i},l_{y})\). Then
\[h(l_{i},l_{n-2}) = \min_{l_{y},e}\{c(e)+l_{i}+\beta_{e}(l_{i},l_{y})+f(l_{j},l_{y}):e =(x,y)\in E(l_{i},l_{y}),i\leq x<j<y\}\] \[= \min_{l_{n-1},e}\left\{c(e)+l_{i}+\beta_{e}(l_{i},l_{n-1})+f(l_{n- 2},l_{n-1}):\begin{array}{l}e=(x,n-1)\in E(l_{i},l_{n-1})\\ i\leq x<j<n-1\end{array}\right\}\]
Substituting the already computed value \(f(l_{n-2},l_{n-1})=g(l_{n-2},l_{n-1})\) enables us to compute the minimum of the obtained expression, and thus also to compute \(f(l_{i},l_{n-2})\) via (2).
In a similar way we can compute \(h(l_{i},l_{n-3})\), then \(f(l_{i},l_{n-3})\), and so on.
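To make the recursion concrete, here is a minimal Python sketch of the resulting dynamic program (our own illustration, not code from the paper). It memoizes \(f\) over the \(O(n^{2}|L|^{2})\) states and assumes problem-specific helpers `alpha_edge`, `beta`, and `in_E` computing the forced cost of a single edge, the quantity \(\beta_{e}\), and membership in \(E(l_{i},l_{y})\); these names and the `(x, y, cost)` edge representation are our own choices, not notation from above.

```python
import math
from functools import lru_cache

def solve(n, edges, L, alpha_edge, beta, in_E):
    """Compute opt = min_{l_0,l_1} f(l_0,l_1) via recurrence (2) and Lemma 8."""

    def g(i, j, l_i, l_j):
        # minimal (l_i, l_j)-forced cost of a single edge (x, n) with i <= x < j
        costs = [alpha_edge(e, l_i, l_j) for e in edges if e[1] == n and i <= e[0] < j]
        return min(costs, default=math.inf)

    @lru_cache(maxsize=None)
    def f(i, j, l_i, l_j):
        best = g(i, j, l_i, l_j)                      # singleton minimizer
        for (x, y, c) in edges:                       # non-singleton case: function h
            if i <= x < j < y < n:
                for l_y in L:
                    if in_E((x, y, c), l_i, l_y):     # e must lie in E(l_i, l_y)
                        best = min(best, c + l_i + beta((x, y, c), l_i, l_y)
                                          + f(j, y, l_j, l_y))
        return best

    return min(f(0, 1, l0, l1) for l0 in L for l1 in L)
```

The recursion terminates because the second node index strictly increases in each nested call, and each of the \(O(n^{2}|L|^{2})\) states is evaluated with \(O(|E||L|)\) work, matching the polynomial bound stated above.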
|
2310.11958 | Emptying the Ocean with a Spoon: Should We Edit Models? | We call into question the recently popularized method of direct model editing
as a means of correcting factual errors in LLM generations. We contrast model
editing with three similar but distinct approaches that pursue better defined
objectives: (1) retrieval-based architectures, which decouple factual memory
from inference and linguistic capabilities embodied in LLMs; (2) concept
erasure methods, which aim at preventing systemic bias in generated text; and
(3) attribution methods, which aim at grounding generations into identified
textual sources. We argue that direct model editing cannot be trusted as a
systematic remedy for the disadvantages inherent to LLMs, and while it has
proven potential in improving model explainability, it opens risks by
reinforcing the notion that models can be trusted for factuality. We call for
cautious promotion and application of model editing as part of the LLM
deployment process, and for responsibly limiting the use cases of LLMs to those
not relying on editing as a critical component. | Yuval Pinter, Michael Elhadad | 2023-10-18T13:38:03Z | http://arxiv.org/abs/2310.11958v1 | # Emptying the Ocean with a Spoon: Should We Edit Models?
###### Abstract
We call into question the recently popularized method of direct model editing as a means of correcting factual errors in LLM generations. We contrast _model editing_ with three similar but distinct approaches that pursue better defined objectives: (1) _retrieval-based architectures_, which decouple factual memory from inference and linguistic capabilities embodied in LLMs; (2) _concept erasure methods_, which aim at preventing systemic bias in generated text; and (3) _attribution methods_, which aim at grounding generations into identified textual sources.
We argue that direct model editing cannot be trusted as a systematic remedy for the disadvantages inherent to LLMs, and while it has proven potential in improving model explainability, it opens risks by reinforcing the notion that models can be trusted for factuality. We call for cautious promotion and application of model editing as part of the LLM deployment process, and for responsibly limiting the use cases of LLMs to those not relying on editing as a critical component.
## 1 Introduction
Large language models, or LLMs, have taken the NLP world by storm. After originally focusing on training them as a vehicle for transfer learning, recent advancements have cast LLMs in the role of one-stop shop, all-knowing oracles. One particular finding that has contributed to this reframed usage is the apparent **factuality** property of pre-trained LLMs: somehow, mere next-word prediction training has produced models that can complete certain correct facts about the world when asked to do so. LLMs have been perceived by the public at large as a replacement for search engines, and the scant disclaimers appearing in commercial services offering LLM querying do not appear to have made a dent in this perception.
Despite these developments, LLMs of course keep making mistakes. There is, after all, an inherent mismatch between their pre-training objective and the desire for factuality. In recent years, researchers have come up with several remedies to the problem of nonfactual LLM outputs, one of which being **model editing**, where parameters inside an LLM are tweaked based on individual facts marked for correction. These works focus on solving problems such as ensuring stability of other factual outputs following the editing, or batch editing, or computationally efficient editing. In this opinion paper, we call into question the entire _idea_ of model editing, citing concerns about intended use cases, conceptual scalability, potential for bias, safety, and overall accountability. We advocate using models that have explicit knowledge modules for tasks requiring factual knowledge, and other existing techniques to mitigate the need for model editing in the first place. Having said that, we acknowledge the usefulness of direct editing for certain use cases such as interpretability probes, and recommend limiting editing to such scenarios.
## 2 Model Editing
Sinitsin et al. (2020) first introduce the notion of updating large ML models to account for local performance expectations motivated externally. They cite cases where mistakes are critical, like object detection in self-driving cars. Later work suggests model editing can aid in protecting privacy and eliminating bias Zhu et al. (2020), and as a measure of “catching up” with time-sensitive facts such as “PM of the UK” changing over time Bommasani et al. (2021); Mitchell et al. (2022), or as a means of understanding the mechanics of black box models Meng et al. (2022). Sinitsin et al. (2020) specify several desired properties of editing methods: **reliability** (the target facts are updated as intended), **locality** (other changes don't happen; a measure for this property is termed **drawdown**), and **efficiency** (successful edits are computationally light); later work De Cao et al. (2021) added **generality** (the
ability to modify models not originally trained for knowledge retention), **consistency** (robustness to paraphrasing, in the specific use case of text models), and an unspecified property we'll call **frugality** (only minimal components of the model are changed during editing). A well-studied limitation, which most of these works focus on mitigating, and related to locality, is catastrophic forgetting Ratcliff (1990), i.e., ensuring that an edited model does not lose performance on tasks for which it was explicitly trained and performs well on.
Methods for model editing have evolved from editable training Sinitsin et al. (2020), a procedure requiring an a-priori decision that a model would later be edited, to the locality-motivated changing of specific parameters within models Zhu et al. (2020); Meng et al. (2022). Recent work Mitchell et al. (2022) draws attention to the danger of model performance degradation accumulating over the course of many successive edits, and seeks mitigation through improved methods. Hase et al. (2023) extend the consistency requirement to hold over entailed and equivalent facts as well as paraphrases, suggesting model editing as a way of reconciling cases where certain facts are produced correctly but those entailed from them are not.
## 3 Critique
In this section, we present the case against model editing as a practice, regardless of the performance obtained by any single method. We begin with an analysis of the premise underlying the model editing research objective: the hypothesis that LLMs can be used as fact repositories. We then focus on reasons for which editing facts cannot be designed a-priori as a means to maintaining fact-providing LLMs, and continue with practical considerations for why even this ill-posed goal is probably unattainable.
### LLMs as Knowledge Repositories?
The perception that LLMs can behave as knowledge repositories was first formulated and experimentally supported with the introduction of the LAMA benchmark Petroni et al. (2019), where pretrained language models were queried in a zero-shot setting against 51K knowledge triplets extracted from knowledge banks and reformulated as fill-in-the-blank statements. 2019-level models (BERT-XL) got the top answer correctly about 26.5% of the time. The limitations of the LAMA study were later studied and this result was shown to not be robust against multi-token spans (as opposed to a single token answer; Kalo and Fichtel (2022)), in an open vocabulary setting (as opposed to a closed vocabulary of 21K tokens; Roberts et al. (2020)), and when queries vary Kassner and Schutze (2020); indicating that the LAMA experiment relied on heuristics to predict answers. Since then, more realistic benchmarks have been introduced, as well as more robust querying techniques Jiang et al. (2020), to address some of these limitations. As the scale of LLMs increased, recent work has also scaled up the size of benchmarks, shifting the focus to facts concerning rare entities, where even recent LLMs struggle Kandpal et al. (2023); Sun et al. (2023). These experiments indicate that an LM's ability to answer a question depends on the number of times information relevant to that question appears in the pretraining data. As a result, LLMs cannot be safely used to answer queries about the long-tail of facts that have been rarely mentioned in pretraining Mallen et al. (2023).
Beyond the capability to reliably answer queries, LLMs should also fulfill other requirements in order to be considered as fact repositories AlKhamissi et al. (2022): the ability to edit knowledge (add, delete, update facts), logical consistency (answers to different but related facts must agree), reasoning (the ability to infer additional answers on the basis of logical rules), and explainability and interpretability (supporting an answer through a convincing chain of arguments). Experimental results assessing these dimensions indicate that current LLMs fail on all of them. He et al. (2023) demonstrate that LLMs underperform on computing ontology subsumption inference when compared with other trained approaches such as NLI models and symbolic systems such as OWL Reasoners Glimm et al. (2014).
The evidence supporting the premise that LLMs can be used as fact repositories is, thus, currently weak. Beyond this fragile foundation, we focus below on specific arguments regarding the capability to edit models as if they were fact repositories.
### Systemic Mismatch
A very basic property of LLMs which contrasts with their usage as knowledge repositories is their _stochastic_ nature. When used for augmenting creative endeavours, for data exploration, or for free-form tasks such as summarization, this property
is desirable: the use case calls for variation or unexpectedness. In other cases, we may be content with models that supply us with a distribution of outputs, from which we can estimate the probability of individual responses and adjust our expectation for a reliable output. Since the latter is absent from many models available only through 3rd-party APIs, we are left with obtaining text generated from an unknown distribution, which we argue is insufficient for factuality-dependent applications.1 It can even be argued that getting facts wrong is a _feature_ of vanilla LLMs rather than a bug: as their core training procedure aims to simulate plausible continuation of text, it should not surprise us that models would repeat widely-assumed falsehoods in a way that negates the idea of using them for factual purposes. If most people think, and write, that Los Angeles is the capital of California,2 an LLM is _supposed_ to complete a relevant prompt accordingly. There is also no built-in reliability or robustness in LLMs that sample outputs from a distribution: two instances of the same prompt can easily produce contradicting facts, and indeed often do.
Footnote 1: Called “transparency-sensitive tasks” (Luo et al., 2023).
Footnote 2: The capital of California is Sacramento.
Additionally, the idea of editing facts in a model suggests that we always _want_ a model to supply us with a fact as an answer to a question. However, at times, questions may be posed while pre-supposing or otherwise assuming harmful propositions such as stereotypes or conspiracy theories. Editing the "fact" associated with the question "which government agency faked the moon landing?" would not provide us with an improved model; what we may want is to _remove_ such facts altogether, or provide the model with a means of challenging the pre-supposition, or avoiding giving any answer at all. At the same time, many relations that we would term "facts" can be argued to be vital notions without which certain kinds of basic communication are impossible.3 An LLM that cannot assert whether trees have leaves, or asserts that they never do, is in danger of becoming irrelevant for most tasks requiring any kind of interaction with the world. As philosophy and practice surrounding these questions progresses, we can hope this gap between 'must-know' and 'must-not-know' will eventually tighten to form workable bounds on LLM knowledge capacity.
Footnote 3: We thank a reviewer for making this point.
### Architectural Implausibility
Estimates vary wildly, but there are over 100 million notable facts in the world.4 Even the cutoff for what constitutes a fact is unclear. Does a 0.3% change in a demographic statistic, or a new esoteric sports record, call for an edit? Do the daily whereabouts of world leaders constitute facts? What about those of celebrities or journalists? As events unfold daily in world politics, economics, sports, and many other walks of life, facts are added and changed in larger quantities and rates than can be plausibly "caught up with" through surgical model editing, akin to emptying the ocean with a spoon.5 If we choose to limit ourselves in which facts we find important enough to edit, we introduce bias into the system, opening the door to a host of documented harms which permeate many language technologies (Chang et al., 2019; Blodgett et al., 2020). This choice can be implicit as well as explicit, and is incredibly hard to avoid.
Footnote 4: 107,323,022 appear in Wikidata as of October 17, 2023, conforming to the definition of _notable items_ used in [https://www.wikidata.org/wiki/Help:Items](https://www.wikidata.org/wiki/Help:Items).
Footnote 5: Attributed, according to the Internet, as a “Yiddish proverb”, but, hey, who knows?
In a similar vein, the vastness and variability of facts is likely to lead to bias in evaluating the complement set of edits, those facts controlled for as _not_ changing following an edit (drawdown). Even paraphrases of edited facts are not guaranteed to change alongside the selected phrasing (De Cao et al., 2021), nor are entailed facts (Hase et al., 2023). This problem also manifests as a safety issue, since unchecked facts can turn out to be quite important for model usage, but perhaps taken for granted (or simply not explicitly covered) when designing a drawdown benchmark.
There is evidence (Mallen et al., 2023; Jang et al., 2021) that facts above a certain "popularity threshold", measured in terms of views on Wikipedia articles, are harder to edit out of models compared to facts lying on the long tail of view distributions. Inherently being out of sight, the unpopular facts thus become susceptible to the double risk of being edited alongside target facts while not being deemed important enough to be checked during drawdown testing. The ultimate outcome of such procedures can be a monolithization of LLM-supplied "knowledge" that focuses on certain popular domains and interests while losing all usefulness in many topics contributing to the extensive diversity of the human and natural experience.
Empirical evidence indicates existing editing models fail to properly account for the ripple effect of a fact editing operation (Cohen et al., 2023). For example, _the insertion of the fact "Jack Depp is the son of Johnny Depp" introduces a "ripple effect" in the form of additional facts that the model needs to update (e.g."Jack Depp is the sibling of Lily-Rose Depp")_. Results in symbolic approaches to this task have demonstrated that this knowledge updating task is of high computational complexity, even NP-hard, for example in Truth Maintenance Systems (TMS; Rutenburg, 1991). These results carry to approaches based on machine learning techniques (Knoblauch et al., 2020). There is, therefore, theoretical basis to conclude that model editing will at best address the problem of consistent updating in a roughly approximate manner, and most likely fail to update rarely seen facts within the ripple effect of editing operations.
Finally, recent empirical findings show additional weaknesses of edit methods by extending the evaluation suites to cover aspects beyond fact editing metrics, such as specificity and robustness of the models post-editing (Onoe et al., 2023; Hoelscher-Obermaier et al., 2023; Hase et al., 2023; Brown et al., 2023).
## 4 Alternatives to Model Editing
Model editing is motivated by the desire to control the text generated by LLMs to make them more compliant with desirable outcomes, specifically to control factuality: when an LLM generates text expressing an incorrect or obsolete fact, remove the fact from the LLM's "memory" and try again (Peng et al., 2023). Other relatively direct approaches to improve LLM factuality (and, in general, to control and revise the text generated by LLMs) have been proposed, most notably Reinforcement Learning with Human Feedback (RLHF; Bai et al., 2022; Ouyang et al., 2022; Ramamurthy et al., 2023; Rafailov et al., 2023; Carta et al., 2023), or adding a factuality objective to the training procedure (Lee et al., 2022). We now survey approaches which define a more achievable objective than that pursued by direct model editing or output manipulation. These approaches avoid the assumption that the LLM is a fact repository, and thus steer clear of the attempt to update this repository in a logically consistent manner, which is computationally hard to achieve and verify. While these alternatives avoid the more problematic aspects of model editing, they still suffer from their own limitations, but we argue that they identify more promising research directions.
**Incorporating Knowledge Bases.** In _retrieval-based models_, factual knowledge is explicitly represented in a dedicated component external to the LLM. The way this external fact store is represented and combined with the LLM varies: it can be a collection of textual documents that is searched using a text retrieval component, or an RDF graph, or encoded as a set of vector embeddings, or represented as modular expert LMs trained on curated datasets. In all cases, in the retrieval-based approach, the model can explicitly cite the source which underlies a specific generation, and let the user decide its credibility.
Once external (non-parametric) knowledge is retrieved, it must be composed with the LLM generation process. Khandelwal et al. (2020) introduce a k-nearest neighbor method with interpolation; RETRO (Borgeaud et al., 2021) combines the prompt with retrieved documents through a specialized cross-attention mechanism; other LLM-IR variants include a fusion-in-decoder method (ATLAS; Izacard et al., 2022) and TRIME (Zhong et al., 2022), all retrieval-based models that maintain the capacity for few-shot in-context learning.
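As a concrete illustration of how non-parametric memory can be composed with an LLM, the sketch below implements a kNN-LM-style interpolation in the spirit of Khandelwal et al. (2020); the function name, data layout, and hyperparameters are ours and purely illustrative, not the interface of any of the systems cited here.

```python
import numpy as np

def knn_lm_next_token(p_lm, store_keys, store_next_tokens, query, k=8, lam=0.25, temp=1.0):
    """Interpolate a base LM distribution with a retrieval distribution.

    p_lm: base LM distribution over the vocabulary (sums to 1).
    store_keys: context embeddings held in the external datastore.
    store_next_tokens: token id observed after each stored context.
    query: embedding of the current context.
    """
    d2 = np.sum((store_keys - query) ** 2, axis=1)      # squared distances to stored contexts
    idx = np.argsort(d2)[:k]                            # k nearest neighbours
    w = np.exp(-d2[idx] / temp)
    w /= w.sum()
    p_knn = np.zeros_like(p_lm)
    for token_id, weight in zip(store_next_tokens[idx], w):
        p_knn[token_id] += weight                       # retrieval distribution over next tokens
    return lam * p_knn + (1.0 - lam) * p_lm             # interpolate memory and LM
```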
Recent work addresses the integration of LLMs and IR by learning to combine results from search engines into the context provided to the LLM to answer a specific question (Xu et al., 2023). SAIL (Luo et al., 2023) introduces an instruction fine-tuning method to ground language generation on search results and learn to select trustworthy sources. CooK (Feng et al., 2023) approaches the task of combining multiple curated modular knowledge sources into an integrative system, where sources are modeled as independent LMs and an integrator LLM combines the information from these modules to answer a question.
In all of these approaches, factual knowledge is stored outside of the parameters of the LLM and can be manipulated without retraining the LLM. These approaches have been shown to be scalable in the number of facts. Editing the fact store means the same as updating a database, thus simplifying the formulation of the task.
Retrieval-based models, arguably, do not resolve all of the concerns we identify with model editing. The problem of identifying the provenance of a given generation span in these combined models
remains acute: the text can be determined by the internal LLM parameters, by the external stores, or by their combination, even if they are not logically consistent with one another. Facts that have been seen more often during LLM training may have more weight in this interpolation even if they are wrong or when they become obsolete. Zhu et al. (2020) claim that "modifying knowledge in explicit memory module networks like FaE (Verga et al., 2020) is not easier and demands changes in the LLM component as well." This effect was also identified in the RealQA baseline test (Kasai et al., 2022): in this benchmark containing questions about time-sensitive data, experiments showed that in many cases GPT-3 (Brown et al., 2020) properly integrated data retrieved from a KB and injected into the context, but often still generated text based on outdated data from the LLM. While the clear separation of the components and the identification of the composition of external knowledge improve transparency-sensitive tasks, the objective of identifying provenance in the generated text and controlling which sources are appropriate in a given context remains an open research question. The formulation of the task of _attributing generated text to identifiable sources_ (AIS; Rashkin et al., 2022) is a key contribution to this direction. Neeman et al. (2022) also address an aspect of this issue with a counterfactual data augmentation technique to disentangle contextual and LLM knowledge when generating text.
**Continual training** focuses on incrementally training a model by introducing new tasks or new domains (e.g., Razdaibiedina et al., 2023). Model editing does not directly fall within this objective, as it concerns updating precise elements in the model while keeping tasks and domains unchanged. Yet, the identification of drawdown in model editing is similar to the risk of _catastrophic forgetting_ identified in continual learning. Within this approach, we could situate model editing as a type of re-training or post-training. Zhu et al. (2020) note that just fine-tuning over a set of facts-to-update results in degradation on other facts. Jang et al. (2021) identify the problem and propose to apply techniques from continual training to the task of incremental updating of LLM knowledge. Overall, while continual training may seem to _a priori_ avoid the risks of the model editing approach, it seems to suffer from many of the main evaluation problems.
**Concept Erasure.** The goal of concept erasure (Elazar and Goldberg, 2018; Ravfogel et al., 2020; Belrose et al., 2023) is to remove unwanted bias from embeddings generated by LLMs and subsequently from the generated text. This goal is motivated by fairness objectives: preventing protected attributes from causally affecting text generation. This motivation is somewhat related to that of model editing (prevent damage from generated text) but key differences exist: (1) the method addresses general, well-identified concepts (protected attributes such as gender, race, age) as opposed to less well-defined specific _facts_; (2) it operates as a post-hoc transformation of embeddings, as opposed to a modification of the model itself, and as such it allows for more accountability than an ever-changing model; (3) it is verifiable with well-defined metrics over a limited pre-declared scope. While concept erasure has more limited scope than model editing, it defines an objective that can be evaluated with aggregated metrics in a robust manner. One possible avenue of future research is to examine whether the erasure approach can be extended to address specific aspects of factuality, such as temporal validity.
**Maybe it's better to have a model that knows what it doesn't know?** As identified in Kasai et al. (2022), a prerequisite to avoid generating wrong text is to identify what is not known: "can an open-domain QA system identify unanswerable cases?" The related issue of unanswerable questions has been addressed in a robust way (Rajpurkar et al., 2018; Sulem et al., 2021, 2022); yet the challenge in the context of LLMs and factuality is that problematic questions like those specified in §3.2 do _look_ answerable.
## 5 Conclusion
We agree that model editing is an attractive task to define, with clear benchmarks and expected outcomes. However, in current practice it contributes towards unrealistic expectations that we can solve problems like LLM hallucinations, which would lead to potential harm in unleashing use cases that are not in fact within the capabilities of LLMs alone. We advocate for the usage of retrieval-augmented methods and other structural and post-hoc methods in order to achieve the stated large-scale goals, while conceding the benefits of editing to "safer" applications such as model interpretability and robustness checking.
## Acknowledgments
We thank Sarah Wiegreffe for comments on earlier drafts. We thank the reviewers for their valuable feedback. Yuval Pinter was supported in part by the Israeli Ministry of Innovation, Science and Technology (Grant 2022/5451).
## Limitations
This is an opinion paper. We have purposely not made empirical analyses in order to support our critique, knowing that data is contestable and may vary according to its collection methodology. We aim to convince on the merit of examples and rhetorical argumentation rather than concrete evidence, which is mostly out of our reach for the models under consideration, but which we nevertheless urge readers and practitioners to seek and use in attempts to either support or disprove our claims.
|
2303.11860 | Online Transformers with Spiking Neurons for Fast Prosthetic Hand
Control | Transformers are state-of-the-art networks for most sequence processing
tasks. However, the self-attention mechanism often used in Transformers
requires large time windows for each computation step and thus makes them less
suitable for online signal processing compared to Recurrent Neural Networks
(RNNs). In this paper, instead of the self-attention mechanism, we use a
sliding window attention mechanism. We show that this mechanism is more
efficient for continuous signals with finite-range dependencies between input
and target, and that we can use it to process sequences element-by-element,
this making it compatible with online processing. We test our model on a finger
position regression dataset (NinaproDB8) with Surface Electromyographic (sEMG)
signals measured on the forearm skin to estimate muscle activities. Our
approach sets the new state-of-the-art in terms of accuracy on this dataset
while requiring only very short time windows of 3.5 ms at each inference step.
Moreover, we increase the sparsity of the network using Leaky-Integrate and
Fire (LIF) units, a bio-inspired neuron model that activates sparsely in time
solely when crossing a threshold. We thus reduce the number of synaptic
operations up to a factor of $\times5.3$ without loss of accuracy. Our results
hold great promises for accurate and fast online processing of sEMG signals for
smooth prosthetic hand control and is a step towards Transformers and Spiking
Neural Networks (SNNs) co-integration for energy efficient temporal signal
processing. | Nathan Leroux, Jan Finkbeiner, Emre Neftci | 2023-03-21T13:59:35Z | http://arxiv.org/abs/2303.11860v1 | # Online Transformers with Spiking Neurons for Fast Prosthetic Hand Control
###### Abstract
Transformers are state-of-the-art networks for most sequence processing tasks. However, the self-attention mechanism often used in Transformers requires large time windows for each computation step and thus makes them less suitable for online signal processing compared to Recurrent Neural Networks (RNNs). In this paper, instead of the self-attention mechanism, we use a sliding window attention mechanism. We show that this mechanism is more efficient for continuous signals with finite-range dependencies between input and target, and that we can use it to process sequences element-by-element, this making it compatible with online processing. We test our model on a finger position regression dataset (NinaproDB8) with Surface Electromyographic (sEMG) signals measured on the forearm skin to estimate muscle activities. Our approach sets the new state-of-the-art in terms of accuracy on this dataset while requiring only very short time windows of 3.5 ms at each inference step. Moreover, we increase the sparsity of the network using Leaky-Integrate and Fire (LIF) units, a bio-inspired neuron model that activates sparsely in time solely when crossing a threshold. We thus reduce the number of synaptic operations up to a factor of \(\times 5.3\) without loss of accuracy. Our results hold great promises for accurate and fast online processing of sEMG signals for smooth prosthetic hand control and is a step towards Transformers and Spiking Neural Networks (SNNs) co-integration for energy efficient temporal signal processing.
## 1 Introduction
Surface Electromyography (sEMG) is a technique that senses currents running through muscular fibers' membrane to measure muscular activity [1]. As sEMG signals are triggered by electrical stimuli from the central nervous system, this method is gaining a strong interest as a mean for Human-Machine Interfacing [1]. Since sEMG measurements only require electrodes positioned on the forearm skin, this technique is very promising for future non-invasive wearable prosthetic hand control system [1].
Transformers, which are the state-of-the-art networks for sequence processing [2; 3], can be very efficient at processing sEMG signals [4]. However, the self-attention mechanism [2] used in conventional transformers requires waiting for large time windows, which induces a delay preventing fast online processing of continuous signals. Moreover, the memory and computation of the self-attention mechanism scale quadratically with the sequence length.
In contrast, Recurrent Neural Networks (RNNs) integrate the concept of time into their operating model and are thus suited for online processing of continuous signals. Spiking Neural Networks (SNNs) [5; 6] are a bio-inspired type of RNN. They are very promising for low power applications because their neurons only transmit information when their membrane potential (an internal state of each neuron) reaches a threshold, and these events happen sparsely in time [6]. Much research focuses on building new hardware that leverages the inherent temporal sparsity of SNNs [7; 8; 9; 10].
In this paper, we propose an online transformer that makes use of a linearized sliding window attention mechanism [11]. We adapt this attention mechanism for online processing of continuous signals by making it forward in time and serialized. Our online transformer thus performs inference on each token as it is generated. In order to leverage information from past inputs, we store information in the keys and the values of the attention mechanism, and we update this memory dynamically as the tokens are generated. The length of the sequences stored in the keys and the values is a hyper-parameter that we can tune to change the temporal depth of the information used in the attention mechanism, as well as the computational complexity and the memory usage.
We test our model on a finger position regression task through sEMG signals using the Non-Invasive Adaptive Hand Prosthetics Database 8 (NinaProDB8) dataset. First, we show that our online transformer allows users to process sEMG signals with high accuracy using solely very short time windows of 3.5 ms, which permits a very fine granularity in time of prosthetic hand control. Secondly, we show that selecting the temporal depth of the attention improves the results of signal processing and makes our model outperform a self-attention-based transformer, as well as previous state-of-the-art models. Finally, we show how our custom online attention mechanism allows us to integrate SNNs inside the transformer architecture to increase the network sparsity, which in turn results in a reduction of the required number of synaptic operations by a factor of \(\times 5.3\) without loss of accuracy.
## 2 Related work
**Deep Learning for Surface Electromyography processing.** Although sEMG signals and muscle activity are correlated, their relation is unknown and processing sEMG signals remains very challenging because of electrical noise (e.g., interference, ground noise, crosstalk between electrodes), inter-subject variability (e.g., different forearm circumferences, muscle characteristics), and intra-subject variability (e.g., variation of the electrodes position or the skin conductivity from one day to the next) [12].
Deep learning methods can leverage large datasets to extract the most relevant features despite noise or variability [13]. They can thus outperform conventional machine learning techniques like Support Vector Machines (SVM) [14]. Moreover, deep networks can process raw sEMG signals whereas conventional networks require prior pre-processing like Principal Component Analysis [15], Linear Discriminant Analysis (LDA) [15], Fourier transforms [16], and others.
Deep learning has already been applied to sEMG signal processing using Temporal Convolutions [17; 18; 19] and Recurrent Neural Networks (RNNs) [20; 21; 22; 23]. While the ability to compute on the edge with restricted memory capacity and low power consumption is essential to the deployment of autonomous wearable prosthetic hand control systems, most deep learning techniques are computationally intensive. Mukhopadhyay et al. [24] have shown that the inherent sparsity of SNNs can be leveraged to drastically reduce the computational intensity of sEMG signal processing. Burrello et al. [4] have shown that a transformer network can process sEMG signals with a limited memory usage and reduced number of Multiply-And-Accumulate (MAC) operations.
**Transformers.** Unlike RNNs, Transformers do not suffer from the vanishing gradient problem for learning features in time [3], they do not have inductive biases made from assumptions about the data structure, and they can be trained very fast on GPUs since they can process an entire temporal sequence in parallel. The workhorse of Transformers is the self-attention mechanism, an operation that allows all the elements of a sequence to be compared with each other. For Natural Language
Processing (NLP), the strength of self-attention is that it allows one token to be compared with present, past, and future tokens [2]. However, depending on the application, conventional self-attention is not always the best choice. It has been shown that using local attention and sliding window attention can lead to better results for long sequences in NLP [11] and in Machine Vision [25].
**Transformers with Spiking Neural Networks.** SNNs, which mimic biological neural networks, are very promising for low power applications because their neurons only transmit information when their membrane potential (an internal state of each neuron) reaches a threshold, and these events happen sparsely in time [6]. Integrating SNNs in a transformer architecture is challenging and not intuitive. Like RNNs, SNNs have a temporal dynamic. Thus, each element of a sequence must be fed to RNNs or SNNs sequentially. In contrast, since the self-attention mechanism compares all the different elements of a sequence in parallel, Transformers require waiting for the completion of a sequence before computing. For instance, the transformer used in [4] for sEMG classification used time windows of 150 ms. Naively stacking conventional self-attention layers and recurrent layers would then lead to undesirable delays due to the alternation between waiting for time windows and processing sequences sequentially.
Yao et al. [26] have used a type of attention mechanism to select the importance of event frames, and then process the events with an SNN. Sabater et al. [27] have shown that a transformer can be used to process event-based vision sensor data more efficiently and accurately than convolutional neural networks. Zhou et al. [28] have used binarized self-attention to integrate sparsity in Transformers. Li et al. [29] have used an SNN as a pre-processing step for a transformer. It was also shown by Gehrig and Scaramuzza [30] that Long Short-Term Memory (LSTM) units can be integrated inside a transformer architecture, but in this work the attention mechanism was spatial and not temporal. Finally, Zhu et al. [31] have integrated spiking neurons inside a transformer architecture, but by using a custom attention mechanism that cannot be computed online.
In this paper, we introduce a transformer model that can perform attention in time online, and is compatible with spiking neurons at every layer of the architecture.
## 3 Methods
### NinaproDB8: A Finger Position Regression Dataset
In this work, we used the Non-Invasive Adaptive Hand Prosthetics Database 8 (NinaProDB8) [12], a public sEMG database made as a benchmark for estimation of kinematic finger position. Many deep learning efforts applied to sEMG focus on simple functional movement classification [1; 4; 17; 18]. However, sequence-to-sequence regression of finger position can lead to a wider range of gestures and can be more easily coupled to sensory feedback from robotic hands for a closed-loop precise control [33].
The measurements of the database were made on 10 able-bodied subjects and two right trans-radial amputees. The sEMG signals, which are the input of our neural network (see Fig. 1 (a)), are recorded using 16 electrodes (Delsys Trigno IM Wireless EMG system) positioned around the right forearm of the participants. The finger positions were measured using a dataglove, the Cyberglove II, 18-Degrees of Freedom (DoF) model, that measures the finger-joint angles that correspond to the dots in Fig. 1 (b). The sEMG signals and the dataglove signals were up sampled to 2 kHz and post-synchronized. The details of the dataset can be found in [12].
In order to disregard the irrelevant degrees of freedom and focus directly on motions relevant for prosthetic hand control, it has been shown by Krasoulis et al. [12] that we can convert the 18-DoF recorded by the dataglove into 5-Degrees of Actuation (DoA) using a simple linear transformation. The matrix used for this linear transformation can be found in the supplementary materials of Krasoulis et al. [12]. We used the DoA as targets of our neural network.
Three datasets were recorded for each participant: the first two datasets (acquisitions 1 and 2) comprised 10 repetitions of each movement and the third dataset (acquisition 3) comprised only 2 repetitions. We used both acquisitions 1 and 2 as the training set and acquisition 3 as the testing set. In Fig. 1 (a) and (c) we show an example from the testing set for subject 1 (target).
To facilitate the training of our neural network, we normalize each set of repetition by subtracting the sEMG signals by their mean and dividing by their standard deviation.
### Online Inference with a Custom Attention Mechanism
In conventional transformers [2], the entire self-attention stage is calculated in parallel. The elements of the input sequence of a self-attention layer are called tokens, and the operation of self-attention is described as
\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{Q\circ K^{T}}{\sqrt{d}} \right)\circ V \tag{1}\]
where \(\circ\) is the dot-product operator, \(Q\in\mathbb{R}^{N\times d}\), \(K\in\mathbb{R}^{N\times d}\), and \(V\in\mathbb{R}^{N\times d}\) are respectively called the queries, the keys, and the values and are three different projections of the same sequence of tokens:
\[Q =W_{Q}x \tag{2a}\] \[K =W_{K}x\] (2b) \[V =W_{V}x. \tag{2c}\]
The attention dimension \(d\) is the size of each token projection and \(N\) is the sequence length. \(W_{Q}\in\mathbb{R}^{d\times D}\), \(W_{K}\in\mathbb{R}^{d\times D}\), and \(W_{V}\in\mathbb{R}^{d\times D}\) are learnable weight matrices, with \(D\) the embedding dimension, and \(x\in\mathbb{R}^{N\times D}\) the input of the attention mechanism.
In the case of continuous signals (such as bio-medical signals), it is possible to split the input signal into finite time windows, and to wait for the end of each time window before carrying-out the inference (as in Burrello et al. [4]). However, this method induces delays due to waiting for the end of the time windows. Our online transformer uses a custom attention mechanism that can be computed online for each element of the sequence without delays.
To avoid waiting for future tokens, the tokens of time step \(t_{0}\) are not compared with future tokens of time steps \(t>t_{0}\). The information from previous tokens is stored in the keys and the values \(K\in\mathbb{R}^{M\times d}\) and \(V\in\mathbb{R}^{M\times d}\). Unlike for self-attention, here the size of \(K\) and \(V\) does not depend on the full sequence length, but solely on \(M\), which is the number of past time steps we choose to store.
Figure 1: (a) Surface electromyography signal acquired using a 16 channel Delsys Trigno IM Wireless EMG system (see Krasoulis et al. [12]). The signal of only one out of the 16 channels is plotted. (b) The Cyberglove II is used for the acquisition of the ground truth finger-joint angles [32]. (c) Ground truth finger-joint angles and reconstruction with our Online Transformer model.
\(K\) and \(V\) are initially zeroed. Then, the elements \(K_{i}\in\mathbb{R}^{d}\) and \(V_{i}\in\mathbb{R}^{d}\) are iteratively replaced token-wise using the projections of Eqs. 2b and 2c. At each time step, a single query \(Q_{t}\in\mathbb{R}^{d}\) is also computed with Eq. 2a, and the attention is computed as
\[\mathrm{Attention}_{t}=\mathrm{softmax}\left(\frac{Q_{t}\circ K}{\sqrt{d}} \right)\circ V \tag{3}\]
The softmax is computed on the memory length dimension \(M\). Since one different element of \(K\) and \(V\) is updated at each time step, and since the length of \(K\) and \(V\) is \(M\), all their elements are updated with a frequency \(\frac{1}{M}\). The above-mentioned procedure is summarized in Algorithm 1.
In contrast to the quadratic dependence with respect to sequence length for conventional self-attention (\(O\left(N^{2}\right)\)), the computational complexity of the sliding window attention mechanism is linear (\(O\left(MN\right)\)).
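For concreteness, a minimal NumPy sketch of one inference step of this mechanism (Eq. 3) is shown below; the rolling-buffer update (overwriting slot \(t \bmod M\)) is one possible way to realize the token-wise replacement described above, and all names are our own rather than from the paper.

```python
import numpy as np

def online_attention_step(x_t, t, W_q, W_k, W_v, K_buf, V_buf):
    """One step of the sliding-window attention.

    W_q, W_k, W_v: projection matrices of shape (d, D).
    K_buf, V_buf: rolling buffers of shape (M, d), initialised to zero.
    x_t: current token embedding of shape (D,).
    """
    d = W_q.shape[0]
    q_t = W_q @ x_t                              # query for the current token only
    K_buf[t % len(K_buf)] = W_k @ x_t            # overwrite the oldest key slot
    V_buf[t % len(V_buf)] = W_v @ x_t            # overwrite the oldest value slot
    scores = K_buf @ q_t / np.sqrt(d)            # (M,) dot products with stored keys
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over the M stored positions
    return weights @ V_buf                       # attention output at time step t
```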
Eq. 3 shows that our attention mechanism can be computed time step wise instead of waiting for the end of large time windows to compute the attention in parallel. Now, we will show how this attention mechanism fits in our full neural network.
### Neural Network Architecture
Our neural network consists of three blocks as depicted in Fig. 2 (a): an embedding block that converts the raw EMG signal into a sequence of tokens, an encoder block that uses attention to find correlation between sequence elements, and a regression block that converts the output into five degrees of actuation.
Figure 2: (a) Online Transformer neural network architecture. (b) Online Attention sketch: The different tokens are created by a temporal convolution (with a kernel size 3 and a stride 2 in this example). The tokens are linearly projected toward the queries, the keys and the values (\(Q\), \(K\), \(V\)). \(Q\) matches only the present token whereas \(K\) and \(V\) store multiple previous tokens. The length \(M\) of this memory is 3 in this example. At each time step, \(K\) and \(V\) forget the projection of the oldest token and store the projection of the new one. The mathematical operations of the online attention mechanism are described in section 3.2.
The embedding is made of a temporal convolution layer. Convolutional layers have overlaps between input time windows, which means that unlike linear layers, they have an intrinsic order, and thus do not require positional embeddings [2]. The convolutional layer has \(C=16\) input channels matching the 16 electrodes, and \(D=64\) output channels. The network is tested with kernels of various sizes to vary the length of the input time window that matches one token. We chose to make an overlap of two time steps between time windows, which makes the convolutional layer stride be \(s=k-2\), where \(k\) is the kernel size. With a padding \(p=1\), the number of tokens generated thus depends on the stride as
\[N=\lfloor\frac{N_{\mathrm{samples}}}{s}\rfloor \tag{4}\]
where \(N_{\mathrm{samples}}\) is the number of processed samples of the input signal.
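As an illustration, the embedding layer described above can be written with a standard 1D convolution; the snippet below (our own sketch, not the authors' code) checks that the token count matches Eq. 4 for \(k=7\) and one second of signal.

```python
import torch
import torch.nn as nn

k = 7                                   # kernel size, i.e. 3.5 ms at 2 kHz
s = k - 2                               # stride chosen so windows overlap by 2 samples
embed = nn.Conv1d(in_channels=16, out_channels=64, kernel_size=k, stride=s, padding=1)

x = torch.randn(1, 16, 2000)            # one second of 16-channel sEMG at 2 kHz
tokens = embed(x)                       # shape (1, 64, N) with N = floor(2000 / s) = 400 here
print(tokens.shape)
```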
The encoder is described as
\[\begin{array}{rcl}f\left(x\right)&=&x+\mathrm{MHA}\left(\mathrm{LN}\left(x \right)\right)\\ z\left(f\left(x\right)\right)&=&f\left(x\right)+\mathrm{FNN}\left(\mathrm{LN} \left(f\left(x\right)\right)\right)\end{array} \tag{5}\]
where \(\mathrm{LN}\) is a layer norm layer. \(\mathrm{MHA}\) is the multi-head attention layer with \(h=8\) heads computed in parallel using Eq. 3, with an attention dimension \(d=32\). After the attention is computed, the \(h\) heads are concatenated and projected into dimension \(D=64\). \(\mathrm{FNN}\) is a Feedforward Neural Network with one hidden layer of 128 GeLU units [34] and a dropout layer with probability 0.2. The linear projections of \(\mathrm{FNN}\) are applied token-wise so that they can be computed online. The entire encoder block can be repeated and stacked \(L\) times, but here we chose to keep \(L=1\). The backbone of our neural network is inspired by Burrello et al. [4].
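The corresponding encoder block can be assembled from standard modules as sketched below; this is our own re-implementation of the residual structure of Eq. 5, where the `mha` argument stands for the online multi-head attention module and is left abstract.

```python
import torch.nn as nn

class EncoderBlock(nn.Module):
    def __init__(self, mha, D=64, hidden=128, p_drop=0.2):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(D), nn.LayerNorm(D)
        self.mha = mha                                   # online multi-head attention module
        self.ffn = nn.Sequential(nn.Linear(D, hidden), nn.GELU(),
                                 nn.Dropout(p_drop), nn.Linear(hidden, D))

    def forward(self, x):
        f = x + self.mha(self.ln1(x))                    # attention sub-layer with residual
        return f + self.ffn(self.ln2(f))                 # token-wise FNN with residual

# stand-in usage; the real model would pass the online attention module instead
block = EncoderBlock(mha=nn.Identity())
```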
Finally, the regression block consists of a linear layer that projects each token from a dimension \(D=64\) to a dimension 5 (the number of degrees of actuation we perform the regression on), and an up sampling layer that duplicates the output of each token to generate as many samples as there are in the target signal (an example of target signal is shown in Fig. 1 (c)). The up sampling factor is equal to the stride that we use in the convolutional embedding layer (see Eq. 4).
### Increasing the network sparsity with binarization and spiking neurons
In order to reduce the number of required operations, we increase the network sparsity by using binarization and Leaky Integrate and Fire (LIF) units [5; 6]. We test two sparse models. In the first one, we binarize the output of the convolutional embedding, we binarize the projections \(Q\), \(K\), and \(V\), and we replace the FNN by an SNN with a first layer of 128 LIF units and a second layer of \(D=64\) LIF units. The second sparse model is similar to the first one, but instead of binarizing \(Q\), \(K\), and \(V\), we replace each projection of Eqs. 2a, 2b, and 2c by a single spiking layer of \(d=32\) LIF units, which adds an additional dynamic to the model.
Binarization is done by applying a Heaviside function. The dynamics of the LIF units are defined by
\[U_{t} =\alpha\left(1-S_{t-1}\right)U_{t-1}+\left(1-\alpha\right)I_{t-1} \tag{6a}\] \[I_{t} =\beta I_{t-1}+\left(1-\beta\right)Wx_{t}\] (6b) \[S_{t} =H\left(U_{t-1}-\Theta\right) \tag{6c}\]
where \(t\) is the index of the tokens, \(U\) is the membrane potential, \(I\) is the synaptic current, S is the spike response, H is the Heaviside function, \(\alpha=0.95\), \(\beta=0.9\), and \(\boldsymbol{\Theta}=1\). The outputs of the \(Q\), \(K\), and \(V\) projections are the spike responses \(S_{t}\) (see Eq. 6c). The outputs of the first layer of the SNN replacing the FNN are the spike responses \(S_{t}\), and the outputs of the second layer are the membrane potentials \(U_{t}\) (see Eq. 6a).
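A minimal NumPy sketch of these dynamics, following Eqs. (6a)-(6c) as written above with \(\alpha=0.95\), \(\beta=0.9\), and \(\Theta=1\), is given below; variable names are ours and the weight matrix \(W\) is assumed given.

```python
import numpy as np

def lif_layer(x, W, alpha=0.95, beta=0.9, theta=1.0):
    """Run a layer of LIF units over a sequence x of shape (T, n_in); W has shape (n_out, n_in)."""
    T, n_out = x.shape[0], W.shape[0]
    U = np.zeros(n_out)          # membrane potential
    I = np.zeros(n_out)          # synaptic current
    S = np.zeros(n_out)          # spikes of the previous step
    spikes, potentials = [], []
    for t in range(T):
        U_new = alpha * (1.0 - S) * U + (1.0 - alpha) * I   # Eq. (6a): reset where S_{t-1} = 1
        I = beta * I + (1.0 - beta) * (W @ x[t])            # Eq. (6b): synaptic filtering
        S = (U >= theta).astype(float)                      # Eq. (6c): H(U_{t-1} - Theta)
        U = U_new
        spikes.append(S.copy())
        potentials.append(U.copy())
    return np.stack(spikes), np.stack(potentials)
```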
Because the Heaviside function is not differentiable, during training the gradients of the different Heaviside functions (used for binarization and LIF units) are replaced by the SuperSpike surrogate gradient [5; 35]. To preserve the sparsity between the embedding and the encoder block, we remove the layer norm layer that precedes the embedding when the embedding is binarized. In addition, we remove the dropout layers in the two sparse models. The softmax of the attention mechanism is only computed on non-zero elements.
### Training
To speed up training, the attention block is computed in parallel. Projections \(Q\in\mathbb{R}^{N\times d}\), \(K\in\mathbb{R}^{N\times d}\), and \(V\in\mathbb{R}^{N\times d}\) are computed for an entire time window with \(N\) tokens. The keys and values are then unfolded into sliding windows of size \(M\) and stride 1, similarly as for a convolution (see in Fig. 3 an example of sliding window attention). The product between queries and keys is thus computed as
\[\begin{bmatrix}Q_{0}K_{1-M}&\cdots&Q_{0}K_{-2}&Q_{0}K_{-1}&Q_{0}K_{0}\\ Q_{1}K_{2-M}&\cdots&Q_{1}K_{-1}&Q_{1}K_{0}&Q_{1}K_{1}\\ Q_{2}K_{3-M}&\cdots&Q_{2}K_{0}&Q_{2}K_{1}&Q_{2}K_{2}\\ \vdots&\because&\vdots&\vdots&\vdots\\ Q_{N}K_{N-M}&\cdots&Q_{N-1}K_{N-3}&Q_{N-1}K_{N-2}&Q_{N-1}K_{N-1}\end{bmatrix}. \tag{7}\]
Since the keys \(K_{i<0}\) are forbidden values, we mask them by replacing them with \(-\infty\) as in Vaswani et al. [2], so that they are not computed in the softmax (see Eq. 3). The values \(V_{i<0}\) are simply zeroed.
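The following PyTorch sketch shows one way to realize this parallel, unfolded form of the sliding-window attention with the \(-\infty\) mask; it is our own illustration, and the left-padding trick and tensor names are implementation choices rather than something prescribed by the paper.

```python
import torch
import torch.nn.functional as F

def sliding_window_attention(Q, K, V, M):
    """Q, K, V have shape (N, d); returns the (N, d) attention outputs of Eq. 7."""
    N, d = Q.shape
    pad = M - 1
    K_pad = F.pad(K, (0, 0, pad, 0))                    # prepend M-1 zero rows for t < 0
    V_pad = F.pad(V, (0, 0, pad, 0))
    K_win = K_pad.unfold(0, M, 1).permute(0, 2, 1)      # (N, M, d): keys K_{t-M+1..t}
    V_win = V_pad.unfold(0, M, 1).permute(0, 2, 1)      # (N, M, d)
    scores = torch.einsum('nd,nmd->nm', Q, K_win) / d ** 0.5
    idx = torch.arange(N).unsqueeze(1) - (M - 1 - torch.arange(M)).unsqueeze(0)
    scores = scores.masked_fill(idx < 0, float('-inf')) # forbid keys with index < 0
    attn = torch.softmax(scores, dim=-1)
    return torch.einsum('nm,nmd->nd', attn, V_win)
```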
To improve training, we developed a simple data augmentation protocol: first, the training set signals are sliced into time windows of \(N_{\mathrm{samples}}=2000\) samples (which corresponds to \(1\,\mathrm{s}\) since the sampling rate is \(2\,\mathrm{kHz}\)). Then, each time window is duplicated 64 times. For data augmentation, the beginning of each of these duplicated time windows is shifted by a random number sampled from a uniform distribution between 0 and 2000. Finally, the resulting time windows are shuffled to create the training dataset.
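A possible implementation of this augmentation step is sketched below; it reflects our own reading of the protocol, and the function and variable names are illustrative.

```python
import numpy as np

def augment(emg, targets, win=2000, copies=64, seed=0):
    """Slice signals into 1 s windows, duplicate each 64 times with a random offset, then shuffle."""
    rng = np.random.default_rng(seed)
    windows = []
    # stop early enough that a shifted window never runs past the end of the recording
    for start in range(0, emg.shape[0] - 2 * win, win):
        for _ in range(copies):
            s = start + int(rng.integers(0, win))       # random offset in [0, 2000)
            windows.append((emg[s:s + win], targets[s:s + win]))
    order = rng.permutation(len(windows))               # shuffle the augmented windows
    return [windows[i] for i in order]
```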
We trained each network for each subject for 10 epochs using the Adam optimizer [36], a learning rate of \(10^{-3}\), and a batch size of 64. Since the metric we want to minimize is the mean absolute error (MAE) over the 5 degrees of actuation (DoA), we used the L1 loss function.
For the sparse models, we added a sparsity loss function term [37] to the global loss to increase the sparsity of the embedding, the queries, keys and values such that the total loss is:
\[\mathcal{L}\left(y,\hat{y}\right)= \parallel y_{i,j}-\hat{y}_{i,j}\parallel_{1}-\frac{1}{2}\lambda \left(\parallel x\parallel_{2}+\parallel\mathrm{Concat}\left(Q,K,V\right) \parallel_{2}\right) \tag{8}\]
with \(y\) the network outputs, \(\hat{y}\) the targets, \(x\) the embeddings and \(\lambda=1\).
In this study, we simply trained and tested datasets independently for each subject. To improve accuracy and repeatability in future studies, it is also possible to use transfer learning: the network can learn from multiple subjects before fine-tuning and testing on a new subject, as in [38].
## 4 Results
In Fig. 1 (c) we show an example of the regression results for a sparse online transformer with an embedding convolution kernel size \(k=7\) and a memory length \(M=150\). We first investigate how \(k\) and \(M\) affect the final accuracy. For this study, we use the network without sparsity. Since \(s=k-2\), we simultaneously change \(s\) and \(k\), and thus the number of tokens \(N\) generated for a given time window (see Eq. 4). The memory length \(M\) defines how many past tokens are used in the attention mechanism. The time length of the signal used to store information in \(K\) and \(V\) is thus:
\[\tau_{\mathrm{memory}}=\frac{M\times s}{\mathrm{SamplingRate}}. \tag{9}\]
In Fig. 4 we plot the mean absolute error (MAE) over the different degrees of actuation for values of \(M\) swept between 10 and 150 with intervals of 20, and for five different kernel sizes \(k=7\), 15, 20, 25, and 30 (which correspond to \(s=\) 5, 13, 18, 23 and 28). While sweeping \(M\), we see for \(k=\) 15, 20, 25, and 30 that the MAE reaches a minimum and then increases. This shows that there is an optimum value of memory length for each kernel size, and we see that this optimum value decreases with the kernel size and thus with the stride. Using Eq. 9, this result indicates that there is an optimum length of information \(\tau_{\mathrm{memory}}\) used in the attention mechanism, and that past that point increasing the stored information does not increase the accuracy. Then, we compare our different models using each time a kernel size \(k=7\) and a memory length \(M=150\). As we see in Fig. 4, these parameters lead to the best accuracy using the shortest time window for each token. For this study we also measure the 10\({}^{\circ}\)-accuracy and the 15\({}^{\circ}\)-accuracy, which are respectively the proportion of time samples that lead to mean absolute errors below ten and fifteen degrees [21]. These additional metrics are important to measure the accuracy of the prediction within a margin of error. The different results are shown in Table 4. The mean and standard deviation of each metric are computed over the 12 subjects of the NinaproDB8 dataset.
To assess the impact of our custom sliding-window attention mechanism, we compare our online transformer to a conventional Transformer with self-attention. For the three metrics, our online transformers outperform the transformer with conventional self-attention (see Table 4). This result further reveals the importance of selecting relevant information, and suggests that for sEMG signal processing it is likely more important to use local information from the past than global information from both past and future.
Our two sparse models reach accuracy similar to our non-sparse online transformer (and thus also better accuracy than the equivalent conventional transformer), and respectively reduce the number of required Multiply-And-Accumulate operations (MAC ops) by factors of \(3.8\times\) and \(5.3\times\) compared to the non-sparse online transformer (the activation function operations are not included in these calculations). The method used to compute the number of required operations is described in the Appendix.
Moreover, we see that our three online transformer models outperform LSTMs [21] by at least 0.88\({}^{\circ}\) of MAE, and outperform Temporal Convolutions [19] by at least 0.76\({}^{\circ}\) of MAE (the previous SoTA on the NinaproDB8 dataset). To compare the inference speed of the different methods, we define the minimum time of computing as the length of the time window used for each inference step, which for our online transformer is \(\tau_{\mathrm{min}}=\frac{k}{\mathrm{SamplingRate}}\), with \(k\) the embedding convolution kernel size. Since \(k=7\) and the sampling rate is 2 kHz, our network can compute with a minimum latency of
Figure 4: Mean absolute error averaged over the 12 subjects versus the number of stored tokens \(M\) for kernel embedding sizes of 7, 15, 20, 25, and 30 (from black to light yellow curves).
3.5 ms, which is shorter than any previous method and, in particular, more than \(30\times\) shorter than the Temporal Convolutional network [19], which was the previous SoTA for the Ninapro DB8 dataset.
## 5 Conclusion
In this work, we developed an online transformer model that leverages sliding window attention to process tokens one at a time. We have shown that the locality of the sliding window makes it more efficient than self-attention. The proposed method makes sEMG signal processing with very short time windows (3.5 ms) possible, and sets a new state of the art on the NinaproDB8 prosthetic hand control dataset. Using sliding window attention, our model also solves the problem of integrating SNN temporal dynamics into Transformers. We used a combination of binarization and SNNs to increase the network sparsity, thus reducing the number of required operations by up to a factor of \(5.3\times\). In conclusion, this work is a step toward precise, smooth, and low-power Human-Machine Interfacing, and holds great promise for future neuromorphic transformer models.
|
2307.03993 | The emergence of dynamic networks from many coupled polar oscillators. A
model for Artificial Life | This work concerns a many-body deterministic model that displays life-like
properties as emergence, complexity, self-organization, spontaneous
compartmentalization, and self-regulation. The model portraits the dynamics of
an ensemble of locally coupled polar phase oscillators, moving in a
two-dimensional space, that in certain conditions exhibit emergent
superstructures. Those superstructures are self-organized dynamic networks,
resulting from a synchronization process of many units, over length scales much
greater than the interaction length. Such networks compartmentalize the
two-dimensional space with no a priori constraints, due to the formation of
porous transport walls, and represent a highly complex and novel non-linear
behavior. The analysis is numerically carried out as a function of a control
parameter showing distinct regimes: static, stable dynamic networks,
intermittency, and chaos. A statistical analysis is drawn to determine the
control parameter ranges for the various behaviors to appear. | Alessandro Scirè, Valerio Annovazzi-Lodi | 2023-07-08T15:00:13Z | http://arxiv.org/abs/2307.03993v1 | # The emergence of dynamic networks from many coupled polar oscillators. A model for Artificial Life.
###### Abstract
This work concerns a many-body deterministic model that displays life-like properties as emergence, complexity, self-organization, spontaneous compartmentalization, and self-regulation. The model portraits the dynamics of an ensemble of locally coupled polar phase oscillators, moving in a two-dimensional space, that in certain conditions exhibit emergent superstructures. Those superstructures are self-organized dynamic networks, resulting from a synchronization process of many units, over length scales much greater than the interaction length. Such networks compartmentalize the two-dimensional space with no a priori constraints, due to the formation of porous transport walls, and represent a highly complex and novel non-linear behavior. The analysis is numerically carried out as a function of a control parameter showing distinct regimes: static, stable dynamic networks, intermittency, and chaos. A statistical analysis is drawn to determine the control parameter ranges for the various behaviors to appear. The model and the results shown in this work are expected to contribute to the field of artificial life.
* Corresponding author
Email: [email protected] (A.S.)
Keywords:Artificial Life, Self-Organization, Collective behaviors, Synchronization, Dynamics Networks, Chaos
## 1 Introduction
_Artificial Life_ (ALife) is an interdisciplinary research topic (Langton, 1997; Adami, 1998; Dorin, 2014) that brings together scientists, philosophers and artists. Three interplaying branches of artificial life are commonly distinguished: "Soft" artificial life creates numerical simulations that exhibit life-like behavior, "hard" artificial life produces hardware implementations of life-like systems, and "wet" artificial life synthesizes living systems from biochemical products (Rasmussen et al., 2003, 2008).
Several fields are involved, such as complexity (Bar-Yam, 1997; Mitchell, 2009), natural computing (Castro, 2006), evolutionary computation (Baeck et al., 1997; Coello et al., 2007), language evolution (Christiansen and Kirby, 2003; Cangelosi and Parisi, 2002), theoretical biology (Waddington, 1968), evolutionary biology (Smith et al., 1995), philosophy (Boden, 1996), cognitive science (Clark, 1997; Bedau, 2003; Couzin, 2009), robotics (Mataric and Cliff, 1996), artificial intelligence (Steels and Brooks, 1995), behavior-based systems (Maes, 1993; Webb, 2000), game theory (Sigmund, 1993), network theory (Newman, 2003; Newman et al., 2006), and synthetic biology (Benner and Sismour, 2005) among others.
The term Artificial Life (ALife) was introduced (Langton, 1989) as "life made by man rather than by nature", meaning artificial systems that exhibit life-like properties. More recently, (Bedau, 2007) defined artificial life as an interdisciplinary research field concerning life and life-like processes, one that emphasizes the inherent/organizational rather than the structural/material properties of living systems, and that aims at comprehending living systems by creating simple forms of them; that is, simple artificial systems that display specific life-like properties such as compartmentalization, homeostasis, the ability to reproduce, growth and development, adaptation, and evolution, among many others. Compartmentalization, in particular, is considered of fundamental importance for life. Indeed, primitive compartments provided a mechanism by which chemical systems underwent speciation. _"It is indeed unlikely that life started in [...] conditions of extreme dilution of a few molecules in the prebiotic ocean, and some form of compartmentalization might be considered to explain how the necessary local metabolite concentration was achieved."_ (Luisi, 2014).
However, many of the specific life-like properties are paraphrases of two generic processes, namely _emergence_ and _self-organization_, which are indeed both believed to lay at the root of abiogenesis (Luisi, 2006). Emergence refers to a collective behavior that is _more than the sum of the parts_, what parts of a system do together that they would not do alone, whereas self-organization means a process where order arises solely from local interactions, with no _Deus ex machina_ intervention. Soft ALife has been linked to emergence and self-organization in many subdomains. Cellular automata, a popular form of soft ALife, are illustrative examples of self-organizing systems. Without a global controller involved, cellular automata self-organize their state configurations in many ways (Wolfram, 2002). Further examples, among others, are Partial Differential Equations (PDEs) (Cross and Hohenberg, 1993) and self-propelled agents (Krolikowski, 2016), that show a wide range of self-organizing dynamics and emergent properties.
Concerning dynamical systems and PDEs, self-organization and emergence are expressed by spontaneous pattern formation and spatio-temporal coherence. This means the spontaneous synchronization of the spatiotemporal dynamics of many units, where "spontaneous" means emerging from solely local interaction. The theoretical paradigm for the description of the transition to a synchronized state of many oscillating units is the Kuramoto model (Kuramoto, 1975; Acebron et al., 2005), a model for the dynamics of a large set of coupled oscillators. In the Kuramoto model the main parameters are the oscillators diversity, that
acts as a source of disorder, and the interaction strength (coupling), which pushes the oscillators to synchronize together. Importantly, although disorder is present, the model is deterministic. The Kuramoto model is effective in many contexts and disciplines including biological systems, as it elucidated the core mechanisms of various biological phenomena, ranging from the rhythmic flashing of firefly congregations (Ermentrout, 1991) to the coordinated firing of neurons (Breakspear, 2010) or cardiac pacemaker cells (Osaka, 2017), corroborating what Winfree envisioned as the _geometry of biological time_ (Winfree, 1991). Recently, a theoretical work inspired by the Kuramoto model (Scire and Annovazzi-Lodi, 2017) introduced a deterministic phase transition with intrinsic self-organization properties, able to produce adaptive spatiotemporal patterns.
In this work, we highlight a collective process (a synchronization process) through which an ensemble of polar phase oscillators, free to move in a two-dimensional space, builds complex and self-regulating networks. Those oscillators build cooperative dynamic superstructures that stretch over length scales much greater than the oscillator interaction length. Such networks compartmentalize the two-dimensional space with no a priori constraints, by means of porous transport walls. We therefore argue that our system displays generic life-like properties such as self-organization and emergence, and structural ones such as compartmentalization and transport walls.
The manuscript is organized as follows: Section 2 is devoted to the introduction and discussion of the model; Section 3 shows the results for selected values of the control parameter, and contains a subsection named _Statistics_, where the averaged collective parameters are numerically evaluated by sweeping the control parameter. Finally, Section 4 is devoted to summarizing the manuscript and discussing the results.
## 2 The model
Like any typical Artificial Life model (Maes, 1993; Bedau, 2003), our model is _bottom-up_: it is implemented as low-level agents that simultaneously interact with each other, and whose dynamics is based on information about, and affects, only their own local environment. It relies on a recently introduced model (Scire and Annovazzi-Lodi, 2017) that described the spatiotemporal dynamics of many coupled polar phase oscillators (the low-level agents) free to move in a two-dimensional space. The polar phase oscillators are abstract mathematical objects consisting of two complementary kinds of oscillators (poles), arbitrarily labelled as _circles_ and _squares_. The circle and square poles obey an interaction law that makes them attract each other out-of-phase or mutually repel in-phase, as sketched in Fig. 1 (see (Scire and Annovazzi-Lodi, 2017) for more details). Such poles are abstract agents, not aiming at modelling any physical object. However, the interaction scheme is reminiscent of the proton-electron dipole with spin interaction and of the well-known Pauli exclusion principle, which shapes most of natural chemistry. In (Scire and Annovazzi-Lodi, 2017) we showed how such a system, despite its relative simplicity, retains high complexity and self-organizational properties by attributing random natural frequencies to the oscillators, and using the standard deviation of the statistical distribution as the
control parameter of the analysis. Static "crystals" and dynamic "molecules" emerged and were discussed, without exhausting the possibilities of a surprisingly rich dynamics. Differently from (Scire and Annovazzi-Lodi, 2017), here a fixed detuning is given between the circle and the square poles, leading to substantial new results. Such detuning is called \(\Delta_{H}\) in the following, and it will be the control parameter of the analysis.
**Fig.1** Sketch of the dipolar interaction scheme: poles of different kinds attract each other and their phases tend to differ by \(\pi\) (out of phase); poles of the same kind repel each other and their phases tend to be equal (in-phase)
The equations of motion, modified from (Scire and Annovazzi-Lodi, 2017) according to the above assumptions, for \(N\) poles read
\[\dot{x}_{i}=\sum_{j=1}^{N}\nabla_{i}\ W\big{(}|x_{i}-x_{j}|\big{)}\cos\big{(} \varphi_{i}-\varphi_{j}\big{)}, \tag{1}\]
\[\dot{\varphi}_{i}=\gamma_{i}\Delta_{H}+\sum_{j=1}^{N}\gamma_{i}\gamma_{j}\,W\big{(}|x_{i}-x_{j}|\big{)}\sin\big{(}\varphi_{j}-\varphi_{i}\big{)}. \tag{2}\]
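For concreteness, Eqs. (1)-(2) can be advanced in time with a simple forward-Euler scheme, as sketched below. The interaction kernel W, its derivative, the time step, and the choice \(\gamma=\pm 1\) for c- and s-poles are placeholder assumptions made only for this sketch; the actual functional forms are those given in (Scire and Annovazzi-Lodi, 2017).

```python
import numpy as np

def W(r, sigma=1.0):
    # Placeholder short-range interaction kernel (assumed Gaussian here).
    return np.exp(-(r / sigma) ** 2)

def dW_dr(r, sigma=1.0):
    return -2.0 * r / sigma ** 2 * W(r, sigma)

def euler_step(pos, phi, gamma, delta_h, dt=1e-3):
    """One forward-Euler step of the coupled position-phase dynamics above.
    pos: (N, 2) positions; phi: (N,) phases; gamma: (N,) pole coefficients."""
    n = len(phi)
    diff = pos[:, None, :] - pos[None, :, :]       # x_i - x_j, shape (N, N, 2)
    r = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(r, 1.0)                       # dummy value, masked out below
    mask = 1.0 - np.eye(n)                         # exclude self-interaction
    dphi = phi[:, None] - phi[None, :]             # phi_i - phi_j
    # Eq. (1): gradient of W modulated by the cosine of the phase difference.
    grad = (dW_dr(r) / r)[..., None] * diff        # grad_i W(|x_i - x_j|)
    vel = np.sum(mask[..., None] * grad * np.cos(dphi)[..., None], axis=1)
    # Eq. (2): detuning term plus phase coupling weighted by W.
    coup = mask * gamma[:, None] * gamma[None, :] * W(r) * np.sin(-dphi)
    dphi_dt = gamma * delta_h + coup.sum(axis=1)
    return pos + dt * vel, phi + dt * dphi_dt
```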
## 3 Results
**Fig.2 a) Initial conditions drawn from a uniform distribution in a 20x20 square. b) Regime distribution after a 2x10\({}^{5}\) integration steps simulation of Eqs. (1)-(2) for 500 poles (N=500). N\({}_{circles}\)=250, N\({}_{squares}\)=250. \(\Delta_{H}=0\)**
As a result of numerical simulations of Eqs. (1) - (2) for \(\Delta_{H}=0\) and N = 500 (corresponding to 250 c-poles and 250 s-poles), after a transient starting from random initial conditions (Fig.2a), a stable phase-locked global pattern is formed in the (X,Y) plane (Fig.2b). The oscillators end up located along chains characterized by some spatial regularity, made by the superposition of a c- and an s-pole, which attract each other out-of-phase and relax to occupy the same position (zero distance); i.e., any c-pole attains the same phase value, and any s-pole
attains the same phase value, and the difference of those two values is \(\pi\). Fig. 2 shows (a) the initial conditions, i.e. random positions and phases, and (b) the final pattern of overlapped dipoles. For each oscillator, the position is represented by spatial coordinates in the (X,Y) plane, whereas the phases \(\varphi_{i}\) (considered in [0, 2\(\pi\)]) are encoded into the _color_ of the respective units by a standard colormap. From a visual point of view, c-poles are portrayed as thin empty circles, whereas s-poles are portrayed as smaller thick squares, so that when they overlap both markers are still visible. Concerning the spatial variables, initial random conditions are taken in a square area such that at least one interaction is guaranteed on average, so that the space is neither too crowded, which would prevent the pattern from developing, nor too sparse, in which case the units would not "see" each other. Initial phases are randomly distributed with a uniform distribution in [0, 2\(\pi\)]. The movie S1 shows the progressive formation of the static regime pattern of Fig. 2b, from random starting conditions (Fig. 2a).
The degree of synchronization (phase locking) in a phase oscillator ensemble is conventionally quantified by the Kuramoto complex order parameter, which, slightly modified with respect to the original formula (Kuramoto, 1975) due to the presence of the coefficients \(\gamma_{k}\), reads
\[\rho e^{i\beta}=\tfrac{1}{n}\sum_{k=1}^{n}\gamma_{k}e^{i\varphi_{k}}. \tag{4}\]
The absolute value \(\rho\) measures the oscillators' _global degree of entrainment_; it is bounded between 0 and 1, where 1 means total coherence (global phase locking) and 0 means total disorder (disordered phases, unlocking). The phase \(\beta\) is the _global phase_ of the whole ensemble, so the _global frequency d\(\beta\)/dt_ can quantify collective pulsations, with regular (_d\(\beta\)/dt_ constant) or chaotic dynamics (_d\(\beta\)/dt_ fluctuating), or collective excitability (temporal spikes in _d\(\beta\)/dt_).
Another useful global parameter, able to detect the kinetic motion in space, is the total kinetic energy
\[T=\sum_{k=1}^{N}{v_{k}}^{2}\qquad, \tag{5}\]
where \(v_{k}=\dot{x}_{k}\) are the spatial velocities.
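Both global quantities can be evaluated directly from the simulation state; a minimal NumPy sketch (array names are illustrative) is:

```python
import numpy as np

def global_observables(phi, gamma, vel):
    """Order parameter of Eq. (4) and total kinetic energy of Eq. (5).
    phi: (N,) phases; gamma: (N,) coefficients; vel: (N, 2) spatial velocities."""
    z = np.mean(gamma * np.exp(1j * phi))   # complex order parameter
    rho = np.abs(z)                         # global degree of entrainment
    beta = np.angle(z)                      # global phase
    T = np.sum(vel ** 2)                    # total kinetic energy
    return rho, beta, T
```

The global frequency d\(\beta\)/dt can then be estimated by finite differences of the unwrapped global phase over successive time steps.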
The order parameter (4) and the total kinetic energy (5) concerning the \(\Delta_{H}=0\) simulation above mentioned are shown in Fig. 3.
Fig. 3 shows that, after a transient, the ensemble displays a static pattern (T \(\rightarrow\)0 in Fig.3a) and full static synchrony (phase-locking) with \(\rho\)\(\rightarrow\)1 (Fig.3b), and the global frequency d\(\beta\)/dt\(\rightarrow\)0 (Fig.3c).
Figure 3: (a) Kinetic energy T vs time (note the logarithmic scales to enhance the visibility of the transients). (b) Global degree of entrainment \(\rho\) vs time. (c) Global frequency d\(\beta\)/dt vs time. \(\Delta_{H}=\)0
During the transient, the inclusion of the last elements in the pattern (blue arrows in movie S1) takes place through a collective involvement of the neighboring oscillators, causing peaks in the total kinetic energy (yellow arrows in Fig.3a). This is a sign that the formed pattern is a connected tissue, able to act collectively.
Increasing the detuning \(\Delta_{H}\), a highly organized collective dynamics takes place. The movie S2 shows the dynamics starting from random conditions as in the previous case, but for \(\Delta_{H}=0.1\). The result is the formation of a spatiotemporal dynamic network of entrained currents, a synchronized flux of kinetic and phase dynamics, which shapes compartments with transport walls. Indeed, the movie S2 shows the formation of a dynamic network made of vesicles supporting a counterpropagating flux of \(c\) versus \(s\)-poles, continuously reshaping and reorganizing itself. The network represents a dynamic entrainment of a highly intricate nature, emergent, self-organized and, to the best of our knowledge, never reported before. Due to the high nonlinearity of the interaction functions, small changes in the initial conditions lead to markedly different networks. The networks themselves are indeed complex chaotic attractors. Moreover, those networks establish themselves for values of the detuning \(\Delta_{H}\) that would not give rise to a phase current in a single cs-dipole, i.e., well below the excitable threshold of a single cs-dipole, see (Scire and Annovazzi-Lodi, 2017). It is an emergent, collective, and cooperative effect.
As a static picture of the dynamics, an occupation matrix is calculated from the above-mentioned simulation data (see Fig. 4). The occupation matrix represents how many times a particle was present at a given point in space during the simulation.
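The occupation matrix amounts to a two-dimensional histogram accumulated over the stored trajectories; in the sketch below, the spatial extent and the number of bins are arbitrary choices.

```python
import numpy as np

def occupation_matrix(trajectory, extent=(-20.0, 20.0), bins=200):
    """Count how many times any pole visited each spatial bin.
    trajectory: (n_steps, N, 2) array of the poles' positions."""
    xy = trajectory.reshape(-1, 2)
    H, _, _ = np.histogram2d(xy[:, 0], xy[:, 1],
                             bins=bins, range=[extent, extent])
    return H
```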
Fig. 4 gives a static picture of the network and shows the persistence of vesiculation. For the same data, Fig. 5 shows: (a) the time-dependent kinetic energy T(t), (b) the time evolution of the global entrainment \(\rho\)(t), and (c) the global frequency \(\,\mathrm{d\beta/dt}\).
Analyzing the movie S2 together with Fig.4 and Fig.5, we have devised three processes in the network evolution.
1) _Network Formation_ (roughly 0 \(<\) t \(<\) 1000). Local patterns, vesicles and chains of oscillators emerge from the initial disorder in different parts of the plane. They merge through collective events that cause peaks in the kinetic energy T(t) (yellow arrows in Fig.5a), until one single network is formed (roughly 800 \(<\) t \(<\) 1000) by collective transport walls.
2) _Inclusion/exclusion of material from the environment_: (roughly 1000 \(<\) t \(<\) 6400). An example is found at time \(\approx\) 3700, when an s-pole is included from the environment into the current flux, as shown by the movie S2 in the lower left corner, and straight afterwards a dipole is expelled in the upper right part of the network (yellow arrows in movie S2). The transport walls therefore appear to be porous with respect to the environment, while the network _self-regulates_ the circulating material.
3) _Network uniformization_: (roughly t \(>\) 6400). The movie S2 shows that the network is the result of an adaptive merging of initially separated patterns, including small networks that possess different flux velocities. During the process of forming a unique network, the flux velocities of the distinct vesicles undergo changes, so as to be compatible with the whole structure. This happens through smooth changes or abrupt events, such as the one that takes place shortly after time \(=\) 6400, when two vesicles merge their flux (red arrow in movie S2) and the total kinetic energy suddenly lowers (red arrow in Fig.5a). Indeed, the merging of the two vesicles appears to be functional to the overall flux, as the system globally slows down and spontaneously uniformizes the velocities in the network, which can now be sustained using less kinetic energy. Such an event is also signaled both by a temporal peak in the global frequency \(\,\mathrm{d\beta/dt}\) (called a _collective firing_ in (Tessone et al., 2007); see the red arrow in Fig. 5c) and by a sudden lowering (red arrow in Fig. 5b) of the entrainment \(\rho\)(t). This is a typical behavior associated with collective firing in coupled oscillating systems, as already reported in (Tessone et al., 2007): "_We [...] show that the mechanism for collective firing is generic: it arises from degradation of entrainment_."
Processes 2 and 3 may overlap and repeat in time and appear to be
Figure 5: a) Kinetic energy T vs time. Order parameters: b) global entrainment \(\rho\) and c) global frequency \(\,\mathrm{d\beta/dt}\) versus time, for \(\Delta_{H}=\)0.1
functional to the network persistence and self-regulation, at least during the investigated simulation time.
In order to evaluate the robustness of the networks, we have performed several numerical simulations with different ensembles, always retaining global "neutrality", i.e., the same number of c- and s-poles. We have observed that networks need a substantial number of oscillators to emerge, of the order of N \(\sim\) 200. Moreover, in order to investigate the long-term behavior, we have performed long simulations. The movie S3 reports a long simulation for N = 400 and \(\Delta_{H}\) = 0.15, where the initial transient has been removed for brevity, showing the persistence of the network for over \(10^{9}\) simulation steps, coexisting with smaller static patterns.
Increasing the detuning \(\Delta_{H}\) further, intermittency between the formation of dynamic networks and low-dimensional patterns takes place. As an example, for \(\Delta_{H}\) = 0.2 the network initially stabilizes, but after a while it breaks up into smaller patterns that prevent long-term stable transport walls, as shown in movie S4. The occupation matrix for \(\Delta_{H}\) = 0.2 is shown in Fig. 6, and it is shaped by the irregular coexistence of vesicles, chains, small dynamic patterns and single poles.
The order parameters for \(\Delta_{H}\) = 0.2 (Fig.7) exhibit high kinetic activity and coherence degradation after a transient in which coherent vesicles were present and the activity was relatively stable (roughly from t = 200 to t = 2000). Increasing the detuning \(\Delta_{H}\) further, the dynamics becomes progressively more erratic, until no trace of order is left. Remarkably, the system does not include any noise or disorder other than the starting conditions; the progressively more disordered scenario observed is a consequence of the non-linear dynamics.
Figure 6: Occupation matrix for \(\Delta_{H}\) =0.2. Initial conditions drawn from a uniform distribution in a 20x20 square and random phases in [0, 2\(\pi\)], simulation of \(10^{7}\) integration steps for 500 poles. N\({}_{circles}\)=250, N\({}_{squares}\)=250
To evaluate the robustness of the dynamic network solution, further numerical simulations were performed in the presence of small additive white noise or of diversity in the natural frequencies. Preliminary results showed the persistence of the above-depicted scenario in both circumstances.
### Statistics
This subsection illustrates the behavior of the time-averaged global parameters versus the detuning \(\Delta_{H}\). The obtained results for a given ensemble are summarized in Fig. 8. Fig. 8A shows the time-averaged total kinetic energy \(<\)T\(>\) and highlights the different regime regions. Fig. 8B shows the time-averaged global entrainment \(<\)\(\rho\)\(>\), and Fig. 8C shows the time-averaged collective frequency \(<\)d\(\boldsymbol{\beta}\)/dt\(>\), all numerically calculated versus the control parameter \(\Delta_{H}\). For \(\Delta_{H}\)\(\sim\)0, synchronized static patterns are formed, the kinetic energy vanishes, the global entrainment is maximum, close to 1, and the global frequency is stationary, close to zero.
Increasing \(\Delta_{H}\), for \(\Delta_{H}\)\(\sim\)0.05 the system shows the onset of coherent currents, shaping dynamic networks with transport walls that persist up to \(\Delta_{H}\)\(\sim\) 0.2. Here the kinetic energy is non-vanishing, in order to dissipate the energy fed to the system by the detuning, the degree of entrainment is non-vanishing (\(<\)\(\rho\)\(>\)\(\sim\) 0.5), and the global frequency (\(<\)d\(\boldsymbol{\beta}\)/dt\(>\)\(\sim\) 0) is stable, so there is a significant degree of global coherence.
Numerical simulations showed that vesicles (the compartments of the networks) become smaller when the detuning is increased for a given ensemble, but they also become intermittent and unstable. For \(\Delta_{H}\)\(\sim\)0.2, the onset of an intermittent regime that includes disordered dynamic patterns compromises the network stability. The average kinetic energy increases because more energy is now fed to the system. In general, the emerging patterns are dissipative structures that serve to dissipate the energy fed to the system by the detuning \(\Delta_{H}\). When the detuning is low, the slow dynamics of the big networks is effective as a dissipative structure, but when the detuning is increased, smaller dynamic structures, able to move faster, are preferred for that purpose. Panels B and C in Fig. 8 show that coherence is progressively lost when \(\Delta_{H}\)\(>\) 0.2, because local (instead of collective) dynamics prevails.
Figure 7: a) Kinetic energy T vs time. Order parameter: b) global entrainment \(\rho\) and c) global frequency d\(\boldsymbol{\beta}\)/dt versus time, for the same simulation as in Fig.6
The initial oscillator density also proved to be of some importance. If the oscillators start too densely packed (many oscillators within the interaction length), the space is too crowded, and the networks are compressed and may fail to develop. If the initial density is too low (much less than one oscillator, on average, within the interaction length), the ensemble disaggregates into subdomains that do not interact with each other. As a rule of thumb, a good value (without being critical) for the initial density \(d=N/l^{2}\), where \(l\) is the side of the square initial-conditions area, is \(d\sim 1\).
Open questions concern the asymptotic stability of the networks: whether they keep indefinitely evolving or finally attain a fixed configuration, and whether their lifetime is infinite or not. Those issues will be addressed in future investigations.
## 4 Conclusions
We have reported on an ensemble of polar phase oscillators free to move in a two-dimensional space, which in certain conditions builds coherent dynamic networks. Those networks are emergent and self-regulating complex superstructures, resulting from a cooperative behavior involving many units over length scales much greater than the interaction length, compartmentalizing the two-dimensional space with no a priori constraints. This kind of behavior has, to the best of our knowledge, never been reported before in soft artificial life systems, neither in theoretical soft matter, nor in dynamical systems in general. The analysis was numerically carried out as a function of a control parameter, showing static pattern formation, the emergence
Figure 8: _Statistics._ A: Time averaged Kinetic energy vs \(\Delta_{H}\), with indications concerning the different regimes. B: Time averaged collective entrainment \(\rho\) vs \(\Delta_{H}\). C: Time averaged (with standard deviation) global frequency Vs \(\Delta_{H}\). For each simulation: Initial conditions drawn from a uniform distribution in a 20x20 square, simulation of 2x10\({}^{6}\) integration steps for N = 500. N\({}_{circles}\)=250, N\({}_{squares}\)=250
of persistent and intermittent dynamic networks, and irregular dynamics, respectively, for increasing values of the control parameter. Such a complex scenario is solely due to the non-linear dynamics of the system, where the starting conditions are the only source of disorder included.
We have drawn a numerical statistical analysis versus the control parameter, to obtain an overview of the whole scenario, identifying the range of existence of the different regimes for a given ensemble. The robustness against noise and diversity has been checked, with good preliminary results.
In conclusion, we have argued that our system displays emergence, complexity, self-organization, spontaneous compartmentalization and self-regulation, only due to a non-linear many-body dynamics. This model is expected to contribute to the field of artificial life and, in general, it gives a new portrait of the organizational processes that govern the emergence of adaptive structures from locally interacting units.
## Acknowledgements
A.S. Acknowledges Giuseppe Aromataris (University of Pavia, Pavia - Italy) and Emilio Hernandez-Garcia (Instituto de Fisica Interdisciplinar y Sistemas Complejos, Palma de Mallorca - Spain) for fruitful conversations.
## Supporting information captions
* Spatio-temporal dynamics resulting from a numerical simulation of Eqs. (1)-(2), N=500 (250 c-poles and 250 s-poles). \(\Delta_{H}\)=0.
* Spatio-temporal dynamics resulting from a numerical simulation of Eqs. (1)-(2), N=500 (250 c-poles and 250 s-poles). \(\Delta_{H}\)=0.1. A time counter has been added to the movie in order to better connect the spatio-temporal dynamics to Fig.5.
* Spatio-temporal dynamics resulting from a numerical simulation of Eqs. (1)-(2), N=400 (200 c-poles and 200 s-poles). \(\Delta_{H}\)=0.15.
* Spatio-temporal dynamics resulting from a numerical simulation of Eqs. (1)-(2), N=500 (250 c-poles and 250 s-poles). \(\Delta_{H}\)=0.2.
|
2308.04879 | Comparing How a Chatbot References User Utterances from Previous
Chatting Sessions: An Investigation of Users' Privacy Concerns and
Perceptions | Chatbots are capable of remembering and referencing previous conversations,
but does this enhance user engagement or infringe on privacy? To explore this
trade-off, we investigated the format of how a chatbot references previous
conversations with a user and its effects on a user's perceptions and privacy
concerns. In a three-week longitudinal between-subjects study, 169 participants
talked about their dental flossing habits to a chatbot that either, (1-None):
did not explicitly reference previous user utterances, (2-Verbatim): referenced
previous utterances verbatim, or (3-Paraphrase): used paraphrases to reference
previous utterances. Participants perceived Verbatim and Paraphrase chatbots as
more intelligent and engaging. However, the Verbatim chatbot also raised
privacy concerns with participants. To gain insights as to why people prefer
certain conditions or had privacy concerns, we conducted semi-structured
interviews with 15 participants. We discuss implications from our findings that
can help designers choose an appropriate format to reference previous user
utterances and inform in the design of longitudinal dialogue scripting. | Samuel Rhys Cox, Yi-Chieh Lee, Wei Tsang Ooi | 2023-08-09T11:21:51Z | http://arxiv.org/abs/2308.04879v1 | # Comparing How a Chatbot References User Utterances from Previous Chatting Sessions:
###### Abstract.
Chatbots are capable of remembering and referencing previous conversations, but does this enhance user engagement or infringe on privacy? To explore this trade-off, we investigated the format of how a chatbot references previous conversations with a user and its effects on a user's perceptions and privacy concerns. In a three-week longitudinal between-subjects study, 169 participants talked about their dental flossing habits to a chatbot that either, (1-None): did not explicitly reference previous user utterances, (2-Verbatim): referenced previous utterances verbatim, or (3-Paraphrase): used paraphrases to reference previous utterances. Participants perceived Verbatim and Paraphrase chatbots as more intelligent and engaging. However, the Verbatim chatbot also raised privacy concerns with participants. To gain insights as to why people prefer certain conditions or had privacy concerns, we conducted semi-structured interviews with 15 participants. We discuss implications from our findings that can help designers choose an appropriate format to reference previous user utterances and inform in the design of longitudinal dialogue scripting.
Chatbots, Conversational Agents, Referencing User Utterances, Privacy Concerns
[29; 81] and desire to engage with [6; 7; 42; 53] chatbots. Furthermore, chatbots have been found to benefit from various human-like qualities such as empathy [43; 44], listening [73], and differing conversational styles [15; 18; 77], personas [58; 69] or politeness strategies [11; 47; 51].
Previous studies have also found benefit in interviewers that have higher levels of social presence. For example, Xiao et al. found that people give higher quality responses to chatbots that use a battery of AI-driven techniques such as using more relevant responses to users [73]; Tsai et al. found that users were more likely to disclose embarrassing behaviours related to their sexual health to a human compared to a chatbot [68]; and multiple studies have found that chatbots that self-disclose information lead to mutual disclosure from users and improved feelings of trust [1; 41; 59; 48].
More specifically to our study of chatbot referencing format, previous work has found benefit in chatbots that remember and reference details from previous interactions [17; 33; 46; 55; 78]. For example, Jain et al. found chatbots that reference details from previous conversations lead to increased feelings of empathy [33], and Portela and Granell-Canut reported that participants perceived a chatbot to have higher levels of affection when it remembered previous user utterances or the user's name [55]. Beyond this, we are interested in the effect on positive user perceptions caused by the format used by a chatbot when referencing a user's previous utterances. This gives us our first research question of:
* **RQ1:** How does chatbot referencing format (None, Verbatim, Paraphrase) impact:
  1. desire to continue using the chatbot?
  2. perceived chatbot engagement?
  3. perceived chatbot intelligence?
### Privacy Concerns Among Chatbot Users
However, while Section 2.1 outlines the benefits of increased social presence, it could also lead to increased feelings of privacy concerns [74] amongst chatbot users [17; 32]. For example, Schuetzler et al. [60] found that people were less likely to disclose to chatbots that use more relevant responses to user utterances during a small-talk session before asking (non-differentiated) health questions. Ng et al. showed participants two hypothetical financial chatbots (one human-like and one factual) and found that, while the human-like chatbot scored higher social presence, participants were more likely to share information with the factual chatbot [49]. Bae et al. [5] found that people trusted a robot-like chatbot more than a human-like chatbot when discussing positive experiences. More analogous to our study's aim, Chen et al. investigated the perceived invasiveness of a chatbot that referenced participants' personal information (name, presence of heart disease and hand-washing frequency) [17]. While some findings indicated that people found chatbots more invasive when referencing their information, this was contrasted with a null finding once the user's perceived identity of the chatbot (human or chatbot) was taken into account. Building
Figure 1. Extracts from week 3 of the study showing the 3 levels of chatbot referencing format. Grey bubbles are chatbot utterances, and teal bubbles are user utterances (as seen by participants). Differences in referencing format are circled in red.
on these previous findings and conflicting results gives us our second research question of:
* **RQ2:** How does the chatbot referencing format (None, Verbatim, Paraphrase) impact the user's feelings of privacy violations?
We were interested to explore RQ2, as conflicting previous work indicates potential contradictory and uncertain findings. That is to say, by referencing user utterances in different formats, it could become more apparent to the user that the chatbot is storing or manipulating their personal information, and thereby heighten privacy concerns. Alternatively, users could appreciate the increased levels of social presence and personalisation. Additionally, by referencing user utterances verbatim, the chatbot could either make data storage more apparent and therefore privacy-violating to users, or it could be seen as more transparent about storing the user's data without manipulation (and by showing less advanced AI capabilities, users may perceive the chatbot more favourably by generating a metaphor of a chatbot which is less capable (Srivastava et al., 2016)). Similarly, by paraphrasing user utterances the chatbot could be seen as invasive (by storing and manipulating user data), or create greater feelings of engagement with the user. Finally, by not explicitly referencing user utterances, the chatbot could be seen as less privacy violating, but also potentially less engaging.
## 3. User Study
This study investigates the effect of a chatbot remembering (and incorporating into conversation dialogue) user utterances from a previous chatting session. For this, we conducted a longitudinal between-subjects experiment where participants talked to a chatbot about their dental flossing once a week for three consecutive weeks2. Our chatbot had an independent variable of **Chatbot Referencing Format** (3 levels) which affected whether the chatbot _explicitly_ referenced (the previous week's) user utterances, and the format used when referencing utterances (see Figure 1 for examples of referencing format). The levels of **Chatbot Referencing Format3** are:
Footnote 2: Ethics approval received from our institutional IRB prior to study commencement.
* **None** (control group): Chatbot did not explicitly incorporate previous user utterances into subsequent conversations, and instead referenced previous discussions at a high-level.
* **Verbatim**: Chatbot incorporated previous user utterances verbatim into subsequent chatbot utterances.
* **Paraphrase**: Chatbot incorporated paraphrased versions of user utterances into subsequent chatbot utterances.
### Chatbot Script
The chatbot led a conversation with the user about their dental flossing habits and beliefs. We chose dental flossing as we want users to discuss something personal to themselves and (as flossing can benefit from both diary keeping (Srivastava et al., 2016) and brief interventions (Srivastava et al., 2016)) it is appropriate for short weekly personal conversations. Additionally, dental flossing is an activity that health experts recommend daily adherence to (Srivastava et al., 2016), and people can have barriers to dental flossing (Bahdan et al., 2016; Krizhevsky et al., 2017), both of which are incorporated into our chatbot's script.
The conversations for each of the 3 weeks were as follows (responses elicited by the chatbot were open-ended unless specified otherwise)4:
Footnote 4: Literature may refer to referencing formats using various terminology. In our case, verbatim is analogous to extractive summarization (Bahdan et al., 2016; Krizhevsky et al., 2017) or direct quotation (Krizhevsky et al., 2017; Krizhevsky et al., 2017), and paraphrase is analogous to abstractive summarization (Bahdan et al., 2016; Krizhevsky et al., 2017) or indirect quotation (Krizhevsky et al., 2017).
* **Week 1:** All participants saw the same script as the chatbot could not yet reference previous week's utterances. Participants shared their dental flossing beliefs (Krizhevsky et al., 2017) (7-point Likert), flossing frequency, and perceived benefits of flossing.
* **Week 2:** The chatbot referenced flossing frequency and perceived benefits from Week 1. Participants shared their flossing frequency, barriers to flossing, and strategies to overcome barriers.
* **Week 3:** The chatbot referenced flossing frequency, and barriers and strategies from Week 2. Participants shared their flossing frequency, reflected on their barriers and strategies from the previous week, and shared their perceived susceptibility and perceived risks, before sharing their dental flossing beliefs (Krizhevsky et al., 2017) (7-point Likert).
### Implementation Details
The chatbot was hosted on Qualtrics, and used JavaScript and HTML to emulate the look and feel of a chatbot. Microsoft LUIS5 was used for both intent recognition (in real-time) and for selecting the most appropriate paraphrase for a given week.
Footnote 5: [https://www.luis.ai/](https://www.luis.ai/)
Intent recognition was trained using utterances from (Krizhevsky et al., 2017) for users' barriers to flossing and strategies to overcome barriers. Training data for other prompts was generated by the research team and by piloting the chatbot until a range of responses could be recognised. Data augmentation (e.g., synonym replacement) was then used to generate additional training data.
#### 3.2.1. Intent Recognition
We used intent recognition (in all 3 conditions) to recognise the intent of user utterances within a week's session. An appropriate response would then be appended to the start of the subsequent chatbot utterance. For example, the chatbot could deliver "_Well done on flossing five days a week_" in response to a user's flossing frequency.
#### 3.2.2. Delivering Paraphrases
To deliver paraphrases of user utterances, first the user intent was recognised via LUIS. Each user intent had a corresponding paraphrase written by the research team, which was then used as the paraphrase in the next chatting session (e.g., for flossing benefits, an intent of "_prevent gum disease_" was given the paraphrase "_flossing helps prevent gum disease_"). While this approach is limited in providing a discrete number of paraphrases and not accounting for multiple intents, it ensured that consistent and coherent paraphrases could be delivered to users. Example script and paraphrases can be seen in Figures 1 and 2, and a full list of paraphrases and script can be found in supplementary material.
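In outline, the referencing step reduces to a lookup keyed on the recognised intent, applied when composing the following week's chatbot utterance. The sketch below is illustrative only: the intent names, paraphrases, and fallback wording are placeholders and not the study's actual script.

```python
# Hypothetical intent-to-paraphrase table (the real table was hand-written
# by the research team for every recognisable intent).
PARAPHRASES = {
    "benefit_prevent_gum_disease": "flossing helps prevent gum disease",
    "barrier_lack_of_time": "you found it hard to make time for flossing",
}

def reference_previous_utterance(fmt, recognised_intent, raw_utterance):
    """Build the reference to last week's answer for a given condition."""
    if fmt == "None":
        return "what we discussed last week"
    if fmt == "Verbatim":
        return f'last week you said "{raw_utterance}"'
    # Paraphrase condition: fall back to a generic reference if no intent
    # was recognised with sufficient confidence.
    paraphrase = PARAPHRASES.get(recognised_intent)
    if paraphrase is None:
        return "what we discussed last week"
    return f"last week you mentioned that {paraphrase}"
```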
### Participants
We recruited participants using university advertisement boards. We only selected participants who did not fully adhere to daily flossing (similarly to previous intervention studies (Srivastava et al., 2017)), and all responses were completed remotely and asynchronously. Participants were paid S$2 for the first week's session, S$2 for the second, and S$3 for completing the third and final week. Weeks 1 and 2 took on average \(\sim\)3 minutes, and Week 3 \(\sim\)5 minutes.
169 participants (mean age 22.7; 64% female) completed all 3 weeks of the study, with 7 participants completing weeks 1 and 2 only, and 4 participants completing week 1 only. We only include data from participants who completed all 3 weeks (with other participants being paid for their completed time, but excluded from analysis), resulting in 55 None, 58 Verbatim, and 56 Paraphrase.
### Procedure
Each week, participants were contacted via email and followed the procedure: (1) Follow Qualtrics link to individual chatbot session. (_2-Week 1 only_) give consent (participants informed responses are stored and analysed). (3) Brief instructions recap (i.e., no right/wrong answers, responses in English). (4) Complete weekly chatting session with chatbot. (5) Post-test questions (see Section 3.5).
Participants were invited to weeks 2 and 3 seven days after completing the previous week's session, and were given three days to complete these sessions. Responses were controlled so that only desktop or laptop devices could be used.
### Measures
#### 3.5.1. Weekly Measures
At the end of each week's chatbot session, participants rated their experience on 7-point Likert scales (Strongly Disagree to Strongly Agree), and were asked "_Do you personally agree or disagree that..._" for the following measures: **Interest to continue chatbot usage:** "I would want to continue using the chatbot" (Tran et al., 2017); **Chatbot engagement:** "The chatbot seemed engaged in our discussion", "I felt the chatbot was NOT paying attention to what I said" (Srivastava et al., 2017); **Chatbot intelligence:** "The chatbot was intelligent", "The chatbot was competent" (Srivastava et al., 2017; Srivastava et al., 2017).
#### 3.5.2. Privacy concerns, intrusiveness, and risks
To investigate whether chatbot referencing style impacts privacy-related measures, (at the end of _week 3 only_) participants responded to the following 7-point Likert scale questions. For **privacy concerns** (referring to concerns that inhibit users from sharing information (Srivastava et al., 2017)) measures were: "I was concerned that the chatbot was collecting too much personal information about me", "I was concerned about submitting my information to the chatbot". For **privacy intrusiveness** (referring to the unwelcome general encroachment into another's presence or activities (Srivastava et al., 2017)) measures were: "I feel that as a result of this interaction, information about me is out there that, if used, will invade my privacy", "I feel that as a result of this interaction, my privacy has been invaded". For **privacy risks** (referring to the uncertainty arising from the possibility of an adverse consequence (Srivastava et al., 2017)) measures were: "Personal information was inappropriately used by the chatbot", "Providing the chatbot with my personal information involved many unexpected problems".
## 4. User Study Results
We fit a linear model on each dependent variable collected from the final week, with Chatbot Referencing Format as the fixed effect, and performed post-hoc Student's t-tests to identify specific differences. We excluded the Likert scale ratings of 8 participants (4 None, 1 Verbatim, 3 Paraphrase) who gave conflicting responses for chatbot engagement (e.g., rating both the positive and negative engagement items as Strongly Agree). This left us with 161 responses. See Figure 3 for summary results. In addition, we analysed user responses (response length before and after removing stop words), but found no difference between conditions. We will now discuss individual findings and their significance.
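Concretely, with None as the reference level and dummy-coded condition indicators, each fitted model has the form below (our own spelled-out form; the analysis itself is as described above):

\[y_{i}\;=\;\beta_{0}\;+\;\beta_{1}\,\mathbb{1}[\text{Verbatim}_{i}]\;+\;\beta_{2}\,\mathbb{1}[\text{Paraphrase}_{i}]\;+\;\varepsilon_{i}\]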
### General Chatbot Perceptions
Measures related to RQ1 are described below. Chatbot referencing format had no direct impact on a user's desire to continue using the chatbot, and there was no significant difference between conditions.
However, participants found the Verbatim and Paraphrase conditions to be more engaging compared to None. Specifically, for positively perceived engagement, both Paraphrase (\(p=0.0022\)) and Verbatim (\(p=0.0444\)) were rated more favourably than None. While Paraphrase scored higher than Verbatim, it was not statistically significant. Similarly, for negatively perceived engagement, both Paraphrase (\(p=0.00125\)) and Verbatim (\(p=0.0159\)) were rated more favourably than None. These results indicate that explicitly referencing a user's previous week's utterances positively impacts the user's feelings of chatbot engagement, while not explicitly referencing negatively impacts a user's feelings of being listened to.

Figure 2. User utterances and their potential None, Verbatim and Paraphrase chatbot responses across all 3 weeks. Grey bubbles are chatbot utterances, and teal bubbles are user utterances. Red arrows show where a user utterance would be referenced (by Verbatim and Paraphrase) in the following week.
Participants also found the Verbatim and Paraphrase chatbots to be more intelligent. For perceived intelligence, both Paraphrase (\(p=0.0093\)) and Verbatim (\(p=0.0301\)) were rated more favourably than None. For perceived competence, Paraphrase was rated more favourably than None (\(p=0.0327\)). These two results indicate that explicitly referencing a user's previous utterances makes a chatbot appear more intelligent and competent.
### Privacy Perceptions
Measures related to RQ2 are described below. The Verbatim chatbot was found to generate more **privacy concerns** than None for one of the measures. Specifically, for "_I was concerned that the chatbot was collecting too much personal information about me_", Verbatim scored higher than None (\(p=0.0227\)).
For measures of **privacy intrusiveness**, there were weakly significant differences that _could_ suggest participants found the Verbatim or Paraphrase conditions to be more intrusive compared to None. For the measure: "_I feel that as a result of this interaction, information about me is out there that, if used, will invade my privacy_", Verbatim scored highest (worst) and was weakly different from None (\(p=0.0716\)). For the measure: "_I feel that as a result of this interaction, my privacy has been invaded_", both Verbatim (\(p=0.0677\)) and Paraphrase (\(p=0.0866\)) scored higher than and were weakly different from None.
For both measures of **privacy risk**, while Verbatim and Paraphrase trended above None, there were no statistically significant differences between conditions.
These results indicate that explicitly referencing a user's previous utterances may raise privacy concerns, and that this may be further exacerbated if utterances are referenced verbatim. In particular, Verbatim participants were more concerned that the chatbot was collecting too much information about themselves. This may indicate that directly quoting a user's utterances made users more conscious of their data being collected, and therefore increased privacy concerns.
However, it is important to note that all privacy measures averaged below "_4 - Neither Agree Nor Disagree_", reflecting that feelings of privacy violation were still low amongst participants. This may reflect the domain of the chatbot (dental flossing), which some participants may not have found to be a very sensitive topic (discussed further in Section 5.3.1).
## 5. Semi-structured Interviews
Our quantitative results found Verbatim raised more privacy concerns than None, and there were also trending (but weakly-significant) results to indicate that participants found Verbatim and Paraphrase potentially more intrusive than None (see Section 4.2). To gain further insights as to _why_ people may perceive the chatbot referencing formats differently, we conducted semi-structured interviews.
### Participants
We recruited 5 participants per condition (N = 15; mean age 20.9; 9 female) for remote interviews, all of whom had completed the full 3 weeks of the study. Interviews lasted between 20 and 30 minutes, and participants were reimbursed S$5 for their time.
### Procedure
First, participants were instructed that there are no right or wrong answers, and consent was sought to record the interview. Participants then discussed their experience taking part in the study and responded to questions pertaining to perceived effect on dental flossing, privacy violations, chatbot intelligence, chatbot warmth and the participant's perception of their assigned condition.
After these questions, the interviewer concluded by revealing and describing the 3 experiment conditions. Participants were then asked to think-aloud (Han et al., 2017), and rank their preference for the conditions while explaining their opinion and reasoning. See supplementary material for the interview guide used.
### Findings
We will now discuss the findings from our semi-structured interviews. We discuss privacy concerns raised by participants (split
Figure 3. Outcome measures by question asked in the final week of the study. Significance \(p<0.05\) indicated by \(\ast\), and \(p<0.10\) indicated by \(\ast\).
between those related and not related to referencing format), chatbot intelligence, recall assistance, and chatbot naturalness. Finally, we discuss the last section of the interview where participants saw all 3 conditions and explained their referencing format preferences.
#### 5.3.1. **Privacy Concerns (Unrelated to Reference Format)**
Similarly to previous findings, the perceived sensitivity of the domain varied among participants, and affected their hesitancy in sharing information (Han et al., 2017; Wang et al., 2018). Some participants without privacy concerns described dental flossing as a non-sensitive domain, meaning they were not hesitant sharing their information:
In contrast, some participants were concerned to share their dental flossing behaviour as they saw it as sensitive information. P11(Verbatim) raised this concern in addition to hesitancy sharing their information from uncertainty as to who will read their messages:
Footnote 6: Due to space limitations, some transcriptions have been shortened to remove word repetitions, or thinking aloud speech before interviewees arrived at a final conclusion.
Following on from this, participants described feeling embarrassed when sharing health behaviour that they perceived as insufficient:
Furthering this, P12(Paraphrase) felt embarrassed when their flossing frequency was referenced:
Similarly to previous findings on socially desirable responding (Zhao et al., 2018; Wang et al., 2018), one participant described how they considered lying to the chatbot about their flossing behaviour:
The expectation of data storage and perceived sensitivity of the task also affected feelings of privacy invasion, with P1(Paraphrase) equating the task and data storage to writing a diary for themselves:
#### 5.3.2. **Privacy Concerns (Related to Reference Format)**
When discussing privacy concerns, several participants expressed surprise that the chatbot referenced what they said in previous weeks:
Some of this surprise was attributed to participants' (lack of) expectations of chatbot abilities, with interviewees describing their concerns subsiding after the initial exposure to chatbot referencing.
However, some Verbatim participants were negatively surprised. P11(Verbatim) found it "unnerving" that Verbatim remembered what they said, and found sharing flossing embarrassing:
Conversely, P1(Paraphrase) described how the referencing format did not raise feelings of privacy intrusion as they expected their data to be stored:
P9(Verbatim) described appreciating utterances being unchanged, as any "processing" would have raised privacy concerns:
When asked what made them hesitant sharing information, some None participants described how the non-explicit referencing format of None made them doubt the engagement of the chatbot and thereby be hesitant in sharing information:
This went further with some None participants describing simplifying their responses as they did not think the chatbot would understand them otherwise:
* _"probably if anything_ [made me hesitant sharing information], _it was maybe like how complex I structured my sentences. So I tried to keep my sentences like as simple as possible so that maybe the chatbot
would be._ < pause> _easier for the chatbot to recognise the sentence structures_" - P6(None)
However, some None participants described lack of privacy concerns due to no explicit references to their utterances:
"_it was just a series of prompts that doesn't really consider any reference to my own, and so I don't really feel any breach of privacy or something_" - P7(None)
#### 5.3.3. **Perceived Intelligence and Engagement**
Interviewees generally viewed Verbatim and Paraphrase chatbots as intelligent.
"_I was like pretty pleasantly surprised that it like remembered my answers from previous weeks. Yeah, It made me think the chatbot toss like a little bit more intelligent._" - P12(Paraphrase)
Similarly, Verbatim and Paraphrase participants found referencing their previous utterances made the chatbot feel engaged.
However, some participants thought less of Verbatim with P5(Verbatim) stating: "_it felt like a survey_". Others disliked Verbatim due to its repetition of their utterance word-for-word:
"_it'd be good to somehow be able to paraphrase what I've said [...] so it wouldn't feel so obvious that it's just copying and pasting what I've said previously_" - P5(Verbatim)
By contrast, while None participants described the intent recognition as a feature of an intelligent chatbot, they also (due to None's referencing format) questioned the intelligence of the chatbot, with some doubting the chatbot's ability to understand them.
#### 5.3.4. **Referencing Format and Recall**
Participants described how the referencing from both Verbatim and Paraphrase helped them remember what they wrote previously. Verbatim was preferred by some participants as a more precise reminder of their utterance. For example, P15(Verbatim) equated the referencing style to a lecture recap, and valued Verbatim's consistency:
"_Like in lectures and like videos where there's like a recap or review._ [...] _I wouldn't have remembered what I said to the robot, so it kept like a certain consistency of like the interview_" - P15(Verbatim)
Other participants appreciated Verbatim because they: distrusted a chatbot's ability to accurately paraphrase their words (and believed paraphrasing would lose nuance); wanted to know their exact utterance so previous conversations were not repeated; considered that Verbatim would better distinguish their own utterances from the chatbot's; or desired to be held accountable to their prior utterances:
"_the retrieval by the chatbot to bring back exactly, especially word for word, what I said, kind of reminded me that "ohh, I kind of agreed to this, to try this strategy" and yeah to see one week later I actually did carry it out_" - P9(Verbatim)
By contrast, the None participants found referencing utterances at a high-level negatively impacted recall:
"_the problem in very generic statements is that_ [...] _I kind of like forgot what I've written, and then when they tried to resume conversation, I had no idea what I said._" - P2(None)
Which led some None participants to suggest the chatbot should reference their previous utterances:
"_it would have been better if the chatbot could like mention what I said at least_" - P2(None)
For example, P10(None) suggested that the chatbot could reference utterances similarly to existing messaging applications:
"_it can be more like [...] in WhatsApp or Telegram you can reply to the message._ [...] so you can actually see that "actually the chatbot is referring to this message that I have sent previously", so it is clearer._" - P10(None)
#### 5.3.5. **Naturalness of Referencing Format**
Participants described Paraphrase as feeling natural and human-like. For example, P3(Paraphrase) appreciated that the chatbot did not copy previous utterances word-for-word, and thereby felt more engaging:
"_how the bot referenced it feels very natural._ [...] _it didn't copy what I said verbatim. So like it felt as if like a friend was just like, "Oh yeah, I remember you said something about this like last time we met_"so it felt quite natural, and [...] _I also really like the fact that they did remember [...] because then it made me feel like "OK, at least the bot is listening to what I say. I'm not like shouting into the abyss_" - P3(Paraphrase)
Conversely, some Verbatim participants described how quoting verbatim did not feel personable:
"_I feel like because it's. it was quoted directly, right? I felt like there wasn't, say, a lot of personal interaction. It felt more like those things... are just coded._" - P5(Verbatim)
Expanding on this P5(Verbatim) described how they would prefer it if the chatbot could paraphrase their utterances:
"_it'd be good to somehow be able to paraphrase what I've said, or to do so without directly quoting? Yeah, so it wouldn't feel so obvious that it's just copying and pasting what I've said previously, yeah?_" - P5(Verbatim)
Some None participants described the condition as less natural, and suggested that explicitly referencing past utterances would make the chatbot more personable:
"_if they reference to my difficulties directly, you feel more... personal._" - P7(None)
#### 5.3.6. **Comparing the 3 Referencing Formats**
At the end of the interview, we revealed the 3 referencing formats to participants, and asked them to think-aloud and explain their preference between formats. This reinforced some of the previous qualitative findings, and also generated opinions from participants of their non-assigned conditions. When ranking their preference for referencing format, all interviewees put None as their last choice, 5 interviewees put Verbatim as their first choice, and 10 interviewees put Paraphrase as their first choice.
Some user feedback mirrored that discussed in Sections 5.3.1 to 5.3.5, with users describing Verbatim as "creepy", "scary" and "guilt-tripping" them, or stating that they appreciated the fidelity to their original utterance; Paraphrase as more natural and human-like; and
None as unengaging. Interestingly, some participants who chose Verbatim as their first preference described that they see chatbots as a tool, and value their own word over that of a robot. In contrast, those who favoured Paraphrase described seeing a chatbot as a conversational partner that they wish to be more human-like.
## 6. Discussion
Here we discuss the implications of our study. We aimed to investigate the impact of a chatbot's format when referencing a user's utterances from a previous chatting session. By comparing high-level non-explicit references, verbatim references, and paraphrased references, we wanted to investigate effects on both positive user perceptions and privacy-related perceptions. Our findings provide some empirical evidence that users value Verbatim and Paraphrase as more engaging and intelligent. However, (in support of the Personalisation-Privacy Paradox (Bordes et al., 2016)) there is some evidence that Verbatim and Paraphrase raised privacy concerns among users.
Although we did not find measurable differences in response quality between conditions, results indicated that people receiving non-explicit or verbatim references may be hesitant in providing their personal information. Specifically, Verbatim participants were more concerned about the quantity of personal information being collected, and our interviews found that Verbatim participants raised concerns that the referencing style was "unnerving" and "creepy". Some None participants were hesitant providing complex utterances (as they doubted that the chatbot could understand them). These findings could reflect the expectations of users before interacting with the chatbot (Shou et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). In order to abate these concerns, more clear consent could be sought and explanation of privacy practices could be provided (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019) before using different referencing formats, and the abilities of the chatbot could be more clearly advertised to avoid user disappointment (Wang et al., 2019).
Interviewees saw chatbots along a spectrum as either more of a conversation partner, or more of a tool to be used. Implications from this are that those who view chatbots as conversation partners may prefer paraphrased references, while contrarily, those who view chatbots as more of a tool may prefer a chatbot that references them verbatim. Similarly, those who have more faith in their own word than in a chatbot's (or no belief in chatbot intelligence or emotions) may prefer a verbatim reference format. This could be taken further by investigating the role of personality in user preferences for referencing formats. For example, users who are more extroverted or agreeable may prefer a more conversational (paraphrased) format, while users who are more introverted or conscientious may prefer a more direct and factual (verbatim) format.
Our findings also indicate the contextual nature of reference format. For example, if the user's utterance is akin to a "contract" to themselves (such as a goal for a healthy behaviour), they may want to see their utterance in its entirety in order to solidify their commitment. Similarly, if there is purpose in the user revisiting and developing on previous utterances (such as for creativity tasks or goal-setting) users may prefer their words to remain unchanged so as to build on their previous interaction. Equally, certain use cases (such as in legal settings) may require chatbots to be more conservative in their use of paraphrasing, or to provide verbatim quotes alongside the chatbot's paraphrase (akin to the use of mixed quotations in linguistics literature (Friedman et al., 2016; Wang et al., 2019)).
This implies that chatbots could, in some cases, use a mixture of paraphrased and verbatim reference formats, depending on the content of the user's utterance. In the case of dental flossing, the chatbot could use paraphrased responses to reference a user's previous behaviour (flossing frequency), but maintain the user's utterance when referencing the user's behaviour strategy that they devised in the previous chatting session.
Study findings also have implications for the design of chatbot interfaces. If chatbots are designed to reference utterances (e.g., verbatim quotes), designers need to be transparent to users, and ensure user control over their data and that user privacy is protected. Similarly, if paraphrased references are used, the chatbot needs to ensure that the meaning of the user's original utterance is retained and that users do not feel that their utterance has been distorted.
## 7. Limitations and Future Work
The user study was conducted over 3 weeks with one chatting session per week, which was not long enough to potentially encourage health behaviour change among participants. Furthermore, we cannot claim generality over different chatbot referencing formats (Wang et al., 2018; Wang et al., 2019), sensitivity and intimacy of user data in references (Wang et al., 2019), domain of conversations with the chatbot, and input modalities.
Further work could investigate the use of referencing formats across different modalities. For example, while a voice-user interface (VUI) could also reference users verbatim or via paraphrasing, verbatim references could have the added dimension of using the voice of either an agent or of the user themselves (Wang et al., 2019). The added dimension of voice playback could raise additional concerns among users. Additionally, alternative referencing formats (such as summarisation styles (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019), or use of mixed quotations (Friedman et al., 2016; Wang et al., 2019)) could be investigated. Choice of these could depend on factors such as the length, quantity, temporal spacing and content of utterances. For example, for longer utterances, showing the entire utterance verbatim may prove unwieldy, adding to user burden (Wang et al., 2019; Wang et al., 2019).
## 8. Conclusion
This study investigates how the format used when a chatbot references user utterances from a previous chatting session affects a user's positive perceptions (chatbot intelligence and engagement) and privacy related perceptions. Our findings suggest that if a chatbot references previous user utterances, both verbatim or by using paraphrases, it can lead to increased feelings of chatbot intelligence and engagement. Despite this, referencing user utterances can also raise privacy concerns among users. Our semi-structured interviews then investigated _why_ people have these privacy concerns. We discussed the implications of our findings for chatbot designers and researchers, and we provided recommendations for the choice of referencing format.
###### Acknowledgements.
This research is part of the programme DesCartes and is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. |
2304.04574 | Defunctionalization with Dependent Types | The defunctionalization translation that eliminates higher-order functions from programs forms a key part of many compilers. However, defunctionalization for dependently-typed languages has not been formally studied. We present the first formally-specified defunctionalization translation for a dependently-typed language and establish key metatheoretical properties such as soundness and type preservation. The translation is suitable for incorporation into type-preserving compilers for dependently-typed languages | Yulong Huang, Jeremy Yallop | 2023-04-10T13:27:34Z | http://arxiv.org/abs/2304.04574v1 |

# Defunctionalization with Dependent Types
###### Abstract.
The _defunctionalization_ translation that eliminates higher-order functions from programs forms a key part of many compilers. However, defunctionalization for dependently-typed languages has not been formally studied.
We present the first formally-specified defunctionalization translation for a dependently-typed language and establish key metatheoretical properties such as soundness and type preservation. The translation is suitable for incorporation into type-preserving compilers for dependently-typed languages.
Compilation, type preservation, type systems, dependent types
Pettyjohn et al., 2005; Podlovics et al., 2021; Weeks, 2006). Type-preserving variants of defunctionalization are available for a variety of type systems (Bell et al., 1997; Nielsen, 2000; Pottier and Gauthier, 2004). Defunctionalization is also useful in the compilation of dependently-typed languages, such as Idris 1. However, to date no type-preserving variant of the defunctionalization translation for dependently-typed languages has been developed.
Footnote 1: [https://github.com/idris-lang/Idris-dev/blob/v1.3.4/src/IRTS/Defunctionalise.hs](https://github.com/idris-lang/Idris-dev/blob/v1.3.4/src/IRTS/Defunctionalise.hs)
This work meets that need, introducing a typed defunctionalization translation for a dependently-typed language, and establishing its fundamental properties. As with previous work that has adapted similar program translations to support dependent types, we have encountered and resolved various difficulties that do not arise in simply-typed settings. In particular, the need to preserve universe sizes (used by dependently-typed languages to avoid inconsistencies), and to preserve reduction (used to establish type equality) make a straightforward adaption of the standard defunctionalization unfeasible.
### Contributions
The central contribution of this paper is the first type-preserving defunctionalization translation for a dependently typed language. In more detail,
* §2 shows that the type-preserving defunctionalization translations used for simply typed languages (§2.1) do not extend to a dependently-typed setting (§2.3), and presents an abstract translation suited to dependently-typed languages (§2.4).
* §3 has the technical development of our type-preserving defunctionalization translation. §3.3 formally defines the abstract translation and §3.4 and §3.5 establish key meta-theoretical properties such as soundness, type preservation, and consistency.
Finally, §4 describes an implementation of our translation (included as supplementary material), and §5 summarises related work on type-preserving compilation and on defunctionalization.
## 2. Overview
### Defunctionalization
The _defunctionalization_ translation turns higher-order programs into first-order programs, by replacing the function arrow \(\to\) with a first-order data type \(\leadsto\). Defunctionalization replaces each abstraction \(\lambda x.e_{i}\) in the source program with a constructor application \(C_{i}\;\overline{y}\) where \(C_{i}\) is a constructor of \(\leadsto\) and \(\overline{y}\) are the free variables of the abstraction, and replaces each application \(f\;x\) with \(f\;\$\;x\), where the infix operator \(\$\) maps \(C_{i}\) back to \(e_{i}\).
Here is an example. The polymorphic compose function contains three abstractions, here labeled F1, F2, and F3.
```
compose : (b -> c) -> (a -> b) -> (a -> c)
compose = λ f -> λ g -> λ x -> f (g x)
```
Defunctionalizing compose produces a data type with one constructor for each abstraction. Here \(\to\) separates constructor arguments: F2 has one argument of type \(b\leadsto c\), corresponding to f in F2 above, and F3 has two arguments, corresponding to f and g in F3.
```
data (⇝) a b where
  F1 : (b ⇝ c) ⇝ (a ⇝ b) ⇝ (a ⇝ c)
  F2 : (b ⇝ c) -> (a ⇝ b) ⇝ (a ⇝ c)
  F3 : (b ⇝ c) -> (a ⇝ b) -> (a ⇝ c)
```
Following Pottier and Gauthier (2004), the data type \(\leadsto\) produced by defunctionalization is a _generalized algebraic data type_ (GADT), in which the return type of each constructor can have a distinct instantiation of the type parameters, and constructor types can involve type variables (such as b in the type of F3) that do not appear in return types.
Defunctionalization also produces an operator \(\$\) that maps the constructors of \(\leadsto\) to the bodies of the corresponding abstractions:
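For concreteness, the data type and operator can also be rendered as a runnable Haskell sketch (our own rendering, not the paper's code: `Arr` and `apply` stand in for \(\leadsto\) and \(\$\), and the three equations mirror the Agda definition given in §2.3.1):

```
{-# LANGUAGE GADTs #-}

-- Arr plays the role of the defunctionalized arrow; one constructor per abstraction.
data Arr a b where
  F1 :: Arr (Arr b c) (Arr (Arr a b) (Arr a c))
  F2 :: Arr b c -> Arr (Arr a b) (Arr a c)
  F3 :: Arr b c -> Arr a b -> Arr a c

-- apply maps each constructor back to the body of the corresponding abstraction.
apply :: Arr a b -> a -> b
apply F1 f       = F2 f
apply (F2 f) g   = F3 f g
apply (F3 f g) x = apply f (apply g x)
```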
\[(z_{1}:S_{1})\to\cdots\to(z_{m}:S_{m})\to\mathrm{D}\ t_{1}\ \cdots\ t_{n}.\]
The arguments to D and c are dependently typed: \(T_{i+1}\) can mention \(y_{1}\ldots y_{i}\), and \(S_{i+1}\) can mention \(z_{1}\ldots z_{i}\).
To ensure that inductive family definitions are consistent, Agda imposes additional restrictions.
First, _universe checking_ rejects inductive definitions with impredicative constructors -- that is, definitions whose constructors inhabit a larger universe than the data types themselves. More concretely, for a data type such as D above, the universe of every argument type S\({}_{i}\) (i.e. the type of S\({}_{i}\)) should be smaller than Set\({}_{d}\) to pass the universe check. Without this restriction, inductive families can be used to encode Girard's paradox.
Second, _positivity checking_ rejects inductive families that contain references to themselves in non-strictly-positive positions. Without this restriction, inductive families such as the following Fix can be used to build recursive definitions that violate consistency, such as bad:
\[\begin{array}{l}\text{data Fix}:\text{Set}\rightarrow\text{Set where}\\ \text{fix}:\forall\{a\}\rightarrow(\text{Fix }a\to a) \rightarrow\text{Fix }a\end{array}\]
\[\begin{array}{l}\text{bad}:\forall\{a\}\to a\\ \text{bad = f (fix f) where f}:\forall\{a\}\rightarrow\text{Fix }a\to a\\ \text{f (fix g)}=g\text{ (fix g)}\end{array}\]
Strict positivity imposes two conditions on the constructor types A\({}_{i}\) of an inductive family definition D. First, where D appears, it must not be indexed by expressions involving D itself. Second, in the argument types S\({}_{i}\), D must not occur to the left of function arrows.
These requirements around strict positivity and universes are shared by many dependently-typed languages that support inductive families, like Coq's Gallina [Coq Development Team 2022], Lean [de Moura et al. 2015], and Timany and Sozeau's pCuIC [Timany and Sozeau 2017].
### Problems extending defunctionalization to dependent types
At first glance, extending defunctionalization to support dependent functions, targeting inductive families, appears straightforward. As an example, we consider the defunctionalization of the following fully-dependent compose function, written in Agda, with all arguments explicit for clarity:
\[\begin{array}{l}\text{compose}:(A:\text{Set})\rightarrow(B:A\rightarrow\text {Set})\rightarrow(C:(x:A)\to B\ x\rightarrow\text{Set})\rightarrow\\ (f:(y:A)\rightarrow(z:B\ y)\to C\ y\ z)\rightarrow(g:(x:A) \to B\ x)\rightarrow(x:A)\rightarrow\\ C\ x\ (g\ x)\\ \text{compose}=\lambda\ A\rightarrow\lambda\ B\rightarrow\lambda\ C \rightarrow\lambda\ f\rightarrow\lambda\ g\rightarrow\lambda\ x \to f\ x\ (g\ x)\end{array}\]
Adapting Pottier and Gauthier's recipe, we start by defining an inductive family \(\Pi\) to represent dependent functions, just as the GADT \(\leadsto\) represents non-dependent functions:
\[\text{data }\Pi:(A:\text{Set})\rightarrow(A\rightarrow\text{Set}) \rightarrow\text{Set where}\]
Each dependent function type \(\Pi x{:}A.\,f\ x\) (written (x : A) \(\rightarrow\) f x in Agda) in the original program will be defunctionalized to \(\Pi\ A\ f\).
Next, we add a constructor to \(\Pi\) for each lambda abstraction in the original program. For example, the F6 constructor corresponds to the innermost abstraction, with free variables A, B, C, f and g:
\[\begin{array}{l}\text{F6}:(A:\text{Set})\rightarrow\\ \quad(B:\Pi\ A\ (\lambda\ \_\rightarrow\text{Set}))\rightarrow\\ \quad(C:\Pi\ A\ (\lambda\ x\rightarrow\Pi\ (B\ \$\ x)\ (\lambda\ \_\rightarrow\text{Set})))\rightarrow\\ \quad(f:\Pi\ A\ (\lambda\ y\rightarrow\Pi\ (B\ \$\ y)\ (\lambda\ z\rightarrow C\ \$\ y\ \$\ z)))\rightarrow\\ \quad(g:\Pi\ A\ (\lambda\ x\rightarrow B\ \$\ x))\rightarrow\\ \quad\Pi\ A\ (\lambda\ x\rightarrow C\ \$\ x\ \$\ (g\ \$\ x))\end{array}\]
Finally, we add a case for each constructor to the definition of \(\$\):
\(\text{F6}\ A\ B\ C\ f\ g\ \$\ x=f\ \$\ x\ \$\ (g\ \$\ x)\)
Appendix A gives the full definitions. Unfortunately, although these definitions are type-correct, they do not satisfy Agda's additional checks.
First, _universe checking_ rejects the definition of F6 because in the type of \(B\) the second argument of \(\Pi\) (i.e. \(\lambda\ \_\rightarrow\text{Set}\)) inhabits the universe \(\text{Set}_{1}\), which is larger than \(\text{Set}\).
Second, _positivity checking_ rejects the definition of F6 because in the type of \(C\), \(\Pi\) is indexed by an expression involving \(\Pi\).
Finally, Agda's _termination checking_ rejects the definition of \(\$\) because the case for F6 is not structurally terminating.
#### 2.3.1. A simpler example
The example above suggests that although defunctionalization apparently extends naturally to dependent types, the extension suffers from consistency problems. In fact, the situation is more grave: even if we do not make use of dependency, the same problems with universes and positivity arise.
For example, here is a simply-typed compose function, based on fixed types \(A\), \(B\), and \(C\):
\(\text{compose}:(B\to C)\rightarrow(A\to B)\rightarrow(A\to C)\)
\(\text{compose}=\lambda\ f\rightarrow\lambda\ g\rightarrow\lambda\ x\to f\ (g\ x)\)
Defunctionalizing compose produces an inductive family \(\leadsto\) and corresponding _apply_ function \(\$\):
\(\text{data}\ \_\leadsto\_\ :\ \text{Set}\rightarrow\text{Set}\rightarrow\text{Set}\ \text{where}\)
\(\quad\text{F1}:(B\leadsto C)\leadsto(A\leadsto B)\leadsto(A\leadsto C)\)
\(\quad\text{F2}:(B\leadsto C)\rightarrow(A\leadsto B)\leadsto(A\leadsto C)\)
\(\quad\text{F3}:(B\leadsto C)\rightarrow(A\leadsto B)\rightarrow(A\leadsto C)\)
\(\_\$\_:\forall\ \{A\ B\}\rightarrow(A\leadsto B)\to A\to B\)
\(\text{F1}\ \$\ f=\text{F2}\ f\)
\(\text{F2}\ f\ \$\ g=\text{F3}\ f\ g\)
\(\text{F3}\ f\ g\ \$\ x=f\ \$\ (g\ \$\ x)\)
Unfortunately, the simple definition \(\leadsto\) suffers from the same problems as the more dependent \(\Pi\). First, universe checking rejects the constructor F1, because the type \(B\leadsto C\) inhabits the universe \(\text{Set}_{1}\), which is larger than \(\text{Set}\). Second, in the type of F1, \(\leadsto\) is indexed by \(\leadsto\) itself, so the definition fails positivity checking. Finally, the F3 case of \(\$\) fails termination checking because the arguments to the recursive call are not structurally smaller than the parameters.
#### 2.3.2. An expressivity mismatch
We might note that Agda's restrictions are only fairly crude syntactic approximations of semantic properties, and that programs that breach them are not necessarily "incorrect". A similar approach has been taken by Ahrens et al. (2018) (for universe checking), and by Weirich and Casinghino (2010) (for all three checks), among others.
However, we do not favour taking off the safety guards in this way for the code generated by defunctionalization. In our view, the fact that Agda rejects the inductive families generated by defunctionalization suggests that inductive families are ill suited to the task. For example, the universe restriction that rejects the constructors of \(\Pi\) does not apply to the closures that correspond to
those constructors in the source program: there is nothing requiring a free variable in an abstraction body to inhabit a smaller universe than the function itself. The additional restriction arises from an expressivity mismatch: the universe restriction is only needed when inductive families are not used in a closure-like fashion -- e.g. when constructor arguments are extracted.
### Abstract defunctionalization
The examples above suggest that the extension of defunctionalization to dependent types is _type-preserving_. It is also possible to show that it is _meaning-preserving_. As Pottier and Gauthier (2006) observe, when defunctionalization produces a single polymorphic apply function, it coincides with the untyped defunctionalization translation. Pottier and Gauthier use this coincidence to prove that the typed translation is meaning-preserving by lifting a proof about the untyped translation. We might similarly lift the proof to the dependently-typed setting to establish the correctness of the extended translation.
Since the extended defunctionalization translation appears to preserve types and meanings, it is disappointing that it falls foul of Agda's various restrictions. How might we build a translation that does not violate these checks?
We choose to follow the direction taken by Minamide et al. (1996) and Bowman and Ahmed (2018) for _abstract closure conversion_, which studies closure conversion for a specialized target language with new constructs for representing closures and closure types. Closure conversion into these constructs captures the essence of the translation, while avoiding the unnecessary restrictions imposed by more concrete settings. Similarly, we will define a target language, the _Defunctionalized Calculus of Constructions_ (DCC), in the style of lambda calculus, but with a new construct for defunctionalized _labels_ (representing indexes into a _label context_) in place of lambda abstractions.
Fig. 1 shows the result of defunctionalizing the simply-typed compose function to DCC 2, which looks and behaves like the conventional defunctionalization presented in §2.1. In our translation into DCC, each abstraction \(\lambda x.e_{i}\) is replaced with a _label expression_ \(\mathfrak{L}_{i}\{\overline{y}\}\) where \(\mathfrak{L}_{i}\) is the label's identifier and \(\overline{y}\) are the abstraction's free variables. The function body \(e_{i}\) is stored in a separate _label context_ \(\mathfrak{D}\) indexed by the label identifier, along with its typing information.
Footnote 2: We assume that \(A\), \(B\), and \(C\) are base types here.
In Fig. 1, the label context \(\mathfrak{D}\) has three entries, one for each abstraction in the original compose function. Each entry corresponds to one case of the $ function in the conventional defunctionalization. For example, \(\mathfrak{L}_{3}\) arises from the translation of \(\lambda x{:}A.\,f\ (g\ x)\), and corresponds to the F3 case in the definition of $: it has two free variables \(f:B\to C\) and \(g:A\to B\), a bound variable \(x\), and a body \(f\ @\ (g\ @\ x)\). As we shall see, a label application \(\mathfrak{L}_{3}\{f,g\}\ @\ N\) reduces to \(f\ @\ (g\ @\ N)\), just as the application (F3 f g) $ x reduces to the corresponding right hand side \(f\ \$\ (g\ \$\ x)\).
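To make the correspondence concrete, here is the reduction of the fully applied defunctionalized compose, spelled out step by step (our own worked example; it assumes the three entries of \(\mathfrak{D}\) in Fig. 1 have the bodies suggested by the three cases of $ above):

\[\mathfrak{D}\vdash\mathfrak{L}_{1}\{\}\ @\ f\ \mapsto\ \mathfrak{L}_{2}\{f\},\qquad\mathfrak{D}\vdash\mathfrak{L}_{2}\{f\}\ @\ g\ \mapsto\ \mathfrak{L}_{3}\{f,g\},\qquad\mathfrak{D}\vdash\mathfrak{L}_{3}\{f,g\}\ @\ x\ \mapsto\ f\ @\ (g\ @\ x),\]

so \(((\mathfrak{L}_{1}\{\}\ @\ f)\ @\ g)\ @\ x\) reduces to \(f\ @\ (g\ @\ x)\), exactly as ((F1 $ f) $ g) $ x reduces to f $ (g $ x).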
It is straightforward to add dependent types to this scheme, but some care is needed to define the transformation and show that it has the desired meta-theoretical properties. In particular, as we shall see, the transformation needs to consider the entire derivation tree rather than just the source language expression (§3.3.2), and we need to use a version of the source language with explicit substitutions (§3.4) to make the type-preservation proof go through. These challenges arise only in defunctionalization in a dependently typed setting, which has not been previously studied.

Figure 1. Defunctionalized simply-typed composition
## 3. Defunctionalizing with Dependent Types
Having informally introduced the key concepts and motivated our abstract defunctionalization translation, we now turn to the technical details. The next few sections introduce our source language, the calculus of constructions (§3.1), our target language, the _defunctionalized_ calculus of constructions (§3.2), and the defunctionalization translation that links them (§3.3). We then establish the soundness of the translation (§3.4) and prove the consistency of the target language (§3.5).
### Calculus of Constructions
Our source language is a variant of the Calculus of Constructions (CC) (Coquand and Huet 1988), an expressive dependently-typed lambda calculus that serves as a basis for several programming languages and proof assistants. Our main departure from the original presentation of CC is in following the approach taken by Luo (1990), and by many dependently-typed languages such as Agda, Lean, Coq, and F\({}^{*}\), by extending CC with a Martin-Löf style hierarchy of universes.
Here is an example CC definition, compose, which represents the fully dependent composition function for functions \(f\) and \(g\):
\[\begin{array}{rll}\text{compose}&::=&\lambda A\colon U_{0}.\ \lambda B\colon(\Pi x \colon A.U_{0}).\ \ \lambda C\colon(\Pi x\colon A.\Pi y\colon B x.U_{0}).\\ &&\lambda f\colon(\Pi y\colon A.(\Pi z\colon B y.C\ y\ z)).\ \ \lambda g\colon(\Pi x \colon A.B\ x).\\ &&\lambda x\colon A.\ f\ x\ (g\ x)\end{array}\]
The expression component \(\lambda A.\lambda B.\lambda C.\lambda f\lambda g.\lambda x.f\ x\ (g\ x)\) of this definition is unremarkable; all the interest is in the dependencies of types on arguments. In particular, the result type \(C\ y\ z\) of \(f\) depends on \(f\)'s arguments \(y\) and \(z\) and the result type \(B\ x\) of \(g\) depends on \(g\)'s argument \(x\). (In a practical programming language, both \(y\) and the type arguments \(A\), \(B\) and \(C\) would be passed implicitly, but our minimal calculus does not support implicit arguments.)
Fig. 2a shows the syntax of CC. The expressions of CC are variables \(x\) (drawn from an infinite set of names), universes \(U\), dependent function types \(\Pi x\colon A.B\), applications \(L\ M\), and abstractions \(\lambda x\colon A.M\). A CC context \(\Gamma\) is a telescope of variable-expression pairs.
CC has four judgements:
1. reduction (Fig. 2b) \[M\triangleright N\]
2. type membership (Fig. 2c) \[\Gamma\vdash M:A\]
3. context formation (Fig. 2d) \[\vdash\Gamma\]
4. equivalence (Fig. 2e) \[\vdash A\equiv B\]
There is a single reduction rule (Fig. 2b), for \(\beta\)-reduction: \((\lambda x\colon A.N)\ M\triangleright N[M/x]\). We write \(L\triangleright^{*}M\) to mean that \(L\) reduces to \(M\) in a sequence of zero or more steps.
CC's rules for typing (Fig. 2c) and context formation (Fig. 2d) are defined by mutual induction.
The type of a variable \(x\) is \(A\) if \(x:A\) is present in the well-formed context \(\Gamma\) (Ty-Var). The type of a universe \(U_{i}\) is \(U_{i+1}\) (Ty-Universe), and the type of \(\Pi x\colon A.B\) is the larger of the
universes of \(A\) and \(B\) (Ty-Pi). If \(M\) has type \(B\) in some context \(\Gamma\) extended with \(x{:}A\), then \(\lambda x{:}A.M\) has the dependent function type \(\Pi x{:}A.B\) (Ty-Lambda). Applications have types \(B[N/x]\), since the output type of a function may depend on the argument \(N\) (Ty-Apply). Finally, if an expression \(M\) has type \(A\) and \(A\) is equivalent to \(B\), then \(M\) also has type \(B\) (Ty-Equiv).
A context \(\Gamma\) is _well-formed_ (written \(\vdash\Gamma\)) if every variable in it is associated with a valid type -- that is, the associated expression's type is a universe in the context \(\Gamma\).
We make use of two shorthands, writing \(\Gamma\vdash A:U\) to mean that \(\Gamma\vdash A:U_{i}\) for _some_ \(i\) (which means that \(A\) is a type), and \(A\to B\) to stand for the \(\Pi\)-type \(\Pi x{:}A.B\) where \(B\) does not depend on \(x\). For simplicity, we omit base types such as the unit type 1 and the natural numbers \(\mathbb{N}\) from the formal definition, but we will use them freely in examples.

Figure 2. The Calculus of Constructions (CC)
In CC, two expressions are _equivalent_ (Fig. 2e) if they reduce to the same expression (eq-Reduce) or are \(\eta\)-equivalent as defined by two symmetric rules, eq-Eta1 and eq-Eta2. Under eq-Eta1, \(L\) and \(M\) are equivalent if \(L\) reduces to an abstraction \(\lambda x{:}A.L^{\prime}\), \(M\) reduces to some \(M^{\prime}\), and \(L^{\prime}\equiv M^{\prime}\ x\); eq-Eta2 corresponds symmetrically (Bowman and Ahmed 2018; Bowman et al. 2018).
One useful property of CC is as follows: if \(\Gamma\vdash M:A\), then \(\Gamma\vdash A:U\). Furthermore, CC is type safe and consistent, and type-checking in CC is decidable (Coquand and Huet 1988; Luo 1990).
### Defunctionalized Calculus of Constructions
Fig. 3a shows the syntax of our target language, the Defunctionalized Calculus of Constructions (DCC). As in CC, DCC expressions include variables \(x\), universes \(U\), dependent function types \(\Pi x{:}A.B\), and applications \(L\ @\ M\). Unlike CC, DCC contains first-class function labels \(\mathfrak{L}\{\overline{M}\}\) instead of lambda abstractions.
A label expression \(\mathfrak{L}\{\overline{M}\}\) is a label name \(\mathfrak{L}\) supplied with a list of zero or more expressions \(\overline{M}\) (standing for \(M_{1},\cdots,M_{n}\)) assigned to its free variables. Label names \(\mathfrak{L}_{1},\mathfrak{L}_{2},\cdots\) are disjoint from variable names, as we emphasize using a different font.
There are two varieties of context in DCC. As in CC, type contexts \(\Gamma\) associate variables \(x\) with types \(A\). Label definition contexts \(\mathfrak{D}\) pair label names with their associated data: \(\mathfrak{L}\{(\overline{x}:\overline{A})\,,x{:}A\mapsto M:B\}\). Here \(\overline{x}:\overline{A}\) records the types of the (possibly empty) telescope of free variables that the label takes, \((x:A)\to B\) specifies the label type, and \(M\) is the expression to which the label reduces when applied to an argument. Note that types in a type context \(\Gamma\) may refer to labels \(\mathfrak{L}_{1},\mathfrak{L}_{2},\cdots\) in the label context \(\mathfrak{D}\), but not vice versa.
DCC has four judgements:
1. reduction (Fig. 3b) \[\mathfrak{D}\vdash M\mapsto N\]
2. type membership (Fig. 3c) \[\mathfrak{D};\Gamma\vdash M:A\]
3. context formation (Fig. 3d) \[\vdash\mathfrak{D};\Gamma\]
4. equivalence (Fig. 3e) \[\mathfrak{D}\vdash A\equiv B\]
#### 3.2.1. Reduction
There is a single reduction rule (Fig. 3b), for label application: the application of the label \(\mathfrak{L}\{\overline{M}\}\) to the argument \(N\) reduces to \(L[\overline{M}/\overline{x},N/x]\), where \(L\) is the body of the entry for \(\mathfrak{L}\) in the label context and \(\overline{M}\) is the closure of \(\mathfrak{L}\). A reduction sequence is noted as \(\mathfrak{D}\vdash M\mapsto^{*}N\), which means \(M\) reduces to \(N\) in zero or more steps.
Substitutions for variables, universes, \(\Pi\)-types and applications in DCC follow the conventional definition. Substitutions for labels are
\[\mathfrak{L}\{\overline{M}\}[N/x]\triangleq\mathfrak{L}\{\overline{M}[N/x]\},\]
where \(\overline{M}[N/x]\) is syntactic sugar for \(M_{1}[N/x],\cdots,M_{n}[N/x]\).
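As a small worked example (a hypothetical entry of our own, not drawn from the compose program): suppose \(\mathfrak{D}\) contains a constant-function label \(\mathfrak{L}_{k}\{(A:U_{0},\ a:A)\,,x{:}A\mapsto a:A\}\). Applying it instantiates both the closure variables and the bound variable:

\[\mathfrak{D}\vdash\mathfrak{L}_{k}\{\mathit{Nat},\,3\}\ @\ 5\ \mapsto\ a[\mathit{Nat}/A,\,3/a,\,5/x]\ =\ 3.\]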
#### 3.2.2. Type judgements
DCC's type judgements are of the form \(\mathfrak{D};\Gamma\vdash M:A\), and typing rules are given in Fig. 3c. Rules for variables, universes, \(\Pi\)-types, applications, and conversion are identical
\begin{tabular}{l l l l} Universes & U & \(\mathrel{\mathop{:}}=\) & U\({}_{i}\) \\ Expressions & A, B, L, M, N & \(\mathrel{\mathop{:}}=\) & x \(\mid\) U \(\mid\)\(\sqcap\) Tx:A.B \(\mid\)\(\sqcap\) M \(\mid\)\(\sqcap\)\
to their counterpart rules in CC, so we focus on the rule for labels. A label term \(\mathfrak{L}\{\overline{M}\}\) is well-typed in \(\mathfrak{D};\Gamma\) if the following conditions are satisfied.
1. The context \(\mathfrak{D};\Gamma\) is _well-formed_.
2. \(\mathfrak{L}\{(\overline{x}:\overline{A})\,,x:A\mapsto M:B\}\) is present in \(\mathfrak{D}\).
3. The length of the two lists \(\overline{M}\) and \(\overline{x}:\overline{A}\) are equal.
4. All expressions in \(\overline{M}\) are well-typed, and their types match the specified types of free variables \(\overline{A}\).
Specifically, condition (4) means:
\[\mathfrak{D};\Gamma\vdash M_{1}:A_{1},\qquad\mathfrak{D};\Gamma\vdash M_{2}:A_{2}[M_{1}/x_{1}],\qquad\cdots,\qquad\mathfrak{D};\Gamma\vdash M_{n}:A_{n}[M_{1}/x_{1},\cdots,M_{n-1}/x_{n-1}].\]
Each \(A_{i+1}\) depends on \(x_{1},\cdots,x_{i}\), so \(M_{1},\cdots,M_{i}\) need to be substituted in \(A_{i+1}\) in the type judgement for \(M_{i+1}\). The type of \(\mathfrak{L}\{\overline{M}\}\) is \(\Pi x:A\big{[}\overline{M}/\overline{x}\big{]}.B\big{[}\overline{M}/\overline{x}\big{]}\).
Note that the values of the free variables \(\overline{M}\) are substituted in \(\Pi x{:}A.B\), the specified type of the label. We use \([\overline{M}/\overline{x}]\) as syntactic sugar for \([M_{1}/x_{1},\ \cdots,M_{n}/x_{n}]\), and conditions (3) and (4) are abbreviated to \(\mathfrak{D};\Gamma\vdash\overline{M}:\overline{A}\) as a convention.
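Instantiating these conditions for the hypothetical constant-function label \(\mathfrak{L}_{k}\) used as an example in §3.2.1: the label expression \(\mathfrak{L}_{k}\{\mathit{Nat},3\}\) is well-typed because

\[\mathfrak{D};\Gamma\vdash\mathit{Nat}:U_{0}\qquad\text{and}\qquad\mathfrak{D};\Gamma\vdash 3:A[\mathit{Nat}/A]=\mathit{Nat},\]

and its type is \(\Pi x{:}A[\mathit{Nat}/A,3/a].\,A[\mathit{Nat}/A,3/a]=\Pi x{:}\mathit{Nat}.\mathit{Nat}\) (assuming built-in naturals, which we use freely in examples).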
The DCC judgement for well-formed contexts is \(\vdash\mathfrak{D};\Gamma\) and its rules are given in Fig. 3d. A context is _well-formed_ if every variable in the type context is associated with a valid type (in the preceding context \(\mathfrak{D};\Gamma\)), and every label is associated with well-typed data. In other words, if we have \(\mathfrak{L}\{(\overline{x}:\overline{A})\,,x:A\mapsto M:B\}\), then \(M\) should have the type \(B\) as specified, in the context formed by the previous label context and the free variables of \(M\) (namely \(\mathfrak{D};\overline{x}:\overline{A},x:A\)).
Two terms \(L\) and \(M\) are equivalent (Fig. 3e) if they both reduce to the same term \(N\) in a reduction sequence or they are \(\eta\)-equivalent. DCC's \(\eta\)-equivalence rules are similar to those of CC. Rule (d-eq-Eta1) states that \(L\) and \(M\) are equivalent if \(L\) reduces to a label \(\mathfrak{L}\{\overline{N}\}\), \(M\) reduces to \(M^{\prime}\), \(\mathfrak{L}\{(\overline{x}:\overline{A})\,,x:A\mapsto N:B\}\) is found in the label context \(\mathfrak{D}\), and \(M^{\prime}\ @\ x\) is equivalent to \(N[\overline{N}/\overline{x}]\). Rules (d-eq-Eta1) and (d-eq-Eta2) are symmetrical.
Both the type context and the label context have the weakening property: a well-typed expression is still well-typed in an extended type or label context (by induction on the type derivation rules).
**Lemma 3.1** (Type weakening): _If \(\mathfrak{D};\Gamma\vdash M:A,\mathfrak{D};\Gamma\vdash B:U_{i}\), and \(x\) is fresh, then \(\mathfrak{D};\Gamma,x:B\vdash M:A\)._
**Lemma 3.2** (Label weakening): _If \(\mathfrak{D};\Gamma\vdash M:C\), \(\mathfrak{D};\overline{x}:\overline{A},x:A\vdash N:B\), and \(\mathfrak{L}_{i}\) is fresh, then \(\mathfrak{D},\mathfrak{L}_{i}\{(\overline{x}:\overline{A})\,,x:A\mapsto N:B\};\Gamma\vdash M:C\)._
DCC is type-safe and consistent (we establish these properties in §3.5). In addition, it is sufficiently expressive to support the compose function, but we must write it in defunctionalized style, since the calculus does not support lambda abstraction. There is one entry in the label context \(\mathfrak{D}\) for each \(\lambda\) in the CC definition of compose:
\[\mathfrak{D}\ ::=\ \mathfrak{L}_{5}\{(A,B,C,f,g)\,,\ x{:}A\ \mapsto\ (f\ @\ x)\ @\ (g\ @\ x)\ :\ (C\ @\ x)\ @\ (g\ @\ x)\},\ \cdots\]
The full definition appears in Appendix B.
### The Defunctionalization Translation
Fig. 4 shows the translation. It consists of two parts: a transformation \([\![-]\!]\) for expressions and a meta-function \([\![-]\!]_{d}\) that extracts function definitions from the source program. The expression transformation produces the target program and the meta-function \([\![-]\!]_{d}\) gives a label context.
Figure 4. The Defunctionalization Translation

#### Expression transformation.
**Definition 3.3**: _The expression transformation \([\![-]\!]\) takes a well-typed term in CC and, as an implicit argument, that term's type derivation. We define \([\![M]\!]\triangleq\mathsf{M}\), where \(\mathsf{M}\) is given by a new judgement of the form \(\Gamma\vdash M:A\leadsto\mathsf{M}\) (Fig. 4a)._
The transformation simply transcribes the variables, universes, \(\Pi\)-types, applications, and base types and values in CC to their counterparts in DCC compositionally. Functions in the source language are translated into labels in the target language.
Defunctionalization requires a unique correspondence between each label and each source-program function. We use a convention that every lambda in the transformation's input \(M\) is tagged with a unique identifier \(i\) (\(i\in\mathbb{N}\)), and its corresponding label's name is \(\mathfrak{L}_{i}\).
The transformation turns a function \(\lambda^{i}x\colon A.M\) into a label \(\mathfrak{L}_{i}\{\overline{x}\}\), where the variables \(\overline{x}\) are the function's free variables (the rule for abstractions in Fig. 4a). The meta-function FV (see Definition 3.4) computes all free variables and their types involved in a well-typed CC expression. Note that FV is different from _fv_, the conventional free variable function that computes all the _unbound variables_ in an expression. In dependently typed languages, the type of a free variable may contain other free variables, and their types may still contain other free variables, and so on! Therefore, FV(\(M\)) must recursively work out all the variables needed for \(M\) to be well-typed.
**Definition 3.4**: \(\mathit{FV}(M)\) _takes \(\Gamma\vdash M:A\), the type judgement of \(M\), as an implicit argument. It firstly computes all the unbound variables \(x_{1},\cdots,x_{n}\) in \(M\) and in \(A\), then calls itself recursively on types of these variables, and finally returns the union of all free variables and their types it found._
\[\begin{array}{lll}\mathit{FV}(M)&=&\mathit{FV}(A_{1})\,\cup\,\cdots\,\cup\,\mathit{FV}(A_{n})\,\cup\,\Gamma_{fv}\\ &\text{where}&\operatorname{fv}(M)\,\cup\,\operatorname{fv}(A)\,=\,x_{1},\cdots,x_{n}\\ &&\Gamma\vdash x_{1}:A_{1},\ \cdots,\ \Gamma\vdash x_{n}:A_{n}\\ &&\Gamma_{fv}\triangleq x_{1}:A_{1},\cdots,x_{n}:A_{n}.\end{array}\]
Here, the union of two type contexts \(\Gamma_{1}\cup\Gamma_{2}\) is \(\Gamma_{1}\) appended with all the variable-expression pairs \(x\,:\!A\) that only appear in \(\Gamma_{2}\), preserving their order. Intuitively, \(\mathit{FV}(M)\) computes all the variables needed to correctly type \(M\). Therefore, \(M\) is still well-typed in its free-variable context \(\mathit{FV}(M)\).
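As a worked instance (spelled out here for the compose example of §3.1): for the innermost abstraction \(\lambda^{5}x{:}A.\,f\ x\ (g\ x)\), the unbound variables of the body and of its type \(\Pi x{:}A.\,C\ x\ (g\ x)\) are \(A\), \(f\), \(g\), and \(C\); the types of \(f\), \(g\), and \(C\) then bring in \(B\), whose type brings in nothing new. Hence

\[\mathit{FV}(\lambda^{5}x{:}A.\,f\ x\ (g\ x))\ =\ A:U_{0},\ B:(\Pi x{:}A.U_{0}),\ C:(\Pi x{:}A.\Pi y{:}B\ x.U_{0}),\ f:(\Pi y{:}A.\Pi z{:}B\ y.C\ y\ z),\ g:(\Pi x{:}A.B\ x),\]

and this abstraction translates to the label expression \(\mathfrak{L}_{5}\{A,B,C,f,g\}\).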
**Lemma 3.5**: _If \(\Gamma\vdash M:A\), then \(\mathit{FV}(M)\vdash M:A\)._
#### Extracting function definitions
**Definition 3.6**: _\([\![-]\!]_{d}\) takes a well-typed CC term and, implicitly, its type derivation. We define \([\![M]\!]_{d}\triangleq\mathfrak{D}\), where \(\mathfrak{D}\) is given by a new judgement of the form \(\Gamma\vdash M:A\leadsto_{d}\mathfrak{D}\) (Fig. 4b)._
In a simply typed system, the only thing \([\![-]\!]_{d}\) has to do is to find every function \(\lambda^{i}x\colon A.M\) in the source program and place it in the label context \(\mathfrak{D}\) in the following form
\[\mathfrak{L}_{i}\{(\overline{x}:\overline{A})\,,\ x\colon A\mapsto M:B\}\]
where \(\{\overline{x}\colon\overline{A}\}\), \(x\colon A\), \(M\), and \(B\) respectively correspond to the free variables \((\overline{x}\,:\overline{A})\) in the function, the bound variable \(x:A\), the function body \(M\), and the return type \(B\).
Alas, in our dependent type theory types may be indexed by functions, and functions may appear in the type of an expression even if the expression itself does not contain that function! For example, consider the following triple \((\Gamma,M,N)\) in CC (with built-in natural numbers and addition).
\[\begin{array}{ll}\Gamma&\triangleq\ \cdot\,\,A\colon(\mathit{Nat}\to Nat)\to U _{0},\ a\colon\,\Pi f\colon(\mathit{Nat}\to Nat).A\ (\lambda n\,:\! Nat.1+(f\ n))\\ M&\triangleq\ a\ (\lambda x\,:\! Nat.1+x)\\ N&\triangleq\ A\ (\lambda n\,:\! Nat.2+n)\end{array}\]
\(A\) is a family of types indexed by \(Nat\to Nat\) functions and \(a\)\(f\) constructs an element of type \(A\) (\(\lambda n\!:\!Nat.1+(f\ n)\)). According to the rule (Ty-Apply), the inferred type of \(M\) is
\[(A\ (\lambda n\!:\!Nat.1+(f\ n)))[(\lambda x\!:\!Nat.1+x)/f]\] \[=A\ (\lambda n\!:\!Nat.(1+(\lambda x\!:\!Nat.1+x)\ n)),\]
which reduces to \(A\ (\lambda n\!:\!Nat.2+n)\). We have \(\Gamma\vdash M:N\), yet \(N\) contains a function that is not in \(\Gamma\) or \(M\)! We should include this new function in \(\mathfrak{D}\), as it guarantees that we will never be in a situation where we need a non-existent label in \([\![M]\!]_{d}\) to type \([\![M]\!]\). In other words, the transformation defunctionalizes not just the source-language expression, but its entire type derivation tree.
Hence, we arrive at the rules in Fig. 4. Type derivations of universes do not involve functions at all (d-Universe). Function definitions in a variable \(x\) are just the definitions in its type \(A\) (d-Var). Definitions in a dependent function type \(\Pi x\!:\!A.B\) are the _union_ of definitions in \(A\) and \(B\) (d-Pi). The union here is defined in the same way as the union of contexts (see Definition 3.4), and there is no ambiguity since different functions correspond to different label names.
Definitions in an application \(M\)\(N\) are the union of definitions in \(M\), \(N\), and \(B[N/x]\), since the substitution \(B[N/x]\) may create new function definitions (d-Apply). For a lambda abstraction \(\lambda^{i}x\!:\!A.M\), the definitions it contains are the union of definitions in \(M\) and in \(A\) appended with \(\mathfrak{L}_{i}\), the definition of itself (d-Lambda). If \(M\) has type \(B\) by the conversion rule, then the definitions involved in the derivation of \(\Gamma\vdash M:B\) are the union of definitions in the derivation of \(\Gamma\vdash M:A\) and definitions in \(B\) (d-Equiv).
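A sketch of the extraction function following these rules is given below. It reuses the toy syntax, `fv_ctx`, `label_def` and `label_union` from the previous sketches; `infer` and a capture-avoiding `subst` are assumed helpers, and the conversion case (d-Equiv) is omitted for brevity.

```ocaml
(* [defs gamma m] collects the function definitions appearing in the type
   derivation of [m], following (d-Universe), (d-Var), (d-Pi), (d-Apply)
   and (d-Lambda). *)
let rec defs ~infer ~subst (gamma : ctx) (m : expr) : label_ctx =
  match m with
  | Universe _ -> []                                              (* d-Universe *)
  | Var x -> defs ~infer ~subst gamma (List.assoc x gamma)        (* d-Var *)
  | Pi (x, a, b) ->
      label_union (defs ~infer ~subst gamma a)
                  (defs ~infer ~subst ((x, a) :: gamma) b)        (* d-Pi *)
  | App (f, n) ->
      (* the substituted return type B[N/x] may create new functions *)
      let b_n =
        match infer gamma f with
        | Pi (x, _, b) -> subst b x n
        | _ -> invalid_arg "application of a non-function"
      in
      label_union
        (defs ~infer ~subst gamma f)
        (label_union (defs ~infer ~subst gamma n)
                     (defs ~infer ~subst gamma b_n))              (* d-Apply *)
  | Lam (i, x, a, body) ->
      let inner =
        label_union (defs ~infer ~subst gamma a)
                    (defs ~infer ~subst ((x, a) :: gamma) body)
      in
      let this =
        { name = i;
          free = fv_ctx ~infer gamma m;
          arg  = (x, a);
          body;
          ret  = infer ((x, a) :: gamma) body }
      in
      label_union inner [ this ]                                  (* d-Lambda *)
```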
We define the _subset relation_ of label contexts to help state further definitions and theorems.
**Definition 3.7**: _For two well-formed label contexts \(\mathfrak{D}_{1}\) and \(\mathfrak{D}_{2}\), \(\mathfrak{D}_{1}\subseteq\mathfrak{D}_{2}\) if for all \(\mathfrak{L}_{i}(\{\overline{x}:\overline{A}\},\,x\!:\!A\ \mapsto N:B)\) in \(\mathfrak{D}_{1}\), \(\mathfrak{L}_{i}(\{\overline{x}:\overline{A}\},\,x\!:\!A\ \mapsto N:B)\) is also in \(\mathfrak{D}_{2}\)._
The notion of subsets gives a stronger weakening property to DCC: a well-typed expression is still well-typed in a larger label context.
**Lemma 3.8** (Label context weakening (subsets)): _If \(\mathfrak{D}_{1}\!:\!\Gamma\vdash M:A,\vdash\mathfrak{D}_{2}\), and \(\mathfrak{D}_{1}\subseteq\mathfrak{D}_{2}\), then \(\mathfrak{D}_{2}\!:\!\Gamma\vdash M:A\)._
Since the transformation defunctionalizes the entire type derivation tree of an expression, if \(\Gamma\vdash M:A\), then all elements in \([\![A]\!]_{d}\) are also in \([\![M]\!]_{d}\). We can prove this property by induction on the type derivation rules.
**Lemma 3.9**: _For any well-typed expression \(\Gamma\vdash M:A\) in CC, \([\![A]\!]_{d}\subseteq[\![M]\!]_{d}\)._
The expression transformation and the process of extracting function definitions (\([\![-]\!]\) and \([\![-]\!]_{d}\)) act pointwise on CC contexts. In other words,
\[\begin{array}{rcl}[\![\cdot]\!]&\triangleq&\cdot,&[\![\Gamma,x\!:\!A]\ \!]& \triangleq&[\![\Gamma]\!],x\!:\![\![A]\!],\\ &[\![\cdot]\!]_{d}&\triangleq&\cdot,&[\![\Gamma,x\!:\!A]\!]_{d}\triangleq&[\![ \Gamma]\!]_{d}\cup[\![A]\!]_{d}.\end{array}\]
Now, we can see that the (tagged) composition function \(\lambda^{0}A.\lambda^{1}B.\lambda^{2}C.\lambda^{3}f.\lambda^{4}g.\lambda^{5}x.f\ x\ (g\ x)\) transforms to \(\mathfrak{L}_{0}\{\}\), a label with no free variables supplied, since the function is closed. The label context \(\mathfrak{D}\) for composition can be derived from the function extraction judgements with the sketch derivation tree shown below.
### Soundness
Figure 5: \(\mathrm{CC}_{S}\), the extension of CC with explicit substitutions \(M\{x\mapsto N\}\): new syntax, typing rules (s-ty-Apply, s-ty-Subst), reduction rules (s-red-Var1, s-red-Var2, s-red-Universe, s-red-Beta, s-red-Apply, s-red-Closure), and the corresponding equivalence rules.
all \(M_{i}\) translate into well-typed DCC expressions \(\mathsf{M}_{i}\) in \((\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{\mathsf{M}_{i}}\cup\mathfrak{D}_{ \mathsf{M}_{n}});\Gamma\), which makes the standard approach infeasible. Moreover, preservation of reduction sequences is a key lemma for showing type preservation, since CC's typing rules involve equivalence and the equivalence rule (eq-Reduce) is defined with reductions.
Fortunately, meta-theoretic substitution is the only means of creating new function definitions in CC's reduction sequences. There would be no problem if the source language did not evaluate substitutions into functions but kept them as primitive expressions. To apply this observation we define a helper language \(\mathrm{CC}_{S}\), which is an extension of CC with _explicit substitutions_[Abadi et al., 1991]. In addition, \(\mathrm{CC}_{S}\) does not reduce substitutions of expressions into functions.
Since \(\mathrm{CC}_{S}\) extends CC, every CC expression is trivially a \(\mathrm{CC}_{S}\) expression. We denote this trivial transformation from CC to \(\mathrm{CC}_{S}\) as \(\sigma\). Then, we define the defunctionalization transformation from \(\mathrm{CC}_{S}\) to DCC in a similar way as that from CC to DCC - an expression transformation \([[-]]\) and a meta-function \([[-]]_{d}\) for extracting definitions. Next, we show that \(\sigma\) and defunctionalization for \(\mathrm{CC}_{S}\) preserve reduction sequences and they commute with the transformation from CC into DCC. As a corollary, defunctionalization from CC to DCC preserves reduction sequences. In other words, we show that the following diagram commutes for all CC-expressions \(M\) and \(N\) (contexts omitted).
\(\mathrm{CC}_{S}\) is an extension of CC with new syntax, type derivation rules, reduction rules, and equivalence rules (Fig. 5). We write \(\mathrm{CC}_{S}\) expressions in a \(\mathit{teal},\mathit{mathematical}\mathit{font}\) to avoid ambiguity. \(\mathrm{CC}_{S}\) extends the CC syntax with _syntactic substitutions_ of the form \(M\{x\mapsto N\}\).
Type rules for variables, universes, \(\Pi\)-types, functions, and equivalence in \(\mathrm{CC}_{S}\) are the same as the standard rules in CC, except that the type of an application \(M\)\(N\) is \(B\{x\mapsto N\}\) with the syntactic substitution. The type of a substitution \(M\{x\mapsto N\}\) is the type of \(M\) with \(x\) substituted by \(N\) (s-ty-Subst).
\(\mathrm{CC}_{S}\) has five reduction rules for substitutions, which are the standard meta-theoretic substitution rules for variables, universes, \(\Pi\)-types, and applications being internalised into the language. Note that the meta-theoretic substitution in CC's original beta-reduction rule \((\lambda x{:}A.M)\ N\triangleright M\{x\mapsto N\}\) is also replaced by the syntactic one. \(\mathrm{CC}_{S}\) does not reduce substitutions into functions, but it \(\beta\)-reduces them when they are applied to arguments (s-red-Closure). We write \(M\{x_{1}\mapsto N_{1},x_{2}\mapsto N_{2}\}\) for a substitution followed by another substitution \((M\{x_{1}\mapsto N_{1}\})\{x_{2}\mapsto N_{2}\}\), and \(M\{\overline{y}\mapsto\overline{N}\}\) for a sequence of substitutions \((((M\{y_{1}\mapsto N_{1}\})\{y_{2}\mapsto N_{2}\})\cdots)\{y_{n}\mapsto N_{n}\}\).
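The behaviour of these rules can be illustrated with a small, self-contained one-step reducer. Type annotations and lambda tags are dropped, variable capture is ignored by appeal to the Barendregt convention, and only a single pending substitution on a lambda is handled, so this is an illustration of the rules rather than a complete evaluator.

```ocaml
(* A toy CC_S fragment: explicit substitutions are first-class syntax. *)
type exp =
  | Var of string
  | Universe of int
  | Pi of string * exp * exp
  | Lam of string * exp                 (* type annotation omitted *)
  | App of exp * exp
  | Subst of exp * string * exp         (* M {x ↦ N} *)

(* One reduction step for the substitution rules described above. *)
let step : exp -> exp option = function
  | Subst (Var y, x, n) ->
      Some (if y = x then n else Var y)                 (* s-red-Var1 / Var2 *)
  | Subst (Universe i, _, _) -> Some (Universe i)       (* s-red-Universe *)
  | Subst (Pi (y, a, b), x, n) ->
      Some (Pi (y, Subst (a, x, n), Subst (b, x, n)))   (* Pi-types *)
  | Subst (App (f, u), x, n) ->
      Some (App (Subst (f, x, n), Subst (u, x, n)))     (* s-red-Apply *)
  | App (Lam (y, body), u) ->
      Some (Subst (body, y, u))                         (* s-red-Beta *)
  | App (Subst (Lam (y, body), x, n), u) ->
      Some (Subst (Subst (body, x, n), y, u))           (* s-red-Closure *)
  | Subst (Lam _, _, _) -> None   (* substitutions into functions are stuck *)
  | _ -> None
```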
Like in CC, two terms in \(\mathrm{CC}_{S}\) are equivalent if they \(\beta\)-reduce to the same expression or are \(\eta\)-equivalent. In addition, \(\mathrm{CC}_{S}\) has two symmetric rules (s-eq-Closure1) and (s-eq-Closure2) for determining when a sequence of substitutions into a function \((\lambda x{:}A.M)\{\overline{y}\mapsto\overline{N}\}\) is equivalent to another expression. This is essentially a variant of the \(\eta\)-equivalence rules that is compatible with substitutions: \((\lambda x{:}A.M)\{\overline{y}\mapsto\overline{N}\}\) is equivalent to \(N\) if applying \(N\) to \(x\) is equivalent to the function body \(M\) with \(\overline{N}\) substituted for \(\overline{y}\).
Now, we define the defunctionalization transformation from \(\mathrm{CC}_{S}\) to DCC, which is the transformation from CC to DCC extended with the following two rules. We use \([[-]]\) and \([[-]]_{d}\) to stand
for the expression transformation and the metafunction for extracting function definitions, and we apply the convention of tagging lambdas with unique identifiers \(i\) (\(i\in\mathbb{N}\)) as usual.
\[\begin{array}{c}\Gamma\vdash M:A\leadsto\mathsf{M}\\[4pt]\dfrac{\Gamma,x:A\vdash M:B\leadsto\mathsf{M}\qquad\Gamma\vdash N:A\leadsto\mathsf{N}}{\Gamma\vdash M\{x\mapsto N\}:B\{x\mapsto N\}\leadsto\mathsf{M}[\mathsf{N}/x]}\ \text{(s-t-Subst)}\end{array}\]
\[\begin{array}{c}\Gamma\vdash M:A\leadsto_{d}\mathfrak{D}\\[4pt]\dfrac{\Gamma,x:A\vdash M:B\leadsto_{d}\mathfrak{D}_{1}\qquad\Gamma\vdash N:A\leadsto_{d}\mathfrak{D}_{2}}{\Gamma\vdash M\{x\mapsto N\}:B\{x\mapsto N\}\leadsto_{d}\mathfrak{D}_{1}\cup\mathfrak{D}_{2}}\ \text{(s-d-Subst)}\end{array}\]
The transformation turns a syntactic substitution in \(\mathrm{CC}_{S}\) into a meta-theoretic substitution in DCC (s-t-Subst); the function definitions in a substitution \(M\{x\mapsto N\}\) are the union of the definitions in \(M\) and \(N\) (s-d-Subst). Since substitutions into functions do not reduce in \(\mathrm{CC}_{S}\), the transformation from it into DCC has the following strong properties by definition, which are not true for the transformation from CC into DCC.
\[M\triangleright^{*}N\Longrightarrow[\![N]\!]_{d}\subseteq[\![M]\!]_{d} \tag{1}\]
\[[\![M\{x\mapsto N\}]\!]=[\![M]\!][[[N]\!]/\mathbb{x}]. \tag{2}\]
Next, we show that the transformation preserves small step reductions in \(\mathrm{CC}_{S}\) - if a \(\mathrm{CC}_{S}\) program \(M\) reduces to \(N\) in one step, then the translated program \(M\) evaluates to \(N\) in a sequence.
Lemma 3.10 (Preservation of small step reductions).: _If \(\Gamma\vdash M:A\) and \(M\triangleright N\), then \([\![\Gamma]\!]_{d}\cup[\![M]\!]_{d}\vdash\mathsf{M}\triangleright^{*}\mathsf{N}\)._
The transformation preserves sequences of reductions, and the proof follows from a trivial induction on the number of small steps in the sequence.
Lemma 3.11 (Preservation of reduction sequences (\(\mathrm{CC}_{S}\))).: _If \(\Gamma\vdash M:A\) and \(M\triangleright^{*}N\), then \([\![\Gamma]\!]_{d}\cup[\![M]\!]_{d}\vdash\mathsf{M}\triangleright^{*}\mathsf{N}\)._
The transformation is also coherent, i.e. it preserves the equivalence relation in \(\mathrm{CC}_{S}\).
Lemma 3.12 (Coherence (\(\mathrm{CC}_{S}\))).: _If \(\Gamma\vdash M:A\), \(\Gamma\vdash N:A\), and \(\vdash M\equiv N\), then \(\mathfrak{D}\vdash M\equiv N\), where \(\mathfrak{D}=[\![\Gamma]\!]_{d}\cup[\![M]\!]_{d}\cup[\![N]\!]_{d}\)._
Recall that \(\sigma\) denotes the trivial transformation from CC to \(\mathrm{CC}_{S}\). This trivial transformation commutes with the two term transformations by definition.
\[[\![\sigma(M)]\!]=[\![M]\!] \tag{3}\]
In addition, the function definitions in \([\![\sigma(M)]\!]_{d}\) are a subset of the definitions in \([\![M]\!]_{d}\), because new function definitions appear in CC's type derivation trees as results of substitutions, but this does not happen in \(\mathrm{CC}_{S}\).
\[[\![\sigma(M)]\!]_{d}\subseteq[\![M]\!]_{d} \tag{4}\]
We show that \(\sigma\) also preserves sequences of reductions. As a convention, we write \(M\) for \(\sigma(M)\) when there is no ambiguity.
Lemma 3.13 (Preservation of reduction sequences (\(\sigma\))).: _If \(\Gamma\vdash M\triangleright^{*}N\), then \(\Gamma\vdash M\triangleright^{*}M^{\prime}\) where \(\Gamma\vdash M^{\prime}\equiv N\)._
We can finally prove the preservation of reduction sequences for dependently typed defunctionalization (from CC to DCC) using the lemmas above.
Lemma 3.14 (Preservation of reduction sequences).: _For all \(M\) and \(N\), if \(\Gamma\vdash M:A\) and \(M\triangleright^{*}N\), then we have_
\[\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}\cup\mathfrak{D}_{N}\vdash M \triangleright^{*}M^{\prime}, \tag{5}\]
\[\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}\cup\mathfrak{D}_{N}\vdash M^{\prime} \triangleright^{*}N \tag{6}\]
_for some \(M^{\prime}\) where \((\mathfrak{D}_{\Gamma},\mathfrak{D}_{M},\mathfrak{D}_{N})=([[\Gamma]]_{d},[[M]]_{d},[[N]]_{d})\) and \((M,N)=([[M]],[[N]])\)._
Since ground types and values do not contain functions, \([[\![\varphi]\!]_{d}=\cdot\), and the correctness of the transformation is just a special case of Lemma 3.14.
Corollary 3.15 (Correctness).: _For all ground types \(A\) and values \(v\) of type \(A\),_
\[\cdot\vdash M:A\wedge M\triangleright^{*}v\Longrightarrow\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}\vdash M\triangleright^{*}v^{\prime}\text{ where }v^{\prime}\equiv v.\]
The proof of type-preservation requires three lemmas: _substitution, preservation of reduction sequences_, and _coherence_. Lemma 3.14 established that dependently-typed defunctionalization preserves reduction sequences with the help of \(\operatorname{CC}_{S}\), and now we prove the remaining two lemmas in a similar way. The substitution lemma states that defunctionalization is compatible with substitutions.
Lemma 3.16 (Substitution).: _If \(\Gamma,x:A\vdash M:B\) and \(\Gamma\vdash N:A\), then \(\mathfrak{D}\vdash[\![M[N/x]]\!]\equiv[\![M]\!]\,[\,[\![N]\!]/x\,]\), where \(\mathfrak{D}=\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}\cup\mathfrak{D}_{N}\cup\mathfrak{D}_{M[N/x]}\)._
The coherence lemma states that defunctionalization is compatible with \(\operatorname{CC}\)'s coherence judgements.
Lemma 3.17 (Coherence).: _If \(\Gamma\vdash M:A\), \(\Gamma\vdash N:A\), and \(\vdash M\equiv N\), then \(\mathfrak{D}\vdash M\equiv N\), where \(\mathfrak{D}=\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}\cup\mathfrak{D}_{N}\)._
Finally, we show type preservation with an induction on \(\operatorname{CC}\)'s type derivation rules.
Theorem 3.18 (Type preservation).: _For all well-typed programs \(M\),_
\[\Gamma\vdash M:A\Longrightarrow\mathfrak{D}_{\Gamma}\cup\mathfrak{D}_{M}; \Gamma\vdash M:A,\]
_where \((\mathfrak{D}_{\Gamma},\mathfrak{D}_{M})=([[\Gamma]]_{d},[[M]]_{d})\) and \((\Gamma,M,A)=([[\Gamma]],[[M]],[[A]])\)._
### Consistency and Type Safety of DCC
As a dependent type theory, DCC should be type-safe when it acts as a programming language and consistent when interpreted as a logic. Following Boulier et al. (2017) and Bowman and Ahmed (2018), we prove these properties in this section by defining a _backward transformation_ from DCC to CC and showing that it preserves reduction sequences, so that reducing an expression in DCC is equivalent to reducing an expression in CC. The transformation is type-preserving and turns the logical interpretation of _false_ in DCC into that of CC, so that valid proofs (i.e. well-typed programs) in DCC correspond to valid proofs in CC. This reduces the problem of proving the type safety and consistency of DCC to proving that of CC, which is a standard result (Coquand and Huet 1988). In other words, we show that DCC can be modelled by CC in a consistent and meaning-preserving way. Type preservation for the backward transformation also requires the _substitution, preservation of reduction sequences_, and _coherence_ lemmas, similar to the proof of Theorem 3.18, whose proofs are straightforward.
We define the backward transformation \([\![-]\!]\) with a new judgement (Fig. 6) of the form \(\mathfrak{D};\Gamma\vdash M:A\leadsto_{b}M\) and \([\![M]\!]\triangleq M\). The translation maps variables, universes, \(\Pi\)-types, and applications back to their corresponding forms in CC, and maps label expressions \(\mathfrak{L}\{\overline{M}\}\) where \(\mathfrak{L}(\{\overline{x}:\overline{A}\},x:A\mapsto M:B)\in\mathfrak{D}\) into \(\lambda x\):\(A[\overline{M}/\overline{x}]\).\(M[\overline{M}/\overline{x}]\) - a function with all of its free-variable values substituted in, where \(A\), \(M\), and \(\overline{M}\) stand for \([\![A]\!]\), \([\![M]\!]\), and \([\![\overline{M}]\!]\) respectively
(b-Label). Intuitively, \([[-]]\) decompiles a label back to the function it represents. The backward transformation also acts pointwise on type contexts.
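In the representation used in the earlier sketches, the label case of the backward translation has roughly the following shape. The capture-avoiding `subst` is again an assumed helper, and the translation of sub-terms from DCC syntax back to CC syntax is elided, so only the structure of rule (b-Label) is shown.

```ocaml
(* Backward-translate a label expression  L{M̄}  by looking the label up in
   the label context and substituting the supplied free-variable values
   into the lambda it stands for, as in rule (b-Label). *)
let backward_label ~subst (d : label_ctx) (tag : int) (args : expr list) : expr =
  let l = List.find (fun entry -> entry.name = tag) d in
  (* substitute args for the free variables x̄, in order *)
  let plug e =
    List.fold_left2 (fun acc (x, _) v -> subst acc x v) e l.free args
  in
  let x, a = l.arg in
  Lam (l.name, x, plug a, plug l.body)
```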
In CC, the interpretation of the logical _false_ is \(\Pi x{:}U_{0}.x\). There is no closed expression with the _false_ type. In DCC, the interpretation of _false_ is \(\Pi x{:}U_{0}.x\), so the backward transformation preserves falseness by definition.
Next, we show that the backward transformation is compatible with substitutions. As a convention in this section, we write \(M\) for \([[M]]\) when there is no ambiguity.
Lemma 3.19 (Backward transformation compatible with substitutions): _If \(\mathfrak{D};\Gamma,x:A\vdash M:B\) and \(\mathfrak{D};\Gamma\vdash N:A\), then \([[M[N/x]]]=M[N/x]\)._
Similar to proofs in Section 3.4, we show preservation of reduction sequences by showing that the transformation preserves small-step reductions. Using that, we show the coherence lemma for the backward transformation, and then the type preservation.
Lemma 3.20: _If \(\mathfrak{D};\Gamma\vdash M:A\) and \(\mathfrak{D}\vdash M\triangleright^{*}N\), then \(M\triangleright^{*}N\)._
Lemma 3.21: _If \(\mathfrak{D};\Gamma\vdash M:A\), \(\mathfrak{D};\Gamma\vdash N:A\), and \(\mathfrak{D}\vdash M\equiv N\), then \(\vdash M\equiv N\)._
Lemma 3.22: _If \(\mathfrak{D};\Gamma\vdash M:A\), then \(\Gamma\vdash M:A\)._
As a corollary of Lemma 3.20 and Lemma 3.22, DCC is type-safe and consistent since CC is.
Theorem 3.23 (Type safety): _If \(\mathfrak{D};\cdot\vdash M:A\), then \(\mathfrak{D}\vdash M\triangleright^{*}\nu\) for some irreducible value \(\nu\)._
That is, type safety guarantees that every well-typed closed DCC term reduces to a value in a finite number of steps.
Theorem 3.24 (Consistency): _There is no pair of a label context \(\mathfrak{D}\) and DCC term \(M\) such that \(\mathfrak{D};\cdot\vdash M:\Pi A{:}U.A\)._
Figure 6: Backward transformation
Interpreting DCC as a logic, the type \(\Pi\)A:U.A (an inhabitant of which produces a term of any type A) corresponds to _false_; it is backward-transformed to \(\Pi A{:}U.A\), the representation of _false_ in CC. Consistency of DCC means that there is no closed term of type \(\Pi\)A:U.A; if there were, the backward translation would yield a corresponding closed term in CC, and CC would also be inconsistent.
## 4. Implementation
We provide a portable standalone implementation of the defunctionalization translation of §3, written in OCaml and compiled to run in a web browser using js_of_ocaml (Vouillon and Balat, 2014). The implementation performs type checking of CC (§3.1) and DCC (§3.2) terms, abstract defunctionalization (§3.3) and the backward translation from DCC to CC (§3.5), allowing the interested reader to experiment with the effects of the translation on real examples. We include several ready-made examples, including dependent composition, dependent pairs and finite sets.
## 5. Related Work
_Type-preserving compilation._ Type-preserving compilation was initially developed for optimizing compilation and verifying the compiled code; it has been used extensively in compilers of simply-typed and polymorphic languages, and occasionally for dependently-typed languages. For example, Tarditi et al. (1996) present TIL (typed intermediate language), an ML compiler featuring type-directed code optimization of loops, garbage collections, and polymorphic function calls, and Morrisett et al. (1999) study a type-preserving translation from System F to the typed assembly language TAL. Xi and Harper (2001) later extended TAL to DTAL, an assembly language with a limited form of dependent types that serves as a compilation target for Dependent ML.
Guillemette and Monnier (2008) also present a type-preserving compiler from System F, but take a different approach, building typed intermediate representations using generalized algebraic data types and using the type system of the host language (GHC Haskell) to verify that each compiler phase preserves types. Embedding typed transformations in this way is a popular technique in the functional programming community, exemplified in work by Carette et al. (2009), which presents type-preserving CPS transformations of an embedded language along with type-preserving optimizations based on partial evaluation.
Necula (1997)'s proof-carrying code is another early method for generating reliable executables. It relies on an external logical framework to check the correctness of proofs attached with the code.
Bowman and collaborators have developed several type-preserving translations for dependently-typed languages, including CPS transformation (Bowman et al., 2018), closure conversion (Bowman and Ahmed, 2018) (building on typed closure conversion for System F by Minamide et al. (1996)), and translation to ANF (Koronkevich et al., 2022).
_Defunctionalization._ Defunctionalization was first presented by Reynolds (1972) as a programming technique to translate a higher-order interpreter into a first-order one. It has been used in a variety of applications, from ML compilers (Cejtin et al., 2000; Chin and Darlington, 1996), to type-safe garbage collectors (Wang and Appel, 2001), and encodings of higher-kinded polymorphism (Yallop and White, 2014).
Defunctionalization was originally presented as an untyped translation. Using a family of monomorphic _apply_ functions to make simply-typed defunctionalization type-preserving is a standard workaround in the literature (Bell et al., 1997; Cejtin et al., 2000; Nielsen, 2000; Tolmach and Oliva, 1998).
Danvy and Nielsen (2001) survey more examples of defunctionalization in practice.
Formalization of defunctionalization has up to this point focused on proving type preservation and correctness of the transformation. Bell et al. (1997) have shown that the translation for simply
typed programs is type preserving. Nielsen (2000) has proved its partial correctness with denotational semantics, and Banerjee et al. (2001) have established total correctness using operational semantics. Pottier and Gauthier (2004) have formalized type-preserving polymorphic defunctionalization in System F extended with GADTs.
_Closure conversion._ Like defunctionalization, _closure conversion_ transformations also involve representing a closure as a first-order value that pairs a kind of code identifier with a collection of free variables. The formulations of closure conversion in the work referenced above (Bowman and Ahmed, 2018; Minamide et al., 1996) differ markedly from defunctionalization: while defunctionalization involves a globally-defined map indexed by code identifiers (such as an _apply_ function or our label environment), these closure conversions instead locally transform functions into code-and-environment pairs that can then be applied using a standard elimination rule. However, other formulations of closure conversion (e.g. Appel, 1992; Siek, 2012) additionally lift functions to top-level, making the transformation more similar to defunctionalization.
Closure conversion plays a key role in compilers for many functional languages, including Scheme (Steele Jr, 1978), CAML (Mauny and Suarez, 1986), Standard ML (Cejtin et al., 2000), Haskell (Leshchins, 2006) and others. Recent work has focused on establishing sophisticated semantic properties, such as correctness of closure conversion in the presence of mutable state and control effects (even when linked with foreign-language code) (Mates et al., 2019), and preservation of time and space properties (Paraskevopoulou and Appel, 2019).
_Refunctionalization._ Our backward translation (see §3.5) is related to _refunctionalization_ (Danvy and Millikin, 2009), the left-inverse of defunctionalization. As in refunctionalization, we replace target applications M@N with source applications \(M\ N\), and labels \(\mathfrak{L}\{\overline{M}\}\) with abstractions \((\lambda x{:}A.M)\,[\overline{M}/\overline{x}]\) based on their implementations \(\mathfrak{L}(\{\overline{x}:\overline{A}\},x{:}A\,\mapsto M:B)\) in the label context.
## Acknowledgments
We thank David Sheets, Andras Kovacs, and Marcelo Fiore for helpful comments.
|
2305.13321 | A brief overview of Turkiye Earthquake: insight into the building damage | The month of February 2023 had been a nightmare for the people of Turkiye. On
6th Feb, there was a devastating earthquake which jolted Turkiye like never
before. The aftershock which followed later added more fuel to the overall
damages. Till now, there has been reports several aftershocks which is believed
to continue for another considerable span of time. In terms of damage, the loss
was totally irreparable. Millions of people were rendered homeless, and a large
chunk of population had lost their lives due to building collapse. This mini
communication overviews the past seismicity of Turkiye. As per preliminary
report, there has been substantial liquefaction and ground subsidence in and
around the epicenters. Accordingly, the liquefaction is also briefly detailed
along with types of prevalent sediments Further, the damages as well as
building collapsed are detailed here along with probable causes as well as
lacunae observed in building construction. Future strategies may possibly
involve such as base isolation or isolators can be introduced in order to make
the buildings more resilient earthquakes. However, strict monitoring and
compliance of structures to building code should be implemented with strict
measures. Any violation of such codes should be penalized. All these measures
can lead to earthquake resilient society. | Rajib Biswas | 2023-05-15T13:44:51Z | http://arxiv.org/abs/2305.13321v1 | # A brief overview of Turkiye Earthquake: insight into the building damage
###### Abstract
The month of February, 2023 had been a nightmare for the people of Turkiye. On 6th Feb, there was a devastating earthquake which jolted Turkiye like never before. The aftershock which followed later added more fuel to the overall damages. Till now, there has been reports several aftershocks which is believed to continue for another considerable span of time. In terms of damage; the loss was totally irreparable. Millions of people were rendered homeless and a large chunk of population had lost their lives due to building collapse. This mini communication overviews the past seismicity of Turkiye. As per preliminary report, there has been substantial liquefaction and ground subsidence in and around the epicenters. Accordingly, the liquefaction is also briefly detailed along with types of prevalent sediments Further, the damages as well as building collapsed are detailed here along with probable causes as well as lacunae observed in building construction. Future strategies may possibly involve such as--base isolation or isolators can be introduced in order to make the buildings more resilient earthquakes. However, strict monitoring and compliance of structures to building code should be implemented with strict measures. Any violation of such codes should be penalized. All these measures can lead to earthquake resilient society.
Aftershock, building, damage, earthquake, shaking, frequency
## 1 Introduction
A 7.8-magnitude earthquake occurred on February 6, 2023, in southern Turkey, close to Syria's northern border. A magnitude 6.7 aftershock occurred 11 minutes after the initial earthquake. The magnitude 7.8 event was caused by shallow strike-slip faulting; it ruptured either a near-vertical left-lateral fault striking northeast-southwest or a right-lateral fault striking southeast-northwest. According to preliminary information, the earthquake occurred close to a triple junction of the African, Arabian, and Anatolian plates. The earthquake's mechanism and epicentre are consistent with it having happened on either the Dead Sea transform fault zone or the East Anatolia fault zone. Turkey's westward extrusion into the Aegean Sea is accommodated by the East Anatolia fault, and the Arabian Peninsula's northward motion in relation to the African and Eurasian plates is accommodated by the Dead Sea Transform [1-6].
In 10 provinces across Turkey, there were at least 46,104 fatalities, 114,991 injuries, approximately 1.5 million people made homeless, at least 164,000 buildings severely damaged or destroyed, and 150,000 commercial facilities considerably affected. At least 490 structures were demolished and many more were damaged in northwest Syria, resulting in at least 6,795 deaths, 14,500 injuries, and 5.37 million people being made homeless. Turkey's Golbasi and Hatay experienced liquefaction and land subsidence. A minor tsunami was produced, with wave heights of 17 cm at Famagusta, Cyprus; 13 cm at Erdemli; and 12 cm at Iskenderun, Turkey. The highest observed intensity was IX. Although earthquakes are frequently represented on maps as single points, they actually rupture fault planes with finite dimensions; a magnitude 7.8 strike-slip earthquake typically ruptures a fault on the order of 190 km long and 25 km wide.
The aim of this short communication is to briefly overview the seismicity of Turkiye in relation to the devastating main event and the associated liquefaction observations, and to outline the causes of the building damage which has resulted in some of the highest fatalities of the decade so far. Accordingly, the first section deals with past seismicity, the second section dwells upon the liquefaction pattern observed due to the earthquake, and the third section analyzes the building damage, followed by recommendations.
Figure 1. Seismic waveform of the M\({}_{\rm W}\)7.8 earthquake (Courtesy: USGS)
## 2 Past seismicity
The February 6 earthquake occurred in a seismically active area. Since 1970, only three earthquakes with a magnitude of 6 or higher have happened within 250 kilometers of the February 6 quake; the largest of these, with a magnitude of 6.7, occurred on January 24, 2020, to the northeast of the February 6 earthquake. These earthquakes all happened around or along the East Anatolia fault. Southern Turkey and northern Syria have previously been subjected to large and destructive earthquakes, notwithstanding the relative seismic quiescence of the epicentral region of the February 6 event. Although the precise locations and magnitudes of these historical earthquakes are uncertain, Aleppo, in Syria, has repeatedly been devastated by large earthquakes; events with estimated magnitudes of 7.0 and 7.1 hit Aleppo in 1138 and 1822, respectively. Fatality estimates for the 1822 earthquake range from 20,000 to 60,000 [1-5].
Figure 2. Liquefaction Map (Courtesy: USGS)
Figure 2 depicts the extent of the observed liquefaction. Unexpectedly, most liquefaction and lateral spreading sites manifested along and near coastal areas, fluvial valleys and drained lake/swamp areas covered by loose Holocene sediments. The distribution of mapped liquefaction sites follows the fault rupture and shows that most concentrations are found in Holocene sediment-filled basins [4-5].
## 3 Looking into building damages: probable loopholes
The Turkey earthquake was an eye opener. The two earthquakes, of magnitude 7.8 and 7.5, devastated the infrastructure of two urban settlements. The shaking was so massive that nearly all buildings suffered minor to major damage; indeed, a large number of buildings gave in and collapsed in response to the heavy shaking of the two earthquakes. As the saying goes, earthquakes do not kill people; buildings do. The statement has been proven once again.
Each disaster provides lessons. There is a need to strengthen building and seismic codes. Past earthquakes led Turkey to revamp its code in 1998 as well as in 2007, although, in practice, poor enforcement impairs it to a great extent.
Istanbul's booming population and rapid construction are also a bottleneck for any ensuing rescue operations.
Design codes and professional codes, when implemented by architects, engineers and contractors, can clearly reduce risks and thus save lives and limit damage. Structures must also be maintained: poor maintenance, which often occurs with infrastructure as a cost-saving measure, reduces the effectiveness of a structural improvement. Similarly, structures have design lives; they wear out over time, especially if poorly maintained. Many disasters are highly foreseeable. Just because something has not happened before does not mean it is safe. Yet infrastructure is often neglected and overlooked as long as no problems have occurred.
Looking at the pattern of building damage that occurred in Turkey, most of the collapsed structures are soft-story buildings, with ground-level parking supported only by pillars. The tectonic setting of Turkey is quite complex: three plates collide and interact with each other, and this led to the occurrence of these huge earthquakes.
As a strict action, the Govt. of Turkey has been on a mission to identify the unscrupulous builders engaged in the construction of the structures damaged by the earthquake. Accordingly, a considerable number of people have been summoned in order to initiate criminal proceedings against them. Now the pertinent question is: is this enough? Damage as well as huge casualties have already occurred, and what has already happened cannot be undone.
We know that every high-rise structure is prone to shaking, and all such structures have a fundamental frequency. When seismic waves travel through a medium of low density, their velocity decreases and they spend more time in that medium. As a result, if the predominant frequency of the ground motion matches the fundamental frequency of the structure above, resonance occurs and the building sways with maximum amplitude, leading to eventual collapse.
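A rough, textbook-level way to see this matching is in terms of the fundamental periods involved; the relations below are standard rules of thumb rather than values measured for the affected buildings or sites, with \(N\) the number of storeys, \(H\) the thickness of the soft sediment layer and \(V_{s}\) its shear-wave velocity:
\[T_{\rm building}\approx 0.1\,N\ \ {\rm s},\qquad T_{\rm site}\approx\frac{4H}{V_{s}},\qquad{\rm resonance\ when}\ T_{\rm building}\approx T_{\rm site}.\]
For instance, a ten-storey building (\(T_{\rm building}\approx 1\) s) founded on roughly 50 m of soft sediments with \(V_{s}\approx 200\) m/s (\(T_{\rm site}\approx 1\) s) sits close to this worst-case condition.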
Buildings that collapsed during the tragedy because of poor construction, inferior materials, and a failure to adhere to building rules have sparked indignation across the country. The earthquake caused some brand-new apartment buildings, which were marketed as being built to the greatest earthquake specifications, to collapse.
Experts are attempting to piece together data on the compliance level of buildings in the geologically vulnerable region more than a month after three fatal earthquakes slammed Turkey. According to a recent report summarizing the first results of the damage assessment following the earthquakes, violations of the building code's requirements during the previous 20 years were a major factor in the significant loss of life and infrastructure damage.
The Turkish Earthquake Code (2018) design levels were not met by the buildings in the provinces of Gaziantep, Hatay, Kahramanmaras, and Adiyaman, according to the report by a team of scientists from Middle East Technical University (METU), Ankara, and colleagues. Following the destruction, Turkey's building codes and construction methods have come under criticism.
Notwithstanding the fact that the earthquakes were exceptional, the experts highlighted that buildings should have survived and not collapsed in the manner that they did.
According to the international team's "Preliminary Reconnaissance Report," buildings built after 2002 are likely to fare better during earthquakes than earlier structures. The analysis reveals that more than 1,000 buildings built after 2000 suffered significant damage or collapsed, contravening the performance aim set forth in the code that evaluates the seismic risk of a building in relation to its location. This appeared to be a significant observation, according to the report, necessitating more research into the caliber of those structures' design and construction. Further inadequacies could also emerge from the existence of soft stories, that is, entrances or basements whose walls are not continuous with those of the upper storeys. It was found that the "pancake" collapses of numerous structures were, in fact, caused by inadequate foundations; this type of structural collapse, known as a "pancake collapse," happens when the upper floors of a building sink into the lower ones. Examples of "severe alterations" that were categorically unacceptable under the amended requirements include the use of low-quality materials, unribbed reinforcement bars, and insufficient stirrup tightening (which is intended to laterally confine the steel reinforcement). Many new structures constructed after 2000 that were not adequately engineered or inspected, or whose soil-structure interaction remained unassessed, were damaged or destroyed beyond expectations. Stones as long as 6 cm were found in concrete samples recovered from a collapsed structure in Adiyaman; these came from a nearby river and were employed to bulk out the concrete.
## 4 Way Forward
Given the level of damage caused by the earthquakes, the main agenda before the Turkiye Govt. is to rebuild the affected area, which may cost up to $100 billion as per UN reports. The general public should also be more aware and vigilant so that damage can be minimized. The fundamental tenet of earthquake-resistant construction is to allow some degree of damage inside the structure; by absorbing the earthquake's force, this damage ensures that the building still stands erect and does not collapse. Likewise, it is possible to include elements like dampers, which function as shock absorbers as the building sways, and rubber bearings, which are installed underneath buildings and absorb earthquake energy. Similarly, base isolation or isolators can be introduced in order to make buildings more resilient to earthquakes. However, strict monitoring and compliance of structures with the building code should be implemented and enforced, and any violation of such codes should be penalized.
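The benefit of base isolation can be expressed in the same terms: mounting the superstructure on flexible bearings lengthens its fundamental period and shifts it away from the short-period band where typical ground motions carry most of their energy. Idealising the isolated building as a single-degree-of-freedom oscillator of mass \(m\) on an isolation layer of lateral stiffness \(k_{\rm iso}\) (a schematic estimate, not a design value),
\[T_{\rm iso}=2\pi\sqrt{\frac{m}{k_{\rm iso}}},\]
and isolation systems are typically proportioned so that \(T_{\rm iso}\) reaches roughly 2-3 s, well above the fundamental periods of most low- and mid-rise buildings.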
## 5 Final Remarks
This brief communication overviews the recent Turkiye Earthquake and the liquefaction caused thereby. The past seismicity is also delineated, and the probable causes and pitfalls associated with the collapsed structures are reviewed. As we know, earthquakes cannot be averted; however, the seismic hazard can be mitigated. With strict measures as well as earthquake-resilient structures, the devastated nation of Turkiye is expected to recover in the coming days.
|
2304.12316 | Constraining the onset density for the QCD phase transition with the
neutrino signal from core-collapse supernovae | The occurrence of a first-order hadron-quark matter phase transition at high
baryon densities is investigated in astrophysical simulations of core-collapse
supernovae, to decipher yet incompletely understood properties of the dense
matter equation of state (EOS) using neutrinos from such cosmic events. It is
found that the emission of a nonstandard second neutrino burst, dominated by
electron antineutrinos, is not only a measurable signal for the appearance of
deconfined quark matter but also reveals information about the state of matter
at extreme conditions encountered at the supernova (SN) interior. To this end,
a large set of spherically symmetric SN models is investigated, studying the
dependence on the EOS and the stellar progenitor. General relativistic
neutrino-radiation hydrodynamics is employed featuring three-flavor Boltzmann
neutrino transport and a microscopic hadron-quark hybrid matter EOS class.
Therefore, the DD2 relativistic mean-field hadronic model is employed, and
several variations of it, and the string-flip model for the description of
deconfined quark matter. The resulting hybrid model covers a representative
range of onset densities for the phase transition and latent heats. This
facilitates the direct connection between intrinsic signatures of the neutrino
signal and properties of the EOS. In particular, a set of linear relations has
been found empirically. These potentially provide a constraint for the onset
density of a possible QCD phase transition from the future neutrino observation
of the next galactic core-collapse SN, if a millisecond electron anti-neutrino
burst is present around or less than 1s. | Noshad Khosravi Largani, Tobias Fischer, Niels Uwe F. Bastian | 2023-04-24T17:59:41Z | http://arxiv.org/abs/2304.12316v2 | Constraining the onset density for the QCD phase transition with the neutrino signal from core-collapse supernovae
###### Abstract
The occurrence of a first-order hadron-quark matter phase transition at high baryon densities is investigated in astrophysical simulations of core-collapse supernovae, to decipher yet incompletely understood properties of the dense matter equation of state using neutrinos from such cosmic events. It is found that the emission of a non-standard second neutrino burst, dominated by electron-antineutrinos, is not only a measurable signal for the appearance of deconfined quark matter but also reveals information about the state of matter at extreme conditions encountered at the supernova interior. To this end, a large set of spherically symmetric supernova models is investigated, studying the dependence on the equation of state and on the stellar progenitor. General relativistic neutrino-radiation hydrodynamics is employed featuring three-flavor Boltzmann neutrino transport and a microscopic hadron-quark hybrid matter equation of state class, that covers a representative range of parameters. This facilitates the direct connection between intrinsic signatures of the neutrino signal and properties of the equation of state. In particular, a set of novel relations have been found empirically. These potentially provide a constraint for the onset density of a possible QCD phase transition, which is presently one of the largest uncertainties in modern investigations of the QCD phase diagram, from the future neutrino observation of the next galactic core-collapse supernova.
Supernova dynamics (1664), Compact objects (288), High energy astrophysics (739), Supernova neutrinos (1666), Hydrodynamics (1963)
## 1 Introduction
Stars more massive than about 9 M\({}_{\odot}\) end their life as a core-collapse supernova (SN). Thereby, a hydrodynamic shock wave forms. It first propagates quickly outwards but stalls later due to the energy losses from the release of the \(\nu_{e}\) burst, which is associated with the shock propagation across the \(\nu_{e}\)-neutrinosphere, and the dissociation of nuclei from the collapsing outer layers of the stellar core. The SN problem is related to the revival of the stalled bounce shock through the transfer of energy from the central proto-neutron star (PNS) into the post-shock layer (c.f. Langanke et al., 2003; Janka et al., 2007; Mirizzi et al., 2016; Burrows and Vartanyan, 2021, and references therein). Several scenarios for the shock revival have been proposed, the magneto-rotational mechanism by LeBlanc and Wilson (1970), the acoustic mechanism by Burrows et al. (2006), and the currently considered standard neutrino heating mechanism by Bethe and Wilson (1985). In addition, a fourth mechanism has been proposed by Sagert et al. (2009), due to a first-order phase transition, from normal nuclear (in general hadronic) matter, where quarks and gluons are confined to hadrons, to deconfined quark matter.
Thereby, the essential aspect is the presence of instability in the hadron-quark matter coexistence region, with a significantly reduced polytropic index. It arises from the commonly employed two-phase approach, with separate equations of state (EOS) for hadronic and quark matter, and the subsequent phase transition construction. This causes the PNS to collapse with the formation of a second shock wave. It forms when the pure quark matter phase is reached where the EOS stiffens, i.e. the polytropic index increases. The initial propagation of the second shock to increasingly larger radii, and taking over the standing bounce shock, initiates the SN explosion. The previously employed bag model EOS in Sagert et al. (2009), and later in Fischer et al. (2011), is incompatible with observations of massive pulsars of about 2 M\({}_{\odot}\)(Antoniadis et al., 2013; Fonseca et al., 2021). This caveat has been overcome with the development of a microscopic quark matter EOS in Kaltenborn et al. (2017), with
the implementation of repulsive interactions (see also Benic et al., 2015; Klahn & Fischer, 2015; Klahn et al., 2017) and a mechanism that mimics confinement through divergent quark masses. The extension of this EOS to finite temperatures and arbitrary isospin asymmetry gave rise to SN explosions of progenitor stars in the zero-age main sequence (ZAMS) mass range of 30-\(75\) M\({}_{\odot}\)(see Fischer et al., 2018, 2020; Kuroda et al., 2022; Fischer, 2021).
One observable feature that all these simulations have in common is the release of a second millisecond neutrino burst (see also Zha et al., 2020; Jakobus et al., 2022), associated with the propagation of the second shock across the neutrinospheres, with a certain delay after the \(\nu_{\rm e}\) bounce burst. Unlike the latter, the second neutrino burst is dominated by electron antineutrinos, which have the largest detection prospects within the current generation of water-Cherenkov detectors through the inverse beta-decay. Furthermore, such a phase transition has a distinct gravitational wave signal (Zha et al., 2020; Kuroda et al., 2022; Zha & O'Connor, 2022; Jakobus et al., 2023).
The present paper extends the previous analyses of SN driven by a first-order hadron-quark phase transition, establishing a novel connection between properties of the observable second neutrino burst and gross thermodynamics properties of the hadron-quark matter hybrid EOS. The phase transition is investigated systematically, employing the newly developed relativistic density functional (RDF) model of Bastian (2021). This approach enables us to constrain the onset density of the QCD phase transition using the SN neutrino signal. It is complementary to Blacker et al. (2020), where a (lower) bound for the possible onset density of quark matter was deduced from the gravitational wave signals of binary neutron star mergers featuring a first-order phase transition (see also Most et al., 2019; Bauswein et al., 2019, 2020).
The manuscript is organized as follows. In sec. 2 our core-collapse SN model will be reviewed briefly together with the hadron-quark hybrid EOS, followed by the progenitor discussion in sec. 3. The systematic variations of the impact of QCD phase transition in SN simulations are discussed in sec. 4 and the subsequent neutrino signatures are elaborated in sec. 5. The manuscript closes with a summary in 6. Supplementary material regarding the EOS and the SN simulations is provided in appendices A and B.
## 2 QCD phase transition in simulations of core-collapse supernovae
For the present study of the hadron-quark phase transition in core-collapse SN, the spherically symmetric and general relativistic neutrino radiation hydrodynamics model AGILE-BOLTZTRAN is employed (Mezzacappa & Bruenn, 1993a, b, c; Liebendorfer et al., 2004). It features an adaptive baryon mass mesh (Liebendorfer et al., 2001; Fischer et al., 2009). For the neutrino transport, the Boltzmann equation for three neutrino flavors is solved assuming massless particles. The weak reactions for the collision integral used are listed in Table (1) of Ref. Fischer et al. (2020), together with the corresponding references for the weak rates. Weak rates in quark matter are treated as hadronic processes, for which neutron and proton properties, e.g., particle fractions and chemical potentials, are reconstructed from the corresponding quark quantities. Further details are provided in Appendix A.
AGILE-BOLTZTRAN employs a flexible EOS module, which was implemented in Hempel et al. (2012). Here the set of relativistic mean field (RMF) EOS provided by Hempel & Schaffner-Bielich (2010) is employed. The latter is based on a medium modified nuclear statistical equilibrium (NSE) model for several thousand nuclear species, implemented at low densities and temperatures. The transition to homogeneous nuclear matter in the vicinity of nuclear saturation density and high temperatures is modeled via an excluded volume approach. The outer parts of the SN domain are taken into account assuming silicon-sulfur composition when the temperature is below \(T\leq 0.45\) MeV. This corresponds to the NSE to non-NSE transition in accordance with the stellar progenitors. Additional contributions from electrons, positrons, photons, and Coulomb correlations are added following Timmes & Arnett (1999).
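Schematically, the thermodynamic quantities returned by this EOS module are sums of the individual contributions, e.g. for the pressure
\[P(\rho,T,Y_{e})\simeq P_{\rm B}+P_{e^{-}e^{+}}+P_{\gamma}+P_{\rm Coul}\,,\]
where \(P_{\rm B}\) denotes the baryonic part (NSE nuclei or homogeneous nuclear/quark matter), \(P_{e^{-}e^{+}}\) the electron-positron gas, \(P_{\gamma}\) the photon gas and \(P_{\rm Coul}\) the Coulomb correction. This expression is only meant as a sketch of the bookkeeping, not of the detailed functional forms of the individual terms.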
For the analysis of the hadron-quark matter phase transition, the class of microscopic EOS from Bastian (2021) is implemented into AGILE-BOLTZTRAN. Quark matter is modeled using the RDF approach of Kaltenborn et al. (2017) and confinement has been taken into account approximately by string-like quark-quark interactions in the scalar self-energy following Ropke et al. (1986). Linear and higher-order repulsive interactions are included based on the quasi-particle approach of Benic et al. (2015). These are known from Nambu-Jona-Lasinio models (Nambu & Jona-Lasinio, 1961; Buballa, 2005) and the class of vector-interaction enhanced bag models of Klahn & Fischer (2015), which give rise to additional pressure contributions with increasing density and are hence essential for the stability of compact hybrid stars--neutron stars with quark matter cores--in agreement with high-precision observations of massive pulsars of about 2 M\({}_{\odot}\)(c.f. Antoniadis et al., 2013; Fonseca et al., 2021, and references therein). The RDF EOS sets include isovector interactions. These control the isospin asymmetry dependence of the phase transition. The values for the different nine RDF EOS parametrizations are listed in Table I of Bastian (2021), including the isovector coupling. The latter are selected to minimize the jump of the electron fraction, \(Y_{e}\), across the phase transition. A phase transition construction is applied from the DD2 RMF EOS of Typel et al. (2010) as well as variations of it such as DD2F and DD2Fev. Further details about the nine RDF EOS are given in Appendix A.
## 3 Stellar Model Dependence
SN simulations are launched from massive progenitors with ZAMS masses in the range of 25-40 M\({}_{\odot}\). This progenitor mass range has been suggested in previous studies as potentially most likely for the QCD phase transition to occur due to the generally more compact PNS achieved in these SN simulations, with higher central densities compared to lighter progenitors. We investigate the stellar models s25a28, s30a28 and s40a28 from Rauscher et al. (2002) and the model s40.0 of Woosley et al. (2002). The former differ by the implementation of updated nuclear reaction rates, revised opacity tables, neutrino losses, and weak interaction rates.
slowest PNS mass growth and hence the highest central densities during the post-bounce evolution as well as the lowest \(T_{\rm max}\). Hence, such stellar models are most favorable to yield stable remnants after a possible hadron-quark phase transition, in particular if the enclosed PNS mass is below the EOS critical mass at the phase-transition onset.
It is interesting to note that despite the same ZAMS mass and similar iron-core properties for the 40 M\({}_{\odot}\) progenitors, s40a28 and s40.0, the different implementation of nuclear rates and mass loss gives rise to different structures in terms of the restmass density, the entropy profiles in the silicon-sulfur layers and the mass of the carbon-oxygen core. This has consequences on the QCD phase transition in SN simulations, which will be discussed further below.
## 4 Systematics of the hadron-quark phase transition
In order to obtain a systematic understanding of the potential impact of the hadron-quark phase transition on SN phenomenology, simulations are discussed that are launched from the progenitors introduced in sec. 3, for all nine RDF EOS under investigation. Table 5 in Appendix B contains a summary of selected quantities from all these runs. Note that SN simulations for s35a28 of Rauscher et al. (2002) were performed too; however, no quantitative differences to the ones of s30a28 were found. Both progenitors' stellar core structures are nearly identical. Quantitative results obtained with respect to the hadron-quark phase transition and the subsequent SN evolution for s30a28 apply equally to s35a28.
Simulations featuring a high mass accretion rate quickly reach the phase transition onset densities during the post-bounce evolution. In particular, s25a28 has the highest post-bounce mass accretion rate and hence reaches the onset conditions earliest for all EOS, see \(t_{\rm PT}\) in Table 5, by several hundreds of milliseconds in comparison to s30a28, s40a28 and s40.0. Relatedly, the enclosed baryon masses at the moment of PNS collapse, \(M_{\rm collapse}\), are higher for s25a28 in comparison to all other stellar progenitors. For all runs launched from s25a28, the enclosed mass exceeds the maximum mass of the RDF EOS at the onset of the PNS collapse. Hence, the remnants collapse into black holes. The only exception is RDF-1.9, for which the phase transition onset occurs as early as 226 ms after the core bounce, with the enclosed mass below the maximum mass of RDF-1.9 (see Table 5).
In the following analysis, we distinguish between exploding and failed models. As an example for the former, Fig. 2 (top panel) shows the radial profiles of the velocity at selected post-collapse times for s30a28 RDF-1.2. All other exploding models qualitatively agree with the evolution of this one. The evolution shown in Fig. 2 (top panel) corresponds to the shock break out. The latter is characterized by the rapid shock expansion, on the order of a few milliseconds, to increasingly
Figure 2: Early shock evolution for a representative explosion model (top panel) and two failed models, separated into prompt black hole formation (middle panel) and delayed (bottom panel), showing radial profiles of the velocity \(u\), in units of the speed of light \(c\), and lapse function \(\alpha\).
larger radii, thereby taking over and merging with the standing bounce shock and reaching relativistic velocities on the order of the speed of light. Furthermore, the shock expands to radii on the order of several \(10^{4}\) km on a timescale of a few hundred milliseconds (see also the right panel in Fig. 5 in Appendix B). Note that initially massive quark cores form, with \(M_{\rm quark}\simeq\) 1.5-1.8 M\({}_{\odot}\) (see also the left panel in Fig. 5 in Appendix B). This is well above the cold neutron star onset masses for the hybrid branches (see Table 4 in Appendix A), due to the high temperatures reached in the PNS interiors, on the order of \(50\)-\(60\) MeV. During the later evolution of the PNS deleptonization, on the order of several tens of seconds, the core temperatures decrease, and \(M_{\rm quark}\) will reduce towards the cold neutron star onset masses for the hybrid branches.
Values of the diagnostic explosion energy, \(E_{\rm explosion}\), are listed in Table 4 for all explosion runs. These are computed following the standard procedure (c.f. Fischer et al., 2010, and references therein), where the radial profiles of the total specific energy are integrated from the stellar surface towards the center. Note that here \(E_{\rm explosion}\) contains the contributions from the stellar progenitor envelopes. Therefore, the stellar envelopes are matched carefully with the SN simulation domains, which are considered up to on the order of \(10^{5}\) km for all progenitors under investigation. The values for the diagnostic explosion energy are obtained at asymptotically late times on the order of a few seconds after the explosion onset, however, excluding the later evolution of the PNS deleptonization.
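For orientation, a common form of this diagnostic, consistent with the integration described above, is the mass integral of the total specific energy over the region where it is positive,
\[E_{\rm explosion}\simeq\int_{e_{\rm tot}>0}e_{\rm tot}\,{\rm d}m\,,\qquad e_{\rm tot}=e_{\rm int}+e_{\rm kin}+e_{\rm grav}\,,\]
with internal, kinetic and gravitational contributions to the total specific energy. The precise selection criteria differ slightly between authors, so this expression should be read as a sketch rather than as the exact definition implemented in the code.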
Representative cases of black hole formation are shown in Fig. 2 (middle and bottom panels). However, unlike in previous studies of failed SN associated with the QCD phase transition (c.f. Zha et al., 2021; Jakobus et al., 2022), we identify two different scenarios of black hole formation: one, prompt collapse (middle panel in Fig. 2) before the possible shock breakout, and two, delayed black hole formation (bottom panel in Fig. 2). Table 5 in Appendix B marks the two cases of black hole formation with different labels, in order to distinguish them. In the former case, illustrated here at the example of the RDF-1.1 EOS and s25a28, the enclosed mass exceeds the critical mass already at the onset of the PNS collapse initiated due to the formation of a massive quark matter core. This is shown via the lapse function, \(\alpha\), decreasing below \(\alpha\leq 0.2\) before positive shock velocities could be obtained and before the shock could break out from the core. In the case of delayed black hole formation, illustrated for the case s30a28 RDF-1.6, the second shock wave accelerates to larger radii with relativistic positive matter velocities for several tens of milliseconds, before the PNS collapses and the black hole appears, i.e. before the lapse function also decreases below \(\alpha\leq 0.2\) for these cases. Note that in several cases the second shock takes over the SN bounce shock, reaching radii on the order of more than 100 km.
It is interesting to note that SN simulations launched from s40.0 behave qualitatively similarly to those launched from s40a28. However, only RDF-1.8 and RDF-1.9 result in explosions; while RDF-1.2 leads to an explosion for s40a28, it belongs to the failed branch for s40.0, more precisely to the delayed scenario in which the expanding second shock does take over the bounce shock. The reason for the different evolution between s40.0 and s40a28 is the larger enclosed mass and the somewhat faster growth of the PNS mass, as is illustrated in Fig. 1. This is caused by a higher late-time post-bounce mass accretion rate for s40.0 prior to the phase transition, due to a slightly higher density in the silicon-sulfur layer of the progenitor. These findings demonstrate the sensitivity of the QCD phase transition SN explosion mechanism to the stellar progenitor.
## 5 Neutrino Signal
Unlike the \(\nu_{e}\) deleptonization burst released shortly after the stellar core bounce, the second neutrino burst, associated with the PNS collapse and the formation of the second shock, is present in all neutrino flavors. It is dominated by \(\bar{\nu}_{e}\), which provides ideal prospects for their detection (c.f. Dasgupta et al., 2010; Fischer et al., 2018; Pitik et al., 2022). Here
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multicolumn{1}{c}{ Progenitor} & RDF EOS & \(t_{\rm burst}^{\rm{1}}\) & \(L_{e_{\rm e},{\rm peak}}^{\rm{2}}\) & \(E_{\rm expl}^{\rm{3}}\) \\ & & [s] & [\(10^{53}\) erg s\({}^{-1}\)] & [\(10^{51}\) erg] \\ \hline s25a28 & 1.9 & 0.345 & 6.36 & 5.54 \\ s30a28 & 1.2 & 1.056 & 4.80 & 2.21 \\ s30a28 & 1.8 & 0.833 & 5.64 & 2.36 \\ s30a28 & 1.9 & 0.580 & 8.30 & 4.15 \\ s40a28 & 1.2 & 0.895 & 4.15 & 2.14 \\ s40a28 & 1.8 & 0.717 & 2.06 & 1.71 \\ s40a28 & 1.9 & 0.491 & 4.25 & 4.01 \\ s40.0 & 1.8 & 0.694 & 5.61 & 2.87 \\ s40.0 & 1.9 & 0.443 & 8.50 & 4.88 \\ \hline u50\({}^{4}\) & 1.1 & 1.227 & 3.90 & 2.3 \\ u50\({}^{5}\) & 1.2 & 0.819 & 5.37 & 3.8 \\ \hline s75.0\({}^{6}\) & 1.2 & 1.803 & 3.06 & 1.0 \\ \hline \end{tabular} \({}^{1}\) post-bounce time of the second neutrino burst release, sampled in the co-moving frame at 500 km, when \(L_{\nu_{e}}>10^{53}\) erg s\({}^{-1}\)
\({}^{2}\) maximum value of the \(\bar{\nu}_{e}\) luminosity in the burst
\({}^{3}\) diagnostic explosion energy
\({}^{4}\) data from Fischer et al. (2018), launched from 50 M\({}_{\odot}\) progenitor (Umeda & Nomoto, 2008)
\({}^{5}\) data from Fischer et al. (2020), launched from 50 M\({}_{\odot}\) progenitor (Umeda & Nomoto, 2008)
\({}^{6}\) data from Fischer (2021), launched from 75 M\({}_{\odot}\) progenitor (Woosley et al., 2002)
\end{table}
Table 2: Summary of the explosion models.
we want to focus on these second neutrino bursts and derive connections between the intrinsic properties of these bursts and both the explosion dynamics and the EOS. SN simulation results for the neutrino signals, luminosities and average energies, are illustrated in Fig. 4 in Appendix B for s30a28 and s40a28 as representative exploding models.
The further analysis of the large sample of exploding models, including results from previous publications (see Fischer et al., 2018, 2020; Fischer, 2021), reveals several linear relations between observables and EOS quantities, as shown in Fig. 3. These include the relation between the explosion energy \(E_{\rm explosion}\) and the peak of the \(\bar{\nu}_{e}\)-luminosity \(L_{\bar{\nu}_{e},{\rm peak}}\), shown in Fig. 3(b). Further linear relations have been found to hold empirically between the onset density for the PNS collapse \(\rho_{\rm collapse}\) and the post-bounce time for the release of the second neutrino burst \(t_{\rm burst}\), shown in Fig. 3(a), as well as between the onset density for the PNS collapse and the peak \(\bar{\nu}_{e}\)-luminosity, shown in Fig. 3(c),
\[\rho_{\rm collapse}\simeq c_{1}\,t_{\rm burst}+d_{1}\,, \tag{1}\]
\[L_{\bar{\nu}_{e},{\rm peak}}\simeq c_{2}\,E_{\rm explosion}+d_{2}\,, \tag{2}\]
\[\rho_{\rm collapse}\simeq c_{3}\,L_{\bar{\nu}_{e},{\rm peak}}+d_{3}\,, \tag{3}\]
with the coefficients listed in Table 3.
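As a simple illustration of how these relations can be used, the Python snippet below evaluates Eq. (3) with the coefficients of Table 3 to estimate the PNS collapse density from a given peak \(\bar{\nu}_{e}\) luminosity; the units assumed for the coefficients follow the conventions of Tables 2 and 3, and the numerical example is for illustration only.

```python
# Coefficients from Table 3; densities in 1e14 g cm^-3, times in s,
# luminosities in 1e53 erg s^-1, explosion energies in 1e51 erg (assumed units).
c1, d1 = 1.304, 3.922    # rho_collapse(t_burst),   Eq. (1)
c2, d2 = 1.059, 1.903    # L_peak(E_explosion),     Eq. (2)
c3, d3 = -0.172, 5.889   # rho_collapse(L_peak),    Eq. (3)

def rho_collapse_from_L_peak(L_peak_1e53):
    """Estimate the PNS collapse density [1e14 g cm^-3] from the observed
    peak anti-nu_e luminosity [1e53 erg s^-1] via Eq. (3)."""
    return c3 * L_peak_1e53 + d3

# Example: a burst peaking at 8.30e53 erg s^-1 (cf. s30a28 RDF-1.9 in Table 2)
print(rho_collapse_from_L_peak(8.30))   # ~4.46, vs. the simulated 4.27 (Table 5)
```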
A direct connection between the release of the second neutrino burst and the hybrid EOS is the correlation between the post-bounce time for the emission and the onset density. EOS with a high onset density, such as RDF-1.1 and RDF-1.2, feature a late PNS collapse and hence a late second neutrino burst release, whereas the opposite holds for EOS with a low onset density, such as RDF-1.9. The RDF-1.8 EOS lies somewhat in between RDF-1.2 and RDF-1.9, as shown in Fig. 3(a). The actual values for the onset density \(\rho_{\rm onset}\) and the PNS collapse density \(\rho_{\rm collapse}\) are listed in Table 5 in Appendix B, and the post-bounce times for the release of the burst \(t_{\rm burst}\) are given in Table 2. This linear dependence can be understood since the mass accretion rate, which determines the post-bounce evolution modulo the compressibility of the hadronic EOS, depends linearly on density.
In order to determine this correlation quantitatively, one has to take the progenitor model dependence into account. Comparing s30a28, s40a28 and s40.0, even though this correlation holds qualitatively, the appearance of the second bursts is shifted by several hundreds of milliseconds, despite similar central PNS densities for the hadron-quark phase transition as well as the PNS collapse (see Table 5 in Appendix B). The reason is the different post-bounce evolution prior to the phase transition. Relevant here is the higher temperature for
Figure 3: Linear correlations between the onset density for the PNS collapse \(\rho_{\rm collapse}\) and the post-bounce time of the release of the second neutrino burst \(t_{\rm burst}\) in graph (a), between the peak of the \(\bar{\nu}_{e}\) luminosity \(L_{\bar{\nu}_{e},{\rm peak}}\) and the diagnostic explosion energy \(E_{\rm explosion}\) in graph (b), and between \(\rho_{\rm collapse}\) and \(L_{\bar{\nu}_{e},{\rm peak}}\) in graph (c), for all exploding models under investigation (see text for further details).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dependencies & label & \(c\) & \(d\) \\ \hline \(\rho_{\rm collapse}(t_{\rm burst})\) & 1 & 1.304 & 3.922 \\ \(L_{\bar{\nu}_{e},{\rm peak}}(E_{\rm explosion})\) & 2 & 1.059 & 1.903 \\ \(\rho_{\rm collapse}(L_{\bar{\nu}_{e},{\rm peak}})\) & 3 & -0.172 & 5.889 \\ \hline \end{tabular}
\end{table}
Table 3: Linear fitting parameters for Eqs. (1)–(3).
s40a28 and the rather strong temperature dependence of the onset densities for the phase transition within the RDF EOS (c.f. Bastian, 2021), i.e. the onset density shifts towards lower values for increasing temperatures. Consequently, the second neutrino burst is launched earlier for models that have a higher post-bounce mass accretion rate, i.e. a higher density in the silicon-sulfur layer, prior to the onset of the phase transition.
Comparing the \(\bar{\nu}_{e}\) luminosity peaks of the second neutrino burst with the diagnostic explosion energies, both listed in Table 2 and plotted in Fig. 3(b), a linear correlation is found as well. It is related to the velocities of the second shock: models with a high(low) explosion energy correspond to high(low) velocities in the post-shock layer, and hence high(low) shock heating occurs. This, in turn, results in the conversion of kinetic energy into thermal energy, which is linearly related, and gives rise to a high(low) peak luminosity in the second neutrino burst.
The \(\bar{\nu}_{e}\) luminosity is selected as an indicator for the phase transition because it represents an actual observable for any future galactic event and since the second neutrino burst is dominated by \(\bar{\nu}_{e}\), i.e. in the second neutrino burst the following hierarchy holds, \(L_{\bar{\nu}_{e}}>L_{\nu_{\mu}}>L_{\nu_{e}}\), whereas for the average energies, \(\langle E_{\nu_{\mu}}\rangle>\langle E_{\bar{\nu}_{e}}\rangle>\langle E_{\nu_ {e}}\rangle\). Note that this ordering is independent of the EOS.
Note that, equivalently to the onset density for the PNS collapse, one can choose the onset density for the phase transition, as both densities are related (see Table 5). Values for the onset densities for the models u50 are taken from Fischer et al. (2018) for RDF-1.1, \(\rho_{\rm collapse}=6.2\times 10^{14}\) g cm\({}^{-3}\), from Fischer et al. (2020) for RDF-1.2, \(\rho_{\rm collapse}=5.4\times 10^{14}\) g cm\({}^{-3}\), and for s75.0 from Fischer (2021) using RDF-1.2, \(\rho_{\rm collapse}=5.6\times 10^{14}\) g cm\({}^{-3}\).
## 6 Summary and Conclusions
The microscopic hybrid matter EOS of Bastian (2021), featuring a first-order hadron-quark phase transition, are employed in general relativistic neutrino radiation hydrodynamics simulations of core-collapse supernovae, exploring the dependence on the underlying EOS and the stellar models. Progenitors are selected from two different stellar evolution calculations, with ZAMS masses of 25-40 M\({}_{\odot}\) and with differences in the core structures, in particular the density in the silicon-sulfur layers, resulting in different post-bounce evolutions. This has a direct impact on the appearance of the QCD phase transition.
Models with a low(high) post-bounce mass accretion rate result in a slow(fast) growth of the enclosed PNS mass. This aspect is critical. For most EOS under investigation, it results in conditions in which the enclosed PNS mass exceeds the maximum mass given by the hybrid EOS at the moment when the PNS becomes gravitationally unstable due to entering the hadron-quark coexistence region. The latter is characterized by a substantially reduced adiabatic index. Consequently, the PNS collapse results in black hole formation, for which two scenarios are found: one, prompt collapse, in which the black hole forms before the second shock breaks out, and two, delayed collapse, with the expansion of the second shock before the central PNS collapses into a black hole. Note that in general it is confirmed that there is a delay between the onset of the phase transition and the subsequent PNS collapse of up to several 100 ms.
In cases when the enclosed PNS mass remains below the maximum mass, stable PNS remnants are obtained after the PNS collapse, now featuring a quark matter core. These are initially very massive, exceeding 1.5 M\({}_{\odot}\) for all EOS under investigation due to the high temperatures. Later, during the PNS deleptonization phase, when the temperature decreases due to the emission of neutrinos of all flavors, the quark core masses approach the cold \(\beta\)-equilibrium limits.
Note that positive values for the diagnostic explosion energy are obtained only after the successful shock expansion, in some cases on the order of several hundreds of milliseconds after the PNS collapse (see Fig. 5 in Appendix B). It remains to be shown whether positive explosion energies might be obtained for the failed models that belong to the delayed scenario, for which it will be required to simulate beyond the appearance of the event horizon, more precisely the apparent horizon (Rahman et al., 2022). Such models might be candidates for the collapsar scenario, i.e. the presence of a black hole in the center while the explosion proceeds as the second shock wave continues to expand to increasingly larger radii. This idea has long been investigated in the context of rotating magnetized failed SN (c.f. MacFadyen and Woosley, 1999; Proga et al., 2003; Ott et al., 2011; Aloy and Obergaulinger, 2021, and references therein), often in connection with the emission of gamma-ray bursts.
For all explosion models, a second neutrino burst is released, which is absent in any other SN scenario. The exception might be the scalarization of bosonic degrees of freedom (c.f. Kuroda and Shibata, 2023). In cases of the first-order QCD phase transition, the second neutrino burst emission is associated with the propagation of the second shock across the neutrinospheres. The present paper establishes a number of novel phenomenological linear relations between signatures of the second neutrino burst, in particular for \(\bar{\nu}_{e}\), and the explosion dynamics as well as the EOS. It enables the determination of the onset density for quark matter from a future observation of the next galactic SN neutrino signal, from the peak of the \(\bar{\nu}_{e}\) luminosity. Conversely, the absence of a second neutrino burst will provide a lower bound for the onset density of quark matter, i.e. providing a constraint for the QCD phase diagram for SN matter featuring \(Y_{e}\simeq 0.2-0.3\), with \(\rho_{\rm onset}\gtrsim 4\times 10^{14}\) g cm\({}^{-3}\) and in the temperature range of \(T\simeq 40\pm 10\) MeV. The present analysis covers a wide range of possible degeneracies. It includes the post-bounce mass
accretion phase, due to different stellar models, the hadronic EOS in terms of the DD2 RMF EOS variations DD2F and DD2Fev, both of which represent softer EOS than DD2, and yet incompletely understood aspects of the quark matter EOS such as onset densities and the magnitude of the density jump as a consequence of the phase transition construction. There are several further aspects, such as different quark matter descriptions (c.f. Blaschke et al., 2005; Ruster et al., 2005; Ivanytskyi and Blaschke, 2022) as well as other phase transition constructions (Maslov et al., 2019), which extend beyond the scope of the present study and are left for future explorations. Note that here \(\bar{\nu}_{e}\) are selected as observable since they have the best prospects for detection through the current generation of operating water-Cherenkov detectors via inverse beta-decay.
The present analysis demonstrates that multi-messenger SN observables are an alternative route to constrain the dense matter EOS. It is complementary to the planned heavy-ion collider programs FAIR (Facility for Antiproton and Ion Research) at GSI in Darmstadt (Germany) and NICA (Nuclotron-based Ion Collider fAcility) in Dubna (Russia), which aim to probe the QCD phase diagram in the baryon-rich regime, however at somewhat lower isospin asymmetry.
The authors acknowledge support from the Polish National Science Centre (NCN) under grant number 2020/37/B/ST9/00691 (T.F., N.K.L). All computer simulations were performed at the Wroclaw Center for Scientific Computing and Networking (wcss.pl). This publication is based upon work from COST Action COSMIC WISPers CA21106, supported by COST (European Cooperation in Science and Technology).
## Appendix A EOS
Different representations of the original hadronic DD2 EOS from Typel et al. (2010) have been implemented for the phase transition construction of the RDF quark matter model. The flow-constraint corrected DD2 EOS (see Danielewicz et al., 2002), known as DD2F (Alvarez-Castillo et al., 2016), is used for RDF-1.1 and RDF-1.3 to 1.7. The DD2F EOS experiences a substantial softening of the high-density phase, in excess of about twice nuclear saturation density, compared to DD2. For RDF-1.2 an excluded volume modification, DD2Fev, is used. The latter results in the deviation from the DD2F EOS at densities in excess of about \(\rho\simeq 3\times 10^{14}\) g cm\({}^{-3}\). For the RDF-1.8 and RDF-1.9 models, both of which feature the lowest onset densities for the first-order phase transition, the original stiff DD2 EOS was employed.
The phase transition construction for this two-phase approach, from the hadronic DD2/DD2F/DD2Fev EOS to the quark matter RDF EOS, is modeled via the assumption of mechanical equilibrium, realized through equal pressures in both phases, and chemical phase equilibrium, in which the chemical potentials are equal in both phases. This is known as the Maxwell construction. However, under supernova conditions there are two conserved currents, namely the baryon and the charge numbers, given in terms of the rest-mass density (the following relation holds between rest-mass density and baryon number density, \(\rho=m_{\rm B}n_{\rm B}\), for which we assume in AGILE-BOLTZTRAN \(m_{\rm B}=938\) MeV) and the leptonic charge fraction, under the assumption of charge neutrality, \((\rho,Y_{e})\). Hence, we are dealing with two associated chemical potentials, \((\mu_{\rm B},\mu_{\rm Q})\). For practical purposes, the phase transition construction is performed for a constant value of the charge chemical potential, in order to determine pressure equilibrium with respect to the baryon chemical potential. However, since the multi-purpose astrophysical hadronic EOS are provided in terms of densities, they are first mapped from the densities \((\rho,Y_{e})\) to the chemical potentials \((\mu_{\rm B},\mu_{\rm Q})\) for the calculation of the phase transition construction, and later mapped back from chemical potentials to densities. This introduces truncation errors that must be monitored carefully. Furthermore, the Maxwell conditions result in a jump of all thermodynamic quantities, from the onset of the phase transition in the hadronic phase to the quark matter phase. However, in astrophysical simulations, data must also be provided in the density domain in between the hadronic and quark matter phases. Therefore, a quark matter volume fraction, denoted as \(\chi\), has been defined (for details, see Bastian, 2021), based on which a hadron-quark mixed phase is designed. Typical for a first-order phase transition is the sudden slope change, i.e. the sudden drop of the polytropic index at the transition from the hadronic to the mixed phase. This feature is particularly present for the zero-temperature EOS, while it is substantially milder for the finite temperature, more precisely the finite entropy case, explored here at \(s=3\). Selected properties of the phase transition for all 9 RDF EOS are listed in Table 4 in Appendix A. RDF EOS sets with a particularly low onset density are RDF-1.8 and RDF-1.9, which are achieved due to a low linear vector coupling for RDF-1.8 as well as lacking higher-order vector repulsion for RDF-1.9 (further details can be found in Bastian, 2021). From these data, it becomes evident that there is a strong temperature dependence of the phase transition, in particular of the onset density. Note that the temperatures for the different EOS reach values on the order of \(40\)-\(60\) MeV. This holds especially for the two
RDF EOS with a low onset density at \(T=0\), RDF-1.8 and RDF-1.9, whose onset densities are on the order of nuclear saturation density for \(s=3~{}k_{\rm B}\). Table 4 also lists the maximum masses for all EOS, from which it becomes evident that the effect of finite temperature enhances the maximum masses slightly, with only a few exceptions, for the entropy explored here as a representative value. This is in agreement with what has been reported previously, also based on the simplistic thermodynamic bag model EOS (c.f. Khosravi Largani et al., 2022, and references therein).
Relevant for the supernova applications is not only the dependence on temperature but also that on \(Y_{e}\). The latter is modulated within the string-flip model due to the inclusion of the \(\rho\)-meson equivalent term, for which the parameters are chosen to minimize the dependence on \(Y_{e}\). This has been discussed in great detail in Bastian (2021). A consequence is, e.g., that the phase boundaries for the onset of the phase transition are virtually independent of \(Y_{e}\).
Weak interactions in quark matter must be considered. Therefore, we reconstruct baryonic degrees of freedom, using the following relations between up- and down-quark chemical potentials, (\(\mu_{\rm u}\)) and (\(\mu_{\rm d}\)) respectively, and the baryon and charge chemical potentials,
\[\mu_{\rm B}=\mu_{\rm u}+2\mu_{\rm d}\,\qquad\mu_{\rm Q}=\mu_{\rm u}-\mu_{\rm d}\.\] (A1)
This approximation is justified here since neutrinos in the quark matter phase are completely trapped for the entire simulation time considered in the present study. It has been applied in all previous supernova studies of quark matter (Sagert et al., 2009; Fischer et al., 2011; Zha et al., 2020; Kuroda et al., 2022; Fischer, 2021). Only \(\mathcal{O}(10~{\rm s})\) after the explosion onset will neutrinos begin to decouple in the quark matter phase (c.f. Fischer et al., 2020), after which this simplification can no longer be applied and neutrino opacities in quark matter must be employed for any prediction of the PNS evolution, in particular for the neutrino signal and the associated deleptonization.
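For completeness, inverting Eq. (A1) gives the individual quark chemical potentials in terms of the baryon and charge chemical potentials,

\[\mu_{\rm u}=\frac{1}{3}\left(\mu_{\rm B}+2\mu_{\rm Q}\right)\,,\qquad\mu_{\rm d}=\frac{1}{3}\left(\mu_{\rm B}-\mu_{\rm Q}\right)\,.\]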
## Appendix B Summary of the supernova simulation results
Table 5 lists the post-bounce times for the onset of the QCD phase transition \(t_{\rm PT}\) and the onset of the PNS collapse \(t_{\rm collapse}\), which is delayed by \(\sim\)100 ms for some EOS. Further listed is the enclosed PNS mass at the onset of collapse \(M_{\rm collapse}\) and the corresponding central density \(\rho_{\rm collapse}\) as well as the time delay for shock break out \(\triangle t_{\rm breakout}\) together with the remnant masses \(M_{\rm remnant}\), determined at the moment of black hole formation for the failed models and at asymptotically late times on the order of several seconds after the explosion onset for the exploding runs.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & EOS & condition & \(\rho_{\rm onset}^{2}\) & \(\rho_{\rm final}^{3}\) & \(M_{\rm onset}^{4}\) & \(M_{\rm max}\) \\ Hadronic & Quark & & \([10^{14}\ {\rm g~{}cm^{-3}}]\) & \([10^{14}\ {\rm g~{}cm^{-3}}]\) & \([\rm M_{\odot}]\) & \([\rm M_{\odot}]\) & \([\rm M_{\odot}]\) \\ \hline DD2F & RDF-1.1 & \(T=0\) & 8.8 & 10.5 & 1.55 & 2.13 \\ DD2F & RDF-1.1 & \(s=3~{}k_{\rm B}\) & 6.1 & 10.0 & 1.64 & 2.12 \\ \hline DD2Fev & RDF-1.2 & \(T=0\) & 7.3 & 8.6 & 1.35 & 2.15 \\ DD2Fev & RDF-1.2 & \(s=3~{}k_{\rm B}\) & 4.9 & 7.8 & 1.45 & 2.17 \\ \hline DD2F & RDF-1.3 & \(T=0\) & 9.0 & 10.4 & 1.56 & 2.02 \\ DD2F & RDF-1.3 & \(s=3~{}k_{\rm B}\) & 6.0 & 9.5 & 1.63 & 2.03 \\ \hline DD2F & RDF-1.4 & \(T=0\) & 9.7 & 11.0 & 1.66 & 2.02 \\ DD2F & RDF-1.4 & \(s=3~{}k_{\rm B}\) & 6.2 & 9.9 & 1.69 & 2.02 \\ \hline DD2F & RDF-1.5 & \(T=0\) & 8.2 & 9.9 & 1.46 & 2.03 \\ DD2F & RDF-1.5 & \(s=3~{}k_{\rm B}\) & 5.5 & 9.5 & 1.57 & 2.04 \\ \hline DD2F & RDF-1.6 & \(T=0\) & 9.0 & 11.0 & 1.58 & 2.00 \\ DD2F & RDF-1.6 & \(s=3~{}k_{\rm B}\) & 6.1 & 10.2 & 1.67 & 2.01 \\ \hline DD2F & RDF-1.7 & \(T=0\) & 8.9 & 9.8 & 1.61 & 2.11 \\ DD2F & RDF-1.7 & \(s=3~{}k_{\rm B}\) & 5.4 & 8.6 & 1.57 & 2.12 \\ \hline DD2 & RDF-1.8 & \(T=0\) & 4.8 & 8.6 & 0.95 & 2.04 \\ DD2 & RDF-1.8 & \(s=3~{}k_{\rm B}\) & 3.8 & 8.4 & 1.35 & 2.09 \\ \hline DD2 & RDF-1.9 & \(T=0\) & 4.6 & 7.5 & 0.81 & 2.16 \\ DD2 & RDF-1.9 & \(s=3~{}k_{\rm B}\) & 3.3 & 7.4 & 1.25 & 2.21 \\ \hline \end{tabular} \({}^{1}\) Data from Khosravi Largani et al. (2022)
\({}^{2}\) Onset density for the phase transition at a non-negligible value of the quark volume fraction
\({}^{3}\) Density for reaching the pure quark matter phase
\({}^{4}\) Onset mass of the hybrid branches
\end{table}
Table 4: Thermodynamic properties of the RDF class of hadron-quark hybrid EOS, assuming \(\beta\)-equilibrium for the zero-temperature configurations and constant electron lepton number of \(Y_{\rm L}=0.3\) for the finite entropy cases\({}^{1}\).
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{Progenitor}} & \multicolumn{1}{c}{EOS} & \(t_{\rm PT}^{1}\) & \(t_{\rm collapse}^{2}\) & \(M_{\rm collapse}^{3}\) & \(\rho_{\rm collapse}^{4}\) & \(\triangle t_{\rm breakout}^{5}\) & \(M_{\rm remnant}^{6}\) \\ & \multicolumn{1}{c}{hadronic} & quark & [s] & [s] & [M\({}_{\odot}\)] & [\(10^{14}\) g cm\({}^{-3}\)] & [ms] & [M\({}_{\odot}\)] \\ \hline s25a28\({}^{\dagger}\) & DD2F & RDF-1.1 & 0.780 & 0.879 & 2.22 & 6.08 & 2.16 & 2.22 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.2 & 0.540 & 0.621 & 2.13 & 5.18 & 4.17 & 2.13 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.3 & 0.782 & 0.823 & 2.20 & 6.00 & 0.94 & 2.20 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.4 & 0.845 & 0.889 & 2.22 & 6.07 & 4.89 & 2.22 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.5 & 0.774 & 0.774 & 2.18 & 5.63 & 1.01 & 2.18 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.6 & 0.845 & 0.899 & 2.23 & 6.14 & 2.58 & 2.23 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.7 & 0.688 & 0.743 & 2.17 & 5.59 & 1.59 & 2.17 \\ s25a28\({}^{\dagger}\) & DD2F & RDF-1.8 & 0.412 & 0.540 & 2.10 & 4.48 & 3.73 & 2.10 \\ s25a28\({}^{\ast}\) & DD2 & RDF-1.9 & 0.226 & 0.323 & 1.99 & 4.21 & 4.47 & 1.98 \\ \hline \hline s30a28\({}^{\ddagger}\) & DD2F & RDF-1.1 & 1.393 & 1.533 & 2.04 & 5.93 & 0.80 & 2.03 \\ s30a28\({}^{\ast}\) & DD2F & RDF-1.2 & 0.914 & 1.054 & 1.92 & 5.79 & 1.45 & 1.91 \\ s30a28\({}^{\ddagger}\) & DD2F & RDF-1.3 & 1.394 & 1.429 & 2.02 & 5.86 & 2.03 & 2.02 \\ s30a28\({}^{\ddagger}\) & DD2F & RDF-1.4 & 1.395 & 1.623 & 2.06 & 6.24 & 0.85 & 2.06 \\ s30a28\({}^{\dagger}\) & DD2F & RDF-1.5 & 1.211 & 1.358 & 2.00 & 5.63 & 1.93 & 2.00 \\ s30a28\({}^{\ddagger}\) & DD2F & RDF-1.6 & 1.394 & 1.599 & 2.05 & 5.92 & 2.01 & 2.05 \\ s30a28\({}^{\ddagger}\) & DD2F & RDF-1.7 & 1.184 & 1.302 & 1.99 & 2.81 & 0.75 & 1.99 \\ s30a28\({}^{\ast}\) & DD2 & RDF-1.8 & 0.700 & 0.830 & 1.87 & 4.79 & 1.75 & 1.85 \\ s30a28\({}^{\ddagger}\) & DD2 & RDF-1.9 & 0.354 & 0.578 & 1.81 & 4.27 & 1.34 & 1.78 \\ \hline \hline s40a28\({}^{\ddagger}\) & DD2F & RDF-1.1 & 1.181 & 1.286 & 2.08 & 6.09 & 0.95 & 2.08 \\ s40a28\({}^{\ddagger}\) & DD2F & RDF-1.2 & 0.772 & 0.893 & 1.98 & 5.68 & 1.35 & 1.96 \\ s40a28\({}^{\ddagger}\) & DD2F & RDF-1.3 & 1.211 & 1.223 & 2.07 & 5.84 & 1.59 & 2.07 \\ s40a28\({}^{\ddagger}\) & DD2F & RDF-1.4 & 1.139 & 1.362 & 2.10 & 6.08 & 1.92 & 2.10 \\ s40a28\({}^{\dagger}\) & DD2F & RDF-1.5 & 1.079 & 1.116 & 2.04 & 5.54 & 2.51 & 2.04 \\ s40a28\({}^{\dagger}\) & DD2F & RDF-1.6 & 1.184 & 1.370 & 2.10 & 6.12 & 1.82 & 2.10 \\ s40a28\({}^{\ddagger}\) & DD2F & RDF-1.7 & 1.002 & 1.083 & 2.04 & 5.50 & 2.17 & 2.03 \\ s40a28\({}^{\ast}\) & DD2 & RDF-1.8 & 0.477 & 0.713 & 1.92 & 4.54 & 3.31 & 1.90 \\ s40a28\({}^{\ast}\) & DD2 & RDF-1.9 & 0.224 & 0.487 & 1.85 & 4.54 & 2.75 & 1.83 \\ \hline \hline s40.0\({}^{\dagger}\) & DD2F & RDF-1.1 & 1.175 & 1.208 & 2.11 & 6.30 & 2.00 & 2.10 \\ s40.0\({}^{\ddagger}\) & DD2F & RDF-1.2 & 0.626 & 0.911 & 2.02 & 5.59 & 3.30 & 2.02 \\ s40.0\({}^{\dagger}\) & DD2F & RDF-1.3 & 1.120 & 1.181 & 2.10 & 5.82 & 1.93 & 2.10 \\ s40.0\({}^{\dagger}\) & DD2F & RDF-1.4 & 1.151 & 1.270 & 2.12 & 5.92 & 1.29 & 2.12 \\ s40.0\({}^{\dagger}\) & DD2F & RDF-1.5 & 0.985 & 1.074 & 2.07 & 5.55 & 3.95 & 2.07 \\ s40.0\({}^{\dagger}\) & DD2F & RDF-1.6 & 1.151 & 1.270 & 2.12 & 5.92 & 1.78 & 2.12 \\ s40.0\({}^{\ddagger}\) & DD2F & RDF-1.7 & 0.984 & 1.030 & 2.07 & 5.54 & 2.08 & 2.06 \\ s40.0\({}^{\ast}\) & DD2 & RDF-1.8 & 0.439 & 0.690 & 1.97 & 4.50 & 1.77 & 1.95 \\ s40.0\({}^{\ast}\) & DD2 & RDF-1.9 & 0.206 & 0.438 & 
1.90 & 4.50 & 3.00 & 1.88 \\ \hline \hline \end{tabular} \({}^{1}\) post bounce time for the phase transition to occur
\({}^{2}\) post bounce time for PNS collapse
\({}^{3}\) enclosed baryon mass at the onset of PNS collapse
\({}^{4}\) central density at the onset of PNS collapse
\({}^{5}\) time delay relative to \(t_{\rm collapse}\) for shock break out or black hole formation
\({}^{6}\) enclosed PNS remnant baryon mass after the phase transition evaluated at a density of \(10^{11}\) g cm\({}^{-3}\) at the end of the supernova simulations
\({}^{\dagger}\) prompt black hole formation
\({}^{\ddagger}\) delayed black hole formation
\({}^{\ast}\) explosion model
\end{table}
Table 5: Summary of the DD2/DD2F/DD2Fev-RDF EOS Supernova simulations.
Figure 4: Neutrino luminosity \(L_{\nu}\) (top panels) and average energy \(\langle E_{\nu}\rangle\) (bottom panels) evolution with respect to the post bounce time of the release of the second neutrino burst \(t_{\rm burst}\), for the explosion models of s30a28 in graph (a) and s40a28 in graph (b), distinguishing \(\nu_{e}\), \(\bar{\nu}_{e}\) and collectively \(\nu_{x}\) as representative for all heavy lepton neutrinos, comparing the RDF-1.2 (blue lines), 1.8 (red lines) and 1.9 (green lines) hybrid EOS. The quantities are sampled in the co-moving frame of reference at a radius of 500 km.
Furthermore, as a representative case for an exploding model, the left panel of Fig. 5 shows the evolution of the diagnostic explosion energy \(E_{\rm explosion}\) and the enclosed PNS mass as well as the quark core mass. The right panel shows the entire evolution for this run, from core bounce up to 5 s, with the entropy per particle color coded. Also marked are the shock locations, bounce shock and second shock, as well as the PNS surface, defined where \(\rho=10^{11}\) g cm\({}^{-3}\).
Figure 4 shows the evolution of the neutrino luminosities (top panels) and average energies (bottom panels) for all neutrino flavors, where \(\nu_{x}\) denotes collectively \(\nu_{\mu/\tau}\) while \(\bar{\nu}_{\mu/\tau}\) are omitted here for simplicity, for s30a28 (left panel) and s40a28 (right panel), as representative cases for all exploding models. The times are gauged to the post-bounce time for the release of the second neutrino burst, denoted as \(t_{\rm burst}\), for which the values are listed in Table 2 for all exploding models under investigation. These also include those of previous works, such as simulations with the RDF-1.1 and RDF-1.2 EOS launched from the 50 M\({}_{\odot}\) progenitor of Umeda & Nomoto (2008), published in Fischer et al. (2018) and Fischer et al. (2020), as well as simulations published in Fischer (2021) launched from the solar metallicity progenitor with ZAMS mass of 75 M\({}_{\odot}\) from the stellar evolution series of Woosley et al. (2002).
The evolution of the neutrino fluxes and average energies shows the behavior that was reported previously from simulations of SN explosions that feature a first-order QCD phase transition, i.e. the canonical post-bounce mass accretion prior to the phase transition, with high luminosities on the order of a few times \(10^{52}\) erg s\({}^{-1}\) for all flavors, and the sudden rise of the luminosities of all flavors due to the passage of the second shock across the neutrinospheres shortly after the PNS collapse, as a direct consequence of the phase transition. The second burst is present in all flavors, however dominated by \(\bar{\nu}_{e}\) and the heavy-lepton flavors, due to the high neutron degeneracy of matter and the low value of \(Y_{e}\). Furthermore, while the magnitudes of the \(\nu_{e}\) deleptonization burst are nearly the same for all SN models, being determined by the hadronic EOS and the progenitor structure, the second bursts differ largely in both magnitude and width, as can be identified in the insets in Fig. 4. Models with an early(late) onset of the phase transition, i.e. low(high) onset densities for the phase transition, show a high(low) peak luminosity in the second burst. This aspect is the foundation for the discussion in sec. 5 that gives rise to the linear relations obtained.
## Appendix D ORCID IDS
Noshad Khosravi Largani [https://orcid.org/0000-0003-1551-0508](https://orcid.org/0000-0003-1551-0508)
Tobias Fischer [https://orcid.org/0000-0003-2479-344X](https://orcid.org/0000-0003-2479-344X)
Niels-Uwe F. Bastian [http://orcid.org/0000-0001-9793-240X](http://orcid.org/0000-0001-9793-240X)
|
2301.09856 | Macroeconomic forecasting and sovereign risk assessment using deep
learning techniques | In this study, we propose a novel approach of nowcasting and forecasting the
macroeconomic status of a country using deep learning techniques. We focus
particularly on the US economy but the methodology can be applied also to other
economies. Specifically US economy has suffered a severe recession from 2008 to
2010 which practically breaks out conventional econometrics model attempts.
Deep learning has the advantage that it models all macro variables
simultaneously taking into account all interdependencies among them and
detecting non-linear patterns which cannot be easily addressed under a
univariate modelling framework. Our empirical results indicate that the deep
learning methods have a superior out-of-sample performance when compared to
traditional econometric techniques such as Bayesian Model Averaging (BMA).
Therefore our results provide a concise view of a more robust method for
assessing sovereign risk which is a crucial component in investment and
monetary decisions. | Anastasios Petropoulos, Vassilis Siakoulis, Konstantinos P. Panousis, Loukas Papadoulas, Sotirios Chatzis | 2023-01-24T08:09:51Z | http://arxiv.org/abs/2301.09856v1 | # Macroeconomic forecasting and sovereign risk assessment using deep learning techniques
###### Abstract
In this study, we propose a novel approach to nowcasting and forecasting the macroeconomic status of a country using deep learning techniques. We focus particularly on the US economy, but the methodology can also be applied to other economies. Specifically, the US economy suffered a severe recession from 2008 to 2010 which practically breaks down conventional econometric modelling attempts. Deep learning has the advantage that it models all macro variables simultaneously, taking into account all interdependencies among them and detecting non-linear patterns which cannot be easily addressed under a univariate modelling framework. Our empirical results indicate that the deep learning methods have a superior out-of-sample performance when compared to traditional econometric techniques such as Bayesian Model Averaging (BMA). Therefore our results provide a concise view of a more robust method for assessing sovereign risk, which is a crucial component in investment and monetary decisions.
## 1 Motivation
Deep Learning models have a short history; however, they have found application in a variety of scientific fields. In particular, Deep Learning algorithms have dramatically improved the capabilities of pattern recognition (such as speech and image recognition) and forecasting, so that they offer state-of-the-art performance in fields like biology and engineering. Their structure offers the ability to adjust to streaming sequences using continuous learning algorithms, and to recognize new and evolving patterns in time series data. In addition, deep learning has proven to deal effectively with high-dimensional data. The capacity of such models to learn and adapt to new data can lead to better predictive performance in financial time series modelling problems, where non-linear relationships and observational noise often exist.
Furthermore, this new generation of statistical algorithms offers the necessary flexibility in modelling multivariate time series, as its structure includes a cascade of many layers of non-linear processing units. Deep learning networks base their functionality on the interaction of layers that mimic the abstraction and composition functionalities of the human brain. Therefore, by capturing the full spectrum of information contained in financial datasets, they are capable of exploring in depth the inherent complexity of the underlying dynamics in large and high-dimensional time series data.
In this study we explore a novel approach to now-casting and forecasting the macroeconomic status of a country using deep learning techniques. We focus particularly on the US economy, but the methodology can also be applied to other economies. Specifically, the US economy suffered a severe recession from 2008 to 2010 which practically breaks down single time series modelling attempts. Our approach can simultaneously simulate a country's key financial variables in a holistic way, under a dynamic balance sheet assumption, by utilizing deep learning algorithms. Experimental results give strong evidence that deep learning applied to financial datasets creates a state-of-the-art paradigm, which is capable of simulating real-world scenarios in a holistic and more efficient way. Deep learning has the advantage that it models all macro variables simultaneously, taking into account all interdependencies among them and detecting non-linear patterns which cannot be easily addressed under a univariate modelling framework.
The main contribution of this study is that it proposes a holistic framework for macroeconomic forecasts. Our research analysis lies at the intersection of computational finance and statistical machine learning, leveraging the unique properties and capabilities of deep learning networks to increase prediction efficacy and minimize modelling error. Under the proposed approach, forecasting of the macroeconomic situation can be heavily supported by artificial intelligence algorithms that better simulate the propagation channels across different parts of the economy, i.e. the banking system, consumer confidence and state interaction. In a nutshell, the proposed innovation lies in the use of advanced deep learning techniques for the simultaneous projection of macroeconomic variables, while benchmarking against traditional econometric methods frequently employed in financial practice (i.e. Bayesian Model Averaging).
The remainder of this study is structured as follows. In section 2, we focus on the related literature review. Section 3 describes the data collection and processing. In section 4, we provide details regarding the estimation process of the various
alternative models developed. In section 5, we compare the employed methodologies. Finally, in the concluding section 6, we summarize our findings and identify potential weaknesses or limitations, while we also discuss areas for future research extensions.
## 2 Related Work
Macroeconomic forecasting is a very useful tool effectively utilized in banking, finance, business, and other areas. Academic studies have established a close link between sovereign risk and the expected GDP growth of a country. Vice versa, a country's economic growth exhibits a significant response to sovereign risk changes driven by the interest rate and capital-flow channels [5]. Furthermore, the borrowing costs of the economy are statistically and economically affected by an increase in sovereign risk as measured through firms' credit spreads, based on a recent analysis of credit default swap data in the Eurozone [2].
Macroeconomic satellite models are also the cornerstone of bank stress testing methodologies, as they are the pipeline through which macroeconomic scenarios are converted into tangible risk factors. In this framework, macro models are used either for scenario projections under specific assumptions or for dynamic forecasting. Usually banks develop a group of satellite models, one for each macro variable, based on which projections and the mapping to risk factors are performed. Such multi-model setups usually ignore the interrelations among variables, except for the cases where a joint distribution (copula) is employed for correlation modelling. Underestimating the correlation impact may adversely affect the validity of stress test results, as second-order effects that unfold during a crisis period will be ignored and the estimated impact will deviate from realized losses.
With respect to sovereign risk assessment, the majority of the academic literature employs conventional econometric and statistical techniques to tackle the problem of government debt credit risk assessment. These methods range from simple regression models to classification and regression trees [10].
The recent developments in Machine and Deep Learning methodologies and the availability of large-scale macroeconomic data have led a number of researchers to depart from the field of traditional econometric techniques and employ novel techniques both in forecasting and now-casting macroeconomic variables. Bontempi et al. [3] provide an overview of machine learning techniques in time series forecasting by focusing on three aspects: the formalization of one-step forecasting problems as supervised learning tasks, the discussion of local learning techniques as an effective tool for dealing with temporal data, and the role of the forecasting strategy when we move from one-step to multiple-step forecasting. Atiya et al. [1] perform a large-scale comparison study of the major machine learning models for time series forecasting by applying the models to the monthly M3 time series competition data. Katris [7] benchmarks machine learning techniques against traditional econometric models in the prediction of unemployment rates.
Liao [9] used Artificial Neural Networks in time-series forecasting, combining a first-order Markov Switching Model with K-means algorithms, and found that machine learning outperformed the benchmark in time-series inflation rate forecasting. Medeiros et al. [11] showed that ML models with a large number of covariates are systematically more accurate than the benchmarks for US inflation, since they better capture potential nonlinearities between past key macroeconomic variables and inflation. Smalter [6] points out that, supplied with diverse and complex data, a machine learning model can outperform simpler time-series models, with better performance at shorter horizons. In particular, his results show that a machine learning model can identify turning points in the unemployment rate earlier than competing methods.
Kaushik et al. [8] propose a multivariate time series approach to forecast the exchange rate. Their results show that Support Vector Machines and Recurrent Neural Networks outperform widely used traditional methods of econometric forecasting such as Vector Autoregressive Models. In a similar vein, Chen et al. [4] employ a two-stage model in exchange rate forecasting where, in the first stage, a time series model generates estimates of the exchange rates whereas, in the second stage, a General Regression Neural Network is used to correct the errors of the estimates. Both empirical and trading simulation experiments suggest that the proposed hybrid approach not only produces better exchange rate forecasts but also results in higher investment returns than the single-stage models.
## 3 Data Collection and Model Structures
In order to calibrate a Deep Neural Network to a set of macroeconomic series, we have collected monthly series of macroeconomic and financial indicators for the US economy from January 1973 to December 2018. The described dataset comprises 540 observations and was split into two parts: an in-sample train dataset, comprising data pertaining to 65\% of the observations (1973-2005), and an out-of-sample dataset that comprises the remaining 35\% of the observations (2006-2018). The latter sample was used for the evaluation of the performance of all the statistical techniques implemented. In the split of the sample, the 2008-2010 crisis period was deliberately kept out of the training sample in order to challenge our model's capacity in foreseeing an exceptional structural break in the macroeconomic status. Thus, the developed system is expected to exhibit a more stable, through-the-cycle behaviour. The macroeconomic series incorporated into the network depict all relevant aspects of the economic status of a country. Namely, we include in the network the following 9 macroeconomic time series:
* GDP: Gross Domestic Product yearly growth
* DEBT: Government Debt as % of GDP
* RRE: Real Estate prices yearly growth
* UNR: Yearly change in Unemployment Rate
* INFLAT: Price inflation i.e. Consumer Price Index yearly growth
* YIELD10Y: 10 year Government bond yield changes
* GOVEXP: Government Expenses as % of GDP
* EXPORT: Exports yearly growth
* STOCKS: Annual return of S&P 500
Our aim is to effectively forecast the evolution of the abovementioned time series both under a univariate framework, by employing a Bayesian Model Averaging model, and under a multivariate framework employing Deep Neural Networks that takes into account simultaneous interdependencies. In the next step we benchmark the forecasting performance of the univariate vs the multivariate framework, both under a static and a dynamic view. As the explanatory set for each of the 9 macro series we employ 1 to 2 yearly lags of both the dependent variable and the rest of the macro series, based on the assumption that macro and financial variable ratios carry all the information necessary to describe and predict the macroeconomic state of a country. We also include as independent variables, in current and lagged values, a set of variables which account for the effect of the banking system on the real economy, namely
* DEP: Bank Deposits yearly growth
* LOAN: Commercial Loans yearly growth
* INTLOAN: Interbank loan yearly growth
This is a crucial supplement, as the interlinkage between the banking sector and economic activity is both empirically and theoretically justified. Confidence of the public and the market in the banking sector will lead to liquidity provision through the disposition of customer deposits and interbank funding, which in turn will be channelled into commercial loans that fuel economic activity. On the other hand, low banking confidence leads to deposit outflows and shrinkage of the interbank market, rendering problematic the financing of the economy by the banking system. This is the main reason why a crisis in the banking sector could lead to an economic crisis, which in turn could send feedback effects to the banking system, initiating a second-order crisis. In all, the individual models estimated through Bayesian Model Averaging are shown below in eq. (1).
\[\begin{split}\text{DEPVAR}_{t}&=\text{GDP}_{t-1,2}+ \text{DEBT}_{t-1,2}+\text{RRE}_{t-1,2}+\text{UNR}_{t-1,2}\\ &+\text{INFLAT}_{t-1,2}+\text{YIELD}10\text{Y}_{t-1,2}+\text{ GOVEXP}_{t-1,2}\\ &+\text{EXPORT}_{t-1,2}+\text{STOCKS}_{t-1,2}+\text{DEP}_{t-0,1,2 }\\ &+\text{LOAN}_{t-0,1,2}+\text{INTLOAN}_{t-0,1,2}+\epsilon\end{split} \tag{1}\]
where DEPVAR\({}_{t}\) comprises each one of the following variables (GDP\({}_{t}\), DEBT\({}_{t}\), RRE\({}_{t}\), UNR\({}_{t}\), INFLAT\({}_{t}\), YIELD10Y\({}_{t}\), GOVEXP\({}_{t}\), EXPORT\({}_{t}\), STOCKS\({}_{t}\)) leading to 9 separate models, one for each macroeconomic factor.
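To make the structure of eq. (1) concrete, the following minimal pandas sketch assembles the corresponding lagged design matrix for a single dependent variable. The DataFrame name, the column labels and the use of 12- and 24-month shifts as proxies for the yearly lags are illustrative assumptions.

```python
import pandas as pd

# `macro` is assumed to be a monthly DataFrame whose columns hold the
# transformed series listed above (GDP, DEBT, ..., DEP, LOAN, INTLOAN).
MACRO_VARS = ["GDP", "DEBT", "RRE", "UNR", "INFLAT",
              "YIELD10Y", "GOVEXP", "EXPORT", "STOCKS"]
BANK_VARS = ["DEP", "LOAN", "INTLOAN"]

def design_matrix(macro: pd.DataFrame, target: str) -> pd.DataFrame:
    """Build the regressor set of eq. (1) for one dependent variable."""
    X = pd.DataFrame(index=macro.index)
    for col in MACRO_VARS:                    # 1- and 2-year lags only
        for lag in (12, 24):                  # in months
            X[f"{col}_lag{lag}"] = macro[col].shift(lag)
    for col in BANK_VARS:                     # current value plus yearly lags
        for lag in (0, 12, 24):
            X[f"{col}_lag{lag}"] = macro[col].shift(lag)
    y = macro[target].rename("y")
    return pd.concat([y, X], axis=1).dropna()

# e.g. data_gdp = design_matrix(macro, target="GDP")
```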
Modeling the macroeconomic variables separately has the drawback that it does not take into account the simultaneous interdependencies among variables. For example, it could be the case that GDP is not affected solely by the lagged value of RRE but also by the contemporaneous RRE\({}_{t}\). Traditional econometrics handles this case through the use of SUR (Seemingly Unrelated Regression) models or under a VAR framework. In our proposed approach for dynamic macro forecasting we introduce a deep neural network architecture as an innovative way to take into account contemporaneous dependencies among variables.
In all, we identify the main channels of risk propagation in a recurrent form, to account for all the existing evidence of feedback effects in a macroeconomic system, by putting all the components together in a multivariate structure. By contrast, the use of classical econometric techniques offers limited capabilities for simulating complex systems. Our approach accounts for temporal patterns in the economy, providing a dynamic modelling approach. This is achieved through the multivariate training of deep neural networks, which takes into account the dynamic nature of the economy. The proposed approach is composed of multivariate input and output layers able to capture the cross-correlation between macroeconomic variables. Training is performed as one big complex network, minimizing estimation errors and double-counting effects among the various financial variables.
To account for non-linear relationships that materialize under different macroeconomic conditions, machine learning techniques like deep learning can provide more efficient estimations. Based on the academic literature, Deep Neural Networks are capable of simulating real-life phenomena where relationships are complex, so our proposed framework, by using multilayer deep networks, envisages capturing the dynamics inherent in the economy. In addition, the architecture aims to capture the amplification channels leading to structural breaks.
In the general structure of our model, lagged macroeconomic indicators along with current values for Loan Growth, Deposit Growth and Interbank Loan Growth are inserted in the input layer, and the now-casted values of the macroeconomic variables are produced in the output layer.
## 4 Model Development
### Single Variable Forecasting - Bayesian Model Averaging (BMA)
We employ the Bayesian Model Averaging (BMA) methodology for estimating single-equation satellite models, which are used for univariate predictions of the macroeconomic variables. The BMA method accounts better than linear regression for the uncertainty surrounding the main determinants of risk dynamics. Using BMA, a pool of equations is generated using a randomly selected subgroup of determinants. In the next step, a weight is assigned to each model that reflects its relative forecasting performance. Aggregating all equations using the corresponding weights produces a posterior model probability, provided that the number of equations estimated in the first step is large enough to capture all possible combinations of a predetermined number of independent variables. Thus Bayesian model averaging addresses model uncertainty and misspecification better than a simple linear regression approach.
To further illustrate BMA, suppose a linear model structure with \(Y_{t}\) being the dependent variable, \(X_{t}\) the explanatory variables, \(\alpha\) a constant, \(\beta\) the coefficients, and \(\epsilon_{t}\) a normal error term with variance \(\sigma^{2}\).
\[Y_{t}=\alpha_{\gamma}+\beta_{\gamma}X_{\gamma,t}+\epsilon_{t},\qquad\epsilon_ {t}\sim\mathcal{N}(0,\sigma^{2}I) \tag{2}\]
In an ordinary linear regression problem, the existence of many potential explanatory variables in a matrix \(X_{t}\) renders
their correct combination a quite burdensome task. Including all the variables does not provide a feasible solution as it can lead to overfitting and multicollinearity especially when there is a limited number of observations. BMA tackles the problem by estimating models for all possible combinations of \(\{X\}\) and constructing a weighted average over all of them.
Under the assumption that \(X\) contains \(K\) potential explanatory variables, BMA estimates \(2^{K}\) combinations and thus \(2^{K}\) models. Applying Bayes' theorem, model averaging is based on the posterior model probabilities.
\[p(M_{\gamma}\mid Y,X)=\frac{p(Y\mid M_{\gamma},X)\,p(M_{\gamma})}{p(Y\mid X)}=\frac{p(Y\mid M_{\gamma},X)\,p(M_{\gamma})}{\sum_{s=1}^{2^{K}}p(Y\mid M_{s},X)\,p(M_{s})} \tag{3}\]
In Equation (3), \(p(Y\mid X)\) denotes the integrated likelihood, which is constant over all models and is thus simply a multiplicative term. Therefore, the posterior model probability (PMP) is proportional to the integrated likelihood \(p(Y\mid M_{\gamma},X)\), which reflects the probability of the data given the model \(M_{\gamma}\). Thus, the corresponding weight assigned to each model is measured by \(p(M_{\gamma}\mid Y,X)\) in Eq. (3). In equation (3), \(p(M_{\gamma})\) denotes the prior belief of how probable model \(M_{\gamma}\) is before analyzing the data. Furthermore, to estimate \(p(Y\mid X)\) integration is performed across all models in the model space, and to estimate the probability \(p(Y\mid M_{\gamma},X)\) integration is performed, given model \(M_{\gamma}\), across the whole parameter space. By renormalizing the product in equation (3), PMPs can be inferred, and subsequently the model-weighted posterior distribution for the estimator \(\beta\) is given by
\[p(\beta\mid Y,X)=\sum_{\gamma=1}^{2^{K}}p(\beta\mid M_{\gamma},Y,X)\,p(M_{\gamma}\mid Y,X) \tag{4}\]
The priors, posteriors and the marginal likelihood employed in the estimation are described analytically in Appendix A. In the Bayesian Model Averaging estimation we employ the unit information prior (UIP), which sets g=N commonly for all models. We also use a birth/death MCMC algorithm (20000 draws) due to the large number of covariates included, since enumerating the entire model space would lead to a prohibitively large number of iterations. We fix the number of burn-in draws for the MCMC sampler to 10000. Finally, the model prior employed is the "random theta" prior by Ley and Steel, who suggest a binomial-beta hyperprior on the a priori inclusion probability. This has the advantage that it is less tight around the prior expected model size (i.e. the average number of included regressors), so it reflects prior uncertainty about model size more efficiently. In order to develop all the satellite models for this approach we employ the utilities of the BMS R package.
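Although the estimation itself relies on the BMS R package, the mechanics can be illustrated with a short, self-contained Python sketch that scores randomly sampled regressor subsets via the closed-form Bayes factor implied by Zellner's g-prior with the UIP choice \(g=N\), and normalizes the resulting weights into posterior model probabilities. The function names are illustrative, and the implicit uniform model prior is a simplification of the binomial-beta prior described above.

```python
import numpy as np

def g_prior_log_bf(y, X, g):
    """Log Bayes factor of a model vs. the intercept-only model under
    Zellner's g-prior (cf. Fernandez, Ley & Steel, 2001)."""
    n, k = X.shape
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    beta, *_ = np.linalg.lstsq(Xc, yc, rcond=None)
    r2 = 1.0 - np.sum((yc - Xc @ beta) ** 2) / np.sum(yc ** 2)
    return 0.5 * (n - 1 - k) * np.log(1 + g) \
        - 0.5 * (n - 1) * np.log(1 + g * (1 - r2))

def bma_weights(y, X, n_draws=2000, seed=0):
    """Posterior model probabilities over randomly sampled regressor subsets."""
    rng = np.random.default_rng(seed)
    n, K = X.shape
    scored = {}
    for _ in range(n_draws):
        mask = rng.random(K) < 0.5            # each regressor in or out
        if mask.any() and mask.tobytes() not in scored:
            scored[mask.tobytes()] = (mask, g_prior_log_bf(y, X[:, mask], g=n))
    models, logbf = zip(*scored.values())
    w = np.exp(np.array(logbf) - np.max(logbf))
    return list(models), w / w.sum()

# models, pmp = bma_weights(y, X)   # X: (n, K) numpy matrix of the regressors in eq. (1)
```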
### Multiple Variable Forecasting - Deep Neural Networks
We propose the use of multilayer deep neural networks in order to simultaneously forecast the basic macroeconomic and financial variables, capturing the dynamics inherent in the economy by taking into account the contemporaneous interdependencies among them. The application of deep learning in the domain of finance is still rather limited, but it has been an active field of research in recent years, as deep learning has achieved significant breakthroughs in the fields of computer vision and language understanding. Specifically, our paper constitutes one of the first works presented in the literature that considers the application of deep learning to address the challenging task of macroeconomic prediction.
Deep Neural Networks are built on the basis of nonlinear activation functions, typical choices of which are the logistic sigmoid, the hyperbolic tangent, and the rectified linear unit (ReLU). The first two (logistic sigmoid and hyperbolic tangent) activation functions are closely related, as they both belong to the sigmoid family. The sigmoid family of activation functions has the disadvantage of saturating for large positive or negative input values. To alleviate this problem, practitioners have derived piecewise linear activation functions, like the popular ReLU, which are now the standard choice in deep learning research.
The activation layers increase the ability and flexibility of a DNN to capture non-linear relationships in the training dataset but, on the other hand, the huge number of trainable parameters could lead to overfitting. Therefore, the use of simple, effective, and efficient regularization techniques is necessary to avoid poor out-of-sample performance. Dropout is the most popular regularization technique for DNNs. In essence, it consists in randomly dropping different units of the network on each iteration of the training algorithm, so that only the parameters related to a subset of the network units are trained during each iteration. This strategy reduces the network's overfitting tendency while ensuring that all network parameters are effectively trained. Inspired by these merits, we employ Dropout DNNs with ReLU activations to train and deploy feed-forward deep neural networks. We postulate deep networks that are up to five hidden layers deep and comprise various numbers of neurons.
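For illustration, a minimal sketch of such a dropout feed-forward network is given below in PyTorch (the original models were trained with MXNet); the layer widths, the dropout rate and the input/output dimensions are assumptions made only for the example.

```python
import torch
import torch.nn as nn

class MacroNet(nn.Module):
    """Feed-forward dropout network mapping the lagged macro / current banking
    inputs to the now-casted values of the macro variables (illustrative)."""

    def __init__(self, n_inputs, n_outputs, hidden=(64, 64, 32), p_drop=0.2):
        super().__init__()
        layers, width = [], n_inputs
        for h in hidden:                      # hidden layers (up to five explored)
            layers += [nn.Linear(width, h), nn.ReLU(), nn.Dropout(p_drop)]
            width = h
        layers.append(nn.Linear(width, n_outputs))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Example dimensions (assumptions): 27 regressors (2 yearly lags of the 9 macro
# series plus current and lagged values of the 3 banking series) -> 9 outputs.
model = MacroNet(n_inputs=27, n_outputs=9)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
```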
A supplementary way to improve the fitting efficacy of the trained network is the incorporation of prior information in a Bayesian framework, in a similar way to the BMA linear models. Conventional network architectures compute point estimates of the unknown values without taking into consideration any prior information and without any uncertainty estimation of the produced values. The Bayesian treatment of a particular model has been shown to increase its capacity and potential, while offering a natural way to assess the uncertainty of the resulting estimates. To this end, we augment the conventional model architectures of the previous sections by also relying on the Bayesian framework.
Specifically, we impose a prior Normal distribution over the network weights, seeking to infer their posterior distribution given the data. Since the marginal likelihood is intractable for the considered architectures, among the existing Bayesian methods we rely on approximate inference, and specifically on Variational Inference. Because the true posterior of the model cannot be computed, Variational Inference introduces an auxiliary variational posterior from a chosen family of distributions and then searches for the member of that family that best matches the true posterior. The matching is achieved through the minimization of the Kullback-Leibler (KL) divergence between the true and the introduced variational posterior1. Minimizing the KL divergence is equivalent to maximizing the Evidence Lower Bound (ELBO), a well-known bound on the marginal likelihood derived using Jensen's inequality. Thus, for training the following architectures, we resort to ELBO maximization.
Footnote 1: The KL divergence is a non-negative measure of dissimilarity between two distributions; it is zero if and only if the two considered distributions match exactly.
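For reference, with weights \(w\), data \(D\), prior \(p(w)\) and variational posterior \(q(w)\), the bound that is maximized is the standard ELBO:

$$\log p(D)\;\geq\;\mathrm{ELBO}(q)\;=\;\mathbb{E}_{q(w)}\big[\log p(D\mid w)\big]\;-\;\mathrm{KL}\big(q(w)\,\|\,p(w)\big).$$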
Non-linear activation functions such as ReLUs are a mathematically convenient tool for training deep networks but, nevertheless, they do not come with strong biological plausibility. Current research has shown that, in biological systems, neurons with similar functionality and structure tend to group together and compete with each other for their output. To this end, researchers have devoted significant effort to exploring this type of competition between neurons and applying it in existing models. The resulting procedure is referred to as Local Winner-Takes-All (LWTA) and has been shown to provide competitive, or even better, results in benchmark architectures across different domains of Deep Learning applications. Thus, apart from the conventional ReLU activations of the previous section, we additionally explore the potency of LWTA in our work. The linear units after the affine transformation in each layer are grouped together and compete for their outputs. This competition is performed in a probabilistic way, by employing a softmax nonlinearity, obtaining the probability of activation of each unit in each block.
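A minimal sketch of such a competition layer follows; it is an illustration of the general idea rather than the authors' implementation, the block size of two is a placeholder, and here the winner of each block is sampled from the within-block softmax:

```python
import torch
import torch.nn.functional as F

class LWTA(torch.nn.Module):
    """Local Winner-Takes-All: units are grouped into blocks and compete;
    within each block one winner is sampled via a softmax, losers output zero."""

    def __init__(self, block_size: int = 2):
        super().__init__()
        self.block_size = block_size

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        batch, dim = x.shape                               # dim must be divisible by block_size
        u = self.block_size
        blocks = x.view(batch, dim // u, u)                # [batch, n_blocks, u]
        probs = F.softmax(blocks, dim=-1)                  # activation probability of each unit
        winners = torch.multinomial(probs.view(-1, u), 1)  # sample one winner per block
        mask = F.one_hot(winners.view(batch, -1), u).to(x.dtype)
        return (blocks * mask).view(batch, dim)            # only winners pass their activation
```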
## 5 Model Validation
In order to assess the robustness of our approach we perform a thorough validation procedure. The dataset, comprising 540 observations, was split into two parts: an in-sample training dataset containing 65% of the observations (1973-2005) and an out-of-sample dataset comprising the remaining 35% (2006-2018). The latter sample was used to evaluate the performance of all the statistical techniques implemented. In splitting the sample, the 2008-2010 crisis period was deliberately kept out of the training sample in order to challenge the models' capacity to foresee an exceptional structural break in the macroeconomic status. The models compared in this benchmark exercise are Bayesian Model Averaging (BMA), a Deep Learning model with ReLU activation function and dropout trained using MXNET (MXNET), a Bayesian Deep Learning model using the ReLU activation function (Bayesian ReLU), and a Bayesian Deep Learning model using the Local Winner-Takes-All activation function (Bayesian LWTA).
Note that, in order to train the deep learning algorithms in the current study, the in-sample dataset is further split randomly into training and validation sets. The validation dataset is used to find the best set of hyperparameters for each model and to select the best candidate model for the out-of-sample evaluation.
The validation process is performed following two different strategies, namely static and rolling forecasting. In the static case, all models are estimated only once on the training sample (1973-2005) and the whole test sample is used to produce the out-of-time (2006-2018) predictions in a one-off way. In the rolling forecast framework the models are re-estimated at each step of the out-of-time sample (2006-2018) and a one-month-ahead prediction is produced at each step. For example, to produce the January 2006 forecast the sample from 1973 to December 2005 is used, whereas in the next step the training sample is enlarged to cover 1973 to January 2006 and the February 2006 forecast is generated.
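Schematically, the rolling (expanding-window) evaluation loop can be written as below; `fit` and `predict` are hypothetical placeholders for whichever model is being re-estimated at each step:

```python
def rolling_forecast(series, first_test_index, fit, predict):
    """Expanding-window, one-step-ahead forecasts.

    series           -- full dataset (train + test), ordered in time
    first_test_index -- index of the first out-of-sample observation (e.g. Jan 2006)
    fit / predict    -- placeholder callables: estimate a model, forecast one step
    """
    forecasts = []
    for t in range(first_test_index, len(series)):
        model = fit(series[:t])               # re-estimate on all data up to month t-1
        forecasts.append(predict(model, t))   # one-month-ahead forecast for month t
    return forecasts
```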
From Table 1, where the errors for each forecasted macro variable are shown, we deduce that the Deep Learning algorithms clearly outperform the benchmark BMA approach, especially for variables such as the government bond yield (YIELD10Y), real estate price growth (RRE) and stock market growth (STOCKS), which exhibited a massive structural break during the Subprime crisis (2008-2010) in the US. The Bayesian Deep Neural Network with the Local Winner-Takes-All activation function outperforms not only the BMA model but also the more conventional Deep Learning algorithms, showing the benefit of applying more biologically plausible activation functions.
In order to compare the predictions of the two models with the actual value of each variable, the diagrams shown in the Appendix have been created. In particular, the graph in Figure 1, which shows the evolution of the actual value of YIELD10Y from August 2016 to November 2018, makes clear that the Bayesian LWTA follows the trend of the actual value much better than the BMA, which deviates significantly, given that it does not take into account the contemporaneous interdependencies among variables in the long run. Of course, there are cases, such as Government Expense and Debt to GDP, where the Deep Learning model also overshoots, but those are variables heavily dependent on political decisions, which may diverge from the observed pattern especially in crisis periods. One should also take into account the challenging nature of the testing sample, which includes a structural break (the Subprime crisis) that was not observed in the training sample and during which important fiscal decisions were taken by the US government in order to mitigate the crisis repercussions.
| Method | YIELD10Y MSE | YIELD10Y MAE | UNR MSE | UNR MAE | RRE MSE | RRE MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | 13.31 | 2.54 | 1.40 | 0.90 | 79.14 | 7.09 |
| Deep Learning (MXNET) | 1.14 | 0.81 | 1.95 | 0.97 | 40.42 | 5.21 |
| Deep Learning (Bayesian ReLU) | 1.76 | 0.93 | 0.95 | 0.73 | 36.78 | 4.83 |
| Deep Learning (Bayesian LWTA) | **1.08** | **0.80** | **0.87** | **0.69** | **28.56** | 4.33 |

| Method | INFLAT MSE | INFLAT MAE | STOCKS MSE | STOCKS MAE | GDP MSE | GDP MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | 6.73 | **1.90** | 47.83 | 17.32 | 8.80 | 2.42 |
| Deep Learning (MXNET) | 12.61 | 2.95 | **158.55** | **8.69** | 4.55 | 1.74 |
| Deep Learning (Bayesian ReLU) | 10.14 | 2.46 | 172.06 | 9.26 | **4.30** | **1.68** |
| Deep Learning (Bayesian LWTA) | **5.81** | 1.92 | 171.01 | 9.86 | 6.58 | 2.00 |

| Method | EXPORT MSE | EXPORT MAE | DEBT MSE | DEBT MAE | GOVEXP MSE | GOVEXP MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | 159.57 | 9.73 | 13.30 | 2.63 | 14.2 | 1.47 |
| Deep Learning (MXNET) | 103.81 | **7.88** | 16.53 | 2.73 | 2.28 | 1.18 |
| Deep Learning (Bayesian ReLU) | 100.36 | 8.10 | **12.10** | **2.41** | **1.31** | **0.84** |
| Deep Learning (Bayesian LWTA) | **93.70** | 8.01 | 18.57 | 2.93 | 2.32 | 1.16 |

Table 1: Static Forecast Error Metrics for each variable: MSE stands for Mean Square Error and MAE stands for Mean Absolute Error.
Focusing on the rolling forecast experiment in Table 2, we notice, as expected, that the errors are significantly reduced relative to the static case, as more recent information is available at each step of the forecasting process. If the static experiment represents the situation of forecasting a macro variable's full path on the basis of a model and a provided scenario, the rolling forecast experiment simulates the situation of a practitioner who re-trains a model on a monthly basis so as to have a hint of what will happen in the next month. From a benchmarking perspective, even in the rolling forecast case the Bayesian Deep Learning algorithms clearly outperform the BMA model in all variables, as they better account for contemporaneous relationships across variables.
By examining Figs. 7 to 12 in Appendix B we notice that both BMA and the best-performing Deep Learning algorithm (Bayesian ReLU) follow the trend in the evolution of the dependent variables more closely, but the former in many cases overshoots, rendering the Deep Learning alternative more robust for macroeconomic projections under a crisis situation.
## 6 Conclusions
In this study we explore a novel approach to now-casting and forecasting the macroeconomic status of a country using deep learning techniques, focusing particularly on the US economy, although the methodology can also be applied to other countries. Deep Learning algorithms have a short history in economics, but this new generation of statistical algorithms offers the necessary flexibility for modelling multivariate time series, as its structure includes a cascade of many layers with non-linear processing units.
In particular, we identify the main channels of risk propagation in a recurrent form to account for all the existing evidence of feedback effects in a macroeconomic system, by putting all the components together in a multivariate structure. Our approach takes into account the dynamic nature of the economy through the multivariate training of deep neural networks, which employ multivariate input and output layers able to capture the cross-correlation between macroeconomic variables. Training is performed as one big complex network, minimizing estimation errors and double-counting effects among the various financial variables. Benchmarking a series of Deep Learning algorithms against Bayesian Model regressions on a test sample that includes a financially turbulent period in the US (2008-2012), we find that Deep Learning models provide better forecasts both from a static perspective (model trained on the 1973-2005 period and forecasting the 2006-2018 period) and from a dynamic perspective (initial model trained on the 1973-2005 period and rolling forecasts with continuous re-training during the 2006-2018 period). Examining both the error metrics and the relevant plots, it is evident that deep learning algorithms better capture the realized trends, especially in cases where non-linearities and contemporaneous dependencies cause traditional models to overshoot.
This first attempt at employing Deep Learning in macroeconomic time series forecasting shows potential benefits that may pave the way for a wider spectrum of applications in the economic sciences. Of course, even though deep learning techniques better address non-linear patterns, they are not a panacea, especially in a problem as challenging as the prediction of the Subprime crisis in the US, but they certainly point in the right direction. Criticism could focus on the black-box nature of the algorithms, which, compared to traditional econometrics, do not provide a clear view of the economic relationships; in any case, as has been shown empirically, the economy is not dominated by clear linear patterns but by non-linear interactions which constantly evolve. Especially under the rolling forecast framework, where both techniques follow the realized trend, one could use the results of the two techniques (linear models and Deep Learning models) in a combined way, so that the linear model provides a first-order approximation of the problem at hand, revealing the economic rationale, while the more precise non-linear model corrects for temporal fluctuations.
| Method | YIELD10Y MSE | YIELD10Y MAE | UNR MSE | UNR MAE | RRE MSE | RRE MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | 3.34 | 1.29 | 0.4 | 0.48 | 18.02 | 3.32 |
| Deep Learning (MXNET) | 1.34 | 0.87 | 0.74 | 0.66 | 29.40 | 4.34 |
| Deep Learning (Bayesian ReLU) | **0.50** | **0.49** | **0.09** | **0.23** | **15.50** | **3.08** |
| Deep Learning (Bayesian LWTA) | 1.44 | 0.96 | 0.26 | 0.38 | 22.16 | 3.77 |

| Method | INFLAT MSE | INFLAT MAE | STOCKS MSE | STOCKS MAE | GDP MSE | GDP MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | 3.07 | 1.30 | 203.02 | 11.43 | 2.59 | 1.26 |
| Deep Learning (MXNET) | 5.40 | 1.82 | **142.5** | 9.29 | 2.59 | 1.28 |
| Deep Learning (Bayesian ReLU) | **2.96** | **1.07** | 151.93 | **7.99** | **6.48** | **0.51** |
| Deep Learning (Bayesian LWTA) | 4.62 | 1.49 | 157.08 | 8.49 | 3.25 | 1.38 |

| Method | EXPORT MSE | EXPORT MAE | DEBT MSE | DEBT MAE | GOVEXP MSE | GOVEXP MAE |
| --- | --- | --- | --- | --- | --- | --- |
| Satellite Modelling (BMA) | **72.56** | **6.96** | 6.98 | 2.05 | 14.40 | 1.12 |
| Deep Learning (MXNET) | 92.16 | 7.43 | 1.495 | 2.72 | 1.10 | 0.82 |
| Deep Learning (Bayesian ReLU) | 87.93 | 7.26 | **2.26** | **1.13** | **0.40** | **0.50** |
| Deep Learning (Bayesian LWTA) | 102.19 | 8.18 | 2.99 | 1.42 | 1.14 | 0.90 |

Table 2: Rolling Forecast Error Metrics for each variable: MSE stands for Mean Square Error and MAE stands for Mean Absolute Error.
Figure 1: 10-year Government bond yield change - Forecast Comparison. Actual value vs. Bayesian Model Average (BMA) vs. Bayesian Neural Network with Local Winner-Takes-All activation function (Bayesian LWTA). |
2303.09791 | ChameleonIDE: Untangling Type Errors Through Interactive Visualization
and Exploration | Dynamically typed programming languages are popular in education and the
software industry. While presenting a low barrier to entry, they suffer from
run-time type errors and longer-term problems in code quality and
maintainability. Statically typed languages, while showing strength in these
aspects, lack in learnability and ease of use. In particular, fixing type
errors poses challenges to both novice users and experts. Further,
compiler-type error messages are presented in a static way that is biased
toward the first occurrence of the error in the program code. To help users
resolve such type errors, we introduce ChameleonIDE, a type debugging tool that
presents type errors to the user in an unbiased way, allowing them to explore
the full context of where the errors could occur. Programmers can interactively
verify the steps of reasoning against their intention. Through three studies
involving real programmers, we showed that ChameleonIDE is more effective in
fixing type errors than traditional text-based error messages. This difference
is more significant in harder tasks. Further, programmers actively using
ChameleonIDE's interactive features are shown to be more efficient in fixing
type errors than passively reading the type error output. | Shuai Fu, Tim Dwyer, Peter J. Stuckey, Jackson Wain, Jesse Linossier | 2023-03-17T06:24:52Z | http://arxiv.org/abs/2303.09791v1 | # ChameleonIDE: Untangling Type Errors Through Interactive Visualization and Exploration
###### Abstract
Dynamically typed programming languages are popular in education and the software industry. While presenting a low barrier to entry, they suffer from runtime type errors and longer-term problems in code quality and maintainability. Statically typed languages, while showing strength in these aspects, lack in learnability and ease of use. In particular, fixing type errors poses challenges to both novice users and experts. Further, compiler type error messages are presented in a static way that is biased toward the first occurrence of the error in the program code. To help users resolve such type errors, we introduce ChameleonIDE, a type debugging tool that presents type errors to the user in an unbiased way, allowing them to explore the full context of where the errors could occur. Programmers can interactively verify the steps of reasoning against their intention. Through three studies involving actual programmers, we showed that ChameleonIDE is more effective in fixing type errors than traditional text-based error messages. This difference is more significant in harder tasks. Further, programmers actively using ChameleonIDE's interactive features are shown to be more efficient in fixing type errors than passively reading the type error output.
types, type errors, debugging, visualization, exploration
Dynamically typed programming languages such as JavaScript and Python have risen in popularity in recent decades [1]. These languages present a low barrier of entry, especially to beginner programmers: they require no type declaration, variable types or object structures can be modified dynamically, and functions can deal with dynamic input using ad-hoc polymorphism and runtime reflection. However, studies show that dynamically typed languages negatively affect development productivity [2], code usability [3], and code quality [4, 5, 6]. They are often found to produce error-prone code [7, 8, 9] and require strong programmer discipline to avoid pitfalls [7]. For these reasons, many modern dynamically-typed languages have introduced static typing annotations as part of the core language features in recent years (e.g. _TypeScript_[10] and _mypy_[11]).
Functional programming languages have long enjoyed rigorous type systems and expressive type-level features. Techniques such as type inference and algebraic types have been standard practice for decades in functional languages such as ML and Haskell, and more recently in multi-paradigm languages, such as Rust and TypeScript. Various type system advances were introduced in Haskell and ended up in mainstream languages years or even decades after, leading many to consider Haskell the "type-system laboratory" [12]. Type classes, an implementation of generic programming, were introduced to Haskell in 1988 [12], and now can be found in most popular languages such as C# [13], Java [14], and TypeScript [15].
One crucial challenge of programming in statically-typed languages is that type errors can sometimes be difficult to resolve [16, 17]. In particular, they may point to locations that are not the root causes of the type error, expose errors in cryptic language, or provide misleading fixing suggestions [18].
This paper introduces ChameleonIDE, an interactive type debugging tool for Haskell. It can visualize the relevant context of a type error: where it happens or could have happened and which parts of the code cause it. In addition, ChameleonIDE allows programmers to interactively explore all the parts of code where multiple types can be inferred and to resolve ambiguity. The most noticeable features are the type compare tool (Section II-A0a), the candidate expression card (Section II-A0b), and the deduction step (Section II-A0c). These features are integrated into a debugging environment and can be enabled or disabled separately based on the programmers' preferences and debugging needs. ChameleonIDE is open-source and is available at [19].
This paper makes the following contributions:
* We provide the design and implementation of the ChameleonIDE to visualize the relevant context of a type error and allow programmers to explore and verify the error locations in small chunks interactively.
* We report the results of three experiments designed to evaluate ChameleonIDE.
Our experiments showed that programmers using ChameleonIDE fix type errors faster than with traditional text-based error messages. This difference is more significant when solving harder tasks. Further, programmers who actively use ChameleonIDE's interactive features fix type errors faster than simply reading the type error output. Although ChameleonIDE is designed to work with the Haskell language, we plan to extend the underlying ideas to work with other strongly typed languages, such as Rust or TypeScript.
## I Motivation
The design requirements of ChameleonIDE are motivated by limitations of traditional type errors, as documented in a number of studies (e.g. [17, 20]), but which we illustrate here with a few motivating examples.
#### I-1 **Traditional type errors show only limited location**
Haack and Wells [23] noted that "_Identifying only one node or subtree of the program as the error location makes it difficult for programmers to understand type errors. To choose the correct place to fix a type error, the programmer must find all the other program points that participate in the error._" The type error in Fig. 1 can be fixed in multiple locations. For instance, replacing ['0'..'9'] on line 1 with [0..9], or replacing fst x and snd x on line 2 with read (fst x) and read (snd x). In the type error message, only the addPair expression on line 4 was blamed. In this small example, the whole context is visible, but it can become problematic in large programs where the lines contributing to the type error are far apart in the source code.
#### I-2 **Traditional type errors are biased**
A common form of bias happens when a type error is reported in one expression, but it can occur in multiple other expressions as well. In Fig. 1, the error message arbitrarily focuses on only addPair, while ignoring that the literals in the definition of u may be incorrect. Another form of bias is that traditional type errors are often framed as conflicts between Expected type and Actual type. This framing is standard practice in most typed languages. However, what is expected and what is actual are a side effect of different unification orders rather than the intention of the programmer. In both forms, the error message may lead programmers to falsely believe the validity of parts of code and wrongly accuse others.
#### I-3 **Traditional type errors give poor explanations**
When the compiler rejects a program, the internal state of type checking is the result of a complex computation. But the details of this process are hard to explain to users and are usually not reported by compilers. For the typical type error shown in Fig. 1, the evidence for the type error is gathered from the previous declarations. These have to be rediscovered by programmers using less rigorous methods.
### _Design Goals of ChameleonIDE_
Based on the limitations of traditional type errors, we give the following design requirements for ChameleonIDE:
* **Show** all the possible locations where the type error happened or could have happened.
* **Explain** type errors avoiding jargon and internal constructs of the type checker.
* **Do not presume** which expression is to blame for the type error based on the order of computation or which possible type for an expression is 'actual' or 'expected'.
## II Chameleon IDE
ChameleonIDE comprises two parts: a type inference engine and a novel interactive debugging interface. The debugging interface is designed from the ground up; the type inference engine is a re-implementation of the original Chameleon with several novel improvements, as described in Section II-B.
### _The Debugging Interface_
The ChameleonIDE debugging interface provides three main features to visualize and explain type errors.
_Type compare tool:_ The type compare tool shows conflicting types in different colors, each type associated with one or more error locations highlighted in a matching color (Fig. 3). If the programmers know the expression's intended type (they usually do), they will be able to eliminate half of the possible locations. A hover interaction over one of the possible types facilitates such bisection, causing only the relevant locations that contribute to that type to be highlighted.
_Candidate Expression Cards:_ A candidate expression is an expression that can be inferred to have two conflicting types. When a type error is detected, ChameleonIDE provides a list of all candidate expressions, and programmers are free to choose the problem to resolve by clicking on one candidate expression card. In the example shown in Fig. 4, x and y are both candidate expressions. Fixing either type error can make both expressions well-typed.
Programmers select a candidate expression card by clicking on one card. Once a card is selected, the information in the conflicting types block changes to reflect the change of
Fig. 1: A type error displayed in Visual Studio Code [21] and the Haskell Vscode extension [22]. The expression addPair is blamed for causing the type error. This may not match the programmers’ intention.
candidate expression. In the editor pane, some error locations change highlight colors based on the updated candidate expression. Alternatively, programmers can preview the change of a candidate expression by hovering on one card. The hover effect is reverted once the cursor moves away.
_Deduction steps:_ Deduction steps allow programmers to explore all the error locations one at a time (Fig. 5). Steps are shown as a list of sequentially numbered circular buttons (step buttons) and an explanation layer in the editor window. In
Fig. 4: **ChameleonIDE with candidate expression cards enabled.** Indicates the type error can occur in the definition of \(\mathrm{x}\) or \(\mathrm{y}\).
Fig. 3: **ChameleonIDE with type compare tool enabled.** ChameleonIDE identified the conflicting types for the expression \(\mathrm{u}\) and associated the relevant locations with each type. Compare the output with the traditional type error message in fig. 1.
Fig. 2: **The anatomy of ChameleonIDE.** The editor pane (left) is similar to a traditional code editor. Fragments of source code may have a highlight color. (A). Additionally, an explanation layer (B) displays if deduction steps are enabled. The debugging pane contains three blocks. First, the error statement block contains an error statement (D), optionally, a list of candidate expression cards (E), a list of deduction steps (F), and a control bar (G) to increment/decrement deduction step. Second, the conflicting types block shows two alternative types (H). Third, the relevant type information block shows additional information (I) that may help understand type errors.
the explanation layer, the two locations under examination are outlined, and a line is drawn to connect these two locations. This line is accompanied by a human-readable text explanation of their semantic connection. Programmers are free to activate any step. The active step is shown in green. When activating a step, some highlights switch color. The message in the explanation layer changes accordingly. The program in Fig. 5 generates the list of steps shown in Fig. 6 (left).
Programmers can use mouse and keyboard shortcuts to increment or decrement the step number or jump to any step. Programmers resolve type errors by navigating through all the deduction steps and verifying whether each explanation aligns with their intention. Eventually, they will find a step that does not match, and the type error can be fixed by modifying one of the two outlined locations.
Internally, deduction steps are different ways to divide the error locations into two groups, denoted by the two colors. Each color corresponds to a different inferred type for the candidate expression. Each increment/decrement of the step changes the splitting point (dotted lines in Fig. 6) of the two colors.
_Multiple Modes:_ Nielsen pointed out that the two most important issues in designing for usability are understanding the users' tasks and the differences between users [24]. From analyzing how users use ChameleonIDE, we realized that the ideal debugging interface should adapt to the specific programmer and programming task. There are cases where a programmer wants the debugger to simply "show the answer", and others where they want to dive deeper into the problem domain and search for the optimal solution. To accommodate the need to customize the level of information density and granularity of control, ChameleonIDE provides three modes: basic, balanced, and advanced. Programmers can switch between modes by clicking on the mode switching toggles (Fig. 2-C). The features accessible from different modes are summarized in Table I.
### _The Type Inference Engine_
Chameleon was originally a command-line tool developed in the early 2000s to improve type error reporting for the Haskell programming language. Unlike traditional type errors produced by the Glasgow Haskell Compiler (GHC) [25], which uses a Hindley-Milner type inference system, Chameleon infers types using constraint solving. In Chameleon, constraints are generated from the source code based on typing rules. In addition, each constraint is labeled with the location where it is generated. This set of constraints is consistent if the program is well-typed and inconsistent otherwise. When a type error occurs, an efficient algorithm is used to derive a minimal subset of the constraints that still contain inconsistencies. This subset is called a Minimal Unsatisfiable Subset (MUS). From this, Chameleon can report a list of locations, using the labels of constraints that are in the MUS. Stuckey et al. [26] showed that program locations linked to the constraints from an MUS are all relevant to the type error and must include the cause of the error.
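The paper does not spell out the slicing algorithm itself; as a rough illustration of the idea, a classic deletion-based MUS extraction can be sketched as follows, where `is_satisfiable` is a stand-in for a call into the constraint solver:

```python
def minimal_unsatisfiable_subset(constraints, is_satisfiable):
    """Shrink an unsatisfiable constraint set to a minimal unsatisfiable subset (MUS).

    `constraints` is assumed unsatisfiable; `is_satisfiable` is a placeholder
    oracle (e.g., a call into the type-constraint solver).
    """
    mus = list(constraints)
    for c in list(mus):
        trial = [d for d in mus if d is not c]
        if not is_satisfiable(trial):   # still inconsistent without c, so c is not needed
            mus = trial
    return mus                          # every remaining constraint is necessary for the conflict
```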
Despite successfully borrowing the underlying ideas, we could not reuse the original implementation of Chameleon since the project language standard and libraries used were
Fig. 5: **ChameleonIDE with deduction steps enabled.** ChameleonIDE explains the type error in four steps. In the screenshot, the active step is step 2, where ChameleonIDE shows that the expression x and y should have the same type.
Fig. 6: Deduction steps if they are shown all at once. In practice, steps are shown one at a time. Programmers increment or decrement the step number using the step control bar (Fig. 2-G) or by directly clicking on a step button (Fig. 2-F). Incrementing or decrementing the deduction step can be intuitively thought of as moving the position of the _splitting point_ (dotted lines) where the blue and orange highlights divide.
out of date. Our ChameleonIDE implementation extends the original Chameleon approach in a number of ways.
_Recovering concrete types from type errors:_ Using only constraints from the MUS is sufficient to locate the type error, but to recover types from type errors we need constraints from parts of the program that are irrelevant to the type error. For instance, consider an ill-typed 2-tuple where two possible types can be assigned: (Int, Int) and (Int, String). The types reconstructed from Chameleon may be (a, Int) and (a, String). Although the recovered types are theoretically correct, they introduce the notation a, which denotes a generic type variable that can be any type, making the error message harder to understand. To solve this issue in ChameleonIDE, for each constraint c in the MUS, we find a maximally satisfiable subset (MSS) from all the constraints that contains every other element of the MUS but not c. These maximally satisfiable subsets, while not helpful in error localization, will produce the most concrete types (see Fig. 7). Concrete types, such as Int and String, often provide extra information to programmers. With a type of (Float, Float), programmers may want to convey a point in 2D space. However, a type of (a, Float) does not preserve such information.
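Again as a sketch rather than the authors' algorithm, the grow-style construction described above could look like this (one MSS per MUS constraint `dropped`, using the same hypothetical satisfiability oracle as before):

```python
def maximal_satisfiable_subset(all_constraints, mus, dropped, is_satisfiable):
    """Grow a satisfiable set that keeps every MUS constraint except `dropped`.

    Solving the result yields the most concrete type implied by the rest of
    the program when the constraint `dropped` is held responsible for the error.
    """
    mss = [c for c in mus if c is not dropped]      # satisfiable by minimality of the MUS
    for c in all_constraints:
        if c in mss or c is dropped:
            continue
        if is_satisfiable(mss + [c]):               # keep any constraint that stays consistent
            mss.append(c)
    return mss
```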
_Type error explanation:_ In addition, ChameleonIDE provides support for type explanation. Similar to the type explanation system in [27], ChameleonIDE is able to produce a human-readable explanation, but for type errors. This is achieved by annotating nodes in the abstract syntax tree with constraints and the type inference rules used. We generate an inference history from constraints and accompanying annotations.
For instance, for the program in Listing 1, ChameleonIDE generates the following constraints and labels (in brackets) \(T_{a}=Bool\) (if condition), \(T_{b}=T_{c}\) (if branches), \(T_{a}=String\) (definition). Clearly, as \(T_{a}\) can not unify with both _Bool_ and _String_, this program is not well typed. ChameleonIDE can construct a human-readable explanation from the MUS. An example output for Listing 1 can be: a has type Bool because a is the condition of an if statement; however, a has type String because a is defined as the string literal "True". This explanation facilitates the deduction steps (Section II-A0c).
## III Walkthrough
In this section, we showcase ChameleonIDE by walking through examples of its use. The examples are given from the perspective of a hypothetical Haskell programmer Maxine.
### _Basic mode_
Maxine writes a function to calculate the sum of a list of numbers, but ChameleonIDE shows there is a type error (Fig. 8). After reading the error reports, Maxine realizes that the error revolves around the expression xs. That is: xs can be either [a] or Int. By matching the color in the conflicting type block (Fig. 2-H) and the highlighted error locations Maxine knows that the [a] results from the pattern matching of the : operator, while Int results from using + to add two expressions.
At this point, Maxine knows the possible type 1 aligns with her intention, and therefore, the error locations with blue highlights must be erroneous. After examining the program, it becomes clear that Maxine forgot to apply the sum function recursively on the right-hand side of the addition.
### _Balanced mode_
Maxine writes additional code to add only even numbers in a list of integers, reusing the sum function she wrote earlier. After saving the file, ChameleonIDE shows a type error in the expression sum (Fig. 9). However, this is not helpful because Maxine has just verified the implementation of sum. Switching to balanced mode, ChameleonIDE shows two cards: sum and evens.
Maxine therefore clicks on the evens card and ChameleonIDE reports two possible types for the expression: [Int] and [Int] -> [Int] (Fig. 10). Knowing the expression evens holds a temporary list of even integers (hence it is of type [Int]), Maxine concludes that the Possible type 2 is unintended. The locations with blue highlights must contain the cause. It does not take long for Maxine to realize the list l is not supplied to the filter function.
### _Advanced mode_
To illustrate the deduction steps with the task shown in section III-B, first, Maxine clicks on step 5 (Fig. 11) and
Fig. 8: Maxine’s code to calculate the sum of a list of integers; ChameleonIDE reports an error on the expression xs.
Fig. 7: Reporting the same type error, Chameleon uses more abstract types Int -> a and Char -> a, while ChameleonIDE uses the concrete types (types that do not contain type variables) Int -> Bool and Char -> Bool.
verifies that the two occurrences of evens are supposed to be identical, and the second use means evens is a list of integers. Second, she clicks on step 6 (Fig. 12) and verifies that evens should be the same type as the declaration on the right-hand side.
Lastly, Maxine clicks on step 7 (Fig. 13), and it shows that the filter function is applied to one argument isEven. By consulting the relevant type information, Maxine identifies that filter is expecting two arguments while only one is provided.
## IV Evaluation
We conducted three user studies, iteratively refining the ChameleonIDE UI and evaluating several research questions as per Fig. 14.
### _Experiment Design_
_Recruitment:_ Participants were recruited via the Reddit _r/haskell_ and _r/programminglanguages_ communities. Participation was fully anonymized; the ethical implications of these experiments were reviewed and approved by the IRB of the authors' institution.
_Experiment setting:_ Experiments were conducted online and unsupervised. All user studies use a web-based debugging environment developed by the authors.
_Training and group assignment:_ After consent, participants received interactive training on the tool interface and interactive features. Participants were also shown a cheat
Fig. 11: Maxine’s code to calculate only the sum of even numbers in advanced mode. The current step is step 5, ChameleonIDE explains that the two appearances of expression evens should have the same type.
Fig. 12: In step 6, ChameleonIDE explains that evens is defined as the expression filter isEven. The left-hand side and the right-hand side should have the same type.
Fig. 10: Clicking on the evens card (5) results in the changes in the conflicting types panel to show the possible types for evens, and the changes highlight color to reflect the assumption that the definition of evens is the cause of the error.
Fig. 9: Maxine’s code to calculate only the sum of even numbers. ChameleonIDE reports an error with two candidate expressions.
sheet summarizing the key functionality of the interface, and had access to the cheat sheet at all times during the study. Participants were given 4 trial runs (2 for each setting) before the data collection started. All the studies used a within-subject design to evaluate the effectiveness of different tools or feature sets while counterbalancing the difference in programming proficiency between participants. In each study, participants were required to complete a series of programming tasks (8 for studies 1a and 1b, 9 for study 2). At each task, a participant receives a single Haskell file that contains one or more type errors. They were then asked to correct the code with the help of the given tool.
_Data Collection:_ Time is measured from the start of each task to the first time the program is successfully type-checked and also passes all the functional tests. Participants are able to skip a task if they are stuck. After completing all tasks, participants are prompted to complete a debriefing survey. The survey questions include their Haskell experience and feedback on the tools.
We used a browser session recording tool [28] to record the study sessions. This allows us to identify usability issues in the study and to recognize general patterns.
### _ChameleonIDE Human Studies_
#### Iv-B1 **ChameleonIDE 1**
ChameleonIDE 1 was an earlier version of the UI than that depicted in Figs. 2-13; it featured the type inference engine that recovers the most concrete types after type errors occur and a minimal set of debugging features. Key features in ChameleonIDE 1 include showing two (or more) alternative types, showing all possible error locations, dividing possible error locations into groups based on alternative types, and concrete type restoration. In short, ChameleonIDE 1 is equivalent to ChameleonIDE 2 set to basic mode.
Two studies (1a & 1b) were conducted to compare the effectiveness of solving type errors using ChameleonIDE 1 and GHC compiler error messages. We choose GHC compiler error messages as the baseline because it is the canonical tool for working with type errors in Haskell.
Eight tasks were given in both studies. In study 1a, the tasks were taken from the exercises of the Haskell programming class at the authors' institute. In the second study, the tasks were sourced from the top 20 Haskell topics on GitHub [29]. The authors then manually added type errors into the programs. In both studies, the type errors include simple mismatch, confusing syntax, missing instance, precedence and fixation, infinite types, and confusing list versus element. These categories follow the common type errors in Tirronen's study [16].
Studies (1a & 1b) address the research question:
**RQ1.**_Do programmers solve type errors faster with ChameleonIDE than GHC compiler error messages?_
_Results:_ The data collected during study 1a (Fig. 15) do not show significant differences across Tasks 1-7. In hindsight, these tasks were trivial challenges for most users, and the individual differences among participants are generally more significant than the differences between treatments. However, one interesting observation is task 8, where the ChameleonIDE group outperformed the GHC group. We attribute this significant difference to the difficulty of Task 8. The source file is longer and involves more language features (abstract data types and high-level functions). GHC struggles to produce a relevant error message for this type of error. From this result, we hypothesized that we might observe a more significant
Fig. 14: The timeline of ChameleonIDE evaluation.
Fig. 13: In step 7, ChameleonIDE explains that filter is applied to the function isEven. Assisted by the type of filter in the Relevant Type Information panel on the bottom right, Maxine can find the type error that filter expects two arguments but receives one.
Fig. 15: Study 1a task completion time (secs.) with 95% confidence interval.
difference using tasks with lengthier and more realistic source code. This hypothesis is also supported by the most common feedback claiming that the tasks were too trivial to invite meaningful evaluation. One participant said, "Looks nicer than GHC, but without trying it on something more complicated, I cannot conclude whether it would help me in practice."
Therefore, in study 1b we introduced more difficult challenges and indeed observed that the ChameleonIDE group was faster than the GHC group in almost all tasks (Fig. 16), barring task 1. A two-sample paired t-test was performed to compare the completion time between the ChameleonIDE and GHC groups. There was a significant difference between the two groups: \(t(23)=-3.86,\ p=.0007\). For task 1, it is suspected that some participants spent more time exploring the interface of ChameleonIDE due to its unfamiliarity. For all other tasks, from the video recordings, we saw many ChameleonIDE users confidently skip reading unrelated chunks of code, while GHC users generally read through the whole program. In harder problems and messier code, we noticed programmers start to report the benefits of ChameleonIDE. "It's most useful feature that I noticed was that it points out the locations of both conflicting uses; GHC often makes it difficult to figure out how it's coming to a conclusion about a type." reported one participant. "I think ChameleonIDE does a much better job than GHC's error messages. I like that it shows the sources for the type judgments. This makes it quite easy to figure out how to rectify errors." reported another participant.
#### Vi-B2 **ChameleonIDE 2**
Based on observations of Study 1, we introduced several new features to ChameleonIDE, eventually resulting in the UI depicted in Figs. 2-13. Interactive features were available in this iteration, such as deduction steps, candidate expressions, and mode switching. A few other user interfaces [30] were designed and prototyped between the development of ChameleonIDE 1 and ChameleonIDE 2. Study 2 addresses the research question:
**RQ2:**_How do programmers use the interactive features in ChameleonIDE 2?_
More specifically:
* **RQ2.1** How do programmers use the advanced features provided by ChameleonIDE 2?
* **RQ2.2** Do programmers prefer switching modes during debugging type errors?
* **RQ2.3** What are programmers' preferences among the three modes provided by ChameleonIDE 2?
During each run, the initial mode of each task alternated through the three different modes and repeated three cycles in nine tasks. The order of the three modes in each cycle is counterbalanced among all participants. However, participants can switch to other modes at any time.
_Results:_ Study 2 is more exploratory in methodology than Study 1. We encouraged programmers to discover their way of using the tool. In post hoc analysis of the collected log data, we were able to extrapolate some interesting patterns of how the tool was used.
**RQ2.1**. The most striking feature of the data is that users tend to vary wildly in their use of the tool. Some users used the features extensively, while others completed the tasks without actively exploring the given information. Based on this discrepancy, we divided the users into three groups in table II.
As shown in Fig. 17, the time to complete each task roughly relates to the interaction level of participants. Participants with higher interaction levels generally performed better, and those with the lowest interaction level performed worst. Tukey's HSD Test for multiple comparisons found that the completion time was significantly different between the minimal interaction group and the high interaction group (\(p\leq 0.001\), 95% C.I. = [18.26, 31.41]), and between the minimal interaction group and the low interaction group (\(p\leq 0.001\), 95% C.I. = [11.96, 26.67]). The results from three tasks stand out from the general trend: in Tasks 4 and 6, higher interaction users performed worse, and in task 9, the general trend is exaggerated. As with Studies 1a and 1b, this difference is likely related to task difficulty. Tasks 4 and 6 are shorter than other tasks. The ideal fixes for these two tasks are placed relatively early in the source code (both in the first two lines of the source code). Users simply reading top to bottom could quickly identify the error without needing to skip unrelated sections of code using the information provided by ChameleonIDE. This reduced the apparent benefit of ChameleonIDE in these tasks. On the other hand, task 9 is the lengthiest task of all. It also involves deeply nested type definitions that are harder to follow mentally.
Another observation concerns how participants used the mode-switching feature of ChameleonIDE; we show this by presenting the
Fig. 16: Study 1b task completion time (secs.) with 95% confidence interval.
starting mode and finishing mode of each task and each participant in a correlation matrix (Fig. 18). This observation suggests two characteristics of using multi-mode debugging tools. First, to answer **RQ2.2**, programmers are roughly evenly split on this matter: 53% changed modes vs. 47% stayed in the same mode. Second, to answer **RQ2.3**, when changing modes, programmers generally switch to the more informative modes instead of the more concise ones.
### _Limitations_
One threat to the validity of the evaluation is the number of participants. Although for each study we received hundreds of online participants, the studies suffered from a high abandonment rate (especially study 1b). This was expected: the programming challenges are difficult, and our volunteer participants are unremunerated. Because we recruited participants online and anonymized all the participants, it is possible for participants of a previous study to enter a later one. This creates variation in familiarity. We offset this by using new code challenges in every study and conducting trial runs before data collection to bring new participants up to speed. Conducting studies remotely and unsupervised left us no means to intervene when users encountered usability issues. To mitigate this, we conducted cognitive walkthroughs and sandbox pilots before running each study.
Future evaluation would benefit from using more realistic tasks. The tasks in our human studies do not get as complex as professional Haskell programmers may face in a typical production codebase. It would be interesting to see how ChameleonIDE is used against type errors that span multiple files and packages and include more confusing abstractions, like Monads, Monad transformers, and Lenses.
## V Discussion
This paper presents the interactive type debugging tool ChameleonIDE and charts the evolution of its design across several iterations in response to user evaluation and feedback, as well as examines the effectiveness of the general approach compared to traditional static type error messages. We found that programmers using ChameleonIDE are able to debug errors faster than using traditional text-based error messages. This effect is shown more clearly when the task is not trivial. We found that programmers who actively use ChameleonIDE's interactive features are more efficient in fixing type errors than passively reading the type error output. In this section, we will discuss a few interpretations of the results.
### _Effect on Reading Source Code_
From the results of Study 1a, we observed that the choice of debugging tool had little effect on how fast programmers solve simple type errors. Conversely, when facing more realistic problems (longer source code, error locations more scattered) in study 1b, programmers are more effective using ChameleonIDE. One explanation is that ChameleonIDE reduces the amount of reading time by taking programmers more directly to the problem. Earlier studies [31, 32] showed that reading source code is generally the initial step of solving programming problems and is done in several passes. Although traditional compiler error message tools initially show fewer locations, these may be incomplete, meaning that programmers have to expand the reading span without clear guidance. In contrast, ChameleonIDE shows more error locations initially. However, the completeness of error locations assures programmers which part of the source code can be safely skipped.
### _Forming Debugging Plans_
From the results of Study 2, we found that programmers who use the interactive tool fix type errors faster than the ones who passively read the error output. This effect is stronger in harder tasks. We speculate that one factor of this result is that ChameleonIDE helps to develop debugging plans. We observed that when working with ChameleonIDE, programmers form different debugging plans to attack the problem. Among the _high_ interactivity participants in user study 2, some programmers cycle through deduction steps as a guide to reading source code; some navigate to both ends of the deduction chain where types are normally grounded and concrete. In contrast, _minimal_ interactive participants generally form similar plans, including carefully reading the program text and manually annotating expressions based on their understanding of the program.
### _Externalize Intermediate Typing Information_
We speculate that another factor in the effectiveness of ChameleonIDE's interactive debugging tools is that they help programmers effectively chunk intermediate information. With
Fig. 17: Study 2 task completion time (secs.) with 95% confidence intervals.
Fig. 18: Study 2 mode switches by starting mode. Users overwhelmingly switched to the more sophisticated interface mode.
the program shown in Listing 2, ChameleonIDE offers two candidate expressions: f can be typed as Int -> Bool or Char -> Bool; z can be typed as Int or Char. Although these two statements are equivalent in theory, programmers are often required to compute the latter from the former or vice versa. And this computation may extend over multiple layers. Programmers have to remember all the intermediate types and their reasoning throughout such mental gymnastics. Assisted by candidate expression cards and deduction steps, this intermediate information is externalized on screen and can be retrieved anytime. A recent study on working memory [33] suggested this approach may have a positive effect in helping programmers manage cognitive load and free up working-memory space for high-level thinking.
## VI Related Work
### _Finding all type error locations_
Many have studied the approach of finding all locations that contribute to a type error [23, 26, 34, 35]. Type error slicing [23] is a technique that finds locations that are complete and minimal for the type error. Internally labeled constraints and Minimal Unsatisfiable Subset (MUS) generation are used to generate these slices. The language supported in Haack's work was a subset of Standard ML. The original Chameleon [26] used Constraint handling rules (CHR) to support the computing of type error slices in Haskell. Chameleon also supported advanced type-level features (type classes and functionally dependent types). The project also introduced the ability to query type information through a command line interface. Although Chameleon was firmly grounded in results from type theory, its designs were never evaluated with user studies. While finding all error locations is useful in comprehending type errors, it is only 1 of the 7 properties listed in the proposed manifesto of good type error reporting [20]. To the best of our knowledge, ours is the first user-centered evaluation of an interactive type debugging system involving type-error slicing.
### _Producing high-quality error explanation_
One weakness of compiler error messages, in general, is that they fail to explain the error in human language. As put in [36], "Error messages appear to take the form of natural language, yet are as difficult to read as source code." A well-studied approach to producing better error explanations is through ECEM (Enhanced compiler error message). Through a series of mixed-method studies, Prather showed [37] that ECEM has a positive result in understanding compiler errors. Decaf [38] is a tool that can rephrase Java compiler error messages into an enhanced version. In a study of over 200 CS1 students, Decaf was shown to reduce overall errors in their coding practices. Berik proposed a framework [39] for constructing compiler error messages based on argumentation theory, and showed that error messages following a simple argumentation layout or an extended argumentation layout are more human-friendly. These works show the significance of improving the language in the compiler error messages. Most principles and suggestions are followed in ChameleonIDE in constructing error statements. However, these earlier studies were not targeting type errors alone but general compiler errors (some even include runtime errors). The nuances of type errors, such as alternative typing, were not considered. Moreover, these explanation systems were designed specifically for novice users.
### _Interactive Debugging_
Modern programming tools can offer alternative methods of code authoring, display real-time feedback and reveal complex programming contexts through visualizations. Many tools aim to improve the debugging experience using such capabilities. We list two. Hazel Tutor [40] is an interactive type-driven environment for the OCaml language. It can automatically fill type holes by suggesting template expressions (called "strategies" by the authors) through a popup window. It also provides a cursor-based type inspector that allows programmers to query the types of different parts of the program. Whyline [41] is a Java debugging system that allows a user to ask questions like "why does variable X have value Y." It also allows users to interactively ask follow-up questions to gain further knowledge of the nature of an error. These debugging tools are important motivations for developing ChameleonIDE. However, they focus on different aspects of the debugging process. Java Whyline mainly tackles the problem of unintended runtime behavior, while Hazel Tutor specializes in development assistance supported by type holes.
## VII Conclusion
We present ChameleonIDE, a type debugging tool for the Haskell programming language. Its constraint-based type inference engine provides unbiased and comprehensive error location reporting. Our studies evaluated the tool's design with programmers. We found that, particularly for more complex tasks, ChameleonIDE helped programmers to fix type errors more quickly than traditional text-based error messages. Further, programmers actively using ChameleonIDE interactive features are shown to fix type errors faster than simply reading the type error output. ChameleonIDE currently works with the Haskell language, but in the future, we plan to extend the type-checking system to work with other strongly typed languages, such as Rust or TypeScript.
#### Acknowledgments
The work of Peter Stuckey was partially supported by the OPTIMA ARC ITTC, Project ID IC200100009. |
2306.03304 | Plasma flows during the ablation stage of an over-massed
pulsed-power-driven exploding planar wire array | We characterize the plasma flows generated during the ablation stage of an
over-massed exploding planar wire array, fielded on the COBRA pulsed-power
facility (1 MA peak current, 250 ns rise time). The planar wire array is
designed to provide a driving magnetic field (80-100 T) and current per wire
distribution (about 60 kA), similar to that in a 10 MA cylindrical exploding
wire array fielded on the Z machine. Over-massing the arrays enables continuous
plasma ablation over the duration of the experiment. The requirement to
over-mass on the Z machine necessitates wires with diameters of 75-100 $\mu$m,
which are thicker than wires usually fielded on wire array experiments. To test
ablation with thicker wires, we perform a parametric study by varying the
initial wire diameter between 33-100 $\mu$m. The largest wire diameter (100
$\mu$m) array exhibits early closure of the AK gap, while the gap remains open
during the duration of the experiment for wire diameters between 33-75 $\mu$m.
Laser plasma interferometry and time-gated XUV imaging are used to probe the
plasma flows ablating from the wires. The plasma flows from the wires converge
to generate a pinch, which appears as a fast-moving ($V \approx {100}$
kms$^{-1}$) column of increased plasma density ($\bar{n}_e \approx 2 \times
10^{18}$ cm$^{-3}$) and strong XUV emission. Finally, we compare the results
with three-dimensional resistive-magnetohydrodynamic (MHD) simulations
performed using the code GORGON, the results of which reproduce the dynamics of
the experiment reasonably well. | R. Datta, J. Angel, J. B. Greenly, S. N. Bland, J. P. Chittenden, E. S. Lavine, W. M. Potter, D. Robinson, T. W. O. Varnish, E. Wong, D. A. Hammer, B. R. Kusse, J. D. Hare | 2023-06-05T23:13:53Z | http://arxiv.org/abs/2306.03304v2 | Plasma flows during the ablation stage of an over-massed pulsed-power-driven exploding planar wire array
###### Abstract
We characterize the plasma flows generated during the ablation stage of an over-massed exploding planar wire array, fielded on the COBRA pulsed-power facility (1 MA peak current, 250 ns rise time). The planar wire array is designed to provide a driving magnetic field (\(80-100\) T) and current per wire distribution (about 60 kA), similar to that in a 10 MA cylindrical exploding wire array fielded on the Z machine. Over-massing the arrays enables continuous plasma ablation over the duration of the experiment. The requirement to over-mass on the Z machine necessitates wires with diameters of \(75-100\,\mathrm{\SIUnitSymbolMicro m}\), which are thicker than wires usually fielded on wire array experiments. To test ablation with thicker wires, we perform a parametric study by varying the initial wire diameter between \(33-100\,\mathrm{\SIUnitSymbolMicro m}\). The largest wire diameter (\(100\,\mathrm{\SIUnitSymbolMicro m}\)) array exhibits early closure of the AK gap, while the gap remains open during the duration of the experiment for wire diameters between \(33-75\,\mathrm{\SIUnitSymbolMicro m}\). Laser plasma interferometry and time-gated XUV imaging are used to probe the plasma flows ablating from the wires. The plasma flows from the wires converge to generate a pinch, which appears as a fast-moving (\(V\approx 100\,\mathrm{km\,s^{-1}}\)) column of increased plasma density (\(\bar{n}_{e}\approx 2\times 10^{18}\,\mathrm{cm^{-3}}\)) and strong XUV emission. Finally, we compare the results with three-dimensional resistive-magnetohydrodynamic (MHD) simulations performed using the code GORGON, the results of which reproduce the dynamics of the experiment reasonably well.
## I Introduction
Inverse (or "exploding") cylindrical wire arrays are a commonly-used pulsed-power-driven source of magnetized plasma for laboratory astrophysics applications. These arrays consist of a cylindrical cage of thin conducting wires surrounding a central cathode. This magnetic field configuration drives radially-diverging outflows into a vacuum region, providing good diagnostic access.[1] These arrays have previously been fielded on 1-MA university-scale facilities to study a variety of astrophysical phenomena, including magnetized plasma shocks,[2; 3; 4; 5] laboratory magnetospheres,[6] and magnetic reconnection.[7; 8] For such applications, the wire arrays are typically over-massed, so that they provide continuous sustained plasma flows over the duration of the experiment.[9]
On larger pulsed-power machines, such as the Z machine (30 MA peak current, Sandia National Labs),[10; 11] over-massed exploding wire arrays require a larger initial mass due to the higher driving current.[9] For the same wire material, this necessitates more wires and/or the use of larger diameter wires. Although arrays with thin (\(5-40\,\mathrm{\SIUnitSymbolMicro m}\) diameter) wires have been characterized extensively in pulsed-power-driven experiments,[9; 12; 13] there has been little systematic effort to study ablation from thick (\(>50\,\mathrm{\SIUnitSymbolMicro m}\) diameter) wires, especially with Z-relevant \(>100\) T driving magnetic fields.[11]
In cylindrical wire arrays, the maximum driving magnetic pressure is limited by the size of the central cathode and the AK gap (the gap between the anode/wires and cathode). The driving magnetic field in a cylindrical array can be determined from Ampere's law: \(B(t)=\mu_{0}I(t)/(2\pi R)\), which shows that it varies inversely with the radius \(R\) of the array.[9] For an \(R=10\,\mathrm{mm}\) array, the peak driving field on a typical 1-MA university-scale machine is \(B=20\) T. The finite size of the central cathode makes it difficult to achieve \(\sim 100\) T driving magnetic fields using cylindrical arrays on 1-MA university-scale facilities. In order to overcome this limitation, we explore the use of planar wire arrays to test ablation from thick wires in Z-relevant driving magnetic fields.
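To make this scaling concrete, the driving field can be evaluated for representative array radii; the short Python sketch below (not part of the original analysis) simply applies the Ampère's-law expression above.

```python
import numpy as np

MU_0 = 4e-7 * np.pi  # vacuum permeability [H/m]

def drive_field_cylindrical(current, radius):
    """Driving magnetic field B = mu_0 I / (2 pi R) at the wires of a
    cylindrical exploding wire array (Ampere's law)."""
    return MU_0 * current / (2.0 * np.pi * radius)

I_peak = 1.0e6  # peak current of a 1-MA university-scale machine [A]
for R_mm in (5.0, 10.0, 20.0):
    B = drive_field_cylindrical(I_peak, R_mm * 1e-3)
    print(f"R = {R_mm:5.1f} mm -> B = {B:5.1f} T")
# R = 10 mm gives B = 20 T, far below the ~100 T fields of interest here.
```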
These planar wire arrays consist of a linear arrangement of wires separated by a small AK gap from a planar return electrode. This "exploding" planar geometry, which has previously been fielded on 1-MA pulsed-power devices,[14] allows us to achieve higher driving magnetic fields than in cylindrical arrays. We also investigate the use of planar wire arrays as a platform for laboratory astrophysics experiments. Since exploding cylindrical arrays generate azimuthally symmetric flows, the majority of the plasma (and therefore the stored energy) is lost in directions that are not of interest.[7; 8] Moreover, due to radially-diverging flows, the density and advected magnetic field decrease rapidly with distance from the wires.[3; 5] In contrast, planar wire arrays could provide directed flows of denser plasma with higher advected magnetic fields, which can be desirable for many laboratory astrophysics applications. In magnetic reconnection experiments, for instance, this would increase dissipation in the current sheet, which is necessary for studying radiative-cooling effects.[15; 16]
Planar wire arrays were primarily developed as an efficient X-ray radiation source for indirect-drive inertial confinement fusion (ICF) experiments.[17; 18; 19; 20; 21] In contrast to the "exploding" geometry used in this paper, the wire arrays used for X-ray generation typically consist of a linear row of wires between the cathode and anode of the pulsed-power device, without the planar return electrode placed adjacent to the wires. Furthermore, these arrays use thin \(5-20\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires, which implode during the course of the experiment.[17; 19; 20] The arrays are designed such that the implosion time matches the
time of peak current, in order to maximize X-ray emission (hence, they are also called matched arrays).[20] The implosion stage typically proceeds in a cascade-like fashion, where the imploding wires, starting from the outermost wires, accelerate toward the geometric center of the array, to form a strongly-radiating inhomogeneous plasma column.[18; 20] These array configurations have been reported to exhibit peak X-ray power and yield higher than imploding cylindrical arrays with similar number of wires.[19; 20] Bland _et al_. were the first to field the planar wire array in an exploding geometry. This geometry, which consisted of a matched planar array with thin \(7.5\,\mathrm{\SIUnitSymbolMicro m}\) tungsten wires, exhibited a \(5-6\times\) higher ablation rate compared to cylindrical wire arrays, consistent with the increased driving magnetic pressure inside the AK gap.[14] Furthermore, the ablating plasma converged to form a magnetic precursor column offset from the plane of the wires, before exhibiting the cascade-like implosion described above.[14]
In contrast to the previous planar wire array experiments described above, which use matched arrays with thin wires, we use over-massed arrays with thick \(33-100\,\mathrm{\SIUnitSymbolMicro m}\) aluminum wires. This suppresses the implosion stage, and generates continuous plasma ablation over the course of the experiment. In wire arrays, the initial flow of current through the wires forms dense cold wire cores surrounded by low-density coronal plasma.[9; 12] Current density is concentrated in a thin skin region that surrounds the wire cores, and includes the coronal plasma. The coronal plasma is redirected by the \(\mathbf{j}\times\mathbf{B}\) force of the driving magnetic field in the AK gap between the wires and the cathode. In a planar wire array, this generates plasma flows directed away from the AK gap.[14] The ablating plasma advects some magnetic field from the AK gap as it flows outwards, creating outflows of magnetized plasma. In matched arrays, when the stationary wire cores begin to run out of mass (typically \(\sim 50-80\%\) of the initial mass), periodic breaks appear in the wires, driven by the growth of a modified \(m=0\)-like axial instability.[22; 23; 24] This marks the end of the ablation phase, and the beginning of the implosion phase.[9]
Large wire cores can be undesirable for the ablation process. A large wire core diameter relative to the inter-wire separation inhibits the ablation of mass and the advection of magnetic field with the ablating plasma.[25] A large core size may also increase the likelihood of AK gap closure in pulsed-power-driven systems. In wire arrays, closure of the AK gap is undesirable, as it short-circuits the current path, leading to decreased current flow through the wires and reduced/terminated ablation. Previous experiments aimed at characterizing wire core size in imploding wire arrays show that core diameter varies with wire material and initial wire diameter, but is largely independent of the current per wire, and the inter-wire separation.[12]
In this paper, we explore the use of an over-massed exploding planar wire array as a platform for laboratory astrophysics experiments, and as a scaled experiment to investigate the ablation of thick wires in cylindrical wire arrays driven by Z-relevant driving magnetic fields. The array is driven by the COBRA pulsed-power machine (1 MA peak current, 250 ns rise time),[26] and is designed to exhibit a magnetic driving pressure, current per wire, and inter-wire separation, comparable to that of a \(40\,\mathrm{mm}\) diameter exploding wire array driven by a 10 MA current pulse from the Z machine.[15; 16] These experiments, therefore, allow us to investigate wire ablation on smaller 1 MA facilities, in loads designed for use on \(\sim 10\) MA machines. We note that Bland _et al_. also aimed to match the driving magnetic field of a 20 MA, 100 ns rise time current pulse on the Z Machine, to understand imploding cylindrical wire array ablation at higher current per wire and driving magnetic fields. In this paper, we target the driving conditions generated when Z operates in a synchronous long-pulse configuration, with 20 MA peak current (split between two arrays) and a 300 ns rise time.[15; 16] As such, we use the long-pulse mode on COBRA, as described in Sec. II.
The requirement to over-mass on the Z machine necessitates wires with diameters of \(75\)-\(100\,\mathrm{\SIUnitSymbolMicro m}\), which are thicker than wires usually fielded on wire-array z-pinch experiments. To investigate ablation with thicker wires, we vary the initial wire diameter between \(33-100\,\mathrm{\SIUnitSymbolMicro m}\) over multiple shots. The load hardware, as well as the magnetic field and current distributions in the load, are described in Sec. II.1. We characterize the plasma ablation and the reduction in the AK gap for the different wire sizes using laser shadowgraphy, Mach-Zehnder imaging interferometry, and XUV pinhole imaging (Sec. II.2). These experimental results are provided and discussed in Sec. III and Sec. IV. Finally, in Sec. IV.2, we compare the experimental results with three-dimensional resistive magnetohydrodynamic (MHD) simulations performed using GORGON.
Figure 1: (a) 3D CAD representation of the load hardware. The load hardware consists of a planar array of 15 equally-spaced aluminum wires. (b) Side-on view of the load hardware. (c) End-on view of the load hardware.
## II Experimental and Diagnostic Setup
### Load Hardware
Figure 1 shows the load hardware configuration for this experiment. The load consists of a linear array of 15 equally-spaced aluminum wires. The wire-to-wire separation is \(0.83\,\mathrm{mm}\), and the array height is \(12\,\mathrm{mm}\). The wires are separated from a \(10\,\mathrm{mm}\) wide stainless-steel cathode by a \(2\,\mathrm{mm}\) wide AK gap, and are held in position by clamps on the anode plate and on the top of the cathode post. We perform a parametric study by varying the wire diameter between \(33\,\mathrm{\SIUnitSymbolMicro m}\leq d_{\mathrm{wire}}\leq 100\,\mathrm{\SIUnitSymbolMicro m}\) for different experimental shots. The COBRA pulsed-power machine (Cornell University),[26] when operated in long pulse mode, drives a \(1\,\mathrm{MA}\) peak current pulse through the load.[12, 27] A calibrated Rogowski coil placed around the central cathode monitors the current delivered to the load. Figure 2 shows the variation of the current delivered with time, as measured by the Rogowski coil. We show the current pulse averaged over \(6\) successive shots. The current pulse has a double-peaked structure, as it is formed by triggering two Marx generators with a delay between them. The first peak has a magnitude of about \(0.75\) MA, and appears roughly \(125\) ns after current start, while the second peak has a magnitude of approximately \(1\) MA, and appears \(250\) ns after current start. The shot-to-shot deviation in the current pulse for this experimental series is \(<10\%\).
To gain insight into the current distribution and driving magnetic field for the planar wire array, we perform magnetostatic inductance and Biot-Savart calculations of the load hardware.[14] The magnetostatic magnetic field distribution in the planar wire array is shown in Figure 3a. The magnetic field inside the AK gap is nearly uniform with \(y\)-directed field lines, which curve around the outermost wires to form closed loops outside the array. The mean driving magnetic field (at peak current) inside the AK gap is about \(80-100\) T. Figure 3c shows lineouts of the magnetic field strength along the \(y\)-direction inside the AK gap. At the center of the gap (\(x=-1\,\mathrm{mm}\)), the driving magnetic field is uniform in the middle of the array (\(|y|\leq 4\) mm) with a strength of about \(81\) T, but drops sharply to about \(50\) T near the position of the outermost wires (\(y=\pm 6\,\mathrm{mm}\)). This is because in contrast to previous experiments,[14] we use a cathode whose width is smaller than the linear extent of the wires, which decreases the magnetic field strength around the outermost more inductively-favorable wires. Closer to the position of the wires, the magnetic field is dominated by the local magnetic field around each wire, resulting in a periodic variation in the field strength, as seen in Figure 3c. Finally, unlike cylindrical exploding wire arrays, where field lines form closed loops inside the AK gap, and the field decays to zero outside the wires,[2] here the magnetic field lines must form closed loops outside the AK gap in the planar wire array. This means that there is a non-zero vacuum magnetic field in the flow region to the right of the wires, which is expected to be about \(10\%\) of the driving magnetic field from the magnetostatic calculations.
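The field values quoted above come from the magnetostatic inductance and Biot-Savart calculation of the full electrode geometry. A much cruder estimate, sketched below in Python, nevertheless reproduces the \(\approx 80\) T figure at the centre of the gap; it assumes equal current division between the wires, treats the wires as infinite line currents, and models the cathode face as an infinite conducting plane using image currents. These are simplifying assumptions made here for illustration only, not the method used to produce Figure 3.

```python
import numpy as np

MU_0 = 4e-7 * np.pi

def line_current_field(point, wire_xy, current):
    """In-plane field (Bx, By) of an infinite straight line current along z."""
    d = np.asarray(point) - np.asarray(wire_xy)
    r2 = d @ d
    # B = mu0 I / (2 pi r) in the direction z_hat x r_hat
    return MU_0 * current / (2.0 * np.pi * r2) * np.array([-d[1], d[0]])

# Geometry: 15 wires at x = 0, pitch 0.83 mm; cathode face at x = -2 mm
n_wires, pitch, gap = 15, 0.83e-3, 2.0e-3
I_wire = 1.0e6 / n_wires                  # equal current division (assumption)
ys = (np.arange(n_wires) - (n_wires - 1) / 2) * pitch

probe = np.array([-1.0e-3, 0.0])          # centre of the AK gap
B = np.zeros(2)
for y in ys:
    B += line_current_field(probe, (0.0, y), I_wire)         # real wire
    B += line_current_field(probe, (-2 * gap, y), -I_wire)   # image in cathode plane
print(f"|B| at gap centre ~ {np.hypot(*B):.0f} T")            # ~80 T
```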
The simulated current distribution in the wires at peak current (\(1\) MA) is shown in Figure 3b. The current in the wires is symmetric about the \(y=0\) mm plane, and increases slightly with distance from the centerline for the inner wires. The current per wire is about \(60-65\) kA for the inner wires, and increases sharply to approximately \(90\) kA for the outermost wires. The higher current in the inductively-favorable outermost wires has also been reported previously in inductance and wire dynamics model computations of the planar wire array.[14, 18] Bland _et al._ considered both the resistive and inductive division of current between the wires, and found the experimental observations to be more consistent with the inductive current division, driving much higher current to the outermost wires.[14] Due to the higher current, the rate of mass ablation from the outermost wires is expected to be higher. From a rocket model calculation,[9] assuming \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires, we expect the outermost wires to ablate \(50\%\) of their initial mass around \(200\,\mathrm{ns}\) after current start.
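The rocket-model estimate quoted above balances the magnetic force per unit length on a wire, \(I_{\mathrm{wire}}(t)B(t)\), against the momentum flux \(\dot{m}V_{\mathrm{abl}}\) of the ablated plasma. The sketch below illustrates this kind of estimate; the assumed ablation velocity, the piecewise-linear approximation to the current waveform, and the field value at the outermost wires are illustrative inputs chosen here, so the result should be read only as an order-of-magnitude check on the \(\sim 50\%\) figure.

```python
import numpy as np

RHO_AL = 2700.0      # aluminium density [kg/m^3]
V_ABL = 1.5e5        # assumed ablation velocity [m/s] (illustrative)
I_WIRE_PK = 90e3     # peak current in an outermost wire [A] (magnetostatic estimate)
B_PK = 50.0          # driving field near the outermost wires at peak current [T]

def current_shape(t):
    """Crude piecewise-linear approximation of the normalised long-pulse
    waveform: ~0.75 of peak at 125 ns, peak at 250 ns."""
    t = np.asarray(t)
    return np.where(t < 125e-9, 0.75 * t / 125e-9,
                    0.75 + 0.25 * (t - 125e-9) / 125e-9)

t = np.linspace(0.0, 200e-9, 2001)
# Rocket model: mdot per unit length = I_wire(t) * B(t) / V_abl, with both
# I_wire and B scaling with the normalised current waveform.
mdot = I_WIRE_PK * B_PK * current_shape(t) ** 2 / V_ABL
m_ablated = float(np.sum(0.5 * (mdot[1:] + mdot[:-1]) * np.diff(t)))   # [kg/m]

d_wire = 50e-6
m_init = RHO_AL * np.pi * (d_wire / 2) ** 2       # initial mass per unit length [kg/m]
# With these assumptions the fraction comes out near 40%, the same order as
# the ~50% estimate quoted above.
print(f"ablated fraction by 200 ns ~ {m_ablated / m_init:.0%}")
```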
When current flows through the wires, the wires heat up resistively to generate a low-density coronal plasma surrounding the dense wire cores. The ablated plasma is accelerated by the \(\mathbf{j}\times\mathbf{B}\) force; this results in an outward flow of plasma into the region to the right of the wires. Figure 3a also shows the direction and relative magnitude of the \(\mathbf{j}\times\mathbf{B}\) force acting on the wire locations. The \(\mathbf{j}\times\mathbf{B}\) force at the wires points in the \(+x\)-direction for the inner wires, and its magnitude remains roughly constant across them. The outer wires experience a \(\mathbf{j}\times\mathbf{B}\) force directed towards the center of the array. This is due to the bending of the field lines around the outer wires, as observed in Figure 3a. In designing the wire array, we explored different cathode sizes and AK gap widths in the magnetostatic simulations; however, the magnetic field always curves around the outer wires, similar to that in previous experiments,[14] leading to an inward-directed \(\mathbf{j}\times\mathbf{B}\) force. A shorter cathode relative to the linear extent of the wires, however, allows us to reduce the magnetic field strength around the higher current-carrying outermost wires, and thus, make the magnitude of the \(\mathbf{j}\times\mathbf{B}\) force relatively more uniform.
Figure 2: Variation of the current delivered with time for the COBRA generator. We show the current pulse averaged over \(6\) successive shots. The shaded region is the shot-to-shot deviation in the current delivered.
### Diagnostic Setup
We use laser shadowgraphy to visualize the plasma flow from the planar wire array. The shadowgraphy system is set up to provide a side-on view (\(xz\) plane) of the experimental setup. This view is shown in Figure 4a, which is a pre-shot image of the load. As the laser beam propagates through the plasma, electron density gradients deflect the light away from regions of higher density (lower refractive index) towards regions of lower density (higher refractive index). The intensity measured by the detector is thus related to gradients of electron density.[28]
In addition to shadowgraphy, we use a Mach-Zehnder imaging interferometry system to measure the spatially-resolved line-integrated electron density of the plasma. Our interferometry system is set up to provide both an end-on (\(xy\) plane) and a side-on view (\(xz\) plane) of the experimental setup (see Figure 1). When the probe beam propagates through the plasma, the resulting phase accumulated by the beam distorts the fringe pattern, and introduces a spatially-varying fringe shift,[29] which we use to reconstruct the phase difference between the probe and reference beams, and to determine the spatially-resolved line-integrated electron density.[30] The field-of-view of our interferometer includes a volume devoid of plasma, where the fringes remain undistorted. This region of zero fringe shift is chosen as the region of zero density. Both the shadowgraphy and interferometry systems use a 532 nm Nd:YAG laser (150 ps pulse width, 100 mJ) with a 1" diameter field-of-view. In the end-on system, the laser beam enters through a 26.4 mm diameter hole in the anode plate, as shown in Figure 1c. The interferograms and the shadowgraphs are captured simultaneously using Canon EOS DIGITAL REBEL XS cameras. The interferometry and shadowgraphy systems record 1 frame per shot. The shots are reproducible, and we build up dynamics over multiple shots with identical initial conditions.
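The conversion from fringe displacement to line-integrated electron density follows from the plasma refractive index: for an underdense plasma, the accumulated phase is \(\Delta\phi\approx r_{e}\lambda\int n_{e}\,dl\), where \(r_{e}\) is the classical electron radius. The snippet below is a minimal sketch of that conversion for the 532 nm probe (the fringe-shift value is illustrative); in practice the full two-dimensional phase reconstruction is performed by the interferometry analysis software.

```python
import scipy.constants as const

def nel_from_fringe_shift(n_fringes, wavelength=532e-9):
    """Line-integrated electron density [m^-2] from a measured fringe shift,
    valid for n_e << n_crit: delta_phi = r_e * lambda * integral(n_e dl)."""
    r_e = const.physical_constants["classical electron radius"][0]
    delta_phi = 2.0 * const.pi * n_fringes
    return delta_phi / (r_e * wavelength)

# One full fringe of shift at 532 nm:
nel = nel_from_fringe_shift(1.0)
print(f"1 fringe -> {nel * 1e-4:.1e} cm^-2")   # ~4e17 cm^-2
# so ~5e18 cm^-2 near the wires corresponds to roughly a dozen fringes of shift
```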
We also use a time-gated micro-channel plate (MCP) camera to capture extreme-ultraviolet (XUV) self-emission from the plasma. The camera captures 4 frames (10 ns inter-frame time, 5 ns exposure time) recorded on isolated quadrants of the MCP via 200 \(\upmu\)m diameter pinholes. The XUV camera looks onto the wires in the \(yz-\)plane, with an azimuthal viewing angle of \(7.5^{\circ}\) with respect to the \(x\)-axis, and a \(5^{\circ}\) polar angle to the horizontal (\(xy\)) plane. The diffraction-limited spatial resolution of the system, for photon energies between 10-100 eV, is about 180 \(\upmu\)m - 18 \(\upmu\)m, while the geometric resolution is about 300 \(\upmu\)m.
## III Experimental Results

### Shadowgraphs for different wire diameters
Figure 4 shows the side-on (\(xz\)-plane) shadowgraphs for different wire diameters \(d_{\text{wire}}=33-100\,\upmu\)m. In each image, plasma flows from the left to the right, and we mark the initial position of the wires, determined from the pre-shot images, using a white line. We record the shadowgraphs at 150 ns after current start for the 100 \(\upmu\)m diameter wire array, and at 200 ns for the \(33-75\,\upmu\)m diameter arrays. In each shadowgraph, the wires expand to form an opaque region around the initial wire position. In this region, the propagating laser beam is lost, either because the density exceeds the critical density of the propagating light (\(n_{e,crit}\approx 4\times 10^{21}\,\text{cm}^{-3}\)), or due to strong density gradients which refract the light out of the optical system's field of view. This dense region of plasma expands in the \(+x\)-direction, driven by the outwardly-directed \(\mathbf{j}\times\mathbf{B}\) force in
Figure 3: (a) Simulated magnetostatic magnetic field distribution in the load hardware. (b) Simulated current distribution in the wires at peak current, calculated from a magnetostatic inductance calculation. (c) Variation of the magnetic field strength in the AK gap at peak current at \(x=-0.25\), \(-0.5\,\&-1\,\text{mm}\).
the AK gap. Adjacent to the high-density region, we observe a region of relatively uniform density, followed by a narrow column of intensity fluctuations (indicated by a red rectangle in Figure 4). This plasma column, consistent with observations of a magnetic precursor column in the literature,[14, 20] can be observed further away from the wires for the \(33-75\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires, compared to the \(100\,\mathrm{\SIUnitSymbolMicro m}\) wire array, which was recorded at an earlier time.
In addition to expansion in the \(+x\)-direction, the wire cores and the coronal plasma also expand into the AK gap. Figure 5 shows a magnified view of the AK gap for the \(75\,\mathrm{\SIUnitSymbolMicro m}\) wire array, both before the experiment, and at 200 ns after current start. Reduction in size of the AK gap occurs due to wire core and coronal plasma expansion, as well as expansion of plasma from the cathode surface. The cathode surface plasma arises from current-driven ablation at the cathode surface, and photoionization via soft X-ray radiation generated by the wire cores. As observed in Figure 4, the array with \(100\,\mathrm{\SIUnitSymbolMicro m}\) wires exhibits the largest reduction in the AK gap size, while the AK gap remains open for wire diameters \(d_{\mathrm{wire}}\leq 75\,\mathrm{\SIUnitSymbolMicro m}\).
We use the intensity of the shadowgraphs to estimate the diameter of the wire cores, and the size of the cathode surface plasma. We crop the shadowgraph to a smaller window which includes the cathode surface, the AK gap, and the expanding coronal plasma (\(-3\,\mathrm{mm}\leq x\leq 0\,\mathrm{mm}\), \(|z|\leq 5\,\mathrm{mm}\)) (see Figure 5b); then integrate the pixel intensity along the \(z\)-direction. Figure 5c shows the integrated pixel intensity as a function of position \(x\) for the \(75\,\mathrm{\SIUnitSymbolMicro m}\) diameter array, both for the preshot
Figure 4: (a) Shadowgraph of the load hardware recorded before the start of the experiment. (b-f) Shadowgraphs of plasma ablation from the planar wire array for different wire diameters \(33\,\mathrm{\SIUnitSymbolMicro m}\leq d_{\mathrm{wire}}\leq 100\,\mathrm{ \SIUnitSymbolMicro m}\). In each image, plasma flow is from left to right. We indicate the initial position of the wires, determined from preshot images, with a white line. In (d) and (e), we also position b-dot probes in the flow.
Figure 5: (a) Preshot shadowgraph of the AK gap for \(75\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires. (b) Expansion of the cathode plasma, and the wire core and coronal plasma into the AK gap at 200 ns after current start. (c) Integrated pixel intensity from shadowgraphy images of the AK gap. The width labeled ‘A’ represents the width of the cathode surface plasma, and ‘B’ is the radius of the expanded coronal plasma.
and the experimental shadowgraphs. The dark-to-light transitions in the preshot intensity profile represent the positions of the cathode surface and the wires, while those in the shot intensity profile represent the 'edges' of the cathode plasma and the coronal plasma respectively. We fit a sigmoid function -- the cumulative distribution function of a normal distribution -- to the light-to-dark intensity transitions, and determine the wire coronal and cathode plasma 'edges' from the means \(\mu\) of the fitted functions (equivalently, the half-maximum point of the transition). We estimate the uncertainty from the fitted function's standard deviation \(\sigma\). We can then estimate the width of the cathode plasma from the distance between the cathode plasma edge and the position of the cathode surface (quantity A in Figure 5c). Similarly, we determine the coronal plasma radius from the distance between the initial wire position and the position of the coronal plasma edge (quantity B in Figure 5c).
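A minimal sketch of this edge-finding step is shown below: it fits a normal-CDF sigmoid to a one-dimensional intensity profile with scipy and reports the transition position \(\mu\) and width \(\sigma\). The synthetic profile and noise level are placeholders standing in for the integrated shadowgraph lineouts.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def normal_cdf(x, mu, sigma, lo, hi):
    """Sigmoid model for a dark-to-light intensity transition."""
    return lo + (hi - lo) * 0.5 * (1.0 + erf((x - mu) / (np.sqrt(2) * sigma)))

# Synthetic lineout standing in for the z-integrated shadowgraph intensity
x = np.linspace(-3.0, 0.0, 300)                # position [mm]
rng = np.random.default_rng(0)
profile = normal_cdf(x, -0.9, 0.12, 0.2, 1.0) + 0.02 * rng.standard_normal(x.size)

p0 = [x[np.argmin(np.abs(profile - 0.6))], 0.1, profile.min(), profile.max()]
popt, pcov = curve_fit(normal_cdf, x, profile, p0=p0)
mu, sigma = popt[0], abs(popt[1])
print(f"edge position = {mu:.3f} mm +/- {sigma:.3f} mm")
```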
Figure 6a shows the variation of the coronal plasma radius and the cathode plasma width with varying initial wire diameter. The coronal radius increases with increasing initial diameter of the wires, while the width of the cathode plasma remains relatively constant at roughly 0.5 mm. Figure 6b shows the variation of the width of the AK gap with the initial wire diameter. Consistent with the shadowgraphs in Figure 4, the AK gap width decreases with increasing wire diameter. The 33 \(\mu\)m wires exhibit the smallest reduction in gap size, where the gap decreases from about 2.7 mm initially to roughly 1.8 mm during the experiment. In contrast, the 100 \(\mu\)m wires exhibit the largest decrease in gap size, from about 2 mm initially to roughly 0.2 mm 150 ns after current start. The smaller gap size for large wire diameters is primarily due to the increased core and coronal plasma radius of the larger wires. This is in contrast to Bland _et al._, in which the AK gap closure was almost entirely due to the expansion of plasma from the return electrode, and not from the thin 7.5 \(\mu\)m diameter tungsten wires.
The methodology described above provides an upper limit
Figure 6: (a) Variation of the coronal radius and the cathode surface plasma width as a function of initial wire diameter. (b) Variation of the AK gap size with initial wire diameter. The range of values shown here comes from variation in the \(z\)-direction.
Figure 7: Plasma ablation from a planar wire array with 50 \(\mu\)m diameter wires at (a) 150 ns, (b) 200 ns, and (c) 250 ns after current start. Shadowgraphs are recorded in separate experimental shots. The red box shows the position of the plasma column, which travels at roughly 100 km s\({}^{-1}\) between 150-250 ns. Note that in (b), we have placed a b-dot probe in the flow.
on the wire core diameter. This is because the opaque region in the shadowgraph includes both the wire core and the surrounding coronal plasma. Moreover, density gradients in this region refract the light away from the relatively denser wire cores, resulting in a magnified image. Nevertheless, the shadowgraphs reflect the general trend observed in the variation of the core size with wire diameter. X-ray backlighter imaging, which can probe deeper into the core region, can provide a better estimate of the core size. However, this diagnostic was not available for this experimental series. Previous experiments aimed at characterizing the core size report that values determined from shadowgraphs can be \(5-10\times\) larger than those determined from simultaneous X-ray imaging.[12]
### Temporal evolution of ablation from the array
We compare side-on shadowgraphs for \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wire arrays at \(150\,\mathrm{ns}\), \(200\,\mathrm{ns}\), and \(250\,\mathrm{ns}\) in Figure 7. These shadowgraphs are recorded in separate experimental shots, with identical load hardware. The plasma column (red box in Figure 7) on the right of the image travels in the \(+x-\)direction, from \(x\approx 12\,\mathrm{mm}\) at \(150\,\mathrm{ns}\) to \(x\approx 22\,\mathrm{mm}\) at \(250\,\mathrm{ns}\) after current start. This corresponds to an average velocity of about \(100\,\mathrm{km}\,\mathrm{s}^{-1}\), which is consistent with the magnitude of flow velocity observed in previous wire array experiments.[6; 31] The outward translation of the plasma column in our over-massed array is in contrast to that observed in the under-massed case, where the column remains mostly stationary (\(V<15\,\mathrm{km}\,\mathrm{s}^{-1}\)) between the time of formation and implosion.[14]
The AK gap remains open throughout the experiment. The temporal evolution of the coronal plasma radius and the cathode plasma width are shown in Figure 8a. The measured coronal radius decreases weakly with time, from about \(1.2\,\mathrm{mm}\) at \(150\,\mathrm{ns}\) to about \(0.75\,\mathrm{mm}\) at \(250\,\mathrm{ns}\) after the current start. In contrast, the width of the cathode surface plasma remains roughly constant at about \(0.5\,\mathrm{mm}\). Due to the decreasing coronal plasma radius, the AK gap also becomes slightly larger with time, as observed in Figure 8b.
### Instability Growth
Axial perturbations of the coronal plasma appear in the AK gap, as observed in Figure 5b. The presence of this axial instability is consistent with previous studies of wire array ablation.[32; 33] In Figure 9, we characterize the amplitude and wavelength distribution of the instability as a function of the initial wire diameter. We determine the amplitude from half the peak-to-valley distance of the plasma-vacuum interface in the AK gap, which we characterize using the interface-detection technique similar to that described in Sec. III.1. Similarly, we estimate the wavelength of the instability from the peak-to-peak separation of the perturbations at the plasma-vacuum boundary. In Figure 9, the red solid line and the blue dashed line represent the median and mean of the distribution respectively. The bottom and top sides of the rectangle represent the \(25^{th}\) and \(75^{th}\) percentile (i.e. the interquartile range), while the end caps show the full range of the distribution. The mean and median values for the perturbation amplitude are similar for most wire diameters, and remain largely invariant of the initial wire diameter, with a value of about \(50\,\mathrm{\SIUnitSymbolMicro m}\). Both
Figure 8: (a) Temporal variation of the coronal radius and cathode plasma size for \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires. (b) Temporal variation of AK gap for \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires. Here, each data point comes from a separate experimental shot, and the range of values shown here comes from variation in the \(z\)-direction.
Figure 9: Variation of the mean amplitude and peak-to-peak separation of the axial instability in the wire core and coronal plasma as a function of initial wire diameter. Values as calculated at \(200\,\mathrm{ns}\) after current start for wire diameters \(33-75\,\mathrm{\SIUnitSymbolMicro m}\), and at \(150\,\mathrm{ns}\) after current start for the \(100\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires. The red solid line and the blue dashed line represent the median and mean of the distribution respectively. The bottom and top sides of the rectangle represent the \(25^{th}\) and \(75^{th}\) percentile, while the end caps show the full range of the distribution.
the 33 \(\upmu\)m and 100 \(\upmu\)m wires exhibit a relatively higher upper range of the perturbation amplitude, showing the existence of large amplitude perturbations. The wavelength distribution remains largely independent of the initial wire diameter, with a median peak-to-peak separation of about 200 \(\upmu\)m. We can also measure the temporal variation of the amplitude and wavelength from the shadowgraphs in Figure 7. Both the amplitude and wavelength of the instability exhibit little variation with time, which indicates saturation of the instability growth. We note that the shadowgraphs provide a line-integrated (along the \(y\)-direction) view of the perturbation. Therefore, the process of extracting wavelength from the peak-to-peak separation, for the case where peaks from multiple wires overlap along the line-of-sight, becomes more complicated.
### Electron density measurements
Figure 10a & b show the side-on (\(xz\)-plane) interferogram, together with the line-integrated electron density, recorded at \(t=150\) ns after current start for the array with 50 \(\upmu\)m diameter wires. We indicate the initial position (\(x=0\) mm) of the wires using a red line. Close to the wires, the high-density plasma forms an opaque region where the probing beam is lost, similar to that in the side-on shadowgraphy images (Figure 4). Adjacent to this opaque region, where the density is lower, interference between the probe and reference beams forms periodic bright and dark fringes. In this region, plasma flow from the wires distorts the fringe pattern, whereas, in regions devoid of plasma, the fringes appear undistorted. We trace the fringes by hand, and post-process the traced images in MAGIC2 to calculate the line-integrated electron density from the distortion of the fringes.[30] As expected due to time-of-flight effects, electron density is high close to the wires, and decreases with distance from the array. At the plasma-vacuum boundary (\(x\approx 12-15\) mm), the plasma forms a discontinuous column of enhanced electron density. The sharp rise in the electron density in this region indicates the presence of a shock-like structure. The width of the transition is about 2 mm. The shape of the plasma column exhibits significant
Figure 10: (a) Side-on raw interferogram for the 50 \(\upmu\)m diameter wire array at 150 ns, using a Mach-Zehnder interferometer with a 532 nm laser. (b) Side-on line-integrated electron density map determined from interferometry. (c) End-on raw interferogram at 150 ns after current start for the 50 \(\upmu\)m diameter wire array, recorded during the same experimental shot. (d) End-on line-integrated electron density map determined from interferometry. Regions in grey near the wires represent locations where the probing beam is lost.
modulation in the axial direction, consistent with what we observe in the simultaneously-recorded shadowgraph of the load (Figure 7a).
Figure 10c & d show the end-on (\(xy\)-plane) interferogram and line-integrated density recorded 150 ns after current start. The probing laser beam in the region \(x<4\) mm is blocked by the load hardware, but the flow region \(x\geq 4\) mm is illuminated via the laser feed shown in Figure 1c. As seen in Figure 10c & d, plasma flow emanating from the wires is redirected towards the center (\(y=0\) mm) of the array. The converging flows collide or 'pinch', forming a region of enhanced density, roughly 12-15 mm from the wires. The pinch has typically been referred to as the 'magnetic precursor column' in the wire array literature, as it is the precursor to the final implosion phase, which we suppress in these experiments by over-massing the wire array.[14] The position of the pinch is consistent with that of the column of enhanced electron density observed in the side-on electron density map (Figure 10b).
In Figure 10b, at \(x\approx 4\) mm from the wires, the line-integrated density is \(\langle n_{e}L_{y}\rangle\approx 5-6\times 10^{18}\,\mathrm{cm}^{-2}\), which falls to \(\langle n_{e}L_{y}\rangle\approx 0.4\times 10^{18}\,\mathrm{cm}^{-2}\) at 11 mm from the wires, right before the position of the pinch. From end-on interferometry (Figure 10d), we estimate the integration length scale \(L_{y}\) by computing the extent of the plasma in the \(y\)-direction. This gives us values of \(L_{y}(x=4\,\mathrm{mm})\approx 8\,\mathrm{mm}\), and \(L_{y}(x=11\,\mathrm{mm})\approx 4\,\mathrm{mm}\). The average electron densities, inferred from \(\langle n_{e}L_{y}\rangle/L_{y}\), are therefore \(\bar{n}_{e}\approx 4\times 10^{18}\,\mathrm{cm}^{-3}\) at \(x=4\,\mathrm{mm}\), and \(\bar{n}_{e}\approx 1\times 10^{18}\,\mathrm{cm}^{-3}\) at \(x=11\,\mathrm{mm}\) from the wires. In Figure 10b, the pinch exhibits a line-integrated density of \(\langle n_{e}L_{y}\rangle\approx 0.8\times 10^{18}\,\mathrm{cm}^{-2}\) at approximately 13 mm from the wires. Assuming a length scale \(L_{y}\approx 4\,\mathrm{mm}\), the average electron density in the pinch is \(\bar{n}_{e}\approx 2\times 10^{18}\,\mathrm{cm}^{-3}\). This represents a roughly \(2\times\) jump in the electron density at the pinch compared to the flow upstream of the pinch.
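The bookkeeping behind these path-averaged densities is simply \(\bar{n}_{e}=\langle n_{e}L_{y}\rangle/L_{y}\); a short sketch using the values quoted above is:

```python
# Average electron density from line-integrated interferometry, n = <n_e L_y> / L_y
nel_upstream, L_upstream = 0.4e18, 0.4   # [cm^-2], [cm]  (x = 11 mm, ahead of the pinch)
nel_pinch,    L_pinch    = 0.8e18, 0.4   # [cm^-2], [cm]  (x ~ 13 mm, at the pinch)

n_upstream = nel_upstream / L_upstream   # ~1e18 cm^-3
n_pinch = nel_pinch / L_pinch            # ~2e18 cm^-3
print(f"density jump at the pinch ~ {n_pinch / n_upstream:.1f}x")
```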
### XUV Self-Emission
XUV self-emission images from the load hardware with the 50 um diameter wires are shown in Figure 11. The wires and the cathode appear as regions of bright emission. Emission from the inner wires appears uniform in intensity, indicating roughly equal current distribution in the wires, as predicted by the magnetostatic calculation (Figure 3). The outer wires, however, appear dimmer, which may indicate that the current has switched to the inner wires due to the higher initial rate of ablation from the outer wires, as indicated by the rocket model calculation in Sec. II.1. Although the wires appear as well-separated columns of emission, the resolution of the optical system prevents us from making quantitative measurements of the core diameter from the XUV images. We observe that the flows from the wires converge to form the pinch, which appears as a brightly-glowing column oriented in the \(z\)-direction. The increased emission from the pinch is consistent with its higher electron density (Figure 10). Furthermore, shock heating and Ohmic dissipation in the pinch may also contribute to a higher temperature, and consequently, higher radiative emission. The XUV images also exhibit the axial non-uniformity in the shape of the pinch, consistent with the interferometry and shadowgraphy results (Figure 7a and Figure 10b). Finally, the structure of the plasma ablation and the pinch remains roughly invariant across the different frames over the observation window of \(150-185\,\mathrm{ns}\). This is in contrast to the shadowgraphy and interferometry images (Figure 7 & Figure 10) which show significant (\(V\approx 100\,\mathrm{km}\,\mathrm{s}^{-1}\)) motion
Figure 11: XUV self-emission images of a planar wire array with 50 μm diameter wires. The pinch appears as a column of bright emission.
Figure 12: XUV self-emission images of a planar wire array with (a) 50 μm diameter wires, and (b) 100 μm diameter wires. Wires are easily distinguishable in the 50 μm case, but not for the 100 μm diameter wires. (c) Lineouts of intensity along \(z=4\) mm for 50 μm and 100 μm diameter wires.
of the pinch. This may indicate that this is a 'ghost' image of the pinch, recorded when the MCP is not triggered, due to radiation bleed-through at the time when emission from the pinch is at a maximum.
In Figure 12, we compare the XUV emission from the planar wire arrays with \(50\,\mathrm{\SIUnitSymbolMicro m}\) and \(100\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires respectively. In both cases, the pinch is visible as a bright column of emission, and the shape of the pinch is similar between the two images. For the \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter case, the wires appear as discrete columns of enhanced emission, as can be observed in lineouts of the intensity at \(z=4\) mm (Figure 12c). In contrast, the wires are not easily distinguishable for the thicker \(100\,\mathrm{\SIUnitSymbolMicro m}\) wires, consistent with the core size becoming comparable to the inter-wire separation, as seen in Figure 6.
## IV Discussion of results
### AK Gap and Wire core size
Previous wire array experiments with \(\sim 10\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires show that the diameters of the wire cores and the surrounding corona both increase with initial wire diameter, and are largely independent of the current per wire and the inter-wire separation.[12] Our experimental results are consistent with this effect -- Figure 6a exhibits a roughly linear increase in the coronal radius with increasing wire diameter, and the measured coronal diameter is roughly \(20-25\times\) the initial wire diameter. While the coronal radius increases with the initial wire diameter, the size of the cathode plasma remains relatively constant (see Figure 6a). This is expected since changing the initial wire diameter is not likely to affect the current distribution through the cathode. The larger coronal radius is, therefore, the primary reason for gap closure in the thick \(100\,\mathrm{\SIUnitSymbolMicro m}\) case. The gap closes at \(150\) ns after current start, which makes \(d_{\mathrm{wire}}=100\,\mathrm{\SIUnitSymbolMicro m}\) an undesirable wire diameter, both in these planar wire experiments and on the Z experiments for which these experiments are a scaled test. Furthermore, the coronal radius also becomes larger than the inter-wire separation in this case, which inhibits plasma ablation and magnetic field advection from the array.[25]
The early gap closure for the \(100\,\mathrm{\SIUnitSymbolMicro m}\) diameter case could be a consequence of lower Ohmic heating in the skin region around the wire core. The initial electrical explosion of the wires forms a dense cold wire core consisting of vapor and microscopic liquid metal droplets. Without further Ohmic heating by the current, we would expect the wire core radius \(R(t)\) to expand isotropically at a rate comparable to its local sound speed \(C_{\mathrm{core}}\) into the vacuum, i.e. \(dR/dt\sim C_{\mathrm{core}}\). For thin wires, the current flowing over the wire core surface in the skin region Ohmically heats the material at the edge of the core, forming coronal plasma that is redirected by the global \(\mathbf{j}\times\mathbf{B}\) force. However, if the wire core is sufficiently large, current density in the skin region \(j_{\mathrm{skin}}=I(t)/(2\pi R(t)\delta)\) will be lower. Here, \(I(t)\) is the driving current, and \(\delta=\sqrt{2\eta/\omega\mu}\) is the resistive skin depth, which depends on the material resistivity \(\eta\), the angular frequency \(\omega\) of the driving current, and the medium permeability \(\mu\). Consequently, for a large initial wire diameter, the Ohmic heating rate \(\eta\,j_{\mathrm{skin}}^{2}\) may be too small to ionize all of the expanding gas. This will allow neutral gas expanding out of the wire core to remain unionized, and thus, unaffected by the \(\mathbf{j}\times\mathbf{B}\) force expelling it from the AK gap. It is this neutral gas which may be responsible for the observed gap closure. In future experiments, the importance of neutral gas expansion could be tested by exploiting the different refractive indices of plasma and neutral gas using two-color optical measurements.[34]
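This scaling argument can be made semi-quantitative: at fixed drive frequency and resistivity, \(j_{\mathrm{skin}}\propto 1/R(t)\), so the specific Ohmic heating \(\eta\,j_{\mathrm{skin}}^{2}\) falls as \(1/R(t)^{2}\). The sketch below evaluates \(\delta\) and \(j_{\mathrm{skin}}\) for two illustrative core radii; the resistivity (cold solid aluminum) and the effective drive frequency (a quarter period of the 250 ns rise) are rough assumed values, so only the relative scaling should be trusted.

```python
import numpy as np

MU_0 = 4e-7 * np.pi
ETA = 2.7e-8                        # resistivity [Ohm m] -- cold solid aluminium, a rough stand-in
OMEGA = 2 * np.pi / (4 * 250e-9)    # effective angular frequency of the current rise [rad/s]
I_WIRE = 60e3                       # current per wire [A]

delta = np.sqrt(2 * ETA / (OMEGA * MU_0))   # resistive skin depth [m]
print(f"skin depth ~ {delta * 1e6:.0f} um")

for R_core in (0.25e-3, 0.5e-3):            # illustrative expanded core radii [m]
    j_skin = I_WIRE / (2 * np.pi * R_core * delta)
    heating = ETA * j_skin**2               # Ohmic heating rate density [W/m^3]
    print(f"R = {R_core*1e3:.2f} mm: j_skin ~ {j_skin:.1e} A/m^2, "
          f"eta*j^2 ~ {heating:.1e} W/m^3")
# Doubling the core radius quarters the specific Ohmic heating in the skin region.
```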
For the \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wire array, the AK gap remains open late in time (t = 250 ns), and the coronal radius does not exceed the inter-wire separation, which is desirable for good ablation from the array (Figure 8). In imploding wire arrays, the coronal radius increases initially in time, and then saturates to a constant value.[12] The time of saturation, typically \(80-100\) ns after current start, corresponds to a change in the magnetic field topology around the wires, when the driving global \(\mathbf{j}\times\mathbf{B}\) force becomes strong enough to overcome the expansion of the coronal plasma, and redirects it to generate ablation streams.[12] In our experiments, we image the wires after the expected time of saturation, and therefore, do not expect significant temporal variation in the size of the wire cores between 150-250 ns after current start. As observed in Figure 8a, the coronal plasma radius decreases weakly with time at a rate of about \(2\,\mathrm{\SIUnitSymbolMicro m}\,\mathrm{ns}^{-1}\) for \(50\,\mathrm{\SIUnitSymbolMicro m}\) diameter wires.
The axial instability of the wires is ubiquitous in wire array z-pinch experiments, and is thought to be a modified \(m=0\)-like instability, which exhibits a constant amplitude and wavelength later in time, and is largely independent of initial wire diameter, and current per wire.[22, 9, 23] The time of saturation of the instability also corresponds to the time at which the wire cores cease to grow.[22, 23] In Figure 9, we observe that the distributions of the amplitude and the peak-to-peak separation of the perturbations remain largely independent of the initial wire diameter, consistent with observations of the axial instability in imploding wire array z-pinches. The amplitude and the peak-to-peak separation also exhibit minimal variation in time between 150-250 ns, which is well after the expected time of saturation of the instability (\(\sim$80-100\) ns).[12, 23]
### Pinch formation and comparison with 3D resistive MHD simulations
To obtain greater insight into the ablation process, we perform three-dimensional simulations of the planar wire array using GORGON, a 3D (Cartesian, cylindrical, or polar coordinate) Eulerian resistive magnetohydrodynamic (MHD) code with van Leer advection and separate energy equations for ions and electrons.[35, 33] We use an optically thin recombination-bremsstrahlung radiation loss model, modified with a constant multiplier to account for line radiation, and a Thomas-Fermi equation-of-state to determine the ionization level.[35] We simulate a planar wire array with the same geometric dimensions and wire material as in the experimental setup. The current pulse applied to the load was determined from a three-term sum-of-sines fit [\(\sum_{i}a_{i}\sin(b_{i}t+c_{i})\)] to the integrated Rogowski signal shown in Figure 2. The simulation domain is a cuboid with dimensions \(51.2\times 50.4\times 38\) mm\({}^{3}\). We use an initial wire diameter of \(50\,\mathrm{\SIUnitSymbolMicro m}\) in the simulation, with the initial mass of the wire distributed over a \(400\,\mathrm{\SIUnitSymbolMicro m}\) diameter circular pre-expanded wire core. The simulations are performed with a grid size of \(50\,\mathrm{\SIUnitSymbolMicro m}\). The driving magnetic field, calculated from Ampere's law, is applied as a
boundary condition at the bottom-most cells of the simulation domain, between the anode and cathode of a coaxial transmission line. The load geometry is implemented as stationary electrode material with realistic conductivity on top of the coaxial line.
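A sketch of how such a sum-of-sines fit can be constructed with scipy is shown below; the sampled waveform here is a synthetic double-peaked pulse standing in for the measured Rogowski trace, and the initial parameter guesses are chosen from its known harmonic content.

```python
import numpy as np
from scipy.optimize import curve_fit

def sum_of_sines(t, a1, b1, c1, a2, b2, c2, a3, b3, c3):
    """Three-term sum-of-sines model, I(t) = sum_i a_i sin(b_i t + c_i)."""
    return (a1 * np.sin(b1 * t + c1)
            + a2 * np.sin(b2 * t + c2)
            + a3 * np.sin(b3 * t + c3))

# Synthetic double-peaked pulse standing in for the measured current
t = np.linspace(0.0, 400e-9, 400)
i_meas = (0.75e6 * np.sin(np.pi * t / 250e-9) ** 2
          + 0.35e6 * np.sin(np.pi * t / 500e-9) ** 2)

# Initial guesses from the known harmonic decomposition of the synthetic pulse
p0 = [0.375e6, 2 * np.pi / 250e-9, -np.pi / 2,
      0.175e6, 2 * np.pi / 500e-9, -np.pi / 2,
      0.55e6, 1e5, np.pi / 2]
popt, _ = curve_fit(sum_of_sines, t, i_meas, p0=p0, maxfev=20000)
print("max fit residual [A]:", np.max(np.abs(sum_of_sines(t, *popt) - i_meas)))
```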
Figure 13 shows the simulated current density distribution at a slice through the array mid-plane (\(z=0\) mm) at 100, 150, and 200 ns after current start. As expected due to the skin effect, current density is concentrated on the outer surfaces of the cathode and the wires. The plasma from the outermost wires carries significantly more current than that from the inner wires, consistent with our magnetostatic prediction (Figure 3). Similar to the experiment, the converging plasma flows collide at the \(y=0\) mm mid-plane to form the pinch. The pinch appears as a region of high current density, comparable to that in the wires. Figure 13a shows a three-dimensional rendering of the current distribution in the load at \(t=150\) ns. The pinch carries a significant amount of current, and provides a secondary path for the current to close between the anode and the cathode. In our simulation, the current in the pinch, determined from the area integral of the current in the dashed box shown in Figure 13, is roughly 30% of that in the wires. This is consistent with the estimate of \(30-40\)% provided by Bland _et al._[14]
Figure 13 also shows the distribution of magnetic field lines in the planar wire array. We determine the field lines from contours of the \(z\)-component of the magnetic vector potential \(A_{z}\). Near the AK gap, the magnetic field topology is similar to that calculated from our magnetostatic simulation (see Figure 3). The magnetic field lines are straight and uniform inside the AK gap, and bend around the outer wires to form closed field lines outside the array. The field inside the AK gap is significantly stronger than that outside, as observed from the relatively high density of lines in this region. The ablating plasma advects some of the magnetic field from inside of the array to the outside. Near the inner wires, the advected magnetic field lines are straight and uniform, oriented along the \(y\)-direction. However, away from the centerline (\(y=0\) mm) and toward the edges of the plasma flow, the field lines bend, driven by the inward-directed ablation from the outermost wires. The spatial variation of the driving magnetic field along the \(z\)-direction inside the AK gap is small (\(<5\)%).
The arrows in Figure 13 show the direction of the \(\mathbf{j\times B}\) force acting on the ablated plasma. Near the inner wires, where the magnetic field is oriented in the \(y\)-direction, the force is directed in the \(+x\)-direction, whereas at the outermost wires, the curvature of the magnetic field results in a force directed towards the center of the array. As the plasma propagates away from the wires, the bending of the field lines due to the flows from the outermost wires results in an inward-directed (along the \(x\)-direction) \(\mathbf{j\times B}\) force. This drives the collision of the plasma flows emanating from the wires, and the formation of the pinch. Similar to a traditional z-pinch, the magnetic field lines form closed circular loops, and the \(\mathbf{j\times B}\) force is directed towards the center of the pinch. However, unlike a traditional z-pinch, the pinch experiences both mass and momentum injection from the left side of the pinch. Driven by the magnetic and thermal pressure of the plasma behind the pinch and the magnetic tension of the bent field lines, the pinch accelerates in the \(+x\)-direction. In the simulation, the center of the pinch travels about 8 mm between \(150-250\) ns, resulting in an average velocity of \(80\) km s\({}^{-1}\). This is about 20% lower than that inferred from the translation of the pinch in the experimental shadowgraphs (Figure 7). The flow upstream of the pinch is both supersonic (\(M_{S}\approx 6\)) and super-Alfvenic (\(M_{A}\approx 2\)), similar to that observed in previous pulsed-power-driven experiments of aluminum wire arrays.[3; 4; 5] Due to the high current density in the pinch, it is a site of strong Ohmic dissipation. The electron temperature inside the pinch is \(T_{e}\approx 100\) eV, which is significantly higher than that in the plasma flow behind the pinch \(T_{e}\approx 6.5\) eV. The temperature of plasma near the wires is also about 5 eV, which may explain
Figure 13: Current distribution in the load hardware from 3D resistive MHD simulations of the experiment. Here, we show the current distribution on a slice through the array midplane (\(z=0\) mm). The black arrows represent the direction of the \(\mathbf{j\times B}\) force, while the green lines are contours of the \(z\)-component of the magnetic vector potential \(A_{z}\), which we use to represent the magnetic field lines.
why the wires appear dimmer in the XUV images compared to the pinch (Figure 11).
Figure 14b shows a slice of the simulated electron density at the array mid-plane (\(z=0\) mm) at 150 ns after current start. The electron density distribution appears similar to that in the experiment (Figure 10). Electron density is high closer to the wires, and falls with increasing distance in the \(x\)-direction, consistent with time-of-flight effects. The plasma flow from the inner wires is directed outwards along the \(x\)-direction, while that from the outermost wires is directed towards the center of the array. The flow converges, similar to that in the experiment, to form a pinch. Figure 14c shows a side-on slice of the electron density through the array mid-plane (\(y=0\) mm). The pinch appears as a discontinuous region of enhanced electron density at the vacuum-plasma boundary, similar to that in the experiment. Immediately behind the pinch, both the end-on and side-on slices show a region of lower density, consistent with the relatively-uniform intensity region observed in the experimental shadowgraphs (Figure 7). The electron density inside the pinch is \(n_{e}\approx 4\times 10^{19}\) cm\({}^{-3}\), while that in the plasma flow just behind the pinch is \(2\times 10^{18}\) cm\({}^{-3}\). Our experimentally inferred value of the density behind the pinch is consistent with the simulation, while that for the pinch is lower than the density observed in the simulation. This may indicate that the integration length scale used to determine the experimental estimate of density is an overestimation of the true value. As observed in Figure 14a, the width of the simulated pinch is approximately 1.5 mm, whereas we estimate a value of about 4 mm from our experimental end-on density map (Figure 10d). This discrepancy can result from line integrating through the axially-modulated pinch, which widens the observed width of the pinch in the line-integrated density map (Figure 10). Using an integration length of 1.5 mm results in an electron density of \(\bar{n}_{e}\approx 5\times 10^{18}\) cm\({}^{-3}\), which is closer to, although still lower than, the density of the pinch in the simulation.
Figure 14e and Figure 14f show the line-integrated electron density in the end-on (\(xy\)-plane) and side-on (\(xz\)-plane) planes respectively. In Figure 14d, we compare line-outs of the simulated electron density integrated along the \(y\)-direction with that from the experiment at \(z=2\) mm. In both the experiment and the simulation, the line-integrated density falls with distance from the wires, and the pinch appears as a local enhancement of the density at the plasma-vacuum boundary. The magnitude of the line-integrated electron density in the simulation is comparable to that from the experiment. Note that the sharp increase in density at the pinch, as observed in Figure 14c, is muted by line integration. The high density and the temperature of the pinch both contribute to the strong emission from the pinch, visible in the experimental XUV self-emission images (Figure 11).
In contrast to the experiment, where the pinch is located at \(x\approx 12-15\) mm from the wires, the simulated pinch is closer to the wires (\(x\approx 10\) mm) at this time. The slower velocity of
Figure 14: (a) Simulated current distribution in the load hardware at 150 ns after the current start. (b) End-on (\(xy\)-plane) slice of the simulated electron density at the array mid-plane at 150 ns. (c) Side-on (\(xz\)-plane) slice of the simulated electron density at the array mid-plane at 150 ns. (d) Comparison of the line-integrated (along \(y\)) electron density between the experiment and the simulation. (e) End-on line-integrated electron density. (f) Side-on line-integrated electron density. Plasma flow is from left to right. Wire positions are indicated by X’s in (a) & (c), and by red lines in (b) & (d). The simulation shows converging flows, and the formation of a pinch roughly 8 mm from the wires.
the pinch in the simulation indicates a comparatively smaller driving force behind the pinch. We expect the pinch to be driven outwards due to the magnetic and thermal pressures of the plasma behind it, and by the magnetic tension of the bent field lines. A simple similarity argument can be used to show that the characteristic velocity of the pinch should be comparable to the local magnetosonic velocity \(V_{pinch}^{2}\sim(V_{A}^{2}+C_{S}^{2})\). Here, \(V_{A}\) is the Alfven speed, and \(C_{S}\) is the sound speed of the plasma right behind the pinch. Previous comparison of experimental results with simulations indicates that the local thermodynamic equilibrium Thomas-Fermi model implemented in the simulation underestimates the temperature of the plasma.[3] The Alfven speed \(V_{A}=B/\sqrt{\mu_{0}\rho}\) is a function of the magnetic field, which, in turn, depends on the magnetic Reynolds number \(R_{m}=UL/\bar{\eta}\). Here, \(U\) and \(L\) are the characteristic velocity and length scales of the plasma, \(\rho\) is the mass density, and \(\bar{\eta}\sim\bar{Z}T_{e}^{-3/2}\) is the magnetic diffusivity, which varies with the electron temperature \(T_{e}\) and the average ionization \(\bar{Z}\) of the plasma. A lower temperature leads to a lower \(R_{m}\), and thus a relatively smaller advected field. This can reduce the magnetic pressure behind the pinch, and therefore contribute to a smaller velocity. In future experiments, optical Thomson scattering could be used to simultaneously characterize the velocity and temperature of the plasma.[36]
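To make these scalings concrete, the sketch below evaluates the Alfven speed, ion sound speed, magnetosonic estimate of the pinch velocity, and a Spitzer-like magnetic Reynolds number for an assumed set of plasma parameters; the field strength, temperature, ionization, and Coulomb logarithm are illustrative assumptions, not values inferred from the experiment.

```python
# Order-of-magnitude sketch of V_pinch^2 ~ V_A^2 + C_S^2 and R_m = U*L/eta_bar.
# All plasma parameters below are assumed for illustration only.
import numpy as np
from scipy.constants import mu_0, m_p, e

B = 20.0                 # advected magnetic field behind the pinch [T] (assumed)
n_e = 2e18 * 1e6         # electron density [m^-3] (2e18 cm^-3, flow behind the pinch)
Z_bar = 3.0              # average ionization (assumed)
T_e = 15.0               # electron temperature [eV] (assumed)
A = 27                   # aluminum mass number
gamma, ln_Lambda = 5.0 / 3.0, 10.0

rho = (n_e / Z_bar) * A * m_p                        # mass density [kg m^-3]
V_A = B / np.sqrt(mu_0 * rho)                        # Alfven speed [m/s]
C_S = np.sqrt(gamma * Z_bar * T_e * e / (A * m_p))   # ion sound speed [m/s]
V_pinch = np.sqrt(V_A**2 + C_S**2)                   # magnetosonic estimate [m/s]

eta = 5.2e-5 * Z_bar * ln_Lambda / T_e**1.5          # approximate Spitzer resistivity [Ohm m]
eta_bar = eta / mu_0                                 # magnetic diffusivity [m^2/s]
U, L = 100e3, 10e-3                                  # flow speed [m/s], length scale [m]
R_m = U * L / eta_bar

print(f"V_A ~ {V_A/1e3:.0f} km/s, C_S ~ {C_S/1e3:.0f} km/s, V_pinch ~ {V_pinch/1e3:.0f} km/s")
print(f"R_m ~ {R_m:.0f}")
```

With these assumed values the estimate is comparable to the measured flow velocities, and lowering the assumed temperature reduces \(R_{m}\) and the advected field, consistent with the argument above.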
Finally, in both the experiment and the simulation, the shape of the pinch exhibits an axial non-uniformity. As observed in Figure 10b and Figure 14c, the pinch appears closer to the wires at the bottom of the array, and further away near the top. Our simulations indicate that this axial non-uniformity in the shape of the pinch may be related to flow over the extended anode plate (see Figure 1), which prevents the expansion of the pinch at the bottom of the array, and also modifies the current distribution in the electrodes. When the simulations are repeated without an extended anode plate, the axial modulation in the pinch structure is reduced. In future experiments, we can mitigate this effect by exploring array geometries that do not require the extended anode plate.
## V Conclusions
In this paper, we explore the use of an over-massed planar wire array as a platform for laboratory astrophysics experiments, and as a scaled experiment to investigate the ablation of thick wires in cylindrical wire arrays driven by 10 MA current pulses. We characterize the ablation of plasma from a planar wire array fielded on the COBRA pulsed-power machine (1 MA, 250 ns rise time). The wire array comprises a linear arrangement of 15 equally-spaced aluminum wires separated from a planar cathode surface by a 2 mm AK gap. The planar wire array is designed to provide a driving magnetic field (\(80-100\) T) and current per wire distribution (about \(60-65\) kA), similar to that in a \(\sim 10\) MA cylindrical exploding wire array fielded on the Z pulsed-power machine. Magnetostatic calculations show that the driving magnetic field inside the AK gap at peak current (1 MA) is about 81 T, which is higher than that in a typical cylindrical wire array fielded on 1-MA university scale facilities (about \(20-40\) T). In contrast to previous planar wire array experiments, the wire arrays are over-massed, so that they provide continuous ablation for the duration of the experiment, without experiencing the implosion stage.
We perform a parametric study by varying the initial wire diameter between \(33-100\,\mathrm{\SIUnitSymbolMicro m}\). Laser shadowgraphy images show that the largest wire diameter (\(100\,\mathrm{\SIUnitSymbolMicro m}\)) exhibits early closure of the AK gap (150 ns after current start), while the gap remains open during the duration of the experiment for wire diameters between \(33-75\,\mathrm{\SIUnitSymbolMicro m}\). The early closure of the AK gap for the \(100\,\mathrm{\SIUnitSymbolMicro m}\) diameter case is primarily due to the larger coronal radius of the wires, which may be a consequence of reduced Ohmic heating in the skin region surrounding the wire cores. For these large diameter wires, the coronal radius also becomes comparable to the AK gap size and the inter-wire separation, which is undesirable for good ablation from the wire array. Axial instabilities appear in the vacuum-plasma interface in the AK gap. The distributions of the amplitude and peak-to-peak separation of the perturbations remain largely independent of the initial wire diameter, as has been previously observed on imploding and exploding cylindrical wire arrays.
Laser interferometry and time-gated XUV imaging are used to probe the plasma flows. Plasma ablating from the wires is redirected towards the array mid-plane (\(y=0\) mm), and the resulting collision of the converging flows generates a pinch, which propagates away from the wires at an average velocity of about \(100\,\mathrm{km}\,\mathrm{s}^{-1}\). The pinch appears as a discontinuous column of enhanced plasma density (\(\bar{n}_{e}\approx 2\times 10^{18}\,\mathrm{cm}^{-3}\)) and strong XUV emission. Three-dimensional resistive MHD simulations reproduce the primary characteristics of the ablation observed from the experiments. Visualization of the current density and magnetic field in the load demonstrates that flows converge under the action of a pinching \(\mathbf{j}\times\mathbf{B}\) force. This arises from the bending of magnetic field lines due to the inward-directed flows from the outermost wires. The pinch is a site of high current density, and exhibits a magnetic field topology similar to that of a z-pinch. The simulated pinch also exhibits a significantly higher temperature, compared to the plasma behind it, which combined with the enhanced density, accounts for the strong XUV emission observed in the experiment.
## VI Acknowledgements
The authors would like to thank Todd Blanchard and Harry Wilhelm for their work in support of the experiments. This work was funded by NSF and NNSA under grant no. PHY2108050, and by the EAGER grant no. PHY2213898. Simulations were performed on the Engaging cluster funded by DE-FG02-91-ER54109.
## VII Declaration of Conflicts of Interest
The authors have no conflicts of interest to disclose.
## VIII Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2307.14551 | How to Train Your YouTube Recommender to Avoid Unwanted Videos | YouTube provides features for users to indicate disinterest when presented
with unwanted recommendations, such as the "Not interested" and "Don't
recommend channel" buttons. These buttons purportedly allow the user to correct
"mistakes" made by the recommendation system. Yet, relatively little is known
about the empirical efficacy of these buttons. Neither is much known about
users' awareness of and confidence in them. To address these gaps, we simulated
YouTube users with sock puppet agents. Each agent first executed a "stain
phase", where it watched many videos of an assigned topic; it then executed a
"scrub phase", where it tried to remove recommendations from the assigned
topic. Each agent repeatedly applied a single scrubbing strategy, either
indicating disinterest in one of the videos visited in the stain phase
(disliking it or deleting it from the watch history), or indicating disinterest
in a video recommended on the homepage (clicking the "not interested" or "don't
recommend channel" button or opening the video and clicking the dislike
button). We found that the stain phase significantly increased the fraction of
the recommended videos dedicated to the assigned topic on the user's homepage.
For the scrub phase, using the "Not interested" button worked best,
significantly reducing such recommendations in all topics tested, on average
removing 88% of them. Neither the stain phase nor the scrub phase, however, had
much effect on videopage recommendations. We also ran a survey (N = 300) asking
adult YouTube users in the US whether they were aware of and used these buttons
before, as well as how effective they found these buttons to be. We found that
44% of participants were not aware that the "Not interested" button existed.
Those who were aware of it often used it to remove unwanted recommendations
(82.8%) and found it to be modestly effective (3.42 out of 5). | Alexander Liu, Siqi Wu, Paul Resnick | 2023-07-27T00:21:29Z | http://arxiv.org/abs/2307.14551v3 | # How to Train Your YouTube Recommender to Avoid Unwanted Videos
###### Abstract
YouTube provides features for users to indicate disinterest when presented with unwanted recommendations, such as the “Not interested” and “Don’t recommend channel” buttons. These buttons are purported to allow the user to correct “mistakes” made by the recommendation system. Yet, relatively little is known about the empirical efficacy of these buttons. Neither is much known about users’ awareness of and confidence in them. To address these gaps, we simulated YouTube users with sock puppet agents. Each agent first executed a “stain phase”, where it watched many videos of one assigned topic; it then executed a “scrub phase”, where it tried to remove recommendations of the assigned topic. Each agent repeatedly applied a single scrubbing strategy, either indicating disinterest in one of the videos visited in the stain phase (disliking it or deleting it from the watch history), or indicating disinterest in a video recommended on the homepage (clicking the “not interested” or “don’t recommend channel” button or opening the video and clicking the dislike button). We found that the stain phase significantly increased the fraction of the recommended videos dedicated to the assigned topic on the user’s homepage. For the scrub phase, using the “Not interested” button worked best, significantly reducing such recommendations in all topics tested, on average removing 88% of them. Neither the stain phase nor the scrub phase, however, had much effect on videopage recommendations (those given to users while they watch a video). We also ran a survey (\(N=300\)) asking adult YouTube users in the US whether they were aware of and used these buttons before, as well as how effective they found these buttons to be. We found that 44% of participants were not aware that the “Not interested” button existed. However, those who were aware of this button often used it to remove unwanted recommendations (82.8%) and found it to be modestly effective (3.42 out of 5).
University of Michigan, Ann Arbor
{avliu, siqiwu, presnick}@umich.edu
## 1 Introduction
YouTube is the world's largest long-form video sharing platform, with users watching a billion hours of YouTube's content every day [26]. In recent years, the YouTube recommendation algorithm has come under increased scrutiny for its role in promoting conspiracy theories such as anti-vaccination [12] and Alt-Right ideology such as white supremacy, Neo-Nazism content [13]. Studies have found that watching such content can lead to its continual and sometimes increased promotion [15, 16]. Besides societally-harmful information, YouTube has also been known to make unwanted recommendations to individuals, sometimes in the form of offensive, triggering, or outrageous videos [16, 17].
In the context of both individual and societal reasons for users to better tailor their personal content on YouTube, the platform provides several buttons such as "Not interested" and "Don't recommend channel", which allow users to express disinterest in a specific video or channel and alter their recommendation feeds accordingly [1, 18]. These buttons exist among a variety of platform features, such as "Disliking" videos and deleting videos from one's watch history. All of those buttons may help users eliminate certain content from their feeds.
However, relatively little is known about the efficacy of these buttons in practice, nor about users' awareness of and confidence in them. To address these gaps, this work investigates how well simulated YouTube users (agents) can populate their recommendation feeds with content from a certain topic (the "stain" phase), as well as the ability for the recommendations of that topic to be removed by using a strategy to indicate disinterest (the "scrub" phase). We selected four topics: _Alt-Right_, _Antitheism_, _Political Left_, and _Political Right_. These topics are particularly interesting because, while plenty of literature exists on how one can get recommended more of these topics, little is known about removing them. Also, each topic is a realistic one that some users would no longer want to see. We examined six strategies (_Watch neutral_, _Dislike_, _Delete_, _Not interested_, _No channel_, and _Dislike recommended_), as well as a _None_ control strategy. Lastly, we conducted a complementary survey study to understand real users' awareness of and experience with those scrubbing strategies on YouTube.
The main findings are as follows:
* Watching a topic increased its presence on the homepage, though the stain never covers more than half of the recommendations. Watching a topic had less effect on the recommendations shown on videopages.
* For scrubbing a topic from the homepage, the most effective action was clicking the "Not interested" button on a recommended video. In contrast, none of the scrubbing strategies significantly reduced the number of recommendations of videos on the topic shown on videopages.
* Nearly half of survey respondents were not aware of the most effective feature (pressing "Not interested"). Those who were aware of it used it often, and perceived it to be less effective than the "Don't recommend channel" button, contrary to our findings from the audit study.
## 2 Background and Related Work
### Locations of Relevant YouTube Features
There are three main YouTube pages that are relevant to our study: the homepage, videopage, and watch history page. The _homepage_ is the landing page for users upon entering the platform.1 It presents recommendations in a grid format. If a user is logged in, the recommendations are personalized to their account. Each recommendation contains a dropdown menu, where users are presented with two relevant buttons: the "Not interested" and "Don't recommend channel" buttons. These allow the users to (ostensibly) indicate disinterest with respect to specific recommendations.
Footnote 1: [https://www.youtube.com](https://www.youtube.com)
The _videopage_ is the page users see while watching a video. Recommendations are given in a right-hand sidebar. The feature on this page that is relevant to our study is the "Dislike" button, indicated by a thumbs-down symbol.
Finally, the _watch history page2_ (or _watch history_ for short) is the page that displays a log of the user's previously-watched videos. This page is only available to users who are logged in. What is relevant to our study is the option for users to delete specific videos from their watch history. Specifically, the "Delete" button is an "X" located in the upper-right hand corner of each video in the log.
Footnote 2: [https://www.youtube.com/feed/history](https://www.youtube.com/feed/history)
### Sock Puppet Algorithm Audits
Our study takes a sock puppet algorithm audit approach. Several prior work quantifying the YouTube recommender system's role in promoting and removing unwanted content also uses the algorithm auditing approach. So we begin with a review of this method.
An algorithm audit "is a method of repeatedly and systematically querying an algorithm with inputs and observing the corresponding outputs in order to draw inferences about its opaque inner workings" [12]. Algorithm audit is a tool for researchers to investigate the effects of algorithms whose code and data are shielded from the public.
One type of algorithm audits is the sock puppet approach [13, 14]. Sock puppet audits use code scripts to create simulated users. These fake users - also called "agents" - interact with the platform or algorithm of interest as if they were the real users. In the meantime, researchers record and compare the recommendations that the agents receive.
### Recommender System's Role in Promoting Problematic and Unwanted Content
YouTube is one of the most popular video sharing platforms. It allows users across the globe to disseminate information almost instantaneously on topics ranging from fashion to history to politics. In recent years, it has received increasing public scrutiny from journalists and academics alike in assessing its recommendations of problematic content.
The center of the platform's content dissemination is the recommendation engine, which plays an important role in helping users decide what to watch [21, 22]. Among a vast, ever-growing pool of videos on the platform, users are suggested what to watch next based on their previous interactions with YouTube. The recommendation engine also incorporates the platform's interest of maintaining user engagement in order to boost advertising revenue [15].
YouTube's recommendation engine has been theorized to promote problematic recommendations, which can broadly be split into two categories. The first is its propensity to suggest content that violates political, societal, and democratic ideals, such as extremism and conspiracy theories [16, 17] as well as political filter bubbles and radicalization [18]. The second is content that conflicts with individual preferences. Many users have found recommendations on video sharing platforms to be personally offensive, triggering, violent, and outrageous [10], as well as conflicting with their own sense of identity [11], even if the video is completely legal and enjoyable for others [14].
Several YouTube algorithm audits have investigated the role of recommender systems in promoting these kinds of content due to personalization, largely focusing on political ideologies and conspiracy theories. While previous studies often refer to this phenomenon as "filter bubbles", we instead choose to use the term "stain". This is because previous studies (as well as ours) find that topical recommendations rarely take up more than half of one's feed and never reach 100% after watching many videos of that topic, and we would like to avoid the misleading interpretation of the term "bubble" as being completely surrounded by (i.e., having 100% of) topical recommendations.
Regardless of the term, studies have agreed that continued consumption of videos of a certain topic will lead to further (and sometimes increased) recommendation of that topic, on both the homepage and the videopage [15, 16, 17, 18]. Recommendations in search results, on the other hand, do not experience such personalization effects [13]. Despite these findings, it is still unknown whether the stain is made completely of videos from the channels watched before, or whether YouTube introduces new channels of the topic that have not yet been watched by the user. Such information would demonstrate how much YouTube is recommending content beyond what is obviously related (i.e., that from the same channel), adding clarity to the current debate of the algorithm's role in information personalization.
Researchers have also studied other forms of problematic information personalization such as radicalization, or the process of being recommended content that is progressively more extreme Ribeiro et al. (2020); Hosseinmardi et al. (2021); Chen et al. (2022). By contrast, our study focuses on the construct of stain on YouTube.
### User Controls to Remove Unwanted Content
Combined calls from academics and journalists alike to mitigate the YouTube recommender system's role in problematic content consumption have contributed to recent platform changes, which include features that purport to grant users more control in tailoring their own recommendations Burch (2019); Cooper (2021). Other social media platforms such as Tik Tok and Instagram have also released and experimented with user controls to tailor their recommendations Ariano (2021); Meta (2022).
Such features may improve the user satisfaction in online spaces that are mediated by recommender systems. This is especially true on YouTube because much of content viewership comes directly from users clicking on its recommendations Solsman (2018). However, compared to what is known about the prevalence of unwanted content on YouTube, relatively little is known about the efficacy of features that help to remove it. We review this literature here.
#### Algorithm audits of how to reduce recommendations.
Two experiments used an intuitive strategy to try to reduce content of a given topic: watching videos of a _different_ topic. For example, Tomlein et al. (2021)'s sock puppet audit found that agents were recommended less conspiratorial content after they watched many videos debunking conspiracy theories. Haroon et al. (2022)'s sock puppet audit found that a politically-biased recommendation feed could be "debiased" - or achieve similar amounts of left and right-leaning videos - by watching a diet of videos that heavily featured the ideology that was originally less prevalent.
These studies suggest that it is possible to remove some unwanted content from one's feed. However, the degree to which it can be done varies and is never 100%. Further, we find it necessary to investigate platform-provided buttons in addition to video-watching for a variety of reasons. First, many are designed for the explicit purpose of removing unwanted content (e.g., the "Not Interested" button). Second, they may be much faster to perform: studies suggest a minimum of 10 minutes of watching is required to register significant changes to recommendations Papadamou et al. (2021). Meanwhile, pressing buttons can take just seconds. Lastly, these buttons may avoid the side effects of infusing too much content from another topic to replace the unwanted topic.
Ricks and McCrosky (2022) provide the first quantitative study of such YouTube's platform-provided features, expanding the breadth of recommendation-reduction strategies beyond watching videos of a different topic. The researchers supplied YouTube users with a browser extension with a custom "Stop Recommending" button displayed on each video recommendation. Then, users were randomly assigned to have their custom button press a native platform button in the background on their behalf. Their results show that clicking on the native "Don't recommend this channel" button on videos produced subsequent recommendations that were least similar to them.
Ricks and McCrosky (2022)'s study benefits from a large sample. Their field experiment design also presents distinct advantages, particularly an external validity that a sock puppet audit cannot achieve. At the same time, we still find it valuable to perform a sock puppet experiment with a more controlled environment for two reasons. First, because users could press "Stop Recommending" on any recommendation from any topic, the study was not able to identify the effects of the buttons for well-defined topics. Second, there are possible confounds from uncontrolled user behavior, such as users watching similar videos to the ones that they pressed "Stop Recommending", or cross-contamination between conditions where users clicked on YouTube-provided buttons in addition to Mozilla-provided buttons.
#### Users' relationship with user controls.
A few studies also used qualitative methods to understand users' experiences and perceptions of different strategies to remove unwanted content from their personal feeds.
Ricks and McCrosky (2022) surveyed and interviewed a subset of their participants from the quantitative arm of their study. They find that users take a variety of strategies to combat unwanted recommendations, generally find platform-provided features to be ineffective, and note that achieving effective results takes sustained time and effort.
While these surveys and interviews solicit the breadth of strategies that users have to combat unwanted recommendations, the degree to which general YouTube users are aware of each platform-provided feature is still unknown. It is also unknown whether they use these features, even if they are aware of them. Such data is important because an effective feature may be moot if not many people know about its existence.
Smith et al. (2021) also analyzed YouTube user controls to alter recommendation for their adherence to user experience principles. They found that the actions performed by such buttons were reactive (i.e., only useful _after_ a user received an unwanted recommendation) and that the feedback provided to the user after clicking them was often unclear and vague. They also found that navigating to some features was difficult, which could limit users' ability to alter their recommendations.
## 3 Research Questions
Previous studies of "filter bubbles" on YouTube recommendations see that those of a given topic can increase as a result of watching many videos of that topic, but we do not know whether these recommendations are from channels that the user has watched before, or whether they are new channels that YouTube finds similar. Such a breakdown would add to the knowledge of YouTube's role in promoting unwanted content by quantifying how much YouTube is "inferring" this content or simply suggesting content from channels it knows the user has seen before. Also, confirming the general result of increasing recommendations of a given topic would allow us to carry the next phase of our study, which attempts to remove them.
Thus, we first address the question, **how responsive are YouTube recommendations to watching many videos of the same topic?** (RQ1) In particular, do they recommend more videos of the same topic, and if so are they from channels that users watched up to that point or are they new ones? Are they different for different topics? We study four topics whose prevalence on YouTube has been previously studied: _Alt-Right_, _Antitheism_, _Political Left_, and _Political Right_ (motivated and described in Section 4.1).
We are also interested in the effects of platform features in removing unwanted recommendations. While a previous study investigated their usage "in the wild", the effects of each feature that is uncontaminated by usage of other ones, on topics that are well-defined, is still unknown. Such questions are worth answering because YouTube users in general could benefit from knowing what the most effective strategies for removing unwanted content are, specifically each one's effect on specific topics that they may dislike.
Thus, we ask, **how responsive are YouTube recommendations to repeatedly performing a variety of strategies to try to remove unwanted videos of a topic?** (RQ2) Are they different between videopage and homepage? How much content is removed from similar channels that are not explicitly interacted with? Do they vary topic to topic? We identified six of these strategies, such as pressing the "Not interested" button, and listed them in Section 4.1.
Finally, it is unknown how many YouTube users are aware of each platform feature, how many utilize them, and how effective the users find them to be. This information is important because effective strategies may be moot if users do not know their existence, and because users should be both using effective strategies and finding them to be effective.
Therefore, we lastly ask, **what are real users' experiences with the platform features that we test in RQ2?** (RQ3) With respect to each platform feature, we designed a survey study to ask how many participants are aware of it, what percentage use it to remove unwanted recommendations (given they are aware), and how effective participants find it to be (given they are aware and have used the feature to try to amend the situation).
## 4 Sock Puppet Study
### Sock Puppet Design
We take a sock puppet algorithm audit approach to examine how suggestions from certain topics can both be populated onto and removed from one's personal recommendation feed. Broadly, our agents first purposely populate their feed with videos from this unwanted topic ("stain phase"); Then, they take on one of a variety of strategies to try to eliminate such videos from being recommended ("scrub phase"). We collect data on how recommendations change throughout these phases in order to characterize the recommendation system's response to these various interactions.
Video topics. We require video topics as an input for our agents to populate their recommendations with (stain phase) and then attempt to scrub (scrub phase).
Each topic is operationalized as a list of channels collected by researchers who have previously studied that topic on YouTube. They are used in our experiment in two ways. First, agents watch videos from the channel lists during the stain phase. Next, during the scrub phase, some strategies cross-reference their homepage recommendations with the assigned topic's channel list to determine whether, and on which one, to indicate disinterest.
* _Alt-Right_: The most extreme group of the Alternative Influence Network, a loosely-defined community of YouTube channels that are defined by their opposition to mainstream media (Ricks and McCrosky, 2022). The Alt-Right promotes white nationalism in the face of an increasingly diverse US population, and is often openly anti-Semitic (ADL, 2019). YouTube channels of the Alt-Right were first collected by Lewis (2018) through a snowballing method, and subsequently augmented by Ribeiro et al. (2020) and Chen et al. (2022).
* _Antitheism_: Collected by Ledwich and Zaitsev (2020), it is "the self-identified atheist who is also actively critical of religion".
* _Political Left_: Collected by Wu and Resnick (2021), they include local news, talk shows, and magazines. We use the US political left channels, which take similar views on various issues of political significance, such as climate change.
* _Political Right_: Same as above, but with the US political right channels.
Scrubbing strategies. The name and operation of each scrubbing strategy are listed below. Each agent is assigned one strategy, and performs it repeatedly during the "scrub phase" of the sock puppet run.
* _None_ (control): Load the homepage, then do nothing except refreshing the homepage.
* _Watch neutral_: Load and watch a video from mainstream, politically neutral news outlets as defined by the fact-checking organization Media Bias/Fact Check.3
Footnote 3: [https://mediabiasfactcheck.com/](https://mediabiasfactcheck.com/)
* "History-based" strategies
* _Dislike_: Load a previously-watched video from the stain phase and click the "Dislike" button.
* _Delete_: Load the watch history and click "Delete" on the most recently-watched video.
* "Recommendation-based" strategies. Load the homepage. If there does _not_ exist any recommended video on the homepage from a channel in the channel list, then just refresh again. However, if such a video exists, do the following to the first such video:
* _Not interested_: click the "Not interested" button and refresh the homepage.
* _No channel_: click the "Don't recommend channel" button and refresh the homepage.
* _Dislike recommended_: click on the video and dislike it (agents do not stay to watch the video), then return to the homepage.
The "watch neutral" strategy attempts to ignore the current issue by watching videos from a different topic, and most resembles the intervention strategies of related studies [16, 17]. We call dislike and delete strategies "history-based" because they act on videos that the agents watched during the stain phase. We call the final three strategies "recommendation-based" because they are performed with respect to recommended videos.
**Sock puppet phases and data collection.** A sock puppet agent proceeds as follows. After logging in to a YouTube account, an agent performs the "stain phase", where it watches 40 videos, for up to 30 minutes each,4 from a "stain video list", which is sampled from the channel list belonging to that topic. Next, it performs the "scrub phase", where it executes its assigned scrubbing strategy 40 times. Lastly, the agent clears its entire YouTube activity through Google's MyActivity page,5 in order to leave a clean history for the next audit to start [17]. This includes clearing all revertible actions made during its run, such as pressing the "Dislike", "Not interested", "Don't recommend", and "Delete from watch history" buttons.
Footnote 4: An alternative is to stay for the median watch time for videos with similar length, see Wu, Rizoiu, and Xie (2018)’s computation of relative engagement metric.
Footnote 5: [https://myactivity.google.com/myactivity](https://myactivity.google.com/myactivity)
Our agents use web-scraping methods to collect the top 10 recommendations from the homepage and videopage at three strategic points:
* P1: The beginning of the stain phase.
* P2: The end of the stain phase.
* P3: The end of the scrub phase.
Because the video being watched when performing videopage collection may have an effect on the recommendations themselves, each agent repeatedly loads the first video in the stain video list at all three collection points (P1, P2, P3). Altogether, Algorithm 1 provides an overview of a sock puppet agent's interactions with the platform.
```
Log into YouTube
Collect homepage recs  \(\triangleright\) P1
Collect videopage recs from the first stain video  \(\triangleright\) P1
for \(i\in[2\dots 40]\) do  \(\triangleright\) stain phase
    Watch a video from stain video list up to 30 minutes
end for
Collect homepage recs  \(\triangleright\) P2
Collect videopage recs from the first stain video  \(\triangleright\) P2
for \(i\in[1\dots 40]\) do  \(\triangleright\) scrub phase
    Perform assigned scrubbing strategy
end for
Collect homepage recs  \(\triangleright\) P3
Collect videopage recs from the first stain video  \(\triangleright\) P3
Clear YouTube activity (cancel all revertible actions)
```
**Algorithm 1** Agent
We now describe the configurations of agents for the overall experiment. For each topic we tested seven strategies, each five times, resulting in (7 * 5 =) 35 agents. All 35 agents of a given topic were run in parallel in order to deal with recommendation noise that may arise from having agents make queries at different times.
For a given topic, we also drew five stain video lists, and assigned each list to exactly one agent within every strategy tested. Doing so assures that agents of the same strategy watch different sets of staining videos, boosting generalizability of the strategy effects, while simultaneously assuring that agents of different strategies watch, overall, the same videos, enabling comparability between strategies.
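A minimal sketch of how these configurations could be enumerated (not the study's actual code; `stain_lists_by_topic` stands in for the five sampled lists per topic) is:

```python
# Sketch (not the study's actual code): enumerate agent configurations so that
# every strategy is paired with every stain video list exactly once per topic.
TOPICS = ["Alt-Right", "Antitheism", "Political Left", "Political Right"]
STRATEGIES = ["None", "Watch neutral", "Dislike", "Delete",
              "Not interested", "No channel", "Dislike recommended"]

def make_agents(stain_lists_by_topic):
    """stain_lists_by_topic maps each topic to its five stain video lists."""
    agents = []
    for topic in TOPICS:
        for strategy in STRATEGIES:
            for replicate, stain_list in enumerate(stain_lists_by_topic[topic]):
                agents.append({"topic": topic, "strategy": strategy,
                               "replicate": replicate, "stain_list": stain_list})
    return agents   # 4 topics x 7 strategies x 5 lists = 140 agents
```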
Additionally, each of the 35 agents has its own Google Account so that the platform can track its viewing habits and personalize content to each agent, and so that we can more closely simulate real users' experience with the platform. Logging into an account also grants access to buttons and features that are only available to users that are logged in (e.g., the "Not interested" button).
Our agents ran in a Google Chrome browser with an ad blocker installed. They had Google accounts with birthdays set at an arbitrary 5/5/1990, a gender selection of "Rather not say", and asexual names (e.g., "Tandy"). We also address the potential biases from location effects, which would occur if queries were made to the platform from different locations, or from (different accounts in) the same IP address. Thus, all agents are created and live in the same AWS Region of Ohio (US-East-2), but make queries from individual IP addresses.
Out of 140 sock puppets released over the course of five days in August 2022, 139 sock puppets ran successfully. Agents collected a total of 8330 recommendations.
### Data Annotation
Our agents collected many recommendations during their runs. We would like to label them for whether they belong to the topics that the agents were assigned to (what we call "stain"), in order to quantify how well (1) the stain phase worked to populate agent' recommendations with stain, and (2) how well the scrub phase worked to remove it.
We adopted an iterative strategy in annotating the recommended channels. We first developed an initial annotation codebook by surveying prior research. Next, we randomly sampled 50 channels for each topic. Two authors who had extensive experience in studying political polarization and YouTube platforms independently labeled those channels by following the codebook. The preliminary inter-rater reliability (IRR), measured by Cohen's kappa, was 0.648, 0.728, 0.634, 0.563 for _Alt-Right_, _Antitheism_, _Political Left_, _Political Right_, respectively, demonstrating substantial agreement. The two raters discussed every disagreed case to reach consensus and updated the codebook whenever needed.
The two raters then went on and labeled all the remaining channels. The final IRR kappa scores were 0.660, 0.822, 0.854, and 0.945, respectively. The raters also discussed all disagreed cases and resolved disagreement. The final annotation codebook is attached in Appendix A. The annotation results and IRR calculation can be previewed via this link.6
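As a sketch of the reliability computation described above (the label vectors are placeholders rather than our annotations; 1 marks a channel judged on-topic), Cohen's kappa can be obtained with scikit-learn:

```python
# Sketch of the inter-rater reliability calculation; the label vectors are
# illustrative placeholders (1 = channel belongs to the topic, 0 = it does not).
from sklearn.metrics import cohen_kappa_score

rater1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(f"Cohen's kappa = {cohen_kappa_score(rater1, rater2):.3f}")
```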
### Result 1: Stain Phase
In this subsection, we answer our questions posed in RQ1.
Effects of stain phase. We wanted to know whether our agents experienced a significant change in stain (the percentage of recommendations of their given topic) after the stain phase. To address this question, we compared the stain of our agents at P1 with that at P2 for each topic, on both the homepage and the videopage. To determine whether changes were significant, we chose the Wilcoxon signed-rank test because (a) the data was non-normal; (b) the comparison before and after the "stain phase" treatment was a paired test. Results are given in Table 1 (P1 to P2).
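As a sketch of this paired test (a minimal example with assumed per-agent stain fractions, not the collected data):

```python
# Sketch of the paired Wilcoxon signed-rank test on per-agent stain fractions.
# p1 and p2 are placeholder values, not the data collected by the agents.
from scipy.stats import wilcoxon

p1 = [0.0, 0.1, 0.0, 0.2, 0.1]   # stain fraction at P1 for five agents (assumed)
p2 = [0.3, 0.4, 0.2, 0.5, 0.3]   # stain fraction at P2 for the same agents (assumed)
stat, p_value = wilcoxon(p1, p2)
print(f"W = {stat}, p = {p_value:.3f}")
```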
On the homepage, we find that all topics experienced significant increases in stain as a result of the stain phase. _Antitheism_ received the most stain at P2 (37%), while _Alt-Right_ received the least (20%). In contrast, the videopage demonstrated significant changes in stain only for _Antitheism_ and _Political Right_. _Alt-Right_ actually showed a slight decrease from P1 to P2. Despite a lack of consistent significant changes, a non-zero stain still existed on the videopage: absolute percentages at P1 varied between 5% for _Alt-Right_ and 42% for _Political Left_. We lastly remark that, across both homepage and videopage, and across topics and strategies, stain never reached more than half.
These findings set us up well for the scrub phase, because they ensure that our agents will indeed have stain to remove when they perform their scrubbing strategies.
mendations from new channels (0%) while all other topics received at least 9%.
These findings suggest that the YouTube recommendation system sometimes plays a role in providing stain to the user by not only suggesting content that is from the same channel, but rather by inferring and providing that from different but similar channels.
### Result 2: Scrub Phase
In this subsection, we answer our questions posed in RQ2.
Effects of scrub phase. We wanted to know whether it was possible for our agents to remove stain from their feeds after the scrub phase. To address this question, we compared the stain of our agents at P2 with that at P3, for each topic, in both the homepage and videopage. Again, our data was non-normal and paired, so we ran Wilcoxon signed-rank tests to determine whether stain significantly decreased from P2 to P3. Results are given in Table 1 (P2 to P3).
On the homepage, _Not interested_ and _No channel_ were the only strategies that significantly reduced the amount of stain across all topics. Comparing average relative changes between P2 and P3, _Not interested_ wins out (-88%). One strategy successfully scrubbed three out of four topics (_Delete_), while two strategies were successful in two out of four topics (_Dislike recommendation_ and _Watch neutral_). On the other end, the _None_ strategy did not produce any significant effect (in fact, it produced a slight increase), which was expected because it was our control strategy. On the videopage, we did not find any significant scrub phase effects.
Stain from scrubbed channels vs. new channels. We wanted to know whether our scrubbing strategies removed stain in general, or whether they only removed stain from the subset of channels that the agent had explicitly scrubbed up to that point.
To answer this question, for all recommendations at point P3, we categorized them as either "off-topic", "on-topic scrubbed-channel" (i.e., the agent that collected this recommendation had already scrubbed a video from the same channel during the scrub phase), or "on-topic new-channel" (i.e., the agent had not scrubbed a video from the same channel up to that point). Notice that these categories are analogous to watched/new categories made for P2 in Section 4.3.
Then, for each topic/strategy pairing, we found the ratio of recommendations at P3 that belonged to each category. We report these ratios for the homepage in Table 3. We did not examine the videopage because it did not experience any significant changes in this phase. The _None_ strategy was excluded because no videos were scrubbed.
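A minimal sketch of this categorization, with placeholder inputs standing in for the annotated channel lists, is:

```python
# Sketch of the three-way breakdown reported in Table 3; inputs are placeholders.
def categorize_p3(rec_channels, topic_channels, scrubbed_channels):
    counts = {"off-topic": 0, "on-topic scrubbed-channel": 0, "on-topic new-channel": 0}
    for channel in rec_channels:                # one channel name per recommendation
        if channel not in topic_channels:
            counts["off-topic"] += 1
        elif channel in scrubbed_channels:
            counts["on-topic scrubbed-channel"] += 1
        else:
            counts["on-topic new-channel"] += 1
    return {k: v / len(rec_channels) for k, v in counts.items()}   # per-agent ratios
```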
Categorizing recommendations this way reveals that scrubbing strategies behaved differently in removing unwanted recommendations. For instance, in three out of four topics, at least half of the _Watch neutral_ strategy's remaining stain at P3 was from scrubbed channels. On the other hand, _No channel_ rarely left videos from scrubbed channels (0-2%); most stain remaining after using this strategy was from new channels. The behavior of _No channel_ agrees with many user perceptions of the "Don't recommend channel" button [14], and matches an intuitive interpretation of the button name.
## 5 Survey Study
### Survey Design
In the sock puppet section of our work, we collected data from simulated users' interactions with YouTube to quantify the effects of platform-features that may help remove unwanted recommendations. In this section, we want to understand better the relationship between real users and these features. We ran a survey to determine this.
Survey overview. We first asked whether respondents had experienced getting unwanted recommendations before. Respondents were specifically asked whether they had experienced this scenario before: "_You are browsing YouTube, and notice videos recommended to you that you would rather not have recommended_ (_because they are offensive to you, triggering, not safe for work, or some other reason_)". The buttons considered are "Delete" (delete a video from watch history), "Dislike", "Not interested", and "Don't recommend channel". We asked for respondents' experiences with our disinterest buttons, with respect to three constructs:
* _Awareness_: Before taking this survey, were you aware this button existed?
* _Usage_: Have you used this button to remove unwanted recommendations?
* _Belief in efficacy_: Recall the times when you used this button to remove unwanted recommendations. How effective do you think it was? Please rate from 1 (not at all effective) to 5 (completely effective).
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c}
 & \multicolumn{3}{c}{_Alt-Right_} & \multicolumn{3}{c}{_Antitheism_} & \multicolumn{3}{c}{_Political Left_} & \multicolumn{3}{c}{_Political Right_} \\
 & Off-topic & On-topic scrubbed & On-topic new & Off-topic & On-topic scrubbed & On-topic new & Off-topic & On-topic scrubbed & On-topic new & Off-topic & On-topic scrubbed & On-topic new \\
\hline
_Watch neutral_ & 90\% & 8\% & 2\% & 74\% & 14\% & 12\% & 78\% & 2\% & 20\% & 82\% & 6\% & 12\% \\
_Delete_ & 96\% & 0\% & 4\% & 74\% & 18\% & 8\% & 100\% & 0\% & 0\% & 94\% & 2\% & 4\% \\
_Dislike_ & 92\% & 2\% & 6\% & 84\% & 10\% & 6\% & 72\% & 0\% & 28\% & 84\% & 2\% & 14\% \\
_Not interested_ & 100\% & 0\% & 0\% & 96\% & 0\% & 4\% & 90\% & 0\% & 10\% & 94\% & 0\% & 6\% \\
_No channel_ & 84\% & 0\% & 16\% & 76\% & 2\% & 22\% & 84\% & 0\% & 16\% & 76\% & 0\% & 24\% \\
_Dislike rec._ & 90\% & 2\% & 8\% & 80\% & 0\% & 20\% & 76\% & 0\% & 24\% & 64\% & 4\% & 32\% \\
\end{tabular}
\end{table}
Table 3: Stain at P3 on the homepage split into categories of off-topic, on-topic scrubbed-channel, and on-topic new-channel (we omit the word “channel” to save space), for each strategy (row) and each topic (column).
Only those who have experienced unwanted recommendations before and were aware of the buttons were asked to report on their real usage, while others were given a hypothetical question ("_If_ you had known this button existed, would you have used it?"). Furthermore, only those who were (1) experienced, (2) aware of button, and (3) have used the button were asked to report their belief in its efficacy in removing recommendations, while others were given a hypothetical question ("_If_ you had used this button for this scenario, how effective do you think would you find it?").
Survey implementation.We recruited 300 participants from the survey recruitment platform Prolific, and ran the survey on the survey delivery platform Qualtrics. We selected participants that had used YouTube before, were adults (18+), and resided in the US. Participants were paid $15 an hour. The University of Michigan Health Sciences and Behavioral Sciences Institutional Review Board has determined that this research is exempt from IRB oversight (Study ID: HUM00224551).
Surveys were pretested with colleagues. We emphasized honest rather than "right" answers so that respondents would not be tempted to "please" us by saying they knew about a button when in reality they did not (Paolacci and Chandler, 2014). We included screenshots of buttons so that they didn't have to know them by name. Since attention checks are important to maintaining experimental validity, we also implemented three of them throughout the survey to make sure the respondents were focusing on and comprehending the survey questions. At three separate points, we gave them a question whose format was identical to that of others and instructed them to select a specific choice. For example, _"Please select 'Dislike' in the choices below"_.
We filtered responses by eliminating responses from those who failed any of the three attention checks. Respondents were paid regardless of whether they passed or failed attention checks. Then, we eliminated responses from anybody who answered "not sure" to our questions.
### Survey Analysis Methods and Results
In this subsection, we answer our questions posed in RQ3.
In total, our survey received 274 responses from those who passed all three attention checks. However, our respondent sample was not immediately generalizable to the broader population. Thus we used post-stratification, a popular statistical method that adjusts estimates on non-probability samples (Salganik, 2019), to generalize our results to the adult YouTube-using population in the US.
To perform post-stratification, we divided our respondents into binary genders and age buckets (roughly 20 years apart), making a total of eight subgroups. We then made estimates of each subgroup's prevalence in the target population by combining Census data on age/gender subgroups (Duffin, 2022) and PEW data on the percentage of each subgroup that use YouTube (Auxier and Anderson, 2021). Comparing our survey sample's distribution among subgroups with that of the target population revealed that our sample skewed young: Among usable responses, we routinely over-sampled the 18-45 subgroups and under-sampled 65+ ones. Fortunately, post-stratification corrects this bias.
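As an illustration of this procedure (a minimal sketch; the subgroup shares, bucket labels, and sample means below are placeholders, not the Census/PEW figures or survey responses used here):

```python
# Sketch of post-stratification: weight each age/gender subgroup's sample mean
# by its share of the adult US YouTube-using population. All numbers are placeholders.
population_shares = {            # assumed shares of the target population (sum to 1)
    "F 18-29": 0.10, "F 30-49": 0.18, "F 50-64": 0.13, "F 65+": 0.09,
    "M 18-29": 0.11, "M 30-49": 0.18, "M 50-64": 0.13, "M 65+": 0.08,
}
sample_means = {g: 0.5 for g in population_shares}   # e.g., fraction aware of a button

estimate = sum(sample_means[g] * population_shares[g] for g in population_shares)
```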
We report answers for constructs of interest in Table 4. _Awareness_ percentages were calculated by aggregating answers from all respondents that passed our attention checks. For _Usage_, we restricted our calculation to those who were both aware that the feature existed, and had experienced having unwanted recommendations. _Belief in efficacy_ was the most restricted because we only wanted ratings from those who would be well-informed of its effects from personal usage: Only those who experienced unwanted recommendations, were aware the button existed, and used that button to try to resolve the issue, were considered. These population restrictions are applied to both the table and our discussion of results.
Moving onto results, we find for _Awareness_ that survey respondents were most aware of the "Dislike" button's existence (93.94%). "Don't recommend channel" was the least well-known (35.37%). As for _Usage_, they favored "Not interested" (82.83%) and "No channel" (80.53%) to remove unwanted recommendations when they experienced it. "Dislike" was the least used button (37.75%). Looking at _Belief in efficacy_, users found "Delete" (3.76), "Not interested" (3.42), and "No channel" (4.10) all more effective than the "Dislike" button (2.52).
These findings suggest that users do not use the "Dislike" button to remove unwanted recommendations, despite most knowing about its existence. Respondents' intuition of this button match our empirical findings: We saw in Section 4.3 that the _Dislike_ and _Dislike recommendation_ strategies both reduced less stain compared to all other scrubbing strategies (_Delete_, _Not interested_, _No channel_). Meanwhile, the button for our (empirically) most effective scrubbing strategy - "Not interested" - was highly used by respondents who knew about it. However, awareness was a privilege: almost 44% of survey respondents were unaware of its existence.
## 6 Discussion
We performed an algorithm audit of the YouTube recommendation system to test whether one could remove unwanted content from their feed. We paired our audit with a
\begin{table}
\begin{tabular}{r|c c c}
 & _Awareness_ & _Usage_ & _Belief in efficacy_ \\
\hline
Delete & 51.41 \(\pm\) 7.53\% [248] & 53.64 \(\pm\) 13.52\% [110] & 3.76 \(\pm\) 0.25 [48] \\
Dislike & 93.94 \(\pm\) 3.90\% [263] & 37.75 \(\pm\) 7.90\% [226] & 2.52 \(\pm\) 0.25 [62] \\
Not interested & 56.03 \(\pm\) 6.63\% [258] & 82.83 \(\pm\) 8.23\% [156] & 3.42 \(\pm\) 0.32 [122] \\
Don’t recommend channel & 35.37 \(\pm\) 5.85\% [255] & 80.53 \(\pm\) 9.58\% [111] & 4.10 \(\pm\) 0.33 [88] \\
\end{tabular}
\end{table}
Table 4: Results from user survey. For each button (row), we list the point estimate and confidence level of constructs of interest (column). Sample sizes are given in brackets.
survey to understand whether users actually knew these buttons existed, used them, and believed them to be effective.
In our audit, we found that the stain phase produces a significant increase in stain in the homepage across all topics. We also saw that stain at P2 never reached more than half of recommendations in either the homepage or the videopage. These results confirm our suspicion that watching many videos from a given topic does not completely "surround" the user with topical recommendations like the term "filter bubble" would suggest, and motivate our usage of the term "stain".
Continuing on results from the stain phase, we broke down stain at P2 into those from channels watched before and those from channels not watched before, finding that their prevalence varied based on topic.
We saw that both types of stain were present, but to varying degrees depending on topic. On the one hand, _Political Left_ received the most stain from new channels for both the homepage and videopage, demonstrating that the platform had a notion of topical similarity by "inferring" other channels from the political left that the user may like. On the other hand, the _Alt-Right_ received the fewest recommendations from new channels, for both the homepage and videopage. This finding is interesting given YouTube's recent public promises to curb misinformation and conspiracy theories YouTube (2019), especially "harmful" ones such as Q-Anon YouTube (2020), as well as a shift in company-wide attention towards stopping home-grown, right-wing extremism from spreading on its platform Bergen (2022). While the lack of recommendations from new _Alt-Right_ channels supplied to agents who watch that content could be evidence of YouTube operationalizing its promises, we cannot formally tell the difference between that and a general lack of _Alt-Right_ videos remaining on the platform today.
Moving onto the scrub phase, we compared different scrubbing strategies and found that _Not interested_ was the most effective one on the homepage: It produced significant decrease in stain across all topics, and using it resulted in the greatest average decrease in stain from P2 to P3 across topics (-88%). This strategy performed well in removing stain from both channels it had explicitly scrubbed as well as similar ones it didn't interact with. Thus, users who would like to remove recommendations from any channels belonging to an unwanted topic should use this strategy.
In contrast to homepage findings, we found that the videopage never experienced significant effects from the scrub phase. At a cursory glance, it seems that our results stand in contrast to Tomlein and colleagues' finding that agents could significantly reduce conspiratorial recommendations on the videopage by watching many videos debunking the conspiracy theory. However, upon closer inspection, the two experimental designs differ in an important way. Whereas agents in our study collected videopage recommendations from a video at P3 that was the same as that used in P2, their study used a video at P3 that was the semantic _opposite_ of that of P2. Specifically, Tomlein et al.'s bots collected them from a video _promoting_ agents' assigned conspiracy theory at P2, and then collected them from a video _debunking_ it at P3.
Combining our findings with those of Tomlein et al. (2021) suggests that videopage recommendations may be influenced more by the video that is playing while they are collected than by any interactions with the system leading up to that collection. The implication for users is that they should not expect any scrubbing strategies to save them from further recommendations of an unwanted topic if they plan to keep watching a video of that topic; rather, they may want to stop watching content from that topic altogether.
Lastly, we wanted to know how users interacted with these platform features in their daily YouTube usage. We found that US adult YouTube users were most aware of the "Dislike" button, yet more empirically effective strategies, such as "Not interested", were less known. Those who knew the "Not interested" button existed used it at a higher rate and perceived it to be more effective than those who knew about "Dislike".
Put together, our sock puppet and survey findings suggest that if YouTube wanted to allow users to more effectively remove unwanted recommendations, it should make its effective platform-features for doing so more broadly known to the general YouTube population. Doing so would not only benefit users' experience. It would also be in the best interest of the platform because allowing users to have more agency to tailor algorithmic decisions to their preferences can build and maintain their trust in the system Ekstrand et al. (2015), as well as increase overall satisfaction Shin (2020). One implication for platform designers is that they should make buttons such as "Not interested" more widely known by increasing its discoverability on the website. To that end, Ricks and McCrosky provide a blueprint. In their experiment, they found that when their users were displayed "Don't recommend this" buttons prominently and clearly on recommendation title cards, instead of being hidden behind a three-dots button or requiring the user to be led away from their current page, they were more than twice as likely to use it Ricks and McCrosky (2022).
While this study demonstrates the benefits of YouTube's user controls, there still exist challenges to its uptake to remove unwanted recommendations. First, we note that these controls could be used to create digital media environments that run counter to democratic norms of diversity and breadth of perspectives. Thus, policy makers should pay attention to the potential for user agency to further limit their capacity for and consumption of cross-cutting content.
Second, as our survey highlights, knowledge of these buttons is still an issue. Much of the general YouTube-using public was not aware that the "Not interested" button exists, for example. Even more troubling, even those who had experienced unwanted recommendations recently, and thus had ample motivation to discover content removal tactics, still had not become aware of the button.
Third, user interaction flows from these buttons may violate design principles in a way that limits users' ability to fully understand and anticipate the effects of different user controls Smith et al. (2021). For instance, Smith et al. found that users were not fully aware of these controls' effects on recommendations and account settings, and so they shied away from using them at all. Further compounding users' hesitation to take up these features is the perception that some of their effects are irreversible.
Lastly, the actions that these user control buttons allow are responsive, rather than proactive. Users respond to a poor recommendation by eliminating it, rather than asking YouTube to tailor their recommendations before they see it. Thus, the worrisome effects of misinformation, toxicity, and offensiveness may have already taken their harmful course by the time the user decides to eliminate them. Therefore, these features cannot be seen as a substitute for diligent and thorough content moderation by the platform.
## 7 Limitations
Our findings add to a growing chorus of studies investigating problematic and unwanted recommendations on YouTube. However, our study is not without its limitations. First, we perform channel-level, rather than video-level, labeling. Performing labeling at this level may result in labeling channels as a certain topic even if not all of their videos are of that topic (e.g. an _Alt-Right_ channel sometimes posting music videos), or, conversely, labeling a channel as off-topic even if just a few of its videos are topical (e.g. a science vlogger who occasionally talks about their journey to atheism). However, we are encouraged by the fact that other studies have taken this approach [16]. Analyzing channels as a whole is still important because they indicate a high number of videos of that topic, and users may be encouraged to subscribe to these channels even if not all videos in the channel are topical.
Another limitation is that of generalizing from the particular settings our sock puppets used. In particular, our sock puppets tested just four topics, using the geolocation of the US East AWS center in August of 2021. While we found that certain strategies work in these conditions to remove recommendations, we cannot be sure they do in other conditions. New topics may be harder or easier to scrub due to more or less general interest in that topic in the broader YouTube ecosystem, respectively. Our code is publicly available 7 so that we and other researchers may continue to understand more general effects of our scrubbing strategies.
Footnote 7: [https://github.com/avliu-um/youtube-disinterest](https://github.com/avliu-um/youtube-disinterest)
## 8 Ethical Considerations
We now discuss the ethical concerns of our study. For our sock puppet audits, since they are computer scripted, we do not run the risk of making real users watch potentially harmful content, such as those from the _Alt-Right_ channels. However, making the bots watch a lot of content from a given topic may still increase its prevalence on YouTube by boosting its general popularity, making real platform users see it more often than they would have had our bots not artificially promoted it. Also, pressing the "Dislike" button on channels may cause them to be demoted in recommendations, limiting YouTube creators' ability to generate advertising revenue.
While this is a possibility, we do not find these costs to outweigh the benefits of our study. First, we consider the potential cost to content creators of negative interactions with the system, such as pressing the "Dislike", "Not interested", "Don't recommend channel", and "Delete from watch history" buttons. Here we note that (a) our bots collectively injected up to 3 such interactions per video, which we expect is small compared to the number of "authentic" ones, (b) we cleared all the revertible actions caused by the audit when it exited its experimental runs, and (c) the average lifetime of an audit in this study is less than six hours, including both the stain phase and scrub phase. This means that not only is the effect of negative interactions small per video, it is also both short-lived and fully reversed.
Another potential cost of our study is that our audits irreversibly alter two public metrics: total view count and total watch time. However, these costs are small, because we do not expect to affect videos' or channels' overall prevalence by much: the number of views we are "artificially" introducing to the YouTube world is minuscule in relation to the amount of "authentic" views that the videos have received.
Our study's findings are a benefit to all YouTube users alike, because they can inform users on how best to deal with and get rid of unwanted recommendations. We think these benefits outweigh the minimal harms.
As for the survey participants, we must make sure that they are not put in harm's way. Because we did not ask them to view any actual content, but rather to recall times in their lives in which they had interacted with YouTube, the only risk is that participants revisit potentially triggering or traumatic events, if they have had any. However, the introduction of the survey includes a description of the questions we wished to ask. Thus, if a participant were truly going to be adversely harmed by potentially triggering questions, they would not have consented to the survey and would therefore have been removed from the panel before they even saw the questions.
## 9 Conclusion
With these results we conclude that different strategies to remove unwanted content on the YouTube platform work to different degrees, and from our tested strategies we found that using the "Not interested" button was a clear winner. However, this strategy has not seen widespread adoption among users. That is, while those who know about these effective buttons get to experience their effective behavior, 44% of adult YouTube users in the US are not aware that they exist. Thus, we join existing calls for YouTube to amplify more broadly the effective ways to remove unwanted recommendations on its platform.
## Appendix A Annotation Codebook
### Alt-Right
#### Process
* Ribeiro et al. (2020) |
2307.04742 | Parallel Tempered Metadynamics: Overcoming potential barriers without
surfing or tunneling | At fine lattice spacings, Markov chain Monte Carlo simulations of QCD and
other gauge theories with or without fermions are plagued by slow modes that
give rise to large autocorrelation times. This can lead to simulation runs that
are effectively stuck in one topological sector, a problem known as topological
freezing. Here, we demonstrate that for a relevant set of parameters,
Metadynamics can be used to unfreeze 4-dimensional SU(3) gauge theory. However,
compared to local update algorithms and the Hybrid Monte Carlo algorithm, the
computational overhead is significant in pure gauge theory, and the required
reweighting procedure may considerably reduce the effective sample size. To
deal with the latter problem, we propose modifications to the Metadynamics bias
potential and the combination of Metadynamics with parallel tempering. We test
the new algorithm in 4-dimensional SU(3) gauge theory and find that it can
achieve topological unfreezing without compromising the effective sample size,
thereby reducing the autocorrelation times of topological observables by at
least two orders of magnitude compared to conventional update algorithms.
Additionally, we observe significantly improved scaling of autocorrelation
times with the lattice spacing in 2-dimensional U(1) gauge theory. | Timo Eichhorn, Gianluca Fuwa, Christian Hoelbling, Lukas Varnhorst | 2023-07-10T17:53:09Z | http://arxiv.org/abs/2307.04742v2 | # Parallel Tempered Metadynamics:
###### Abstract
At fine lattice spacings, Markov chain Monte Carlo simulations of QCD and other gauge theories are plagued by slow (topological) modes that give rise to large autocorrelation times. These, in turn, lead to statistical and systematic errors that are difficult to estimate. Here, we demonstrate that for the relevant set of parameters considered, Metadynamics can be used to reduce the autocorrelation times of topological quantities in 4-dimensional SU(3) gauge theory by at least two orders of magnitude compared to conventional update algorithms. However, compared to local update algorithms and the Hybrid Monte Carlo algorithm, the computational overhead is significant, and the required reweighting procedure may considerably reduce the effective sample size. To deal with the latter problem, we propose modifications to the Metadynamics bias potential and the combination of Metadynamics with parallel tempering. We test the new algorithm in 4-dimensional SU(3) gauge theory and find that it can achieve topological unfreezing without compromising the effective sample size. Preliminary scaling tests in 2-dimensional U(1) gauge theory show that these modifications lead to improvements of more than an order of magnitude compared to standard Metadynamics, and an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms.
## I Introduction
In recent years, physical predictions based on lattice simulations have reached sub-percent accuracies [1]. With ever-shrinking uncertainties, the need for precise extrapolations to the continuum grows, which in turn necessitates ever finer lattice spacings. Current state-of-the-art methods for simulations of lattice gauge theories either rely on a mixture of heat bath [2; 3; 4; 5; 6] and overrelaxation [7; 8; 9] algorithms for pure gauge theories, or molecular-dynamics-based algorithms like the Hybrid Monte Carlo algorithm (HMC) [10] or variations thereof for simulations including dynamical fermions. For all these algorithms, the computational effort to carry out simulations dramatically increases at fine lattice spacings due to critical slowing down. While the exact behavior depends on a number of factors, such as the update algorithms, the exact discretization of the action, and the choice of boundary conditions, the scaling of the integrated autocorrelation times with the inverse lattice spacing can usually be described by a power law.
In addition to the general diffusive slowing down, topologically non-trivial gauge theories may exhibit topological freezing [11; 12; 13; 14; 15; 16; 17; 18]. This effect appears due to the inability of an algorithm to overcome the action barriers between topological sectors, which can lead to extremely long autocorrelation times of topological observables and thus an effective breakdown of ergodicity.
Over the years, several strategies have been developed to deal with this situation. On the most basic level, it has become customary, in large scale simulations, to monitor the topological charge of the configurations in each ensemble, thus avoiding regions of parameter space, which are affected by topological freezing [19; 20; 21]. Another possibility to circumvent the problem consists in treating fixed topology as a finite volume effect and either correcting observables for it [22; 23], or increasing the physical volume sufficiently to derive the relevant observables from local fluctuations [24]. It is also possible to use open boundary conditions in one lattice direction [25], which invalidates the concept of an integer topological charge for the prize of introducing additional boundary artifacts and a loss of translational symmetry.
Despite the success of these strategies in many relevant situations, the need for a genuine topology changing update algorithm is still great. This is evident from the large number and rather broad spectrum of approaches that are currently being investigated in this direction. Some of these approaches address critical slowing down in general, whereas others focus particularly on topological freezing. These approaches include parallel tempering [26; 27; 28], modified boundary conditions [29] and combinations of both [30]; multiscale thermalization [31; 32; 33], instanton(-like) updates [34; 35; 36; 37; 38; 39], Metadynamics [40; 41], Fourier acceleration [42; 43; 44; 45; 46] and trivializing maps [47; 48], also in combination with machine learning [49; 50]. For a recent review, see e.g. [51]. Additionally, recent years have seen multitudinous efforts to use generative models to sample configurations [51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69].
In this work, we propose a new update algorithm, parallel tempered Metadynamics, or PT-MetaD for short, and demonstrate its efficiency in 4-dimensional SU(3) at
parameter values, where conventional update algorithms suffer from topological freezing. In its basic variant, which we present here, PT-MetaD consists of two update streams simulating the same physical system. One of the streams is an efficient, conventional algorithm, while the second one includes a bias potential that facilitates tunneling between topological sectors. At regular intervals, swaps between the two streams are suggested, so that the good topological sampling from the second stream carries over to the first one. The algorithm thus combines ideas from parallel tempering [70], Metadynamics [71] and multicanonical simulations [72], leading to an efficient sampling of topological sectors while avoiding the problem of small effective sample sizes, which is usually associated with reweighting techniques such as Metadynamics or multicanonical simulations. Additionally, the inclusion of fermions into PT-MetaD is conceptually straightforward.
This paper is organized as follows. We start out by giving a general introduction to Metadynamics in Section II. Afterwards, Section III describes our simulation setup, including our choice of actions, observables, and update algorithms. Some details on the application of Metadynamics in the context of SU(3) gauge theory are also given. In Section IV, we present baseline results obtained with conventional update algorithms, including a rough determination of gradient flow scales for the DBW2 action. In Section V we present results obtained with pure Metadynamics for 4-dimensional SU(3), and discuss several possible improvements. In Section VI we introduce parallel tempered Metadynamics and show some scaling tests of the new algorithm in 2-dimensional U(1) gauge theory, as well as exploratory results in 4-dimensional SU(3). Finally, in Section VII, we conclude with a summary and outlook on the application of the new algorithm to full QCD.
## II Metadynamics
Consider a system described by a set of degrees of freedom \(\{U\}\), where the states are distributed according to the probability density
\[p(U)=\frac{1}{Z}e^{-S(U)}, \tag{1}\]
with the partition function \(Z\) defined as
\[Z=\int\mathcal{D}[U]\,e^{-S(U)}. \tag{2}\]
The expectation value of an observable \(O\) is defined as
\[\langle O\rangle=\int\mathcal{D}[U]\,p(U)O(U). \tag{3}\]
In the context of lattice gauge theories, the integration measure \(\mathcal{D}[U]\) is usually the product of Haar measures for each link variable, but more generally \(\mathcal{D}[U]\) may be understood as a measure on the configuration space of the system.
Metadynamics [71] is an enhanced-sampling method, based on the introduction of a history-dependent bias potential \(V_{t}(s(U))\). This potential is introduced by replacing the action \(S(U)\) with \(S_{t}^{M}(U)=S(U)+V_{t}(s(U))\), where \(t\) is the current simulation time. This potential modifies the dynamics of the system and depends on a number of observables \(s_{i}(U)\), with \(i\in\{1,\ldots,N\}\), that are referred to as collective variables (CVs). These CVs span a low-dimensional projection of the configuration space of the system, and may generally be arbitrary functions of the underlying degrees of freedom \(\{U\}\). However, when used in combination with molecular-dynamics-based algorithms such as the Hybrid Monte Carlo algorithm, the CVs need to be differentiable functions of the underlying degrees of freedom. During the course of a simulation, the bias potential is modified in such a way as to drive the system away from regions of configuration space that have been explored previously, eventually converging towards an estimate of the negative free energy as a function of the CVs, up to a constant offset [73; 74]. Usually, this is accomplished by constructing the potential from a sum of Gaussians \(g(s)\), so that at simulation time \(t\), the potential is given by
\[V_{t}(s)=\sum_{t^{\prime}\leq t}\prod_{i=1}^{N}g(s_{i}-s_{i}(t^{\prime})). \tag{4}\]
The exact form of the Gaussians is determined by the parameters \(w\) and \(\delta s_{i}\):
\[g(s_{i})=w\exp\biggl{(}-\frac{s_{i}^{2}}{2\delta s_{i}^{2}}\biggr{)}. \tag{5}\]
Both parameters affect the convergence behavior of the potential in a similar way: Increasing the height \(w\) or the widths \(\delta s_{i}\) may accelerate the convergence of the potential during early stages of the simulation, but lead to larger fluctuations around the equilibrium during later stages. Furthermore, the widths \(\delta s_{i}\) effectively introduce a smallest scale that can still be resolved in the space spanned by the CVs, which needs to be sufficiently small to capture the relevant details of the potential.
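As an illustration of Eqs. (4) and (5), the following minimal sketch accumulates Gaussian depositions for a single CV and evaluates the resulting potential and its derivative (the quantity entering the Metadynamics force). Class and parameter names are illustrative, and the numbers in the example are not tuned values from this work.

```python
import numpy as np

class GaussianBiasPotential:
    """Minimal sketch of a one-CV Metadynamics bias potential, Eqs. (4)-(5)."""

    def __init__(self, width, height):
        self.delta_s = width   # Gaussian width delta_s
        self.w = height        # Gaussian height w
        self.centers = []      # CV values s(t') of past depositions

    def deposit(self, s):
        """Add a Gaussian centered at the current CV value."""
        self.centers.append(s)

    def value(self, s):
        """V_t(s) = sum_{t'} w * exp(-(s - s(t'))^2 / (2 delta_s^2))."""
        c = np.asarray(self.centers)
        return float(np.sum(self.w * np.exp(-(s - c) ** 2 / (2.0 * self.delta_s ** 2))))

    def derivative(self, s):
        """dV_t/ds, needed for the force in HMC-type updates."""
        c = np.asarray(self.centers)
        g = self.w * np.exp(-(s - c) ** 2 / (2.0 * self.delta_s ** 2))
        return float(np.sum(-(s - c) / self.delta_s ** 2 * g))

# Example: deposit along a mock CV trajectory and evaluate the potential at s = 0.
bias = GaussianBiasPotential(width=0.1, height=0.05)
for s_t in np.random.default_rng(0).normal(0.0, 1.0, size=1000):
    bias.deposit(float(s_t))
print(bias.value(0.0), bias.derivative(0.0))
```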
If the bias potential has reached a stationary state, i.e., its time-dependence in the region of interest is just an overall additive constant, the modified probability density, which we shall also refer to as target density, is given by
\[p^{\prime}(U)=\frac{1}{Z^{\prime}}e^{-S(U)-V(s(U))}, \tag{6}\]
with the modified partition function
\[Z^{\prime}=\int\mathcal{D}[U]\,e^{-S(U)-V(s(U))}. \tag{7}\]
Expectation values with respect to the modified distribution can then be defined in the usual way, i.e., via
\[\langle O\rangle^{\prime}=\int\mathcal{D}[U]\,p^{\prime}(U)O(U). \tag{8}\]
On the other hand, expectation values with respect to the original, unmodified probability density can be written in terms of the new probability distribution with an additional weighting factor. For a dynamic potential, there are different reweighting schemes to achieve this goal [75], but if the potential is static, the weighting factors are directly proportional to the exponential of the bias potential:
\[\langle O\rangle=\frac{\int\mathcal{D}[U]\,p^{\prime}(U)O(U)\,e^{V(s(U))}}{ \int\mathcal{D}[U]\,e^{V(s(U))}}. \tag{9}\]
The case of a static potential is thus essentially the same as a multicanonical simulation [72].
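In practice, Eq. (9) amounts to weighting each measurement from the biased run by \(e^{V(s_{i})}\); a small, numerically stabilized sketch (with illustrative argument names) is:

```python
import numpy as np

def reweighted_mean(observable, cv_values, bias_value):
    """Estimate <O> via Eq. (9) from a run with a static bias potential.

    observable : array of O(U_i) measured on the biased ensemble
    cv_values  : array of s(U_i) for the same configurations
    bias_value : callable returning V(s)
    """
    O = np.asarray(observable, dtype=float)
    log_w = np.array([bias_value(s) for s in cv_values])
    log_w -= log_w.max()          # overall shift cancels in the ratio
    w = np.exp(log_w)
    return np.sum(w * O) / np.sum(w)
```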
In situations where the evolution of the system is hindered by high action barriers separating relevant regions of configuration space, Metadynamics can be helpful in overcoming those barriers, since the introduction of a bias potential modifies the marginal distribution over the set of CVs. For conventional Metadynamics, the bias potential is constructed in such a way that the marginal modified distribution is constant:
\[p^{\prime}(s_{i})=\int\mathcal{D}[U]\,p^{\prime}(U)\delta(s_{i}-s_{i}(U))= \text{const}. \tag{10}\]
Conversely, for a given original distribution \(p(s)\) and a desired target distribution \(p^{\prime}(s)\), the required potential is given by:
\[V(s)=\text{log}\bigg{(}\frac{p^{\prime}(s)}{p(s)}\bigg{)} \tag{11}\]
However, it should be noted that even if the bias potential completely flattens out the marginal distribution over the CVs, the simulation is still expected to suffer from other (diffusive) sources of critical slowing down as is common for Markov chain Monte Carlo simulations.
## III Simulation setup and observables
### Choice of gauge actions
For our simulations of SU(3) gauge theory, we work on a 4-dimensional lattice \(\Lambda\) with periodic boundary conditions. Configurations are generated using the Wilson [76] and the DBW2 [77] gauge action, both of which belong to a one-parameter family of gauge actions involving standard \(1\times 1\) plaquettes as well as \(1\times 2\) planar loops, which may be expressed as
\[\begin{split} S_{g}=\frac{\beta}{3}\sum_{x\in\Lambda}& \Big{(}\sum_{\mu<\nu}c_{0}\big{(}3-\text{Re}\operatorname{tr}[ \mathcal{W}_{\mu,\nu}(x)]\big{)}\\ +\sum_{\mu\neq\nu}c_{1}\big{(}3-\text{Re}\operatorname{tr}[ \mathcal{W}_{\mu,2\nu}(x)]\big{)}\Big{)}.\end{split} \tag{12}\]
Here, \(\mathcal{W}_{k\mu,l\nu}(x)\) refers to a Wilson loop of shape \(k\times l\) in the \(\mu\)-\(\nu\) plane originating at the site \(x\). The coefficients \(c_{0}\) and \(c_{1}\) are constrained by the normalization condition \(c_{0}+8c_{1}=1\) and the positivity condition \(c_{0}>0\), where the latter condition is sufficient to guarantee that the set of configurations with minimal action consists of locally pure gauge configurations [78]. For the Wilson action (\(c_{1}=0\)), only plaquette terms contribute, whereas the DBW2 action (\(c_{1}=-1.4088\)) also involves rectangular loops.
It is well known that the critical slowing down of topological modes is more pronounced for improved gauge actions in comparison to the Wilson gauge action [12; 13; 14; 15; 18]: A larger negative coefficient \(c_{1}\) suppresses small dislocations, which are expected to be the usual mechanism mediating transitions between topological sectors on the lattice. Among the most commonly used gauge actions, this effect is most severely felt by the DBW2 action. In previous works [14; 15], local update algorithms were found to be inadequate for exploring different topological sectors in a reasonable time frame. Instead, the authors had to generate thermalized configurations in different topological sectors using the Wilson gauge action, before using these configurations as starting points for simulations with the DBW2 action. Thus, this action allows us to explore parameters where severe critical slowing down is visible, while avoiding very fine lattice spacings and thereby limiting the required computational resources.
### Observables
The observables we consider here are based on various definitions of the topological charge, and Wilson loops of different sizes at different smearing levels. The unrenormalized topological charge is defined using the clover-based definition of the field-strength tensor:
\[Q_{c}=\frac{1}{32\pi^{2}}\sum_{x\in\Lambda}\epsilon_{\alpha\beta\gamma\delta} \operatorname{tr}\bigl{[}F^{\text{clov}}_{\alpha\beta}(x)F^{\text{clov}}_{ \gamma\delta}(x)\bigr{]} \tag{13}\]
This field-strength tensor is given by
\[F^{\text{clov}}_{\alpha\beta}(x)=-\frac{i}{8}\left(C_{\alpha\beta}(x)-C_{\beta\alpha}(x)\right), \tag{14}\]
where the clover term \(C_{\alpha\beta}(x)\) is defined as
\[C_{\alpha\beta}(x)=P_{\alpha,\beta}(x)+P_{\beta,-\alpha}(x)+P_{-\alpha,-\beta} (x)+P_{-\beta,\alpha}(x), \tag{15}\]
where \(P_{\alpha,\beta}(x)\) denotes the plaquette:
\[P_{\alpha,\beta}(x)=U_{\alpha}(x)U_{\beta}(x+\hat{\alpha})U^{\dagger}_{\alpha}(x+\hat{\beta})U^{\dagger}_{\beta}(x) \tag{16}\]
Alternatively, the topological charge may also be defined via the plaquette-based definition, here denoted by \(Q_{p}\):
\[Q_{p}=\frac{1}{32\pi^{2}}\sum_{x\in\Lambda}\epsilon_{\alpha\beta\gamma\delta} \operatorname{tr}\Bigl{[}F^{\text{plaq}}_{\alpha,\beta}(x)F^{\text{plaq}}_{ \gamma,\delta}(x)\Bigr{]} \tag{17}\]
Similar to the clover-based field-strength tensor, \(F^{\text{plaq}}_{\alpha,\beta}(x)\) is defined as:
\[F^{\text{plaq}}_{\alpha\beta}(x)=-\frac{i}{2}\left(P_{\alpha,\beta}(x)-P_{\beta,\alpha}(x)\right) \tag{18}\]
Note that both \(Q_{c}\) and \(Q_{p}\) formally suffer from \(\mathcal{O}(a^{2})\) artifacts, although the coefficient is typically smaller for the clover-based definition \(Q_{c}\). The topological charge is always measured after applying \(\mathcal{O}(30)\) steps of stout smearing [79] with a smearing parameter \(\rho=0.12\). To estimate the autocorrelation times of the system, it is also useful to consider the squared topological charge [18]. Additionally, we also consider the Wilson gauge action and \(n\times n\) Wilson loops for \(n\in\{2,4,8\}\) at different smearing levels. We denote these by \(S_{w}\) and \(\mathcal{W}_{n}\) respectively.
### Update algorithms
Throughout this work, we employ a number of different update schemes: To illustrate critical slowing down of conventional update algorithms and to set a baseline for comparison with Metadynamics-based algorithms, we use standard Hybrid Monte Carlo updates with unit length trajectories (1HMC), a single heat bath sweep (1HB), five heat bath sweeps (5HB), and a single heat bath sweep followed by four overrelaxation sweeps (1HB+4OR). The local update algorithms are applied to three distinct SU(2) subgroups during each sweep [6], and the HMC updates use an Omelyan-Mryglod-Folk fourth-order minimum norm integrator [80] with a step size of \(\epsilon=0.2\), which leads to acceptance rates above 99% for the parameters used here.
We compare these update schemes to Metadynamics HMC updates with unit length trajectories (MetaD-HMC), and a combination of parallel tempering with Metadynamics (PT-MetaD) which is discussed in more detail in Section VI.
An important requirement for the successful application of Metadynamics is the identification of appropriate CVs. In our case, the CV should obviously be related to the topological charge. However, it should not always be (close to) integer-valued, but rather reflect the geometry of configuration space with respect to the boundaries between topological sectors. On the other hand, the CV needs to track the topological charge closely enough for the algorithm to be able to resolve and overcome the action barriers between topological sectors. A straightforward approach is to apply only a moderate amount of some kind of smoothing procedure, such as cooling or smearing, to the gauge fields before measuring the topological charge. Since these smoothing procedures involve some kind of spatial averaging, the action will become less local, which complicates the use of local update algorithms. Therefore, we use the HMC algorithm to efficiently update the entire gauge field at the same time, which requires a differentiable smoothing procedure such as stout [79] or HEX smearing [81]. Due to its simpler implementation compared to HEX smearing, we choose stout smearing here. Previous experience [82] seems to indicate that four to five stout smearing steps with a smearing parameter \(\rho=0.12\) strike a reasonable balance between having a smooth CV and still representing the topological charge accurately.
The force contributed by the topological bias potential may be written in terms of the chain rule:
\[\begin{split} F_{\mu,\text{meta}}(x)=&-\frac{ \partial V_{\text{meta}}}{\partial Q_{\text{meta}}}\frac{\partial Q_{\text{ meta}}}{\partial U_{\mu_{n}}^{(n)}(x_{n})}\\ &\times\frac{\partial U_{\mu_{n}}^{(n)}(x_{n})}{\partial U_{\mu_ {n-1}}^{(n-1)}(x_{n-1})}\dots\frac{\partial U_{\mu_{1}}^{(1)}(x_{1})}{ \partial U_{\mu}(x)}\end{split} \tag{19}\]
Here we have introduced the notation \(V_{\text{meta}}\) for the bias potential and \(Q_{\text{meta}}\) for the CV to clearly distinguish it from other definitions of the topological charge. The first term in the equation, corresponding to the derivative of the bias potential with respect to \(Q_{\text{meta}}\), is trivial, but the latter two terms are more complicated: The derivative of \(Q_{\text{meta}}\) with respect to the maximally smeared field \(U^{(n)}\) is given by a sum of staples with clover term insertions, and the final term corresponds to the stout force recursion [79] that also appears during the force calculation when using smeared fermions. Note that in machine learning terminology, this operation is essentially a back-propagation [83] and may be computed efficiently using reverse mode automatic differentiation. More details on the calculation of the force can be found in Appendix A.
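The back-propagation structure of Eq. (19) can be illustrated by a toy example in which a scalar "field" is smeared several times by a local averaging map, a scalar CV is built from the smeared field, and the force on the unsmeared field is obtained in a single reverse-mode pass. Here JAX's automatic differentiation stands in for the hand-coded stout-force recursion; all functions are schematic placeholders, not the SU(3) implementation.

```python
import jax
import jax.numpy as jnp

def smear(phi):
    # toy local averaging map standing in for a stout-smearing step
    return 0.5 * phi + 0.25 * (jnp.roll(phi, 1) + jnp.roll(phi, -1))

def collective_variable(phi, n_smear=5):
    # CV defined on the maximally smeared field, analogous to Q_meta
    for _ in range(n_smear):
        phi = smear(phi)
    return jnp.sum(jnp.sin(phi))

def bias(q):
    # stand-in for the tabulated bias potential V_meta
    return 0.5 * jnp.sin(jnp.pi * q) ** 2

# F_meta = -dV/dQ * dQ/dphi, accumulated through all smearing levels in one pass
meta_force = jax.grad(lambda phi: -bias(collective_variable(phi)))

phi0 = jnp.linspace(0.0, 1.0, 16)
print(meta_force(phi0))
```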
The bias potential is constructed from a sum of one-dimensional Gaussians, as described in Section II, and stored as a histogram. Due to the charge conjugation symmetry, we can update the potential symmetrically. Values at each point are reconstructed by linearly interpolating between the two nearest bins, and the derivative is approximated by their finite difference. To limit the evolution of a system to relevant regions of the phase space, it is useful to introduce an additional penalty term to the potential once the absolute value of \(Q_{\text{meta}}\) has crossed certain thresholds \(Q_{\text{min}}\) and \(Q_{\text{max}}\). If the system has exceeded the threshold, the potential is given by the outermost value of the histogram, plus an additional term that scales quadratically with the distance to the outer limit of the histogram.
Unless mentioned otherwise, we have used the following values as default parameters for the potential: \(Q_{\text{max/min}}=\pm 8\), \(n_{\text{bins}}=800\), \(w=0.05\), while \(\delta Q^{2}\) has always been set equal to the bin width, i.e., \((Q_{\text{max}}-Q_{\text{min}})/n_{\text{bins}}\).
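A sketch of such a binned bias potential, with symmetric updates, linear interpolation, and a quadratic penalty beyond the thresholds, could look as follows. The deposition rule and the penalty coefficient are illustrative assumptions, while the parameter defaults mirror the values quoted above.

```python
import numpy as np

class HistogramBias:
    """Sketch of a binned bias potential with symmetric updates, linear interpolation,
    and a quadratic penalty outside [q_min, q_max]."""

    def __init__(self, q_min=-8.0, q_max=8.0, n_bins=800, w=0.05, penalty=10.0):
        self.q_min, self.q_max, self.n_bins = q_min, q_max, n_bins
        self.dq = (q_max - q_min) / n_bins     # bin width; also used as Gaussian width
        self.w = w                             # Gaussian height
        self.penalty = penalty                 # strength of the out-of-range penalty
        self.grid = q_min + self.dq * np.arange(n_bins + 1)
        self.pot = np.zeros(n_bins + 1)        # V tabulated on the bin edges

    def update(self, q):
        """Deposit a Gaussian at q and, by charge conjugation symmetry, at -q."""
        for center in (q, -q):
            self.pot += self.w * np.exp(-(self.grid - center) ** 2 / (2.0 * self.dq ** 2))

    def _index(self, q):
        i = int((q - self.q_min) / self.dq)
        return min(max(i, 0), self.n_bins - 1)

    def value(self, q):
        """Linear interpolation inside the range, quadratic penalty outside."""
        if q < self.q_min:
            return self.pot[0] + self.penalty * (self.q_min - q) ** 2
        if q > self.q_max:
            return self.pot[-1] + self.penalty * (q - self.q_max) ** 2
        i = self._index(q)
        frac = (q - self.grid[i]) / self.dq
        return (1.0 - frac) * self.pot[i] + frac * self.pot[i + 1]

    def derivative(self, q):
        """Finite difference of the two nearest tabulated values."""
        i = self._index(q)
        return (self.pot[i + 1] - self.pot[i]) / self.dq
```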
Note that it is often convenient to build up a bias potential in one or several runs, and then simulate and measure with a static potential generated in the previous runs. In some sense, this can be thought of as a combination of Metadynamics and multicanonical simulation.
## IV Results with conventional update algorithms
To establish a baseline to compare our results to, we have investigated the performance of some conventional update algorithms using the Wilson and DBW2 gauge actions. Furthermore, we have made a rough determination of the gradient flow scales \(t_{0}\) and \(w_{0}\) for the DBW2 action. Some preliminary results for the Wilson action were already presented in [82].
### Critical slowing down with Wilson and DBW2 gauge actions
In order to study the scaling of autocorrelations for different update schemes, we have performed a series of simulations with the Wilson gauge action on a range of lattice spacings. The parameters were chosen in such a way as to keep the physical volume approximately constant at around \((1.1\,\mathrm{fm})^{4}\), using the scale given by the rational fit function in [84], which was based on data from [85]. A summary of the simulation parameters can be found in Table 1. Since autocorrelation times near second-order phase transitions are expected to be described by a power law, we use the following fit ansatz in an attempt to parameterize the scaling:
\[\tau_{\mathrm{int}}=c\left(\frac{a}{r_{0}}\right)^{-z} \tag{20}\]
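Since the ansatz is linear in log-log form, the exponent can be extracted by ordinary least squares; the sketch below uses placeholder numbers rather than the measured autocorrelation times.

```python
import numpy as np

# Power-law ansatz of Eq. (20): tau_int = c * (a/r0)^(-z).
# Placeholder values standing in for lattice spacings and measured integrated
# autocorrelation times; they are not results from this work.
a_over_r0 = np.array([0.22, 0.18, 0.16, 0.14, 0.12, 0.11, 0.10])
tau_int = np.array([12.0, 30.0, 55.0, 110.0, 260.0, 420.0, 700.0])

# In log-log form the ansatz is linear: log(tau) = log(c) - z * log(a/r0).
slope, intercept = np.polyfit(np.log(a_over_r0), np.log(tau_int), deg=1)
z, c = -slope, np.exp(intercept)
print(f"z = {z:.2f}, c = {c:.3g}")
```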
All autocorrelation times and their uncertainties are estimated following the procedure described in [86]. Figure 1 shows the scaling of the integrated autocorrelation times of \(2\times 2\) Wilson loops \(\mathcal{W}_{2}\) and the square \(Q_{c}^{2}\) of the clover-based topological charge with the lattice spacing. Additionally, the figure includes power law fits to the data and the resulting values for the dynamical critical exponents \(z(\mathcal{W}_{2})\) and \(z(Q_{c}^{2})\). Both observables were measured after 31 stout smearing steps with a smearing parameter \(\rho=0.12\). While the integrated autocorrelation times of both observables increase towards finer lattice spacings and are adequately described by a power law behavior, the increase is much steeper for the squared topological charge than for the smeared \(2\times 2\) Wilson loops. Below a crossover point at \(a\approx 0.08\,\mathrm{fm}\), the autocorrelation times of the squared topological charge start to dominate. They can be described both by a dynamical critical exponent \(z\approx 5\) and, alternatively, by an exponential increase, which was first suggested in [17]. This behavior is compatible with the observations in [18].
In contrast, the autocorrelation time of Wilson loops is compatible with a much smaller exponent \(z\approx 1\)-2. As can be seen in Table 3, the critical exponent does not change significantly with the size of the Wilson loop after 31 stout smearing steps. Generally, the integrated autocorrelation times of smeared Wilson loops slightly increase both with the size of the loops and the number of smearing levels. The only exception to this behavior occurs for larger loops, where a few steps of smearing are required to obtain a clean signal and not measure the autocorrelation of the noise instead.
Regarding the different update algorithms, the unit length HMC does show a somewhat better scaling behavior for all observables than the local update algorithms, but it is also about a factor 7 more computationally expensive per update step (see Table 2).[87] For all local update algorithms considered here, the critical exponents are very similar, but the combination of one heat bath and four overrelaxation steps has the smallest prefactor. It is interesting to note that this algorithm is also faster by more than a factor of 2 than the five-step heat bath update scheme, which does not profit from the inclusion of overrelaxation steps. The single step heat bath without overrelaxation, although numerically cheaper, does have the worst prefactor of the local update algorithms.
Note that the reported numbers differ from those in [82] due to a different fit ansatz (in the proceedings, the fit ansatz included an additional constant term).
For the DBW2 action, the problem is more severe. Figure 2 shows the time series of the topological charge for two runs using the 1HB+4OR and the 1HMC update scheme. Both simulations were done on a \(16^{4}\) lattice at \(\beta=1.25\) using the DBW2 action. Evidently, both update schemes are unable to tunnel between different topological sectors in a reasonable time. Only a single configuration during the 1HB+4OR run and two (successive) configurations during the 1HMC run fulfill the condition \(|Q_{c}|>0.5\).
\begin{table}
\begin{tabular}{c c} Update scheme & Relative time \\ \hline
1HMC & 6.98 \\
1HB & 1.00 \\
5HB & 4.99 \\
1HB+4OR & 2.02 \\ \end{tabular}
\end{table}
Table 2: Relative performance of the update algorithms used in our scaling runs. The results cited here are from the \(22^{4}\) lattices. Note that the performance of the heat bath algorithm is slightly better for larger \(\beta\), due to the more efficient sampling of the probability distribution (cf. [4; 5]).
\begin{table}
\begin{tabular}{c c c c} \(\beta\) & \(L/a\) & \(a\) [fm] & \(N_{\mathrm{conf}}\) \\ \hline
5.8980 & 10 & 0.1097 & 100000 \\
6.0000 & 12 & 0.0914 & 100000 \\
6.0938 & 14 & 0.0783 & 100000 \\
6.1802 & 16 & 0.0686 & 100000 \\
6.2602 & 18 & 0.0610 & 100000 \\
6.3344 & 20 & 0.0549 & 100000 \\
6.4035 & 22 & 0.0499 & 100000 \\ \end{tabular}
\end{table}
Table 1: A summary of the simulation parameters for the Wilson gauge action runs using conventional update algorithms. The scale was set via the rational fit from [84], which in turn used data from [85].
### Scale setting for the DBW2 action
To the best of our knowledge, scales for the DBW2 action in pure gauge theory have only been computed based on simulations with \(\beta\leq 1.22\)[15; 88], and interpolation formulas are only available based on data with \(\beta\leq 1.04\)[89]. Since here we perform simulations at \(\beta=1.25\), we compute approximate values for \(t_{0}\)[90] and \(w_{0}\)[91], which allows us to estimate our lattice spacings for comparison to the Wilson results. Both scales are based on
Figure 2: Time series of the topological charge for \(V=16^{4}\), \(\beta=1.25\) using the DBW2 action. The configurations were generated with the 1HB+4OR update scheme (top) and the 1HMC (bottom). Out of a total of 400000 each, only a single configuration during the 1HB+4OR run and two (successive) configurations during the 1HMC run fulfill the condition \(|Q_{c}|>0.5\).
Figure 1: Scaling of the integrated autocorrelation times of square \(2\times 2\) Wilson loops \(\mathcal{W}_{2}\) (left) and the squared topological charge \(Q_{c}^{2}\) (right) for different update schemes for the Wilson gauge action. The scaling can be described by the power-law fit ansatz of equation (20). Details on the simulation parameters are listed in Table 1.
\begin{table}
\begin{tabular}{c c c c c c} Update scheme & \(z(Q_{c}^{2})\) & \(z(S_{w})\) & \(z(\mathcal{W}_{2})\) & \(z(\mathcal{W}_{4})\) & \(z(\mathcal{W}_{8})\) \\ \hline
1HMC & 4.90(13) & 1.27(12) & 1.23(12) & 1.16(12) & 1.29(16) \\
1HB & 5.55(25) & 1.69(10) & 1.66(10) & 1.64(9) & 1.82(12) \\
5HB & 5.43(22) & 1.92(11) & 1.89(10) & 1.85(10) & 1.95(10) \\
1HB+4OR & 5.50(9) & 1.77(15) & 1.74(14) & 1.71(14) & 1.85(13) \\ \end{tabular}
\end{table}
Table 3: The dynamical critical exponents obtained from power law fits to the integrated autocorrelation times of \(Q_{c}^{2}\), \(S_{w}\), and Wilson loops of different sizes after 31 stout smearing steps. Notably, the dynamical critical exponents associated with \(Q_{c}^{2}\) are much larger than those associated with the smeared action or smeared Wilson loops of different sizes.
the density \(E\), which is defined as:
\[\begin{split} E&=\frac{1}{4V}\sum_{x\in\Lambda}F^{a}_{ \mu\nu}(x)F^{a}_{\mu\nu}(x)\\ &=-\frac{1}{2V}\sum_{x\in\Lambda}\mathrm{tr}[F_{\mu\nu}(x)F_{\mu \nu}(x)]\end{split} \tag{21}\]
Similar to the topological charge definitions, we adopt a plaquette- and clover-based definition of the field strength tensor, with the only difference being that the components are also made traceless, and not just anti-hermitian. The gradient flow scales \(t_{0}\) and \(w_{0}\) are both defined implicitly:
\[\mathcal{E}(t)=t^{2}\langle E\rangle\Big{|}_{t=t_{0}} =0.3 \tag{22}\] \[W(t)=t\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{E}(t)\Big{|}_{t=w_ {0}^{2}} =0.3 \tag{23}\]
The flow equation was integrated using the third-order commutator free Runge-Kutta scheme from [90] with a step size of \(\epsilon=0.025\). Measurements of the clover-based energy density were performed every 10 integration steps, and \(t^{2}\langle E(t)\rangle\) was fitted with a cubic spline, which was evaluated with a step size of 0.001. For every value of \(\beta\), two independent simulations with 100 measurements each were performed on \(48\times 32^{3}\) lattices. Every measurement was separated by 200 update sweeps with the previously described 1HB+4OR update scheme, and the initial 2000 updates were discarded as thermalization phase. Our results are displayed in Table 4. Using the physical values from [92], these results imply a physical volume of approximately \((0.95\,\mathrm{fm})^{4}\) and a temperature of around \(207\,\mathrm{MeV}\) for the \(16^{4}\) lattice from the previous section.
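The extraction of the flow scales from the tabulated \(t^{2}\langle E(t)\rangle\) values can be sketched as follows; function and variable names are illustrative, and the dense evaluation step follows the value quoted above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def flow_scales(t, E_mean, target=0.3):
    """Extract t0 and w0^2 from flow-time samples of <E(t)>, cf. Eqs. (22)-(23).

    t      : array of flow times at which the energy density was measured
    E_mean : array of ensemble-averaged energy densities <E(t)>
    Assumes that both observables cross the target value inside the measured range.
    """
    spline = CubicSpline(t, t ** 2 * E_mean)       # cubic spline of t^2 <E(t)>
    dense_t = np.arange(t[0], t[-1], 0.001)
    cal_E = spline(dense_t)                        # script-E(t)
    W = dense_t * spline(dense_t, 1)               # W(t) = t * d/dt [t^2 <E(t)>]
    t0 = dense_t[np.argmax(cal_E >= target)]      # first crossing of 0.3
    w0_squared = dense_t[np.argmax(W >= target)]
    return t0, w0_squared
```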
In order to facilitate comparison with other results, we also provide an interpolation of our lattice spacing results. For this purpose, we use a rational fit ansatz with three fit parameters
\[\log\bigl{(}t_{0}/a^{2}\bigr{)}=\frac{8\pi^{2}}{33}\beta\frac{1+d_{1}/\beta+d_{2 }/\beta^{2}}{1+d_{3}/\beta} \tag{24}\]
that is asymptotically consistent with perturbation theory [84] and has a sufficient number of degrees of freedom to describe our data well. For our reference scale setting, based on the clover definition of \(t_{0}\), this results in a fit with \(\chi^{2}/\mathrm{d.o.f.}\approx 1.31\) and parameters \(d_{1}\approx 1.0351\), \(d_{2}\approx-1.3763\), \(d_{3}\approx 0.4058\), which is displayed in Figure 3. We want to emphasize that these results are not meant to be an attempt at a precise scale determination, but rather only serve as an approximate estimate. Especially for the finer lattices, the proper sampling of the topological sectors cannot be guaranteed, and the comparatively small volumes may introduce non-negligible finite volume effects.
## V Results with Metadynamics
Figure 4 shows the time series of the topological charge from simulations with the HMC and the MetaD-HMC with five and ten stout smearing steps on a \(22^{4}\) lattice at \(\beta=6.4035\), using the Wilson gauge action. Both MetaD-HMC runs tunnel multiple times between different topological sectors, whereas the conventional HMC essentially displays a single tunneling event between sectors \(Q=0\) and \(Q=1\). A noteworthy difference between the two MetaD-HMC runs is the increase of fluctuations with higher amounts of smearing. If too many smearing steps are used to define the CV, the resulting \(Q\) values
\begin{table}
\begin{tabular}{r c c c c c} \(\beta\) & \(N_{t}\times N_{s}^{3}\) & \(t_{0,\mathrm{plaq}}/a^{2}\) & \(t_{0,\mathrm{clc}\omega}/a^{2}\) & \(w_{0,\mathrm{plaq}}/a^{2}\) & \(w_{0,\mathrm{clc}\omega}/a^{2}\) \\ \hline
1.04 & \(48\times 32\) & 3.445(3) & 3.647(3) & 3.601(4) & 3.641(4) \\
1.10 & \(48\times 32\) & 4.483(6) & 4.684(6) & 4.675(9) & 4.716(9) \\
1.15 & \(48\times 32\) & 5.549(9) & 5.751(10) & 5.787(14) & 5.827(14) \\
1.16 & \(48\times 32\) & 5.761(9) & 5.962(9) & 5.992(15) & 6.032(15) \\
1.17 & \(48\times 32\) & 6.032(8) & 6.234(8) & 6.291(14) & 6.332(13) \\
1.18 & \(48\times 32\) & 6.269(13) & 6.470(14) & 6.525(24) & 6.566(24) \\
1.19 & \(48\times 32\) & 6.524(10) & 6.726(10) & 6.803(12) & 6.844(12) \\
1.20 & \(48\times 32\) & 6.798(15) & 7.000(15) & 7.082(20) & 7.123(20) \\
1.21 & \(48\times 32\) & 7.047(16) & 7.248(16) & 7.331(25) & 7.372(25) \\
1.22 & \(48\times 32\) & 7.386(23) & 7.588(24) & 7.710(35) & 7.751(35) \\
1.23 & \(48\times 32\) & 7.642(23) & 7.844(24) & 7.954(35) & 7.995(35) \\
1.24 & \(48\times 32\) & 7.963(23) & 8.165(23) & 8.293(35) & 8.334(35) \\
1.25 & \(48\times 32\) & 8.312(27) & 8.515(28) & 8.681(37) & 8.721(37) \\ \end{tabular}
\end{table}
Table 4: Results for different gradient flow scales for the DBW2 gauge action. These results should not be interpreted as an attempt at a precise scale determination, but rather only serve as an approximate estimate.
Figure 3: Rational fit of the form Equation (24) to the \(t_{0,\mathrm{clov}}/a^{2}\) values presented in Table 4. The fit has \(\chi^{2}/\mathrm{d.o.f.}\approx 1.31\) and parameters \(d_{1}\approx 1.0351\), \(d_{2}\approx-1.3763\), and \(d_{3}\approx 0.4058\). Error bars are substantially smaller than the symbols.
will generically be closer to integer, so more simulation time is spent in the sector boundary regions. This will eventually drive the system to coarser regions of configuration space. Since these regions do not contribute significantly to expectation values in the path integral, it is desirable to minimize the time that the algorithm spends there. This is directly related to the issue of small effective sample sizes, which we will discuss in more detail in Section V.2.
A similar comparison for the DBW2 action can be seen in Figure 5. Here, two MetaD-HMC runs with four and five stout smearing steps on a \(16^{4}\) lattice at \(\beta=1.25\) are compared to the 1HMC and 1HB+4OR runs, which were already shown in Figure 2. Both conventional update schemes are confined to the zero sector, whereas the two MetaD-HMC runs explore topological sectors up to \(|Q|=6\). More quantitatively, the integrated autocorrelation time of \(Q_{c}^{2}\) on the DBW2 stream is estimated to be \(\tau_{\text{int}}(Q_{c}^{2})=2188(478)\) for the MetaD-HMC algorithm with 4 smearing steps, whereas lower bounds for the autocorrelation times for the 1HMC and 1HB+4OR update schemes are \(4\times 10^{5}\), which implies a difference of more than two orders of magnitude.
To illustrate the role of the CV \(Q_{\text{meta}}\) and to understand its effect on the dynamics, it may be helpful
Figure 4: Comparison of the time series of the topological charge between runs using the HMC algorithm and MetaD-HMC runs for \(V=22^{4}\), \(\beta=6.4035\), using the Wilson gauge action. Both Metadynamics runs are able to transition between topological sectors numerous times, whereas the run using the conventional HMC is essentially stuck in two sectors.
Figure 5: Comparison of the time series of the topological charge between runs using conventional update algorithms (HMC and a combination of heat bath and overrelaxation updates) and MetaD-HMC runs for \(V=16^{4}\), \(\beta=1.25\), using the DBW2 action. The results shown for the 1HMC and 1HB+4OR update schemes are from the same runs as the time series shown in Figure 2. While the conventional update algorithms are unable to escape the \(Q=0\) sector, both Metadynamics runs frequently transition between different topological sectors.
to compare the time series of \(Q_{\text{meta}}\) and \(Q_{c}\), as shown in Figure 6. The two observables are clearly correlated, but \(Q_{\text{meta}}\) is distributed more evenly between integers.
### Computational overhead and multiple timescale integration
A fair comparison of the different update schemes also needs to take the computational cost of the algorithms into account. Table 5 shows the relative timings for the different update schemes used here, measured for simulations carried out on \(16^{4}\) lattices. While the MetaD-HMC was not optimized for performance, it is still clear that the additional overhead introduced by the computation of the Metadynamics force contribution is significant for pure gauge theory. The relative overhead is especially large compared to local update algorithms, which are already more efficient than the regular HMC. Note, however, that due to its more non-local character, the relative loss in efficiency when switching to Metadynamics from either a local update algorithm or HMC, is already noticeably smaller for the DBW2 gauge action. Since the majority of the computational overhead comes from the Metadynamics force contribution, and the involved scales are different from those relevant for the gauge force, it seems natural to split the integration into multiple timescales in a similar fashion to the Sexton-Weingarten scheme [93]: The force contributions from the bias potential are correlated to the topological charge, which is an IR observable, whereas the gauge force is usually dominated by short-range, UV fluctuations. Therefore, it is conceivable that integrating the Metadynamics force contribution on a coarser timescale than the gauge force could significantly decrease the required computational effort, while still being sufficiently accurate to lead to reasonable acceptance rates.
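The nesting itself is straightforward; a generic two-level leapfrog in which the Metadynamics force is only evaluated on the coarse level might look as follows. Flat arrays stand in for the group-valued links, so this is an illustration of the splitting rather than the integrator used in practice.

```python
import numpy as np

def nested_leapfrog(q, p, force_gauge, force_meta, tau, n_outer, n_inner):
    """Two-level leapfrog in the spirit of the Sexton-Weingarten scheme [93].

    The (expensive) Metadynamics force is evaluated 2*n_outer times per trajectory,
    the gauge force 2*n_outer*n_inner times.
    """
    dt_outer = tau / n_outer
    dt_inner = dt_outer / n_inner
    for _ in range(n_outer):
        p = p + 0.5 * dt_outer * force_meta(q)      # half kick, coarse level
        for _ in range(n_inner):                    # fine level: gauge force only
            p = p + 0.5 * dt_inner * force_gauge(q)
            q = q + dt_inner * p
            p = p + 0.5 * dt_inner * force_gauge(q)
        p = p + 0.5 * dt_outer * force_meta(q)      # half kick, coarse level
    return q, p
```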
We have attempted to use combinations of both the Leapfrog and the Omelyan-Mryglod-Folk second-order integrator with the Omelyan-Mryglod-Folk fourth-order minimum norm integrator. Unfortunately, we were unable to achieve a meaningful reduction of Metadynamics force evaluations without encountering integrator instabilities and deteriorating acceptance rates. However, this approach might still be helpful for simulations with dynamical fermions, where it is already common to split the forces into more than two levels.
Even if such a multiple timescale approach should prove to be unsuccessful in reducing the number of Metadynamics force evaluations, we expect the relative overhead of Metadynamics to be much smaller for simulations including dynamical fermions. In previous studies [41], it was found that compared to conventional HMC simulations, simulations with Metadynamics and 20 steps of stout smearing were about three times slower in terms of real time.
### Scaling of the reweighting factor and improvements to the bias potential
Due to the inclusion of the bias potential, expectation values with respect to the original, physical probability density are obtained by reweighting. As with any reweighting procedure, the overlap between the sampled distribution and the distribution of physical interest needs to be sufficiently large for the method to work properly. A common measure to quantify the efficiency of the reweighting procedure is the effective sample size (ESS), defined as
\[\text{ESS}=\frac{\left(\sum\limits_{i}w_{i}\right)^{2}}{\sum\limits_{i}w_{i} ^{2}}, \tag{25}\]
where \(w_{i}\) is the respective weight associated with each individual configuration. In the case of Metadynamics, this is simply \(e^{V(Q_{\text{meta},i})}\). We found the normalized ESS, i.e. the ESS divided by the total number of configurations, to generally be of order \(\mathcal{O}(10^{-2})\) or lower when simulating in regions of parameter space, where conventional algorithms fail to explore topological sectors other than \(Q=0\).
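A numerically stable evaluation of Eq. (25), with the Metadynamics weights entering as \(\log w_{i}=V(Q_{\text{meta},i})\), might look as follows:

```python
import numpy as np

def effective_sample_size(log_weights):
    """ESS of Eq. (25); a common overall shift of the log-weights cancels in the
    ratio and avoids overflow in the exponentials."""
    log_weights = np.asarray(log_weights, dtype=float)
    w = np.exp(log_weights - log_weights.max())
    return np.sum(w) ** 2 / np.sum(w ** 2)
```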
Although the low ESS ultimately results from the fact that the bias potential is constructed in such a way as to have a marginal distribution over the CV that is flat, we can nonetheless distinguish two parts of this effect. On the one hand, there is the inevitable flattening of the intersector barriers by the bias potential, which is necessary to facilitate tunneling between adjacent topological sectors. On the other hand, however, the relative weights of the different topological sectors are also cancelled by the bias potential. While it is necessary for a topology changing update algorithm to reproduce the intersector barriers faithfully, the leveling of the weights of the different topological sectors is entirely unwanted. It enhances the time that the simulation spends at large values of \(|Q|\), so that these sectors are overrepresented compared to their true statistical weight. It is therefore conceivable that by retaining only the intersector barrier part of the bias
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{Update scheme} & \multicolumn{2}{c}{Relative time} \\ & Wilson action & DBW2 action \\ \hline
1HB+4OR & 1 & 1 \\
1HMC & 3.56 & 4.46 \\ MetaD-HMC (4stout) & 95.48 & 31.03 \\ MetaD-HMC (5stout) & 114.02 & 36.37 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Relative timings for the different update schemes measured for simulations carried out on \(16^{4}\) lattices. The significant computational overhead for the Metadynamics updates compared to the other algorithms is due to the stout smearing and the stout force recursion required for the Metadynamics force calculation.
potential, the relative weights of the different topological sectors will be closer to their physical values, and the ESS will increase. In previous tests in 2-dimensional U(1) gauge theory, we found that the bias potentials could be described by a sum of a quadratic and multiple oscillating terms [94]:
\[V(Q)=AQ^{2}+\sum_{i=1}^{N}B_{i}\sin^{2}(\pi f_{i}Q) \tag{26}\]
Here, we fit our bias potentials, obtained from the 2-dimensional U(1) simulations, to this form. We then obtain a modified bias potential by subtracting the resulting quadratic term from the data. This modification of the bias potential is effective in reducing the oversampling of topological sectors with large \(|Q|\), as evidenced by the larger normalized ESS in Table 6. The resulting marginal distribution over the topological charge is then no longer expected to be constant, but rather to resemble a parabola.
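Since the model of Eq. (26) is linear in the coefficients \(A\) and \(B_{i}\) once the frequencies \(f_{i}\) are fixed, the quadratic part can be identified and removed by ordinary least squares; the frequencies in the sketch below are an illustrative assumption, not the values determined in our fits.

```python
import numpy as np

def remove_quadratic_part(q_grid, v_data, frequencies=(1.0, 2.0)):
    """Fit the tabulated bias potential to Eq. (26) and subtract the quadratic term,
    keeping only the oscillating (inter-sector barrier) part.

    q_grid : grid of CV values on which the potential is tabulated
    v_data : corresponding bias potential values
    """
    basis = [q_grid ** 2] + [np.sin(np.pi * f * q_grid) ** 2 for f in frequencies]
    design = np.column_stack(basis)
    coeffs, *_ = np.linalg.lstsq(design, v_data, rcond=None)
    A = coeffs[0]
    return v_data - A * q_grid ** 2
```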
Here and in Section VI of this work, we perform scaling tests of the proposed improvements in 2-dimensional U(1) gauge theory, where high statistics can be generated more easily than in 4-dimensional \(SU(3)\) gauge theory. The action is given by the standard Wilson plaquette action
\[S_{g}=\beta\sum_{n\in\Lambda}\left(1-\text{Re}[P_{t,x}(n)]\right), \tag{27}\]
and updates are performed with a single-hit Metropolis algorithm. The topological charge is defined using a geometric, integer-valued definition:
\[Q=\frac{1}{2\pi}\,\text{Im}\Big{[}\sum_{n\in\Lambda}\log P_{t,x}(n)\Big{]} \tag{28}\]
For all Metadynamics updates, we use a field-theoretic definition of the topological charge that is generally not integer-valued:
\[Q_{\text{meta}}=\frac{1}{2\pi}\,\text{Im}\left[\sum_{n\in\Lambda}P_{t,x}(n)\right] \tag{29}\]
Since the charge distributions obtained from the two definitions already show reasonable agreement without any smearing for the parameters considered here, we can use local update algorithms and directly include the Metadynamics contribution in the staple. A similar idea that encourages tunneling in the Schwinger model by adding a small modification to the action was proposed in [95].
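For reference, both charge definitions, Eqs. (28) and (29), can be written compactly in terms of the plaquette phases; the sketch below stores the links as angles \(\theta_{\mu}(t,x)\) in an array of shape \((2,N_{t},N_{x})\), which is an illustrative layout choice.

```python
import numpy as np

def plaquette_phases(theta):
    """Plaquette angles of 2-dimensional U(1); theta[mu, t, x] holds the link phases."""
    return (theta[0] + np.roll(theta[1], -1, axis=0)
            - np.roll(theta[0], -1, axis=1) - theta[1])

def q_geometric(theta):
    """Integer-valued charge of Eq. (28): plaquette phases mapped to (-pi, pi]."""
    p = plaquette_phases(theta)
    return int(np.rint(np.sum(np.angle(np.exp(1j * p))) / (2.0 * np.pi)))

def q_field_theoretic(theta):
    """Non-integer CV of Eq. (29): imaginary parts of the plaquettes themselves."""
    return np.sum(np.sin(plaquette_phases(theta))) / (2.0 * np.pi)
```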
Table 6 contains the relative ESS and integrated autocorrelation times for different lattice spacings on the same line of constant physics in 2-dimensional U(1) theory. We compare Metadynamics runs using bias potentials obtained directly from previous simulations with Metadynamics runs using potentials that were modified to retain the relative weights of the topological sectors as described above. We see large improvements for both the ESS and \(\tau_{\text{int}}\) in the modified case, even for the finest lattices considered.
We expect that the quadratic term is mostly relevant for small volumes and high temperatures. With larger volumes and lower temperatures, the slope should decrease, and with it the importance of correctly capturing this term. On the other hand, the oscillating term is expected to grow more important with finer lattice spacings, as the barriers between the different sectors grow steeper. Thus, the oscillating term needs to be described more and more accurately towards the continuum.
A standard technique to decrease, but not completely eliminate, action barriers is well-tempered Metadynamics [96]. In this approach, the height of the added Gaussians \(w\) decays with increasing potential. In our tests, we found that this method does increase the ESS, but at the cost of higher autocorrelation times, to the point where any
Figure 6: Time series of the CV \(Q_{\text{meta}}\) and the topological charge \(Q_{c}\), measured after 5 and 30 stout smearing steps with a smearing parameter of \(\rho=0.12\) respectively. The data is from the same MetaD-HMC run as shown in Figure 5.
gains from the ESS that would be visible in the uncertainties of observables are nullified. Although it might still have some use in accelerating the build-up process or as a possible intermediate stream for PT-MetaD (see Section VI), we decided not to explore this option further at this point.
### Accelerating the equilibration/buildup of the bias potential
Another avenue of improvement is accelerating the build-up of the bias potential, for which we again explore two possible ideas. This aspect becomes especially relevant when considering large-scale simulations, where runs are often limited to \(\mathcal{O}(10^{4})\) update sweeps, and a lengthy buildup phase of the bias potential would render the method infeasible.
The first idea is to exploit the aforementioned well-tempered variant of Metadynamics by choosing a larger starting value of the Gaussian height \(w\) and letting it decay slowly so as to minimize the change in the potential that arises from the decay. While this approach adds another fine-tunable parameter, namely the decay rate, we found that this did indeed significantly cut down on the number of update iterations required to thermalize the potential. A small caveat is that in order to choose the optimal decay rate, one would have to have knowledge of the approximate height of the action barriers, which is not always the case.
A way of improving the build-up time without any prior knowledge of the bias potential is to use an enhancement of Metadynamics which is most commonly referred to as multiple walkers Metadynamics [97], where the potential is simultaneously built up by several independent streams in a trivially parallelizable way. To add to this, we make each stream start in a distinct topological sector by the use of instanton configurations, which can easily be constructed in 2-dimensional U(1) gauge theory [98]. Namely, an instanton configuration with charge \(Q\) is given by
\[\begin{split} U^{I}_{t}(Q;t,x)&=\exp\left(-2\pi ix\frac{Q}{N_{x}N_{t}}\right),\\ U^{I}_{x}(Q;t,x)&=\exp\left(2\pi it\frac{Q}{N_{t}}\delta_{x,N_{x}}\right).\end{split} \tag{30}\]
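A sketch of the corresponding link phases (0-indexed, with the twist placed on the last \(x\)-slice; the array layout follows the earlier sketch of the two charge definitions) is:

```python
import numpy as np

def instanton_links(Q, N_t, N_x):
    """Link phases of Eq. (30) carrying topological charge Q on an N_t x N_x lattice.

    Returns theta[mu, t, x]; measuring the geometric charge of Eq. (28) on these
    links gives Q as long as |Q| < N_t * N_x / 2.
    """
    t = np.arange(N_t)[:, None]
    x = np.arange(N_x)[None, :]
    theta = np.zeros((2, N_t, N_x))
    theta[0] = -2.0 * np.pi * Q * x / (N_x * N_t)              # temporal links
    theta[1] = 2.0 * np.pi * Q * t / N_t * (x == N_x - 1)      # twist on the last x-slice
    return theta
```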
The parallel and serial build are compared in Figure 7 where the potential parameters for each stream are given by: \(Q_{\text{max/min}}=\pm 7\), \(n_{\text{bins}}=1400\) and \(w=0.002\). Since this method is an embarrassingly parallel task, we expect it to easily carry over to higher-dimensional, non-abelian theories with topological properties. In the case of 4-dimensional SU(3) the direct construction of instantons with higher charge is not quite as simple as in 2-dimensional U(1) gauge theory. The construction of lattice instantons with even charge is described in [98], and lattice instantons with odd charge can be constructed by combining multiple instantons with charge \(Q=1\)[99; 82]. Regardless, having exact instantons is not required, since
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(L/a\) & \(\beta\) & ESS/\(n_{\text{meas}}\) & \(\tau_{\text{int}}(Q^{2})\) \\ \hline \multicolumn{4}{c}{Regular bias potential} \\ \hline
16 & 3.2 & 0.33088(74) & 53 \\
20 & 5.0 & 0.2181(11) & 208 \\
24 & 7.2 & 0.1270(11) & 568 \\
28 & 9.8 & 0.08805(81) & 677 \\
32 & 12.8 & 0.08261(85) & 1168 \\ \hline \multicolumn{4}{c}{Modified bias potential} \\ \hline
16 & 3.2 & 0.9950627(11) & 6 \\
20 & 5.0 & 0.643084(87) & 34 \\
24 & 7.2 & 0.28572(12) & 124 \\
28 & 9.8 & 0.27291(16) & 228 \\
32 & 12.8 & 0.18751(25) & 248 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Normalized effective sample sizes for different lattices on the same line of constant physics in 2-dimensional U(1) gauge theory. Overall, \(10^{7}\) measurements were performed with a separation of 10 update sweeps between every measurement. More details on the simulation setup can be found in Section VI.2.
Figure 7: Comparison of serial and parallel build of the bias potential in 2-dimensional U(1) gauge theory for \(32^{2}\) lattices. The ratio of update iterations was held fixed at 8:1, so that both methods would use the same number of total update steps during each snapshot.
we only need each stream to start in a sector, where it is then very likely to fall into the local minimum of the specified sector.
Independent of the possible improvements mentioned here, a fine-tuning of the standard Metadynamics parameters could also prove to be worthwhile in regard to accelerating the buildup and improving the quality of the bias potential.
## VI Combining Metadynamics with Parallel Tempering
In order to eliminate the problem of small effective sample sizes observed in our Metadynamics simulations due to the required reweighting, we propose to combine Metadynamics with parallel tempering [70]. This is done in a spirit similar to the parallel tempering on a line defect proposed by Hasenbusch [30]. We introduce two simulation streams: one with a bias potential and one without it, while the physical action \(S(U)\) is the same for both streams. Note that since we are working in pure gauge theory, the second stream without bias potential can be updated with local update algorithms. After a fixed number of updates has been performed on the two streams, a swap of the two configurations is proposed and subjected to a standard Metropolis accept-reject step, with the action difference given by
\[\begin{split}\Delta S^{M}_{t}&=[S^{M}_{t}(U_{1})+ S(U_{2})]-[S^{M}_{t}(U_{2})+S(U_{1})]\\ &=V_{t}(Q_{\text{meta,1}})-V_{t}(Q_{\text{meta,2}}),\end{split} \tag{31}\]
where the indices of the quantities denote the number of the stream and \(V_{t}\) is the bias potential in the first stream. It is apparent and important to note that the action difference is simple to compute regardless of what the physical action looks like. Even in simulations where dynamical fermions are present, the contributions from the physical action are always cancelled out by virtue of the two streams having the same action parameters; only the contribution from the Metadynamics bias potential remains.
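Schematically, one PT-MetaD cycle therefore looks as follows; the update routines and the collective-variable callback are placeholders for the actual MetaD-HMC and heatbath/overrelaxation updates, and only the bias potential enters the swap acceptance:

```python
import numpy as np

def pt_metad_cycle(cfg_biased, cfg_meas, update_biased, update_meas, bias, cv,
                   rng=np.random.default_rng()):
    """Update both streams, then propose to swap their configurations.
    Only the bias potential enters the acceptance probability, cf. Eq. (31)."""
    cfg_biased = update_biased(cfg_biased)   # e.g. MetaD-HMC on the biased stream
    cfg_meas = update_meas(cfg_meas)         # e.g. heatbath + overrelaxation sweeps
    delta_s = bias(cv(cfg_biased)) - bias(cv(cfg_meas))   # V_t(Q_meta,1) - V_t(Q_meta,2)
    if rng.random() < np.exp(min(0.0, delta_s)):          # accept with prob. min(1, e^{delta_s})
        cfg_biased, cfg_meas = cfg_meas, cfg_biased
    return cfg_biased, cfg_meas
```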
Since the second stream samples configurations according to the (physical) target distribution, no reweighting is needed and thus the effective sample size is not reduced. Additionally, if the swaps are effective, this stream will inherit the topological sampling from the stream with bias potential and thus also sample topological sectors well. Effectively, the accept-reject step for swap proposals serves as a filter for configurations with vanishing statistical weight, thereby decreasing the statistical uncertainties on all observables weakly correlated with the topological charge. What remains to be seen is whether the efficiency of the sampling of the topological sectors carries over from the Metadynamics stream to the measurement stream. In this section, we address this question both via a scaling test in 2-dimensional U(1) and with exploratory runs in 4-dimensional SU(3) with the DBW2 gauge action in a region where conventional update algorithms are effectively frozen.
### Scaling tests in 2-dimensional U(1)
We carried out a number of simulations in 2-dimensional \(U(1)\) gauge theory for several lattice sizes and couplings with the same parameters as used in the test described in Section V.2. We use the potentials already built for these Metadynamics runs as static bias potentials in a number of parallel tempered Metadynamics runs. For each set of parameters, we carry out one run with the respective unmodified potential and one run with a potential modified as described in Section V.2. In these runs, swaps between the two streams were proposed after each had completed a single update sweep over all lattice sites. The resulting integrated autocorrelation times of the topological charge \(Q\) can be found in Table 7. To ensure that actual tunneling occurs, we also monitor the sum of the squared topological charges on both streams. This observable allows us to distinguish the fluctuations in \(Q\) originating from true tunneling events, mostly appearing in the stream with bias potential, from repeated swaps between the two streams without tunneling, which might also introduce a fluctuation of \(Q\) in the streams without actually overcoming any potential barriers.
Figure 9 shows the scaling of the total number of independent configurations, which is given by the quotient of the effective sample size (Equation (25)) and the integrated autocorrelation time of the topological susceptibility. The performance of the standard Metropolis algorithm is compared to parallel tempered and standard Metadynamics, with both modified and
Figure 8: Illustration of the proposed PT-MetaD algorithm. The upper row shows the run with an active bias potential, whereas the lower row shows the run without an active bias potential. The plots on the left indicate the bias potential as a function of the continuous observable \(Q_{\text{meta}}\). The green squares symbolize gauge configurations. Different shades are to guide the eyes. The indicated integer charges \(Q\) show the sector the configuration belongs to and are not the same as the CVs. The values of \(Q\) are for illustration only and not actual data. Red dots indicate the value of the CV. The frequency of sector changes in the upper row is exaggerated.
non-modified bias potentials. Clearly, the parallel tempered Metadynamics update schemes perform best for small lattice spacings. Most importantly, the ratio of independent configurations in the sample seems to reach a plateau for finer lattice spacings, which is in stark contrast to conventional Metadynamics. It is also worth noting that the modified bias potential provides better results than the non-modified one. This is consistent with our expectation that large excursions in the topological charge, which produce irrelevant configurations, are curbed by the modified bias potential. For a more detailed look at the effectiveness of the new algorithm, Figure 10 compares the results of parallel tempered Metadynamics with those of standard Metadynamics at our finest lattice, with and without modification of the bias potential, and with the exact solutions [100, 101, 102]. First, we note that there is no significant difference in the performance between standard and parallel tempered Metadynamics in the topology-related observables \(Q\) and \(Q^{2}\), at least in the case of a modified bias potential. This is a very encouraging result, since the topological sampling of parallel tempered Metadynamics cannot possibly exceed that of standard Metadynamics, as ultimately it is inherited from there. On the other hand, the inclusion of the irrelevant higher sectors with the unmodified bias potential does increase the error bars, and there is some indication that not all of the topological sector sampling is carried over into the measurement run of parallel tempered Metadynamics. Looking at an observable which is not related to topology, such as the plaquette, reveals that parallel tempered Metadynamics is superior to pure Metadynamics. This is clearly the effect of the better effective sample size and the larger number of independent configurations.
In summary, our scaling tests in 2-dimensional U(1) suggest that parallel tempered Metadynamics with a modified bias potential has a much improved topological sampling, which seems to be almost equivalent to standard Metadynamics, while at the same time not suffering from a reduced effective sample size. There is some indication that the ratio of independent to total configurations does reach a stable plateau in the continuum limit. These results encourage us to perform an exploratory study in pure SU(3) gauge theory in 4 dimensions.
### First results in 4-dimensional SU(3)
For our exploratory study in 4-dimensional SU(3), we turn to the DBW2 gauge action at \(\beta=1.25\) on a \(V=16^{4}\) lattice, which we have already used in Section V. For our first run, which is depicted in the left panels of Figure 12, we have combined a local 1HB+4OR measurement run with a 4stout MetaD-HMC run that dynamically generates the bias potential. Between swap proposals, updates for the two streams are performed at a ratio of 10 (1HB+4OR) to 1 (MetaD-HMC), which roughly reflects the relative wall-clock times between the algorithms. One
\begin{table}
\begin{tabular}{l l l l} \hline \hline \(N\) & \(\beta\) & \(\tau_{\text{int}}(Q_{2}^{2})\) & \(\tau_{\text{int}}(Q_{1}^{2}+Q_{2}^{2})\) \\ \hline \multicolumn{4}{c}{Conventional updates} \\ \hline
16 & 3.2 & 4 & - \\
20 & 5.0 & 72 & - \\
24 & 7.2 & 3940 & - \\
28 & 9.8 & 462473 & - \\
32 & 12.8 & - & - \\ \hline \multicolumn{4}{c}{PT-MetaD (regular bias potential)} \\ \hline
16 & 3.2 & 6 & 21 \\
20 & 5.0 & 60 & 143 \\
24 & 7.2 & 940 & 535 \\
28 & 9.8 & 1731 & 819 \\
32 & 12.8 & 1927 & 1320 \\ \hline \multicolumn{4}{c}{PT-MetaD (modified bias potential)} \\ \hline
16 & 3.2 & 5 & 6 \\
20 & 5.0 & 48 & 48 \\
24 & 7.2 & 185 & 209 \\
28 & 9.8 & 317 & 404 \\
32 & 12.8 & 313 & 466 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Integrated autocorrelation times for different lattices on the same line of constant physics in 2-dimensional U(1) gauge theory, using both Metropolis and PT-MetaD updates. Observables indexed with 1 are taken from the stream with bias potential, whereas those indexed with 2 are taken from the regular stream. Overall, \(10^{7}\) measurements were performed with a separation of 10 update sweeps between every measurement.
Figure 9: Continuum scaling of the total sample size for the standard Metropolis algorithm and variations of MetaD-Metropolis in 2-dimensional U(1) from Table 6 and Table 7. The lines are drawn to guide the eyes.
can see that the measurement run starts exploring other topological sectors almost as soon as the parallel run with active bias potential has gained access to them.
In the later stages of the run, when the bias potential is sufficiently built up to allow the Metadynamics run to enter higher topological sectors, one can see that the swap rate is lowered by the action difference between the topological sectors, leading to an overall swap rate of \(\sim 0.063\). This effect mirrors the reduction of the effective sample size in pure Metadynamics updates and may be ameliorated by removing the quadratic term in the bias potential, as discussed in Section V.2. In fact, the relevant point is that the action difference between the maxima of the bias potential for different topological sectors reflects the relative weight of these sectors in the path integral and should not be flattened out. Ideally, we want the bias potential to only reproduce the barriers between the sectors, not their relative weights. For a second exploratory parallel Metadynamics run, we therefore opted for a static bias potential of this sort. Lacking data that are precise enough to model the bias potential in detail, as we did in 2-dimensional U(1), we started from the bias potential of a previous Metadynamics run and extracted the high frequency (in the CV) part of the topological barriers, while eliminating the long range part corresponding to the relative weight of the topological sectors. For this purpose, we chose to perform a singular spectrum analysis (SSA) [103] and crosschecked the result with a simple, piece-wise subtraction of the \(Q^{2}\) term between consecutive local maxima. As displayed in Figure 11, both methods result in a similar modified bias potential that seems to reproduce the intersector barriers rather well.
The right panels of Figure 12 display the results of the corresponding parallel tempered Metadynamics run. As one can see, large topological charge excursions of the Metadynamics run are now curbed, and the swap acceptance rate has increased to \(\sim 0.25\). In addition, the acceptance rate is approximately constant over the entire run, as is to be expected for a static bias potential. We would like to emphasize that the bias potential we extracted is a rather rough guess. With a larger amount of data, it might be possible to extract a better bias potential, possibly leading to even better acceptance rates. Considering the rather simple final form of the bias potential used, it might also be possible to model it with sufficient accuracy for a good initial guess at other run parameters. We plan to address these points in a future publication.
In any case, these first results clearly show that the parallel tempered Metadynamics algorithm is able to achieve enhanced topological sampling in 4-dimensional SU(3) without the reduction of the effective sample size that is typical for algorithms with a bias potential.
## VII Conclusion and Outlook
In this paper, we have demonstrated that Metadynamics can be used to significantly reduce the integrated autocorrelation times of topological quantities in lattice simulations. In simulations of 4-dimensional SU(3) gauge
Figure 11: Comparison between the original bias potential and its trend subtracted modifications from singular spectrum analysis and piecewise subtraction of the \(Q^{2}\) term.
Figure 10: Comparison of expectation values and uncertainties for the plaquette (left), the topological charge (middle), and the topological susceptibility (right) for the standard Metropolis algorithm and variations of MetaD-Metropolis in 2-dimensional U(1) for \(32^{2}\) lattices and \(\beta=12.8\). The dashed lines correspond to the exact solutions [100, 101, 102].
theory with the DBW2 action, we have observed reductions of the autocorrelation times of more than two orders of magnitude. However, the direct application of Metadynamics is not entirely unproblematic: Compared to local update algorithms, there is a large computational overhead due to the costly Metadynamics force evaluations, and the reweighting procedure required to obtain unbiased expectation values can significantly reduce the effective sample size. In order to circumvent this reduction, we have proposed two improvements: The first consists of modifying the bias potential, so that all topological sectors are represented with their correct weight; the second is adding a dedicated measurement stream parallel to the Metadynamics run, which uses a conventional update algorithm. Periodically, swaps between the two streams are suggested and subject to an accept-reject step. The accept-reject step during swap proposals then effectively serves as a filter for configurations with low statistical weight. This parallel tempered Metadynamics algorithm, including both improvements, has been successfully applied to 4-dimensional SU(3) gauge theory. Furthermore, scaling tests in 2-dimensional U(1) gauge theory indicate gains of more than an order of magnitude compared to standard Metadynamics, and an improved scaling of autocorrelation times with the lattice spacing compared to standard update algorithms. Additionally, we have demonstrated that the buildup of the Metadynamics bias potential may be accelerated by running multiple Metadynamics simulations in parallel.
We believe these results are promising, and plan to study the scaling behavior of the methods tested here in more detail for 4-dimensional SU(3) gauge theory, and eventually in full QCD. Conceptually, there seem to be no obstacles for implementing parallel tempered Metadynamics in full QCD. We also plan to explore possible optimizations for parallel tempered Metadynamics. These include optimizing the bias potential via enhanced buildup and extraction and, possibly, describing it parametrically. Furthermore, it would be interesting to investigate whether adding intermediate runs to a parallel tempered Metadynamics stream could increase performance, despite the additional computational cost.
Figure 12: Topological charge vs. Monte Carlo time for our parallel Metadynamics runs on a \(V=16^{4}\) lattice at \(\beta=1.25\) with DBW2 gauge action and 4 steps of stout smearing in the definition of \(Q_{\text{meta}}\). The left panel shows results of our first run with a dynamically built bias potential, while the right-hand side shows our second run with a static one. The topmost row shows the time series of the topological charges in the respective measurement runs, while the second row is for the Metadynamics part. The third row displays the sum of the topological charges of the measurement and Metadynamics part of the runs, thus indicating genuine transitions of the entire system into new topological sectors. In the bottom row, the running average of the swap acceptance rate with a window size of 2000 is displayed.
###### Acknowledgements.
We thank Philip Rouenhoff for collaboration in early stages of this work. We gratefully acknowledge helpful discussions with Szabolcs Borsanyi, Stephan Durr, Fabian Frecch, Jana Gunther, Ruben Kara, Andrey Kotov, and Kalman Szabo. Calculations were performed on a local PC cluster at the University of Wuppertal.
## Appendix A Metadynamics force
In order to obtain an expression for Equation (19), the algebra-valued derivative of \(Q_{\text{meta}}\) with respect to the unsmeared links \(U_{\mu}^{(0)}\) has to be calculated. Here, we will only focus on the derivative of the clover-based topological charge \(Q_{c}\) with respect to a fully smeared gauge configuration \(U\). For details of the stout-force recursion, we refer to [19; 79].
On the lattice, the following definition holds for a suitably defined lattice field strength tensor:
\[Q_{c}=\frac{1}{32\pi^{2}}\sum_{n\in\Lambda}\operatorname{tr}[\epsilon_{\mu\nu \rho\sigma}F_{\mu\nu}(n)F_{\rho\sigma}(n)] \tag{18}\]
The lattice field strength tensor based on the clover term is defined as the sum of four plaquettes:
\[F_{\mu\nu}(n)=\frac{-i}{8a^{2}}\left(C_{\mu\nu}(n)-C_{\nu\mu}(n)\right) \tag{19}\]
where the clover term in turn is defined via:
\[\begin{split} C_{\mu\nu}(n)=& P_{\mu,\nu}(n)+P_{ \nu,-\mu}(n)\\ &+P_{-\mu,-\nu}(n)+P_{-\nu,\mu}(n)\end{split} \tag{20}\]
For notational purposes, we define the auxiliary variables \(R_{\mu\nu}(n)=C_{\mu\nu}(n)-C_{\nu\mu}(n)\) and drop the specification of the lattice site \(n\) unless pertinent to the formula.
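As a purely illustrative aside, the contraction in Equation (18) can be evaluated numerically along the following lines, assuming the clover field strength of Equation (19) has already been assembled as an array of \(3\times 3\) matrices per site; the array layout is an assumption of this sketch:

```python
import numpy as np
from itertools import permutations

def epsilon4():
    """Totally antisymmetric tensor with epsilon_{0123} = +1."""
    eps = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        eps[p] = np.linalg.det(np.eye(4)[list(p)])   # sign of the permutation
    return eps

def clover_charge(F):
    """Q_c = 1/(32 pi^2) sum_n eps_{mu nu rho sigma} tr[F_{mu nu}(n) F_{rho sigma}(n)].
    F is assumed to have shape (4, 4, *lattice, 3, 3)."""
    eps = epsilon4()
    q = 0.0
    for mu, nu, rho, sigma in permutations(range(4)):   # only non-vanishing index combinations
        tr = np.trace(F[mu, nu] @ F[rho, sigma], axis1=-2, axis2=-1)
        q += eps[mu, nu, rho, sigma] * tr.sum().real
    return q / (32 * np.pi ** 2)
```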
What we need for the force is the sum over all eight algebra directions:
\[T^{a}\sum_{\nu\rho\sigma}4\partial_{n,\alpha}^{a}\epsilon_{\alpha\nu\rho \sigma}\operatorname{tr}[R_{\alpha\nu}R_{\rho\sigma}] \tag{21}\]
where the sum over \(a\) is implied.
Using the field strength tensor's symmetry properties, the derivative can be written as a term of the following form:
\[\begin{split}\sum_{\nu\rho\sigma}\partial_{n,\alpha}^{a} \epsilon_{\alpha\nu\rho\sigma}\operatorname{tr}[R_{\alpha\nu}R_{\rho\sigma} ]&=\sum_{\nu\rho\sigma}\epsilon_{\alpha\nu\rho\sigma}2 \operatorname{Re}\operatorname{tr}\left[T^{a}U_{\alpha}(n)U_{\nu}(n+\alpha)U _{\alpha}^{\dagger}(n+\nu)U_{\nu}^{\dagger}(n)R_{\rho\sigma}(n)\\ &-T^{a}U_{\alpha}(n)U_{\nu}^{\dagger}(n+\alpha-\nu)U_{\alpha}^{ \dagger}(n-\nu)R_{\rho\sigma}(n-\nu)U_{\nu}(n-\nu)\\ &-T^{a}U_{\alpha}(n)U_{\nu}^{\dagger}(n+\alpha-\nu)R_{\rho \sigma}(n+\alpha-\nu)U_{\alpha}^{\dagger}(n-\nu)U_{\nu}(n-\nu)\\ &+T^{a}U_{\alpha}(n)R_{\rho\sigma}(n+\alpha)U_{\nu}(n+\alpha)U_{ \alpha}^{\dagger}(n+\nu)U_{\nu}^{\dagger}(n)\\ &-T^{a}U_{\alpha}(n)U_{\nu}^{\dagger}(n+\alpha-\nu)U_{\alpha}^{ \dagger}(n-\nu)U_{\nu}(n-\nu)R_{\rho\sigma}(n)\\ &+T^{a}U_{\alpha}(n)U_{\nu}(n+\alpha)U_{\alpha}^{\dagger}(n+\nu) R_{\rho\sigma}(n+\nu)U_{\nu}^{\dagger}(n)\\ &+T^{a}U_{\alpha}(n)U_{\nu}(n+\alpha)U_{\nu}^{\dagger}(n+\alpha- \nu)U_{\alpha}^{\dagger}(n-\nu)U_{\nu}(n-\nu)\\ &+T^{a}U_{\alpha}(n)U_{\nu}(n+\alpha)R_{\rho\sigma}(n+\alpha+ \nu)U_{\alpha}^{\dagger}(n+\nu)U_{\nu}^{\dagger}(n)\Big{]}\\ &=\sum_{\nu\rho\sigma}\epsilon_{\alpha\nu\rho\sigma}2 \operatorname{Re}\operatorname{tr}\left[T^{a}A_{\alpha\nu\rho\sigma}\right] \\ &=2\operatorname{Re}\operatorname{tr}\left[T^{a}A_{\alpha}\right] \end{split} \tag{22}\]
An expression of the above form can be rewritten using the projector induced by the scalar product of the algebra:
\[T^{a}\operatorname{tr}[T^{a}A_{\alpha}]=-\frac{1}{2}A_{\alpha}+\frac{1}{6} \operatorname{tr}[A_{\alpha}] \tag{23}\]
In our case, this translates to:
\[\begin{split} T^{a}2\operatorname{Re}\operatorname{tr}[T^{a}A_{ \alpha}]&=T^{a}\operatorname{tr}\left[T^{a}A_{\alpha}+(T^{a}A_{ \alpha})^{\dagger}\right]\\ &=T^{a}\operatorname{tr}\left[T^{a}A_{\alpha}-T^{a}A_{\alpha}^{ \dagger}\right]\\ =&-\frac{1}{2}(A_{\alpha}-A_{\alpha}^{\dagger})\\ &+\frac{1}{6}\operatorname{tr}\left[A_{\alpha}-A_{\alpha}^{ \dagger}\right]\end{split} \tag{24}\]
Including the factor we lost after defining \(R_{\mu\nu}\), we obtain the derivative of the trace in Equation (18)
\[\begin{split}\sum_{\mu\nu\rho\sigma}T^{a}\partial^{a}_{n,\alpha}\epsilon_{\mu\nu\rho\sigma}\operatorname{tr}[F_{\mu\nu}F_{\rho\sigma}]&=\sum_{\mu\nu\rho\sigma}-\frac{1}{64}T^{a}\partial^{a}_{n,\alpha}\epsilon_{\mu\nu\rho\sigma}\operatorname{tr}[R_{\mu\nu}R_{\rho\sigma}]\\ &=\frac{1}{32}\Big{(}\big{(}A_{\alpha}-A^{\dagger}_{\alpha}\big{)}-\frac{1}{3}\operatorname{tr}\big{[}A_{\alpha}-A^{\dagger}_{\alpha}\big{]}\Big{)}\end{split} \tag{25}\]
Summarized, the algebra-valued derivative of the clover-based topological charge with respect to the gauge link \(U_{\alpha}(n)\) can be written as:
\[\begin{split} T^{a}\partial^{a}_{n,\alpha}Q_{c}&=\sum_{\mu\nu\rho\sigma}\frac{1}{32\pi^{2}}T^{a}\partial^{a}_{n,\alpha}\epsilon_{\mu\nu\rho\sigma}\operatorname{tr}[F_{\mu\nu}F_{\rho\sigma}]\\ &=\frac{1}{1024\pi^{2}}\Big{(}\big{(}A_{\alpha}-A^{\dagger}_{\alpha}\big{)}-\frac{1}{3}\operatorname{tr}\big{[}A_{\alpha}-A^{\dagger}_{\alpha}\big{]}\Big{)}\end{split} \tag{26}\] |
2310.16789 | Detecting Pretraining Data from Large Language Models | Although large language models (LLMs) are widely deployed, the data used to
train them is rarely disclosed. Given the incredible scale of this data, up to
trillions of tokens, it is all but certain that it includes potentially
problematic text such as copyrighted materials, personally identifiable
information, and test data for widely reported reference benchmarks. However,
we currently have no way to know which data of these types is included or in
what proportions. In this paper, we study the pretraining data detection
problem: given a piece of text and black-box access to an LLM without knowing
the pretraining data, can we determine if the model was trained on the provided
text? To facilitate this study, we introduce a dynamic benchmark WIKIMIA that
uses data created before and after model training to support gold truth
detection. We also introduce a new detection method Min-K% Prob based on a
simple hypothesis: an unseen example is likely to contain a few outlier words
with low probabilities under the LLM, while a seen example is less likely to
have words with such low probabilities. Min-K% Prob can be applied without any
knowledge about the pretraining corpus or any additional training, departing
from previous detection methods that require training a reference model on data
that is similar to the pretraining data. Moreover, our experiments demonstrate
that Min-K% Prob achieves a 7.4% improvement on WIKIMIA over these previous
methods. We apply Min-K% Prob to three real-world scenarios, copyrighted book
detection, contaminated downstream example detection and privacy auditing of
machine unlearning, and find it a consistently effective solution. | Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, Luke Zettlemoyer | 2023-10-25T17:21:23Z | http://arxiv.org/abs/2310.16789v3 | # Detecting Pretraining Data from Large Language Models
###### Abstract
Although large language models (LLMs) are widely deployed, the data used to train them is rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but certain that it includes potentially problematic text such as copyrighted materials, personally identifiable information, and test data for widely reported reference benchmarks. However, we currently have no way to know which data of these types is included or in what proportions. In this paper, we study the pretraining data detection problem: _given a piece of text and black-box access to an LLM without knowing the pretraining data, can we determine if the model was trained on the provided text?_ To facilitate this study, we introduce a dynamic benchmark WikiMIA that uses data created before and after model training to support gold truth detection. We also introduce a new detection method Min-k% Prob based on a simple hypothesis: an unseen example is likely to contain a few outlier words with low probabilities under the LLM, while a seen example is less likely to have words with such low probabilities. Min-k% Prob can be applied without any knowledge about the pretraining corpus or any additional training, departing from previous detection methods that require training a reference model on data that is similar to the pretraining data. Moreover, our experiments demonstrate that Min-k% Prob achieves a 7.4% improvement on WikiMIA over these previous methods. We apply Min-k% Prob to three real-world scenarios, copyrighted book detection, contaminated downstream example detection and privacy auditing of machine unlearning, and find it a consistently effective solution.
## 1 Introduction
As the scale of language model (LM) training corpora has grown, model developers (e.g., GPT-4 (OpenAI, 2023) and LLaMA 2 (Touvron et al., 2023)) have become reluctant to disclose the full composition or sources of their data. This lack of transparency poses critical challenges to scientific model evaluation and ethical deployment. Critical private information may be exposed during pretraining; previous work showed that LLMs generated excerpts from copyrighted books (Chang et al., 2023) and personal emails (Mozes et al., 2023), potentially infringing upon the legal rights of original content creators and violating their privacy. Additionally, Sainz et al. (2023); Magar and Schwartz (2022); Narayanan (2023) showed that the pretraining corpus may inadvertently include benchmark evaluation data, making it difficult to assess the effectiveness of these models.
In this paper, we study the pretraining data detection problem: given a piece of text and black-box access to an LLM with no knowledge of its pretraining data, can we determine if the model was pretrained on the text? We present a benchmark, WikiMIA, and an approach, Min-k% Prob, for pretraining data detection. This problem is an instance of Membership Inference Attacks (MIAs), which were initially proposed by Shokri et al. (2016). Recent work has studied _fine-tuning_ data detection (Song and Shmatikov, 2019; Shejwalkar et al., 2021; Mahloujifar et al., 2021) as an MIA problem. However, adopting these methods to detect the pretraining data of contemporary LLMs presents two unique technical challenges: First, unlike fine-tuning, which usually runs for multiple epochs, pretraining uses a much larger dataset but exposes each instance only once, significantly
reducing the potential memorization required for successful MIAs (Leino and Fredrikson, 2020; Kandpal et al., 2022). Second, previous methods often rely on one or more reference models (Carlini et al., 2022; Watson et al., 2022) trained in the same manner as the target model (e.g., on shadow data sampled from the same underlying pretraining data distribution) to achieve precise detection. This is not possible for large language models, as the training distribution is usually not available and training would be too expensive.
Our first step towards addressing these challenges is to establish a reliable benchmark. We introduce WikiMIA, a dynamic benchmark designed to periodically and automatically evaluate detection methods on newly released pretrained LLMs. By leveraging the Wikipedia data timestamp and the model release date, we select old Wikipedia event data as our member data (i.e., _seen_ data during pretraining) and recent Wikipedia event data (e.g., after 2023) as our non-member data (_unseen_). Our datasets thus exhibit three desirable properties: (1) **Accurate**: events that occur after LLM pretraining are guaranteed not to be present in the pretraining data. The temporal nature of events ensures that non-member data is indeed unseen and not mentioned in the pretraining data. (2) **General**: our benchmark is not confined to any specific model and can be applied to various models pretrained using Wikipedia (e.g., OPT, LLaMA, GPT-Neo) since Wikipedia is a commonly used pretraining data source. (3) **Dynamic**: we will continually update our benchmark by gathering newer non-member data (i.e., more recent events) from Wikipedia since our data construction pipeline is fully automated.
MIA methods for finetuning (Carlini et al., 2022; Watson et al., 2022) usually calibrate the target model probabilities of an example using a shadow reference model that is trained on a similar data distribution. However, these approaches are impractical for pretraining data detection due to the black-box nature of the pretraining data and the high cost of training a reference model. Therefore, we propose a reference-free MIA method, Min-k% Prob. Our method is based on a simple hypothesis: an unseen example tends to contain a few outlier words with low probabilities, whereas a seen example is less likely to contain words with such low probabilities. Min-k% Prob computes the average log-likelihood of these outlier tokens. Min-k% Prob can be applied without any knowledge about the pretraining corpus or any additional training, departing from existing MIA methods, which rely on shadow reference models (Mattern et al., 2023; Carlini et al., 2021). Our experiments demonstrate that Min-k% Prob outperforms the existing strongest baseline by 7.4% in AUC score on WikiMIA. Further analysis suggests that the detection performance correlates positively with the _model size_ and the _length of the detected text_.
To verify the applicability of our proposed method in real-world settings, we perform three case studies: copyrighted book detection (§5), privacy auditing of LLMs (§7) and dataset contamination detection (§6). We find that Min-k% Prob significantly outperforms baseline methods in these scenarios. From our experiments on copyrighted book detection, we see strong evidence that GPT-3 is pretrained on copyrighted books from the Books3 dataset (Gao et al., 2020; Min et al., 2023). From our experiments on privacy auditing of machine unlearning, we use Min-k% Prob
Figure 1: **Overview of Min-k% Prob. To determine whether a text \(X\) is in the pretraining data of a LLM such as GPT, Min-k% Prob first gets the probability for each token in \(X\), selects the \(k\)% tokens with minimum probabilities and calculates their average log likelihood. If the average log likelihood is high, the text is likely in the pretraining data.**
to audit an unlearned LLM that was trained to forget copyrighted books using machine unlearning techniques (Eldan & Russinovich, 2023) and find that such a model can still output related copyrighted content. Furthermore, our controlled study on dataset contamination detection sheds light on the impact of pretraining design choices on detection difficulty; we find that detection becomes harder as the training data size increases and as the occurrence frequency of the example and the learning rate decrease.
## 2 Pretraining Data Detection Problem
We study pretraining data detection, the problem of detecting whether a piece of text is part of the training data. First, we formally define the problem and describe its unique challenges that are not present in prior finetuning data detection studies (§2.1). We then curate WikiMIA, the first benchmark for evaluating methods of pretraining data detection (§2.2).
### Problem Definition and Challenges
We follow the standard definition of the membership inference attack (MIA) by Shokri et al. (2016); Mattern et al. (2023). Given a language model \(f_{\theta}\) and its associated pretraining data \(\mathcal{D}=\{z_{i}\}_{i\in[n]}\) sampled from an underlying distribution \(\mathbb{D}\), the task objective is to learn a detector \(h\) that can infer the membership of an arbitrary data point \(x\): \(h(x,f_{\theta})\rightarrow\{0,1\}\). We follow the standard setup of MIA, assuming that the detector has access to the LM only as a black box, and can compute token probabilities for any data point \(x\).
Challenge 1: Unavailability of the pretraining data distribution. Existing state-of-the-art MIA methods for data detection during finetuning (Long et al., 2018; Watson et al., 2022; Mireshghallah et al., 2022a) typically use reference models \(g_{\gamma}\) to compute the background difficulty of the data point and to calibrate the output probability of the target language model: \(h(x,f_{\theta},g_{\gamma})\rightarrow\{0,1\}\). Such reference models usually share the same model architecture as \(f_{\theta}\) and are trained on shadow data \(D_{\text{shadow}}\subset\mathbb{D}\) (Carlini et al., 2022; Watson et al., 2022), which are sampled from the same underlying distribution \(\mathbb{D}\). These approaches assume that the detector can access (1) the distribution of the target model's training data, and (2) a sufficient number of samples from \(\mathbb{D}\) to train a calibration model.
However, this assumption of accessing the distribution of pretraining training data is not realistic because such information is not always available (e.g., not released by model developers (Touvron et al., 2023; OpenAI, 2023)). Even if access were possible, pretraining a reference model on it would be extremely computationally expensive given the incredible scale of pretraining data. In summary, the pretraining data detection problem aligns with the MIA definition but includes an assumption that the detector has no access to pretraining data distribution \(\mathbb{D}\).
Challenge 2: Detection difficulty. Pretraining and finetuning differ significantly in the amount of data and compute used, as well as in optimization setups like training epochs and learning rate schedules. These factors significantly impact detection difficulty. One might intuitively deduce that detection becomes harder when dataset sizes increase and the training epochs and learning rates decrease. We briefly describe some theoretical evidence that informs these intuitions in the following and show empirical results that support these hypotheses in §6.
To illustrate, given an example \(z\in D\), we denote the model output as \(f_{\theta}(z)\). Now, take another example \(y\) sampled from \(\mathbb{D}\setminus D\) (not part of the pretraining data). Determining whether the example \(z\) was part of the training set becomes challenging if the outputs \(f_{\theta}(z)\) and \(f_{\theta}(y)\) are similar. The degree of similarity between \(f_{\theta}(z)\) and \(f_{\theta}(y)\) can be quantified using the total variation distance. According to previous research (Hardt et al., 2016; Bassily et al., 2020), the bound on this total variation distance between \(f_{\theta}(z)\) and \(f_{\theta}(y)\) is directly proportional to the _occurrence frequency of the example \(z\)_, the _learning rate_, and the _inverse of the dataset size_, which implies that the detection difficulty correlates with these factors as well.
### WikiMIA: A Dynamic Evaluation Benchmark
We construct our benchmark by using events added to Wikipedia after specific dates, treating them as non-member data since they are guaranteed not to be present in the pretraining data, which is the key idea behind our benchmark.
Data construction. We collect recent event pages from Wikipedia. **Step 1:** We set January 1, 2023 as the cutoff date, considering events occurring post-2023 as recent events (non-member data). We used the Wikipedia API to automatically retrieve articles and applied two filtering criteria: (1) the articles must belong to the event category, and (2) the page must be created post 2023. **Step 2:** For member data, we collected articles created before 2017 because many pretrained models, e.g., LLaMA, GPT-NeoX and OPT, were released after 2017 and incorporate Wikipedia dumps into their pretraining data. **Step 3:** Additionally, we filtered out Wikipedia pages lacking meaningful text, such as pages titled "Timeline of..." or "List of...". Given the limited number of events post-2023, we ultimately collected 394 recent events as our non-member data, and we randomly selected 394 events from pre-2016 Wikipedia pages as our member data. The data construction pipeline is automated, allowing for the curation of new non-member data for future cutoff dates.
Benchmark setting. In practice, LM users may also need to detect texts that have been paraphrased or edited. Previous studies employing MIA have exclusively focused on detecting examples that exactly match the data used during pretraining. It remains an open question whether MIA methods can be employed to identify paraphrased examples that convey the same meaning as the original. In addition to the verbatim setting (_original_), we therefore introduce a _paraphrase setting_, in which we leverage ChatGPT to paraphrase the examples and subsequently assess whether the MIA metric can effectively identify semantically equivalent examples.
Footnote 2: OpenAI. [https://chat.openai.com/chat](https://chat.openai.com/chat)
Moreover, previous MIA evaluations usually mix different-length data in evaluation and report a single performance metric. However, our results reveal that data length significantly impacts the difficulty of detection. Intuitively, shorter sentences are harder to detect. Consequently, different data length buckets may lead to varying rankings of MIA methods. To investigate this further, we propose a _different-length setting_: we truncate the Wikipedia event data into different lengths--32, 64, 128, 256--and separately report the MIA methods' performance for each length segment. We describe the desirable properties in Appendix B.
## 3 Min-k% Prob: A Simple Reference-free Pretraining Data Detection Method
We introduce a pretraining data detection method Min-k% Prob that leverages minimum token probabilities of a text for detection. Min-k% Prob is based on the hypothesis that a non-member example is more likely to include a few outlier words with high negative log-likelihood (or low probability), while a member example is less likely to include words with high negative log-likelihood.
Consider a sequence of tokens in a sentence, denoted as \(x=x_{1},x_{2},...,x_{N}\), the log-likelihood of a token, \(x_{i}\), given its preceding tokens is calculated as \(\log p(x_{i}|x_{1},...,x_{i-1})\). We then select the \(k\)% of tokens from \(x\) with the minimum token probability to form a set, Min-K%(\(x\)), and compute the average log-likelihood of the tokens in this set:
\[\text{Min-k\% Prob}(x)=\frac{1}{E}\sum_{x_{i}\in\text{Min-K\%}(x)}\log p(x_ {i}|x_{1},...,x_{i-1}). \tag{1}\]
where \(E\) is the size of the Min-K%(\(x\)) set. We can detect if a piece of text was included in pretraining data simply by thresholding this Min-k% Prob result. We summarize our method in Algorithm 1 in Appendix B.
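A minimal sketch of this computation with a causal LM from the Hugging Face transformers library is given below; the model named in the commented usage lines is only an example:

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(text, model, tokenizer, k=0.2):
    """Average log-likelihood of the k% lowest-probability tokens of `text`
    under `model` (Equation 1). Higher values suggest membership."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)            # predicts tokens 2..N
    token_log_probs = log_probs.gather(1, ids[0, 1:, None]).squeeze(-1)
    n_select = max(1, int(k * token_log_probs.numel()))
    lowest = torch.topk(token_log_probs, n_select, largest=False).values
    return lowest.mean().item()

# tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")   # example model choice
# lm = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")
# score = min_k_prob("Some candidate text ...", lm, tok)
```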
## 4 Experiments
We evaluate the performance of Min-k% Prob and baseline detection methods against LMs such as LLaMA (Touvron et al., 2023a), GPT-Neo (Black et al., 2022), and Pythia (Biderman et al., 2023) on WikiMIA.
### Datasets and Metrics
Our experiments use WikiMIA of different lengths (32, 64, 128, 256), _original_ and _paraphrase_ settings. Following (Carlini et al., 2022; Mireshghallah et al., 2022a), we evaluate the effectiveness of a detection method using the True Positive Rate (TPR) and its False Positive Rate (FPR). We plot the ROC curve to measure the trade-off between the TPR and FPR and report the AUC score (the area under ROC curve) and TPR at low FPRs (TPR@5%FPR) as our metrics.
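These metrics can be computed directly from the per-example detection scores, for instance as follows (interpolating the TPR at the target FPR is one common convention):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def evaluate_detector(scores, labels, fpr_target=0.05):
    """AUC and TPR at a fixed low FPR; labels are 1 for members, 0 for non-members,
    and higher scores should indicate membership."""
    fpr, tpr, _ = roc_curve(labels, scores)
    return auc(fpr, tpr), float(np.interp(fpr_target, fpr, tpr))
```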
### Baseline Detection Methods
We take existing reference-based and reference-free MIA methods as our baseline methods and evaluate their performance on WikiMIA. These methods only consider sentence-level probability. Specifically, we use the _LOSS Attack_ method (Yeom et al., 2018a), which predicts the membership of an example based on the loss of the target model when fed the example as input. In the context of LMs, this loss corresponds to perplexity of the example (_PPL_). Another method we consider is the neighborhood attack (Mattern et al., 2023), which leverages probability curvature to detect membership (_Neighbor_). This approach is identical to the DetectGPT (Mitchell et al., 2023) method recently proposed for classifying machine-generated vs. human-written text. Finally, we compare with membership inference methods proposed in (Carlini et al., 2021), including comparing the example perplexity to zlib compression entropy (_Zlib_), to the lowercased example perplexity (_Lowercase_) and to example perplexity under a smaller model pretrained on the same data (_Smaller Ref_). For the smaller reference model setting, we employ LLaMA-7B as the smaller model for LLaMA-65B and LLaMA-30B, GPT-Neo-125M for GPT-NeoX-20B, OPT-350M for OPT-66B and Pythia-70M for Pythia-2.8B.
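Schematically, all of these baselines reduce to simple calibrations of the sentence-level log-likelihood; the exact normalizations used in the cited works may differ slightly from this sketch:

```python
import zlib

def loss_score(log_likelihood):
    """LOSS attack / PPL: use the sentence log-likelihood itself as the score."""
    return log_likelihood

def zlib_score(log_likelihood, text):
    """Calibrate by the zlib compression entropy of the raw text."""
    return log_likelihood / len(zlib.compress(text.encode("utf-8")))

def lowercase_score(log_likelihood, log_likelihood_lowercased):
    """Calibrate by the log-likelihood of the lowercased text."""
    return log_likelihood - log_likelihood_lowercased

def smaller_ref_score(log_likelihood, log_likelihood_reference):
    """Calibrate by the log-likelihood under a smaller reference model."""
    return log_likelihood - log_likelihood_reference
```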
### Implementation and Results
Implementation details. The key hyperparameter of Min-k% Prob is the percentage of tokens with the highest negative log-likelihood that we select to form the _top-\(k\%\)_ set. We performed a small sweep over 10, 20, 30, 40, 50 on a held-out validation set using the LLaMA-65B model and found that \(k=20\) works best. We use this value for all experiments without further tuning. As we report the AUC score as our metric, we do not need to determine the threshold \(\epsilon\).
Main results. We compare Min-k% Prob and baseline methods in Table 1. Our experiments show that Min-k% Prob consistently outperforms all baseline methods across diverse target language models, both in original and paraphrase settings. Min-k% Prob achieves an AUC score of 0.72 on average, marking a 7.4% improvement over the best baseline method (i.e., PPL). Among the baselines, the simple LOSS Attack (PPL) outperforms the others. This demonstrates the effectiveness and generalizability of Min-k% Prob in detecting pretraining data from various LMs. Further results such as TPR@5%FPR can be found in Table 6 in Appendix A, which shows a similar trend.
### Analysis
We further delve into the factors influencing detection difficulty, focusing on two aspects: (1) the size of the target model, and (2) the length of the text.
Model size. We evaluate the performance of reference-free methods on detecting pretraining texts of length 128 from different-sized LLaMA models (7, 13, 30, 65B). Figure 2(a) demonstrates a noticeable trend: the AUC score of the methods rises with increasing model size. This is likely because larger models have more parameters and thus are more likely to memorize the pretraining data.
Length of text. In another experiment, we evaluate the detection method performance on examples of varying lengths in the original setting. As shown in Figure 2(b), the AUC score of different methods increases as text length increases, likely because longer texts contain more information memorized by the target model, making them more distinguishable from the unseen texts.
In the following two sections, we apply Min-k% Prob to real-world scenarios to detect copyrighted books and contaminated downstream tasks within LLMs.
## 5 Case Study: Detecting Copyrighted Books in Pretraining Data
Min-k% Prob can also detect potential copyright infringement in training data, as we show in this section. Specifically, we use Min-k% Prob to detect excerpts from copyrighted books in the Books3 subset of the Pile dataset (Gao et al., 2020) that may have been included in the GPT-3 training data.
Footnote 3: text-davinci-003
### Experimental Setup
Validation data to determine detection threshold. We construct a validation set using 50 books known to be memorized by ChatGPT, likely indicating their presence in its training data (Chang et al., 2023), as positive examples. For negative examples, we collected 50 new books with first editions in 2023 that could not have been in the training data. From each book, we randomly extract 100 snippets of 512 words, creating a balanced validation set of 10,000 examples. We determine the optimal classification threshold with Min-k% Prob by maximizing detection accuracy on this set.
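Concretely, the threshold can be chosen by a simple sweep over the validation scores, as in the following sketch (where `scores` and `labels` stand for the Min-k% Prob values and membership labels of the validation snippets):

```python
import numpy as np

def best_threshold(scores, labels):
    """Return the decision threshold that maximizes accuracy on the validation set."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    candidates = np.unique(scores)
    accuracies = [((scores >= t).astype(int) == labels).mean() for t in candidates]
    return candidates[int(np.argmax(accuracies))]
```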
Test data and metrics. We randomly select 100 books from the Books3 corpus that are known to contain copyrighted content (Min et al., 2023). From each book, we extract 100 random 512-word snippets, creating a test set of 10,000 excerpts. We apply the threshold to decide whether these book snippets were included in GPT-3's training data. We then report the percentage of snippets in each book (i.e., the contamination rate) that are identified as being part of the pretraining data.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**Pythia-2.8B**} & \multicolumn{2}{c}{**NeoX-20B**} & \multicolumn{2}{c}{**LLaMA-30B**} & \multicolumn{2}{c}{**LLaMA-65B**} & \multicolumn{2}{c}{**OPT-66B**} & \\ \cline{2-13}
**Method** & _Ori._ & _Para._ & _Ori._ & _Para._ & _Ori._ & _Para._ & _Ori._ & _Para._ & _Ori._ & _Para._ & **Avg.** \\ \hline Neighbor & 0.61 & 0.59 & 0.68 & 0.58 & 0.71 & 0.62 & 0.71 & 0.69 & 0.65 & 0.62 & 0.65 \\ PPL & 0.61 & 0.61 & 0.70 & 0.70 & 0.70 & 0.70 & 0.71 & 0.72 & 0.66 & 0.64 & 0.67 \\ Zlib & 0.65 & 0.54 & 0.72 & 0.62 & 0.72 & 0.64 & 0.72 & 0.66 & 0.67 & 0.57 & 0.65 \\ Lowercase & 0.59 & 0.60 & 0.68 & 0.67 & 0.59 & 0.54 & 0.63 & 0.60 & 0.59 & 0.58 & 0.61 \\ Smaller Ref & 0.60 & 0.58 & 0.68 & 0.65 & 0.72 & 0.64 & 0.74 & 0.70 & 0.67 & 0.64 & 0.66 \\ Min-k\% Prob & **0.67** & **0.66** & **0.76** & **0.74** & **0.74** & **0.73** & **0.74** & **0.71** & **0.69** & **0.72** \\ \hline \hline \end{tabular}
\end{table}
Table 1: AUC score for detecting pretraining examples from the given model on WikiMIA for Min-k% Prob and baselines. _Ori._ and _Para._ denote the original and paraphrase settings, respectively. **Bold** shows the best AUC within each column.
Figure 2: As model size or text length increases, detection becomes easier.
### Results
Figure 3 shows that Min-k% Prob achieves an AUC of 0.88, outperforming baselines in detecting copyrighted books. We apply the optimal threshold of Min-k% Prob to the test set of 10,000 snippets from 100 books from Books3. Table 2 lists the top 20 books with the highest predicted contamination rates. Figure 4 reveals that nearly \(90\%\) of the books have an alarming contamination rate over \(50\%\).
## 6 Case Study: Detecting Downstream Dataset Contamination
Assessing the leakage of downstream task data into pretraining corpora is an important issue, but it is challenging to address given the lack of access to pretraining datasets. In this section, we investigate the possibility of using Min-k% Prob to detect information leakage and perform ablation studies to understand how various training factors impact detection difficulty. Specifically, we continually pretrain the 7B parameter LLaMA model (Touvron et al., 2023a) on pretraining data that have been purposefully contaminated with examples from the downstream task.
### Experiments
Experimental setup. To simulate downstream task contamination that could occur in real-world settings, we create contaminated pretraining data by inserting examples from downstream tasks into a pretraining corpus. Specifically, we sample text from the RedPajama corpus (TogetherCompute, 2023) and insert formatted examples from the downstream datasets BoolQ (Clark et al., 2019), IMDB (Maas et al., 2011), Truthful QA (Lin et al., 2021), and Commonsense QA (Talmor et al., 2019) in contiguous segments at random positions in the uncontaminated text. We insert 200 (positive) examples from each of these datasets into the pretraining data while also isolating a set of 200 (negative) examples from
each dataset that are known to be absent from the contaminated corpus. This creates a contaminated pretraining dataset containing \(27\) million tokens with \(0.1\%\) drawn from downstream datasets.
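A simplified sketch of this construction is shown below; for brevity it inserts each formatted downstream example between documents of the clean corpus rather than at arbitrary positions within the text:

```python
import numpy as np

def build_contaminated_corpus(clean_docs, downstream_examples, seed=0):
    """Insert formatted downstream examples as contiguous segments at random
    positions of the clean pretraining corpus."""
    rng = np.random.default_rng(seed)
    corpus = list(clean_docs)
    for example in downstream_examples:
        corpus.insert(rng.integers(0, len(corpus) + 1), example)
    return corpus
```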
We evaluate the effectiveness of Min-k% Prob at detecting leaked benchmark examples by computing AUC scores over these 400 examples on a LLaMA 7B model finetuned for one epoch on our contaminated pretraining data at a constant learning rate of 1e-4.
Main results. We present the main attack results in Table 3. We find that Min-k% Prob outperforms all baselines. We report TPR@5%FPR in Table 7 in Appendix A, where Min-k% Prob shows a 12.2% improvement over the best baseline.
### Results and Analysis
The simulation with contaminated datasets allows us to perform ablation studies to empirically analyze the effects of _dataset size_, _frequency of data occurrence_, and _learning rate_ on detection difficulty, as theorized in Section 2.1. The empirical results largely align with and validate the theoretical framework proposed. In summary, we find that detection becomes more challenging as data occurrence and learning rate decrease, and the effect of dataset size on detection difficulty depends on whether the contaminants are outliers relative to the distribution of the pretraining data.
Pretraining dataset size. We construct contaminated datasets of 0.17M, 0.27M, 2.6M and 26M tokens by mixing fixed downstream examples (200 examples per downstream task) with varying amounts of RedPajama data, mimicking real-world pretraining. Despite the theory suggesting greater difficulty with more pretraining data, Figure 5(a) shows AUC scores counterintuitively increase with pretraining dataset size. This aligns with findings that LMs better memorize tail outliers (Feldman, 2020; Zhang et al., 2021). With more RedPajama tokens in the constructed dataset, downstream examples become more significant outliers. We hypothesize that their enhanced memorization likely enables easier detection with perplexity-based metrics.
To verify our hypothesis, we construct control data where contaminants are not outliers. We sample Real Time Data News August 2023, containing post-2023 news absent from LLaMA pretraining. We create three synthetic corpora by concatenating 1000, 5000 and 10000 examples from this corpus, hence creating corpora of sizes 0.77M, 3.9M and 7.6M tokens respectively. In each setting, we consider 100 of these examples to be contaminant (positive) examples and set aside another set of 100 examples from News August 2023 (negative). Figure 5(b) shows AUC scores decrease as the dataset size increases.
Footnote 4: [https://huggingface.co/datasets/RealTimeData/News_August_2023](https://huggingface.co/datasets/RealTimeData/News_August_2023)
Detection of outlier contaminants like downstream examples gets easier as data size increases, since models effectively memorize long-tail samples. However, detecting general in-distribution samples from the pretraining data distribution gets harder with more data, following theoretical expectations.
Data occurrence. To study the relationship between detection difficulty and data occurrence, we construct a contaminated pretraining corpus by inserting multiple copies of each downstream data point into a pretraining corpus, where the occurrence of each example follows a Poisson distribution. We measure the relationship between the frequency of the example in the pretraining data and its AUC scores. Figure 5(c) shows that AUC scores correlate positively with the occurrence of examples.
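For this ablation, the number of inserted copies of each example is drawn from a Poisson distribution before the insertion step, e.g.:

```python
import numpy as np

def replicate_with_poisson(examples, mean_occurrence, seed=0):
    """Draw the number of copies of each contaminant example from a Poisson
    distribution with the given mean, then replicate the examples accordingly."""
    rng = np.random.default_rng(seed)
    copies = rng.poisson(mean_occurrence, size=len(examples))
    return [ex for ex, c in zip(examples, copies) for _ in range(int(c))]
```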
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **BoolQ** & **Commonsense QA** & **IMDB** & **Truthful QA** & **Avg.** \\ \hline Neighbor & 0.68 & 0.56 & 0.80 & 0.59 & 0.66 \\ Zlib & 0.76 & 0.63 & 0.71 & 0.63 & 0.68 \\ Lowercase & 0.74 & 0.61 & 0.79 & 0.56 & 0.68 \\ PPL & 0.89 & 0.78 & 0.97 & 0.71 & 0.84 \\ Min-k\% Prob & **0.91** & **0.80** & **0.98** & **0.74** & **0.86** \\ \hline \hline \end{tabular}
\end{table}
Table 3: AUC scores for detecting contaminant downstream examples. **Bold** shows the best AUC score within each column.
Learning rate. We also study the effect of varying the learning rates used during pretraining on the detection statistics of the contaminant examples (see Table 4). We find that raising the learning rate from \(10^{-5}\) to \(10^{-4}\) increases AUC scores significantly in all the downstream tasks, implying that higher learning rates cause models to memorize their pretraining data more strongly. A more in-depth analysis in Table 8 in Appendix A demonstrates that a higher learning rate leads to more memorization rather than generalization for these downstream tasks.
## 7 Case Study: Privacy Auditing of Machine Unlearning
We also demonstrate that our proposed technique can effectively address the need for auditing machine unlearning, ensuring compliance with privacy regulations (Figure 6).
### Background
The right to be forgotten and machine unlearning. In today's landscape of machine learning systems, it is imperative to uphold individuals' "right to be forgotten", a legal obligation outlined in regulations such as the General Data Protection Regulation (GDPR) (Voigt & Von dem Bussche, 2017) and the California Consumer Privacy Act (CCPA) (Legislature, 2018). This requirement allows users to request the removal of their data from trained models. To address this need, the concept of machine unlearning has emerged as a solution for purging data from machine learning models, and various machine unlearning methods have been introduced (Ginart et al., 2019; Liu et al., 2020; Wu et al., 2020; Bourtoule et al., 2021; Izzo et al., 2021; Sekhari et al., 2021; Gupta et al., 2021; Ye et al., 2022).
Recently, Eldan & Russinovich (2023) introduced a novel approach for performing machine unlearning on LLMs. This approach involves further fine-tuning the LLMs with alternative labels for specific tokens, effectively creating a modified version of the model that no longer contains the to-be-unlearned content. Specifically, the authors demonstrated the efficacy of this method using the LLaMA2-7B-chat model (Touvron et al., 2023b), showcasing its ability to "unlearn" information from the Harry Potter book series, which results in the LLaMA2-7B-WhoIsHarryPotter model. In this case study, we aim to assess whether this model successfully eliminates memorized content related to the Harry Potter series.
Footnote 5: Available at [https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter](https://huggingface.co/microsoft/Llama2-7b-WhoIsHarryPotter).
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Learning rate** & **BoolQ** & **Commonsense QA** & **IMDB** & **LSAT QA** & **Truthful QA** \\ \hline \(1\times 10^{-5}\) & 0.64 & 0.59 & 0.76 & 0.72 & 0.56 \\ \(1\times 10^{-4}\) & **0.91** & **0.80** & **0.98** & **0.82** & **0.74** \\ \hline \hline \end{tabular}
\end{table}
Table 4: AUC scores for detecting contaminant downstream examples using two different learning rates. Detection becomes easier when higher learning rates are used during training. **Bold** shows the best AUC score within each column.
Figure 5: We show the effect of contamination rate (expressed as a percentage of the total number of pretraining tokens) and occurrence frequency on the ease of detection of data contaminants using Min-k% Prob.
### Experiments
To extract the content related to Harry Potter from the unlearned model, LLaMA2-7B-WhoIsHarryPotter, we consider two settings: _story completion_ (§7.2.1) and _question answering_ (§7.2.2). In _story completion_, we identify suspicious chunks from the original Harry Potter books using Min-k% Prob. We then use the unlearned model to generate completions and compare them with the gold continuation. In _question answering_, we generate a series of questions related to Harry Potter using GPT-4. We filter these questions using Min-k% Prob, and then use the unlearned model to produce answers. These answers are then compared with the gold answers generated by GPT-4 and subsequently verified by humans.
Footnote 6: OpenAI. [https://chat.openai.com/chat](https://chat.openai.com/chat)
#### 7.2.1 Story completion
Identifying suspicious texts using Min-k% Prob. The process begins with the identification of suspicious chunks using our Min-k% Prob metric. First, we gather the plain text of Harry Potter books 1 to 4 and segment these books into 512-word chunks, resulting in approximately 1000 chunks. We then compute the Min-k% Prob scores for these chunks using both the LLaMA2-7B-WhoIsHarryPotter model and the original LLaMA2-7B-chat model. To identify chunks where the unlearning process may have failed, we compare the Min-k% Prob scores between the two models. If the ratio of the scores from the two models falls within the range of \((\frac{1}{1.15},1.15)\), we classify the chunk as a suspicious unlearn-failed chunk. This screening process identifies 188 such chunks. We also notice that using perplexity alone as the metric fails to identify any such chunk. We then test the LLaMA2-7B-WhoIsHarryPotter model with these suspicious chunks to assess its ability to complete the story. For each suspicious chunk, we prompt the model with its initial 200 words and use multinomial sampling to sample 20 model-generated continuations per chunk.
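The screening step amounts to a simple ratio test on the two scores; in the following sketch, `score_unlearned` and `score_original` stand for Min-k% Prob evaluated under the unlearned and the original chat model, respectively:

```python
def flag_suspicious_chunks(chunks, score_unlearned, score_original, tol=1.15):
    """Keep chunks whose Min-k% Prob scores under the two models are nearly
    identical (ratio within (1/tol, tol)), i.e. chunks where unlearning may have failed."""
    suspicious = []
    for chunk in chunks:
        ratio = score_unlearned(chunk) / score_original(chunk)
        if 1.0 / tol < ratio < tol:
            suspicious.append(chunk)
    return suspicious
```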
**Results.** We compare the completed stories with the ground-truth storylines using both the SimCSE score (Gao et al., 2021), which gives a similarity score from 0 to 1, and GPT-4, where we prompt the model with the template in Table 9 to return a similarity score from 1 to 5 together with a reason explaining the similarity. We still find completions that are very similar to the original story; for example, 5.3% of the generated completions have a GPT-4 similarity score of 4 or higher with respect to the gold completion. The distributions of these two scores over the suspicious chunks are shown in Section 7.2.1. Surprisingly, we find a considerable number of chunks whose auto-completions from the "unlearned" model closely resemble the original story: 10 chunks have a similarity score higher than or equal to 4 according to
Figure 6: **Auditing machine unlearning with Min-k% Prob. Machine unlearning methods are designed to remove copyrighted and personal data from large language models. We use Min-k% Prob to audit an unlearned LLM that has been trained to forget copyrighted books. However, we find that such a model can still output related copyrighted content.**
the GPT-4 evaluator. For instance, Table 10 showcases a few such examples, all of them with SimCSE scores exceeding 0.7. We further note that this study only uses Harry Potter books 1 to 4; including the whole Harry Potter series (7 books) would likely expose even more unlearn-failed chunks.
#### 7.2.2 Question answering
**Selecting Harry Potter-related questions with Min-k% Prob.** We generate 1000 questions related to Harry Potter by prompting GPT-4 with the query "Can you give me a list of questions and
\begin{table}
\begin{tabular}{p{113.8pt}|p{113.8pt}|p{113.8pt}} \hline \hline
**Question** & **Answer by LLaMA2-7B-WhoIsHarryPotter** & **Source in Harry Potter Book Series** \\ \hline
In Harry Potter, what type of animal is Hedwig? & Hedwig is a **white owl**. & “For Harry’s birthday, Hagrid buys Harry a **snowy owl named Hedwig**.” – _Harry Potter and the Philosopher’s Stone_ \\ \hline
In Harry Potter, what creature guards the vaults at Gringotts Bank? & In the Harry Roper series by T.L. Knight, the creature that guards the vaults at Gringotts Bank is a **dragon**. & “With Harry and Griphook under the Invisibility Cloak, Hermione disguised as Bellatrix and Ron as a foreign wizard with bushy facial hair, the trio and Griphook travel to Gringotts – the trio are almost crushed by the gold and treasure, but Hermione gets them to jump on the back of a **dragon**.” – _Harry Potter and the Deathly Hallows_ \\ \hline
In Harry Potter, what is the name of Hagrid’s giant spider? & Hagrid’s giant spider friend is named **Aragog**. & “‘Aragog!’ it called. ‘Aragog!’ And from the middle of the misty, domed web, a **spider the size of a small elephant** emerged, very slowly.” – _Harry Potter and the Chamber of Secrets_ \\ \hline
In Harry Potter, what does the spell “Alohomora” do? & In the “Magic for Good” series by John G. Harness, the spell “Alohomora” is a spell for **unlocking doors**. & “She grabbed Harry’s wand, tapped the lock, and whispered, ‘Alohomora!’ **The lock clicked and the door swung open – they piled through it, shut it quickly.**” – _Harry Potter and the Sorcerer’s Stone_ \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Examples of selected questions related to Harry Potter that are answered correctly by the unlearned model LLaMA2-7B-WhoIsHarryPotter, together with the supporting source passage in the Harry Potter book series.
answers related to Harry Potter". Similar to the identification of suspicious texts in story completion, we compare the Min-k% Prob scores between the original and unlearned models and select the questions for which the ratio falls within the range \((\frac{1}{1.15},1.15)\), resulting in 103 questions. We then use the unlearned model to generate answers to these questions, employing multinomial sampling to sample 20 answers per question.
**Results.** We then compare the answers produced by the unlearned model (referred to as the "candidate") to those provided by GPT-4 (referred to as the "reference") using the ROUGE-L recall measure (Lin, 2004), which calculates the ratio (# overlapping words between the candidate and the reference) / (# words in the reference). A higher ROUGE-L recall value signifies a greater degree of overlap, which can indicate a higher likelihood of unlearning failure. Among the 103 selected questions, we observe an average ROUGE-L recall of 0.23; for the unselected questions, the average ROUGE-L recall is 0.10. These findings underscore the capability of Min-k% Prob to identify potentially unsuccessful instances of unlearning.
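For reference, a minimal Python sketch of the recall computation is given below; it uses the standard longest-common-subsequence formulation of ROUGE-L from Lin (2004), of which the word-overlap ratio above is an informal description, and the example strings are purely illustrative.

```python
def rouge_l_recall(candidate, reference):
    """ROUGE-L recall: LCS(candidate, reference) / (# words in the reference)."""
    cand, ref = candidate.split(), reference.split()
    if not ref:
        return 0.0
    # Dynamic-programming table for the word-level longest common subsequence.
    dp = [[0] * (len(ref) + 1) for _ in range(len(cand) + 1)]
    for i, cw in enumerate(cand, start=1):
        for j, rw in enumerate(ref, start=1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if cw == rw else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1] / len(ref)

# A candidate that reproduces most of the reference scores close to 1.
print(rouge_l_recall("hedwig is a white owl", "hedwig is a snowy white owl"))  # 5/6 ≈ 0.83
```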
Table 5 shows the selected questions related to Harry Potter that are answered correctly by the unlearned model LLaMA2-7B-WhoIsHarryPotter (with ROUGE-L recall being \(1\)). We also verify the generated answers by cross-checking them against the Harry Potter series. These results suggest the knowledge about Harry Potter is not completely erased from the unlearned model.
## 8 Related Work
**Membership inference attack in NLP.** Membership Inference Attacks (MIAs) aim to determine whether an arbitrary sample is part of a given model's training data (Shokri et al., 2017; Yeom et al., 2018). These attacks pose substantial privacy risks to individuals and often serve as a basis for more severe attacks, such as data reconstruction (Carlini et al., 2021; Gupta et al., 2022; Cummings et al., 2023). Due to its fundamental association with privacy risk, MIA has more recently found applications in quantifying privacy vulnerabilities within machine learning models and in verifying the accurate implementation of privacy-preserving mechanisms (Jayaraman and Evans, 2019; Jagielski et al., 2020; Zanella-Beguelin et al., 2020; Nasr et al., 2021; Huang et al., 2022; Nasr et al., 2023; Steinke et al., 2023). Initially applied to tabular and computer vision data, the concept of MIA has recently expanded into the realm of language-oriented tasks. However, this expansion has predominantly centered around fine-tuning data detection (Song and Shmatikov, 2019; Shejwalkar et al., 2021; Mahloujifar et al., 2021; Jagannatha et al., 2021; Mireshghallah et al., 2022). Our work focuses on the application of MIA to pretraining data detection, an area that has received limited attention in previous research efforts.
**Dataset contamination.** The dataset contamination issue in LMs has gained attention recently since benchmark evaluation is undermined if evaluation examples are accidentally seen during pre-training. Brown et al. (2020), Wei et al. (2022), and Du et al. (2022) consider an example contaminated if there is a 13-gram collision between the training data and the evaluation example. Chowdhery et al. (2022) further improves this by deeming an example contaminated if \(70\%\) of its 8-grams appear in the training data. Touvron et al. (2023) builds on these methods by extending the framework to tokenized inputs and judging a token to be contaminated if it appears in any token n-gram longer than 10 tokens. However, these methods require access to the pretraining corpora, which are largely unavailable for recent model releases. Other approaches try to detect contamination without access to pretraining corpora. Sainz et al. (2023) simply prompts ChatGPT to generate examples from a dataset by providing the dataset's name and split, and found that the models generate verbatim instances from NLP datasets. Golchin and Surdeanu (2023) extends this framework to extract more memorized instances by incorporating partial instance content into the prompt. Similarly, Weller et al. (2023) demonstrates the ability to extract memorized snippets from Wikipedia via prompting. While these methods study contamination in closed-source models, they cannot determine contamination on an instance level. Marone and Van Durme (2023) argues that model developers should release training data membership testing tools accompanying their LLMs to remedy this. However, this is not yet widely practiced.
## 9 Conclusion
We present a pre-training data detection dataset, WikiMIA, and a new approach, Min-k% Prob. Our approach is based on the intuition that training data tends to contain fewer outlier tokens with very low probabilities than unseen data. To further verify the effectiveness of our approach in real-world settings, we perform two case studies: detecting dataset contamination and published book detection. For dataset contamination, we observe empirical results aligning with theoretical predictions about how detection difficulty changes with dataset size, example frequency, and learning rate. Most strikingly, our book detection experiments provide strong evidence that GPT-3 models may have been trained on copyrighted books. |
2306.13117 | An Algebraic Interpretation of the Super Catalan Numbers | We extend the notion of polynomial integration over an arbitrary circle $C$
in the Euclidean geometry over general fields $\mathbb F$ of characteristic
zero as a normalized $\mathbb F$-linear functional on
$\mathbb{F}\left[\alpha_1, \alpha_2\right]$ that takes polynomials that
evaluate to zero on $C$ to zero and is $\mathrm{SO}(2,\mathbb{F})$-invariant.
This allows us to not only build a purely algebraic integration theory in an
elementary way, but also give the super Catalan numbers $$S(m,n) =
\frac{(2m)!(2n)!}{m!n!(m+n)!}$$ an algebraic interpretation in terms of values
of this algebraic integral over some circle applied to the monomials
$\alpha_1^{2m}\alpha_2^{2n}$. | Kevin Limanta | 2023-06-13T14:12:23Z | http://arxiv.org/abs/2306.13117v1 | # An Algebraic Interpretation of the Super Catalan Numbers
###### Abstract
We extend the notion of polynomial integration over an arbitrary circle \(C\) in the Euclidean geometry over general fields \(\mathbb{F}\) of characteristic zero as a normalized \(\mathbb{F}\)-linear functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) that takes polynomials that evaluate to zero on \(C\) to zero and is \(SO(2,\mathbb{F})\)-invariant. This allows us to not only build a purely algebraic integration theory in an elementary way, but also give the super Catalan numbers
\[S(m,n)=\frac{(2m)!(2n)!}{m!n!(m+n)!}\]
an algebraic interpretation in terms of values of this algebraic integral over some circle applied to the monomials \(\alpha_{1}^{2m}\alpha_{2}^{2n}\).
## 1 Introduction
This is the second in our series of papers building an integration theory of polynomials over unit circles over a general field \(\mathbb{F}\). The first paper [9] deals with the case where \(\mathbb{F}\) is finite of odd characteristic, in which the family of integers \(S(m,n)\) called the _super Catalan numbers_ and their closely related family of rational numbers \(\Omega(m,n)\) called the _circular super Catalan numbers_ play a prominent role. They are defined as
\[S(m,n):=\frac{(2m)!(2n)!}{m!n!(m+n)!},\quad\Omega(m,n):=\frac{S(m,n)}{4^{m+n}}\]
and are indexed by two elements in \(\mathbb{N}\) which for us includes \(0\).
The super Catalan numbers were first introduced by Catalan [3] in 1874 and the first modern study of these numbers was initiated by Gessel [7] in 1992. They generalized the Catalan numbers \(c_{n}\) since \(S(1,n)=2c_{n}\). The integrality of \(S(m,n)\) can be observed from the relation \(4S\left(m,n\right)=S\left(m+1,n\right)+S\left(m,n+1\right)\) which yields the Pascal-like property \(\Omega(m,n)=\Omega(m+1,n)+\Omega(m,n+1)\).
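As a quick computational sanity check of the identities just quoted (a sketch for illustration, with helper names of our own choosing, not part of the argument below), one can verify the integrality of \(S(m,n)\), the relation \(S(1,n)=2c_{n}\), and the Pascal-like property for small indices:

```python
from fractions import Fraction
from math import comb, factorial

def super_catalan(m, n):
    """S(m, n) = (2m)! (2n)! / (m! n! (m+n)!)."""
    num = factorial(2 * m) * factorial(2 * n)
    den = factorial(m) * factorial(n) * factorial(m + n)
    assert num % den == 0  # integrality of S(m, n)
    return num // den

def omega(m, n):
    """Circular super Catalan number Omega(m, n) = S(m, n) / 4^(m+n)."""
    return Fraction(super_catalan(m, n), 4 ** (m + n))

for m in range(8):
    for n in range(8):
        # 4 S(m, n) = S(m+1, n) + S(m, n+1), and the Pascal-like property for Omega.
        assert 4 * super_catalan(m, n) == super_catalan(m + 1, n) + super_catalan(m, n + 1)
        assert omega(m, n) == omega(m + 1, n) + omega(m, n + 1)
    # S(1, m) = 2 c_m, twice the m-th Catalan number.
    assert super_catalan(1, m) == 2 * comb(2 * m, m) // (m + 1)
```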
No combinatorial interpretation of \(S(m,n)\) is known for general \(m\) and \(n\) to date, in contrast to the over 200 known interpretations of the Catalan numbers [12]. However, for \(m=2\), there are interpretations in terms of cubic trees by Pippenger [10] and blossom trees by Schaeffer [11], and for \(m=2,3\), in terms of pairs of Dyck paths with restricted heights by Gessel and Xin [8]. When \(n=m+s\) for \(0\leq s\leq 3\), Chen and Wang showed that there is an interpretation in terms of restricted lattice paths [4]. There is also a weighted interpretation of \(S\left(m,n\right)\) as a certain value of Krawtchouk
polynomials by the work of Georgiadis, Munemasa, and Tanaka [6] and another in terms of positive and negative 2-Motzkin paths by Allen and Gheorghiciuc [1].
The aim of this paper is twofold. The first one is to build, in a rather elementary way, a polynomial integration theory over circles in the Euclidean geometry over general fields of characteristic zero without recourse to the usual Riemann integral and limiting processes. We shall see that this allows us to give the super Catalan numbers a purely algebraic interpretation, which is our second objective.
Here and throughout, \(\mathbb{F}\) is a general field of characteristic zero with multiplicative identity \(1_{\mathbb{F}}\) or sometimes just \(1\) if the context is clear. We denote by \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) the algebra of polynomials in \(\alpha_{1}\) and \(\alpha_{2}\) over \(\mathbb{F}\) with multiplicative identity \(\mathbf{1}\). Our algebraic integral over a circle \(C\) is a linear functional \(\phi\) on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\), called a circular integral functional with respect to \(C\), which satisfies three conditions: \(\phi(\mathbf{1})=1_{\mathbb{F}}\) (_Normalization_), \(\phi(\pi)=0\) whenever \(\pi\) evaluates to the zero function on \(C\) (_Locality_), and \(\phi\) is rotationally-invariant (_Invariance_).
When \(\mathbb{F}=\mathbb{R}\), there is a well-known formula for the integral of polynomials on the unit sphere \(S^{n-1}\) (see [2] or [5]).
**Theorem 1.1**: _Let \(n\geq 2\) and \(S^{n-1}\) denote the \((n-1)\)-dimensional unit sphere in \(\mathbb{R}^{n}\). If \(\mu\) is the usual rotationally invariant measure on \(S^{n-1}\), then by writing \(b_{i}=\frac{1}{2}\left(d_{i}+1\right)\),_
\[\int_{S^{n-1}}x_{1}^{d_{1}}x_{2}^{d_{2}}\ldots x_{n}^{d_{n}}\,d\mu=\begin{cases}\dfrac{2\prod_{i=1}^{n}\Gamma\left(b_{i}\right)}{\Gamma\left(b_{1}+b_{2}+\cdots+b_{n}\right)}&\text{if each $d_{i}$ is even}\\ 0&\text{otherwise.}\end{cases}\]
We may see from Theorem 1.1 above, when \(n=2\), \(d_{1}=2m\), and \(d_{2}=2n\), we obtain
\[\frac{2\Gamma\left(b_{1}\right)\Gamma\left(b_{2}\right)}{\Gamma\left(b_{1}+b_ {2}\right)}=\frac{2\Gamma\left(m+\frac{1}{2}\right)\Gamma\left(n+\frac{1}{2} \right)}{\Gamma\left(m+n+1\right)}=2\pi\frac{(2m)!(2n)!}{4^{m+n}m!n!(m+n)!}=2 \pi\Omega(m,n)\]
so if the integral is normalized, we get just the circular super Catalan numbers.
In [9], we showed that the polynomial integration theory over finite fields of odd characteristic is analogous to the \(\mathbb{F}=\mathbb{R}\) case, which we summarize below.
**Theorem 1.2**: _Let \(p>2\) be a prime and \(q=p^{r}\) for some \(r\in\mathbb{N}\). In the Euclidean geometry over \(\mathbb{F}_{q}\) with multiplicative identity \(1_{q}\), the unit circle is \(S^{1}=\left\{\left[x,y\right]\in\mathbb{F}_{q}^{2}\colon x^{2}+y^{2}=1_{q}\right\}\). Let \(k\) and \(l\) be any natural numbers for which \(0\leq k+l<q-1\). Then the functional \(\psi_{b,q}\) on \(\mathbb{F}_{q}\left[\alpha_{1},\alpha_{2}\right]\) given by_
\[\psi_{b,q}\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)=-\left(\frac{-1}{p}\right) ^{r}\sum_{[x,y]\in S^{1}}x^{k}y^{l}=\begin{cases}\Omega\left(m,n\right)\mathrm{ mod}\,p&\text{if $k=2m$ and $l=2n$}\\ 0&\text{otherwise}\end{cases}\]
_is the unique circular integral functional with respect to \(S^{1}\). Here \(\left(\frac{-1}{p}\right)\) is the usual Legendre symbol._
Now we present our main result. For \(a\in\mathbb{Q}\), \(a1_{\mathbb{F}}\) is the embedding of \(a\) in \(\mathbb{F}\). The unit circle \(S^{1}\) in this setting will be defined in the next section.
**Theorem 1.3**: _For any \(k,l\in\mathbb{N}\), the linear functional \(\psi\) on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) defined by_
\[\psi\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)=\left\{\begin{array}{ll}\Omega \left(m,n\right)1_{\mathbb{F}}&\text{if $k=2m$ and $l=2n$,}\\ 0&\text{otherwise}\end{array}\right. \tag{1}\]
_is the unique circular integral functional with respect to \(S^{1}\)._
## 2 Circular Integral Functional
Denote by \(\mathbb{A}=\mathbb{A}(\mathbb{F})\) the two-dimensional _affine plane_\(\left\{\left[x,y\right]\,:\,x,y\in\mathbb{F}\right\}\), with the objects \(\left[x,y\right]\) called _points_. There is then the space \(\mathbb{F}^{\mathbb{A}}\) consisting of functions from \(\mathbb{A}\) to \(\mathbb{F}\) which is an \(\mathbb{F}\)-algebra under pointwise addition and multiplication, and the evaluation map \(\varepsilon\colon\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\to\mathbb{F}^{ \mathbb{A}}\) which is an algebra homomorphism. Clearly we may regard \(\mathbb{F}\left[\alpha_{1}\right]\) as a subalgebra of \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\). Recall that any non-zero polynomial in \(\mathbb{F}\left[\alpha_{1}\right]\) of degree \(d\) has at most \(d\) roots.
The group \(GL(2,\mathbb{F})\) of invertible \(2\times 2\) matrices with entries in \(\mathbb{F}\) left acts on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) as follows: if \(\pi=\pi\left(\alpha_{1},\alpha_{2}\right)\) then
\[h\cdot\pi=\begin{pmatrix}h_{11}&h_{12}\\ h_{21}&h_{22}\end{pmatrix}\cdot\pi:=\pi\left(h_{11}\alpha_{1}+h_{21}\alpha_{2 },h_{12}\alpha_{1}+h_{22}\alpha_{2}\right). \tag{2}\]
Additionally, \(GL(2,\mathbb{F})\) right acts on \(\mathbb{A}\) and left acts on \(\mathbb{F}^{\mathbb{A}}\) as follows:
\[\left[x_{1},x_{2}\right]\cdot h :=\left[h_{11}x_{1}+h_{21}x_{2},h_{12}x_{1}+h_{22}x_{2}\right]\] \[(h\cdot f)(x_{1},x_{2}) :=f\left(\left[x_{1},x_{2}\right]\cdot h\right)=f(h_{11}x_{1}+h_{2 1}x_{2},h_{12}x_{1}+h_{22}x_{2}).\]
The group \(SO(2,\mathbb{F})\) of matrices \(h\) satisfying \(h^{-1}=h^{T}\) and having determinant \(1_{\mathbb{F}}\) is then a subgroup of \(GL(2,\mathbb{F})\) and is called the rotation group. The action of \(SO(2,\mathbb{F})\) on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) is induced as the restriction of the action of \(GL(2,\mathbb{F})\) on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\). This action respects evaluation: for any \(h\in SO(2,\mathbb{F})\) and \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\),
\[\varepsilon\left(h\cdot\pi\right)=h\cdot\varepsilon\left(\pi\right). \tag{3}\]
In a similar manner, we also get an action of \(SO(2,\mathbb{F})\) on \(\mathbb{A}\) and \(\mathbb{F}^{\mathbb{A}}\).
We define a symmetric bilinear form on \(\mathbb{A}\), given by \(\left[x_{1},y_{1}\right]\cdot\left[x_{2},y_{2}\right]:=x_{1}x_{2}+y_{1}y_{2}\). The associated quadratic form \(\left[x,y\right]\cdot\left[x,y\right]=x^{2}+y^{2}\) then gives rise to the (Euclidean) unit circle
\[S^{1}=S^{1}\left(\mathbb{F}\right):=\left\{\left[x,y\right]\in\mathbb{A}\colon x ^{2}+y^{2}=1_{\mathbb{F}}\right\}.\]
**Lemma 2.1**: _Each point on \(S^{1}\) except \(\left[-1,0\right]\) can be written as \(\left[\frac{1-u^{2}}{1+u^{2}},\frac{2u}{1+u^{2}}\right]\) for some \(u\in\mathbb{F}\) such that \(1+u^{2}\neq 0\). Consequently, \(S^{1}\) is an infinite set._
**Proof.** The identity \(\left(\frac{1-u^{2}}{1+u^{2}}\right)^{2}+\left(\frac{2u}{1+u^{2}}\right)^{2}=1 _{\mathbb{F}}\) holds for all \(u\in\mathbb{F}\) for which \(u^{2}\neq-1\). The line \(y=ux+u\) through the points \(\left[-1,0\right]\) and \(\left[0,u\right]\) intersects \(S^{1}\) in exactly two points, \(\left[-1,0\right]\) and \(\left[\frac{1-u^{2}}{1+u^{2}},\frac{2u}{1+u^{2}}\right]\). Hence every point on \(S^{1}\) except \(\left[-1,0\right]\) corresponds to exactly one \(u\in\mathbb{F}\) for which \(u^{2}\neq-1\). Since there are infinitely many \(u\in\mathbb{F}\) for which \(u^{2}\neq-1\), \(S^{1}\) is an infinite set.
**Corollary 2.2**: _The rotation group \(SO(2,\mathbb{F})\) admits a parametrization_
\[SO(2,\mathbb{F})=\left\{h_{u}=\frac{1}{1+u^{2}}\begin{pmatrix}1-u^{2}&-2u\\ 2u&1-u^{2}\end{pmatrix}:\ u\in\mathbb{F},u^{2}\neq-1\right\}\cup\left\{-I \right\}.\]
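As a small computational illustration of Lemma 2.1 and Corollary 2.2 (a sketch over \(\mathbb{F}=\mathbb{Q}\), where no \(u\) satisfies \(u^{2}=-1\); the helper names are ours), one can check that the parametrized point lies on \(S^{1}\) and that each \(h_{u}\) is orthogonal with determinant \(1_{\mathbb{F}}\):

```python
from fractions import Fraction

def circle_point(u):
    """The point [(1 - u^2)/(1 + u^2), 2u/(1 + u^2)] from Lemma 2.1."""
    d = 1 + u * u
    return (1 - u * u) / d, (2 * u) / d

def rotation(u):
    """The rotation h_u from Corollary 2.2 as a 2x2 matrix."""
    x, y = circle_point(u)
    return [[x, -y], [y, x]]

for u in [Fraction(0), Fraction(1, 2), Fraction(-3), Fraction(7, 5)]:
    x, y = circle_point(u)
    assert x * x + y * y == 1                            # the point lies on S^1
    h = rotation(u)
    assert h[0][0] * h[1][1] - h[0][1] * h[1][0] == 1    # determinant 1
    # h * h^T equals the identity, i.e. h^{-1} = h^T.
    assert [[sum(h[i][k] * h[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)] == [[1, 0], [0, 1]]
```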
We now introduce the central object of this paper: a linear functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) that generalizes normalized integration over the Euclidean unit circle over \(\mathbb{R}\). We say that a linear functional \(\phi\colon\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\to\mathbb{F}\) is a _circular integral functional_ on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S^{1}\) precisely when it satisfies the following three conditions:
**(Normalization)**: For the multiplicative identity \({\bf 1}\) of \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\), we have \(\phi\left({\bf 1}\right)=1_{\mathbb{F}}\).
**(Locality)**: If \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) such that \(\varepsilon(\pi)\) is the zero function on \(S^{1}\), then \(\phi(\pi)=0\).
**(Invariance)**: The functional \(\phi\) is \(SO(2,\mathbb{F})\)-invariant: \(\phi(h\cdot\pi)=\phi(\pi)\) for any \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) and \(h\in SO(2,\mathbb{F})\).
## 3 Existence and Uniqueness
Our strategy to prove Theorem 1.3 is divided into two main steps. First, we show that \(\psi\) satisfies the Normalization, Locality, and Invariance conditions. Next, we demonstrate that if a circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S^{1}\) exists, then it is uniquely determined.
It is easy to see that the Normalization condition holds. The next two lemmas are needed to prove the Locality of \(\psi\).
**Lemma 3.1**: _Both \(S^{1}_{x_{1}}=\{x_{1}\in\mathbb{F}:\ [x_{1},x_{2}]\in S^{1}\}\) and \(S^{1}_{x_{2}}=\{x_{2}\in\mathbb{F}:\ [x_{1},x_{2}]\in S^{1}\}\) have infinitely many elements._
**Proof.** For any \([x_{1},x_{2}]\in S^{1}\), we have that \([x_{2},x_{1}]\in S^{1}\), so \(S^{1}_{x_{1}}=S^{1}_{x_{2}}\). If \(S^{1}_{x_{1}}=S^{1}_{x_{2}}\) is finite, then so is \(S^{1}_{x_{1}}\times S^{1}_{x_{2}}\) and consequently \(S^{1}\). This contradicts the fact that \(S^{1}\) is infinite from Lemma 2.1.
The crucial property of polynomials in \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) that evaluate to the zero function on \(S^{1}\) is that they must lie in \(\left\langle\alpha_{1}^{2}+\alpha_{2}^{2}-1\right\rangle\), the ideal generated by \(\alpha_{1}^{2}+\alpha_{2}^{2}-1\). We offer an elementary proof below by utilising the multivariate polynomial division which requires a choice of monomial ordering. This has a flavour of Hilbert's Nullstellensatz which usually works over algebraically closed fields, although our argument does not assume that \(\mathbb{F}\) is an algebraically closed field.
**Lemma 3.2**: _If \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) satisfies \(\varepsilon(\pi)=0\) on \(S^{1}\), then \(\pi\in\left\langle\alpha_{1}^{2}+\alpha_{2}^{2}-1\right\rangle\)._
**Proof.** Fix a monomial ordering \(\preccurlyeq\) such that \(\alpha_{1}^{k_{1}}\alpha_{2}^{l_{1}}\preccurlyeq\alpha_{1}^{k_{2}}\alpha_{2} ^{l_{2}}\) if either \(k_{1}<k_{2}\) or \(k_{1}=k_{2}\) and \(l_{1}<l_{2}\). With respect to this ordering, any \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) can be written as \(\pi=\left(\alpha_{1}^{2}+\alpha_{2}^{2}-1\right)\pi_{0}+\alpha_{2}\omega+\rho\) for some \(\pi_{0}\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) and \(\omega,\rho\in\mathbb{F}\left[\alpha_{1}\right]\).
Since \(\pi\) evaluates to the zero function on \(S^{1}\), we have that
\[0=\varepsilon(\pi)(x_{1},x_{2})=x_{2}\varepsilon(\omega)(x_{1},x_{2})+ \varepsilon(\rho)(x_{1},x_{2})=x_{2}\varepsilon(\omega)(x_{1},0)+\varepsilon( \rho)(x_{1},0) \tag{4}\]
for all \([x_{1},x_{2}]\in S^{1}\), where the last equality is due to \(\omega,\rho\in\mathbb{F}\left[\alpha_{1}\right]\). Now consider the set \(S^{1}_{*}=\left\{\left[x_{1},x_{2}\right]\in S^{1}:\ x_{2}\neq 0\right\}\), which is non-empty since \([0,1]\in S^{1}_{*}\). For any \([x_{1},x_{2}]\in S^{1}_{*}\), the point \([x_{1},-x_{2}]\in S^{1}_{*}\) is different from \([x_{1},x_{2}]\), so applying (4) at both of these points and using \(x_{2}\neq 0\) forces \(\varepsilon(\omega)(x_{1},0)=\varepsilon(\rho)(x_{1},0)=0\) for all \([x_{1},x_{2}]\in S^{1}_{*}\). By Lemma 3.1, \(\omega\) and \(\rho\) have infinitely many roots, so each must be the zero polynomial.
**Theorem 3.3** (Locality of \(\psi\)): _The linear functional \(\psi\) satisfies the Locality condition._
**Proof.** If \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) evaluates to the zero function on \(S^{1}\), then \(\pi\in\left\langle\alpha_{1}^{2}+\alpha_{2}^{2}-1\right\rangle\) from Lemma 3.2. By linearity, it suffices to show that
\[\psi\left(\left(\alpha_{1}^{2}+\alpha_{2}^{2}-1\right)\alpha_{1}^{k}\alpha_{2 }^{l}\right)=\psi\left(\alpha_{1}^{k+2}\alpha_{2}^{l}\right)+\psi\left(\alpha _{1}^{k}\alpha_{2}^{l+2}\right)-\psi\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)\]
vanishes. Clearly it does if either \(k\) or \(l\) is odd, and if \(k=2m\) and \(l=2n\), the right-hand side simplifies to \(\Omega\left(m+1,n\right)+\Omega\left(m,n+1\right)-\Omega\left(m,n\right)=0\) by the Pascal-like property.
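This identity can also be checked mechanically. The sketch below (an illustration over \(\mathbb{F}=\mathbb{Q}\), with polynomials stored as dictionaries mapping exponent pairs \((k,l)\) to rational coefficients; the helper names are ours) verifies Normalization and the vanishing of \(\psi\) on products of \(\alpha_{1}^{2}+\alpha_{2}^{2}-1\) with monomials of small degree:

```python
from fractions import Fraction
from math import factorial

def omega(m, n):
    """Omega(m, n) = (2m)!(2n)! / (4^(m+n) m! n! (m+n)!)."""
    s = factorial(2 * m) * factorial(2 * n) // (factorial(m) * factorial(n) * factorial(m + n))
    return Fraction(s, 4 ** (m + n))

def psi(poly):
    """psi from (1), applied to a polynomial {(k, l): coefficient} over Q."""
    return sum((c * omega(k // 2, l // 2) for (k, l), c in poly.items()
                if k % 2 == 0 and l % 2 == 0), Fraction(0))

def poly_mul(p, q):
    """Product of two polynomials in the same dictionary representation."""
    out = {}
    for (a, b), c1 in p.items():
        for (c, d), c2 in q.items():
            out[(a + c, b + d)] = out.get((a + c, b + d), Fraction(0)) + c1 * c2
    return out

circle = {(2, 0): Fraction(1), (0, 2): Fraction(1), (0, 0): Fraction(-1)}  # alpha1^2 + alpha2^2 - 1

assert psi({(0, 0): Fraction(1)}) == 1            # Normalization
for k in range(6):
    for l in range(6):
        # psi((alpha1^2 + alpha2^2 - 1) * alpha1^k alpha2^l) = 0, as shown above.
        assert psi(poly_mul(circle, {(k, l): Fraction(1)})) == 0
```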
Finally, to prove the Invariance of \(\psi\), we need the following three lemmas.
**Lemma 3.4**: _For any \(h\in SO(2,\mathbb{F})\) and natural numbers \(k,l\), each term of \(h\cdot\alpha_{1}^{k}\alpha_{2}^{l}\) has degree \(k+l\)._
**Proof.** By using (2), \(h\cdot\alpha_{1}^{k}\alpha_{2}^{l}=\left(h_{11}\alpha_{1}+h_{21}\alpha_{2} \right)^{k}\left(h_{12}\alpha_{1}+h_{22}\alpha_{2}\right)^{l}\). Expanding this using the Binomial Theorem, we see that the degree of each term is always \(k+l\).
**Lemma 3.5**: _For any natural number \(m\) and \(h\in SO(2,\mathbb{F})\), we have that \(\psi\left(h\cdot\alpha_{1}^{2m}\right)=\psi\left(\alpha_{1}^{2m}\right)\) and \(\psi\left(h\cdot\alpha_{1}^{2m-1}\alpha_{2}\right)=\psi\left(\alpha_{1}^{2m-1}\alpha_{2}\right)\)._
**Proof.** The statement is obviously true for \(h=-I\). Now for \(h=h_{u}\) defined in Corollary 2.2,
\[\psi\left(h_{u}\cdot\alpha_{1}^{2m}\right)=\sum_{s=0}^{2m}{2m\choose s}\left( \frac{1-u^{2}}{1+u^{2}}\right)^{s}\left(\frac{2u}{1+u^{2}}\right)^{2m-s}\psi \left(\alpha_{1}^{s}\alpha_{2}^{2m-s}\right)\]
but since the odd indices do not contribute to the sum, we just need to consider the even indices:
\[\psi\left(h_{u}\cdot\alpha_{1}^{2m}\right) =\sum_{s=0}^{m}{2m\choose 2s}\left(\frac{1-u^{2}}{1+u^{2}} \right)^{2s}\left(\frac{2u}{1+u^{2}}\right)^{2m-2s}\psi\left(\alpha_{1}^{2s} \alpha_{2}^{2m-2s}\right)\] \[=\sum_{s=0}^{m}{2m\choose 2s}\Omega\left(s,m-s\right)\left(\frac{ 1-u^{2}}{1+u^{2}}\right)^{2s}\left(\frac{2u}{1+u^{2}}\right)^{2m-2s}1_{ \mathbb{F}}\] \[=\frac{(2m)!}{4^{m}m!m!}\sum_{s=0}^{m}{m\choose s}\left(\frac{1-u ^{2}}{1+u^{2}}\right)^{2s}\left(\frac{2u}{1+u^{2}}\right)^{2m-2s}1_{\mathbb{F}}. \tag{5}\]
Using the Binomial Theorem, (5) simplifies to
\[\psi\left(\alpha_{1}^{2m}\right)\left(\left(\frac{1-u^{2}}{1+u^{2}}\right)^{2 }+\left(\frac{2u}{1+u^{2}}\right)^{2}\right)^{m}=\psi\left(\alpha_{1}^{2m} \right)1_{\mathbb{F}}.\]
The proof that \(\psi\left(h_{u}\cdot\alpha_{1}^{2m-1}\alpha_{2}\right)=\psi\left(\alpha_{1}^{ 2m-1}\alpha_{2}\right)\) is more involved but done similarly.
**Lemma 3.6**: _If \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) evaluates to the zero function on \(S^{1}\), then so does \(h\cdot\pi\) for any \(h\in SO(2,\mathbb{F})\)._
**Proof.** For an arbitrary \(h\in SO(2,\mathbb{F})\), the map \([x_{1},x_{2}]\mapsto[x_{1},x_{2}]\cdot h\) is a bijection on \(S^{1}\). Choose any \([x_{1},x_{2}]\in S^{1}\), then \([x_{1},x_{2}]=[u_{1},u_{2}]\cdot h^{-1}\) for some \([u_{1},u_{2}]\in S^{1}\). Using (3),
\[\varepsilon(h\cdot\pi)(x_{1},x_{2})=\left(h\cdot\varepsilon(\pi)\right)(x_{1},x_{2})=\varepsilon(\pi)([x_{1},x_{2}]\cdot h)=\varepsilon(\pi)(u_{1},u_{2})=0\]
where the last equality follows from the assumption on \(\pi\). Consequently \(h\cdot\pi\) evaluates to the zero function on \(S^{1}\).
**Theorem 3.7** (Invariance of \(\psi\)): _The linear functional \(\psi\) satisfies the Invariance condition._
**Proof.** It is sufficient to show that \(\psi\left(h\cdot\alpha_{1}^{k}\alpha_{2}^{l}\right)=\psi\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)\) for any \(k,l\in\mathbb{N}\) and \(h\in SO(2,\mathbb{F})\). As before, the statement is obviously true for \(h=-I\), so we will only show that \(\psi\left(h_{u}\cdot\alpha_{1}^{k}\alpha_{2}^{l}\right)=\psi\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)\). If \(k+l\) is odd, then by Lemma 3.4 each term of \(h_{u}\cdot\alpha_{1}^{k}\alpha_{2}^{l}\) has an odd degree and therefore \(\psi\left(h_{u}\cdot\alpha_{1}^{k}\alpha_{2}^{l}\right)=0=\psi\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)\).
The polynomial \(\pi=\alpha_{1}^{2m}\alpha_{2}^{2n}-\alpha_{1}^{2m}\left(1-\alpha_{1}^{2}\right) ^{n}\) evaluates to the zero function on \(S^{1}\) and therefore by Lemma 3.6,
\[\psi\left(h_{u}\cdot\alpha_{1}^{2m}\alpha_{2}^{2n}\right) = \psi\left(h_{u}\cdot\alpha_{1}^{2m}\left(1-\alpha_{1}^{2}\right)^{ n}\right)=\sum_{s=0}^{n}\left(-1\right)^{s}{n\choose s}\psi\left(h_{u}\cdot \alpha_{1}^{2m+2s}\right).\]
Now by Lemma 3.5, \(\psi\left(h_{u}\cdot\alpha_{1}^{2m+2s}\right)=\psi\left(\alpha_{1}^{2m+2s}\right)\) and therefore this lets us retrace the steps:
\[\psi\left(h_{u}\cdot\alpha_{1}^{2m}\alpha_{2}^{2n}\right)=\sum_{s=0}^{n}{(-1)^{s }\binom{n}{s}}\psi\left(\alpha_{1}^{2m+2s}\right)=\psi\left(\alpha_{1}^{2m} \left(1-\alpha_{1}^{2}\right)^{n}\right)=\psi\left(\alpha_{1}^{2m}\alpha_{2}^ {2n}\right)\]
where in the last equality we again used the Locality of \(\psi\) applied to \(\pi\). The case where \(k\) and \(l\) are both odd is treated similarly. The conclusion thus follows by the linearity of \(\psi\).
Next, we proceed to show that \(\psi\) is the only circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S^{1}\).
**Theorem 3.8** (Existence implies uniqueness): _If \(\phi\) is any circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S^{1}\), then \(\phi\) is uniquely determined._
**Proof.** By linearity, it suffices to show that \(\phi(\alpha_{1}^{k}\alpha_{2}^{l})\) is uniquely determined for any \(k,l\in\mathbb{N}\). Using the Invariance property with \(h=-I\), we obtain \(\phi(\alpha_{1}^{k}\alpha_{2}^{l})=\phi(-I\cdot\alpha_{1}^{k}\alpha_{2}^{l})= (-1)^{k+l}\phi(\alpha_{1}^{k}\alpha_{2}^{l})\) so \(\phi(\alpha_{1}^{k}\alpha_{2}^{l})=0\) whenever \(k+l\) is odd.
For \(m\geq 1\), another application of the Invariance property with \(h=h_{u}\) from Corollary 2.2 gives
\[\phi\left(\alpha_{1}^{2m}\right)=\phi\left(h_{u}\cdot\alpha_{1}^{2m}\right)= \frac{1}{\left(1+u^{2}\right)^{2m}}\sum_{s=0}^{2m}\binom{2m}{s}\left(1-u^{2} \right)^{s}\left(2u\right)^{2m-s}\phi\left(\alpha_{1}^{s}\alpha_{2}^{2m-s} \right).\]
We multiply both sides by \(\left(1+u^{2}\right)^{2m}\) and split the summation depending on the parity of \(s\) to obtain
\[\left(1+u^{2}\right)^{2m}\phi\left(\alpha_{1}^{2m}\right) =\sum_{s=0}^{m}\binom{2m}{2s}\left(1-u^{2}\right)^{2s}\left(2u \right)^{2m-2s}\phi\left(\alpha_{1}^{2s}\alpha_{2}^{2m-2s}\right)+\] \[\sum_{s=1}^{m}\binom{2m}{2s-1}\left(1-u^{2}\right)^{2s-1}\left(2u \right)^{2m-2s+1}\phi\left(\alpha_{1}^{2s-1}\alpha_{2}^{2m-2s+1}\right). \tag{6}\]
The polynomials \(\pi_{1}=\alpha_{1}^{2s}\alpha_{2}^{2m-2s}-\alpha_{1}^{2s}\left(1-\alpha_{1}^{ 2}\right)^{m-s}\) and \(\pi_{2}=\alpha_{1}^{2s-1}\alpha_{2}^{2m-2s+1}-\alpha_{1}^{2s-1}\left(1-\alpha_ {1}^{2}\right)^{m-s}\alpha_{2}\) both evaluate to the zero function on \(S^{1}\) so by Locality, we must have that
\[\phi\left(\alpha_{1}^{2s}\alpha_{2}^{2m-2s}\right) =\phi\left(\alpha_{1}^{2s}\left(1-\alpha_{1}^{2}\right)^{m-s} \right)=\sum_{t=0}^{m-s}{(-1)^{t}\binom{m-s}{t}}\phi\left(\alpha_{1}^{2s+2t} \right), \tag{7}\] \[\phi\left(\alpha_{1}^{2s-1}\alpha_{2}^{2m-2s+1}\right) =\phi\left(\alpha_{1}^{2s-1}\left(1-\alpha_{1}^{2}\right)^{m-s} \alpha_{2}\right)=\sum_{t=0}^{m-s}{(-1)^{t}\binom{m-s}{t}}\phi\left(\alpha_{1}^ {2s+2t-1}\alpha_{2}\right) \tag{8}\]
respectively. By (7) and (8), equation (6) becomes
\[\left(1+u^{2}\right)^{2m}\phi\left(\alpha_{1}^{2m}\right) = \sum_{s=0}^{m}\sum_{t=0}^{m-s}{(-1)^{t}\binom{2m}{2s}\binom{m-s} {t}\left(1-u^{2}\right)^{2s}\left(2u\right)^{2m-2s}\phi\left(\alpha_{1}^{2s+2t }\right)}+\] \[\sum_{s=0}^{m}\sum_{t=0}^{m-s}{(-1)^{t}\binom{2m}{2s-1}\binom{m-s} {t}\left(1-u^{2}\right)^{2s-1}\left(2u\right)^{2m-2s+1}\phi\left(\alpha_{1}^{ 2s+2t-1}\alpha_{2}\right).\]
Now the following polynomial of degree at most \(4m\) in \(\mathbb{F}\left[\beta\right]\), namely
\[\pi =\left(1+\beta^{2}\right)^{2m}\phi\left(\alpha_{1}^{2m}\right)- \sum_{s=0}^{m}\sum_{t=0}^{m-s}{(-1)^{t}\binom{2m}{2s}\binom{m-s}{t}\left(1- \beta^{2}\right)^{2s}\left(2\beta\right)^{2m-2s}\phi\left(\alpha_{1}^{2s+2t} \right)}-\] \[\sum_{s=0}^{m}\sum_{t=0}^{m-s}{(-1)^{t}\binom{2m}{2s-1}\binom{m-s }{t}\left(1-\beta^{2}\right)^{2s-1}\left(2\beta\right)^{2m-2s+1}\phi\left( \alpha_{1}^{2s+2t-1}\alpha_{2}\right)}\]
has infinitely many roots, so \(\pi\) is identically zero. By extracting the coefficient of \(\beta\) and \(\beta^{2}\) respectively we get \(4m\phi\left(\alpha_{1}^{2m-1}\alpha_{2}\right)=0\) and \(8m^{2}\phi\left(\alpha_{1}^{2m}\right)-4m\left(2m-1\right)\phi\left(\alpha_{1}^ {2m-2}\right)=0\).
Since \(m\) is arbitrary, we must have for any \(m\geq 1\), \(\phi\left(\alpha_{1}^{2m-1}\alpha_{2}\right)=0\) and the first-order recurrence relation \(2m\phi\left(\alpha_{1}^{2m}\right)=\left(2m-1\right)\phi\left(\alpha_{1}^{2m-2}\right)\) with the initial condition \(\phi\left(\mathbf{1}\right)=1_{\mathbb{F}}\). Thus we see that \(\phi\left(\alpha_{1}^{2m}\right)\) and \(\phi\left(\alpha_{1}^{2m-1}\alpha_{2}\right)\) are uniquely determined for all \(m\geq 1\).
Finally, by utilizing the Locality condition again, both \(\phi\left(\alpha_{1}^{2m+1}\alpha_{2}^{2n+1}\right)\) and \(\phi\left(\alpha_{1}^{2m}\alpha_{2}^{2n}\right)\) are uniquely determined since
\[\phi\left(\alpha_{1}^{2m+1}\alpha_{2}^{2n+1}\right) =\phi\left(\alpha_{1}^{2m+1}\left(1-\alpha_{1}^{2}\right)^{n}\alpha_{2}\right)=\sum_{s=0}^{n}\left(-1\right)^{s}\binom{n}{s}\phi\left(\alpha_{1}^{2m+2s+1}\alpha_{2}\right),\] \[\phi\left(\alpha_{1}^{2m}\alpha_{2}^{2n}\right) =\phi\left(\alpha_{1}^{2m}\left(1-\alpha_{1}^{2}\right)^{n}\right)=\sum_{s=0}^{n}\left(-1\right)^{s}\binom{n}{s}\phi\left(\alpha_{1}^{2m+2s}\right).\]
This concludes the proof.
## 4 Generalization to Arbitrary Circles
Fix a point \(\left[a,b\right]\in\mathbb{A}\) and a non-zero \(r\in\mathbb{F}\). We define \(S_{r,\left[a,b\right]}^{1}\) to be the collection of points \(\left[x,y\right]\in\mathbb{A}\) such that \(\left(x-a\right)^{2}+\left(y-b\right)^{2}=r^{2}\). A linear functional \(\phi_{r,\left[a,b\right]}\colon\mathbb{F}\left[\alpha_{1},\alpha_{2}\right] \rightarrow\mathbb{F}\) is a circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S_{r,\left[a,b\right]}^{1}\) if the following conditions are satisfied:
**(Normalization)**: For the multiplicative identity \(\mathbf{1}\) of \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\), we have \(\phi_{r,\left[a,b\right]}(\mathbf{1})=r\).
**(Locality)**: If \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) such that \(\varepsilon(\pi)=0\) on \(S_{r,\left[a,b\right]}^{1}\), we have \(\phi_{r,\left[a,b\right]}\left(\pi\right)=0\).
**(Invariance)**: For any \(\pi\in\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) and \(h\in SO(2,\mathbb{F})\), \(\phi_{r,\left[a,b\right]}\left(h\cdot\pi\right)=\phi_{r,\left[a,b\right]} \left(\pi\right)\).
By employing the same analysis, the existence and uniqueness of a circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S_{r,\left[a,b\right]}^{1}\) can be derived from that of \(\psi\).
**Theorem 4.1**: _There is one and only one circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S_{r,\left[a,b\right]}^{1}\), given by_
\[\psi_{r,\left[a,b\right]}\left(\alpha_{1}^{k}\alpha_{2}^{l}\right)=r\psi\left( \left(a+r\alpha_{1}\right)^{k}\left(b+r\alpha_{2}\right)^{l}\right)\]
_where \(\psi\) is the circular integral functional on \(\mathbb{F}\left[\alpha_{1},\alpha_{2}\right]\) with respect to \(S^{1}\)._
Now we are finally able to give an algebraic interpretation of the super Catalan numbers \(S(m,n)\).
**Theorem 4.2** (An algebraic interpretation of \(S(m,n)\)): _Over \(\mathbb{Q}\), for any \(m,n\in\mathbb{N}\), we have that \(2S(m,n)=\psi_{2,\left[0,0\right]}\left(\alpha_{1}^{2m}\alpha_{2}^{2n}\right)\)._
**Proof.** It follows immediately from Theorem 4.1. We have that
\[\psi_{2,\left[0,0\right]}\left(\alpha_{1}^{2m}\alpha_{2}^{2n}\right)=2\psi \left(\left(2\alpha_{1}\right)^{2m}\left(2\alpha_{2}\right)^{2n}\right)=\frac{ 2^{2m+2n+1}}{4^{m+n}}S\left(m,n\right)1_{\mathbb{Q}}=2S\left(m,n\right)\]
as desired.
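For illustration, the statement can also be checked numerically. The sketch below (with polynomials again stored as dictionaries mapping exponent pairs to rational coefficients; the helper names are ours) evaluates \(\psi_{2,[0,0]}\) on the monomials \(\alpha_{1}^{2m}\alpha_{2}^{2n}\) via the formula of Theorem 4.1 and compares the result with \(2S(m,n)\):

```python
from fractions import Fraction
from math import factorial

def super_catalan(m, n):
    return factorial(2 * m) * factorial(2 * n) // (factorial(m) * factorial(n) * factorial(m + n))

def psi(poly):
    """psi from (1), applied to a polynomial {(k, l): coefficient} over Q."""
    total = Fraction(0)
    for (k, l), c in poly.items():
        if k % 2 == 0 and l % 2 == 0:
            total += c * Fraction(super_catalan(k // 2, l // 2), 4 ** (k // 2 + l // 2))
    return total

def poly_mul(p, q):
    out = {}
    for (a, b), c1 in p.items():
        for (c, d), c2 in q.items():
            out[(a + c, b + d)] = out.get((a + c, b + d), Fraction(0)) + c1 * c2
    return out

def poly_pow(p, n):
    out = {(0, 0): Fraction(1)}
    for _ in range(n):
        out = poly_mul(out, p)
    return out

def psi_r_ab(k, l, r, a, b):
    """psi_{r,[a,b]}(alpha1^k alpha2^l) = r * psi((a + r*alpha1)^k (b + r*alpha2)^l)."""
    first = poly_pow({(0, 0): Fraction(a), (1, 0): Fraction(r)}, k)
    second = poly_pow({(0, 0): Fraction(b), (0, 1): Fraction(r)}, l)
    return r * psi(poly_mul(first, second))

# Theorem 4.2: psi_{2,[0,0]}(alpha1^{2m} alpha2^{2n}) = 2 S(m, n).
for m in range(5):
    for n in range(5):
        assert psi_r_ab(2 * m, 2 * n, Fraction(2), 0, 0) == 2 * super_catalan(m, n)
```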
## Acknowledgement
The author thanks Hopein Tang for the helpful discussions.
|
2302.06953 | A Bandit Approach to Online Pricing for Heterogeneous Edge Resource
Allocation | Edge Computing (EC) offers a superior user experience by positioning cloud
resources in close proximity to end users. The challenge of allocating edge
resources efficiently while maximizing profit for the EC platform remains a
sophisticated problem, especially with the added complexity of the online
arrival of resource requests. To address this challenge, we propose to cast the
problem as a multi-armed bandit problem and develop two novel online pricing
mechanisms, the Kullback-Leibler Upper Confidence Bound (KL-UCB) algorithm and
the Min-Max Optimal algorithm, for heterogeneous edge resource allocation.
These mechanisms operate in real-time and do not require prior knowledge of
demand distribution, which can be difficult to obtain in practice. The proposed
posted pricing schemes allow users to select and pay for their preferred
resources, with the platform dynamically adjusting resource prices based on
observed historical data. Numerical results show the advantages of the proposed
mechanisms compared to several benchmark schemes derived from traditional
bandit algorithms, including the Epsilon-Greedy, basic UCB, and Thompson
Sampling algorithms. | Jiaming Cheng, Duong Thuy Anh Nguyen, Lele Wang, Duong Tung Nguyen, Vijay K. Bhargava | 2023-02-14T10:21:14Z | http://arxiv.org/abs/2302.06953v1 | # A Bandit Approach to Online Pricing for Heterogeneous Edge Resource Allocation
###### Abstract
Edge Computing (EC) offers a superior user experience by positioning cloud resources in close proximity to end users. The challenge of allocating edge resources efficiently while maximizing profit for the EC platform remains a sophisticated problem, especially with the added complexity of the online arrival of resource requests. To address this challenge, we propose to cast the problem as a multi-armed bandit problem and develop two novel online pricing mechanisms, the Kullback-Leibler Upper Confidence Bound (KL-UCB) algorithm and the Min-Max Optimal algorithm, for heterogeneous edge resource allocation. These mechanisms operate in real-time and do not require prior knowledge of demand distribution, which can be difficult to obtain in practice. The proposed posted pricing schemes allow users to select and pay for their preferred resources, with the platform dynamically adjusting resource prices based on observed historical data. Numerical results show the advantages of the proposed mechanisms compared to several benchmark schemes derived from traditional bandit algorithms, including the Epsilon-Greedy, basic UCB, and Thompson Sampling algorithms.
Edge computing, Bandit learning, online pricing.
## I Introduction
The proliferation of mobile devices and services has spurred the growth of new and innovative applications such as augmented reality (AR), virtual reality (VR), autonomous driving, real-time analytics, and the tactile Internet. Edge computing (EC) has established itself as a crucial technology that works in conjunction with central clouds to fulfill the stringent requirements of new applications and services. By positioning computing resources at the network edge, in close proximity to end-users, devices, and sensors, EC effectively reduces bandwidth consumption while ensuring high reliability and supporting delay-sensitive applications [1, 2, 3, 4].
The efficient allocation of resources from geographically dispersed, heterogeneous edge nodes (ENs) with limited capacity to various users with diverse preferences is a crucial concern that must be addressed. In this paper, we examine the operation of an EC platform, which serves as the edge network infrastructure provider and manages the allocation of edge resources to meet the demands of customers, also referred to as resource buyers. The buyers (e.g., service providers, AR/VR companies, vertical industries, and enterprises) can purchase resources from the platform to place their data and applications on various ENs to provide low-latency services to their users. While every buyer seeks powerful ENs that are located close to their users, the capacity of each EN is limited.
To effectively balance the demands of buyers with the constraints of the available resources at geographically dispersed edge nodes (ENs) and optimize the profit of the platform, we propose the use of dynamic pricing as a solution. In particular, we consider the setting where the platform offers multiple virtual machine (VM) instances, situated in diverse ENs, at varying prices to various buyers who arrive and make resource requests in real time. Our proposed platform differs from previous approaches in that it operates in a dynamic environment and does not necessitate prior knowledge of the user demand. Furthermore, it takes into account that each buyer may have their own private valuation of the edge resources, which remains undisclosed to the platform.
**Related Work:** The utilization of VMs in cloud/edge computing plays a pivotal role in fulfilling the required flexibility and scalability to cope with the ever-increasing demands of a mobile-centric environment. Amazon's Elastic Compute Cloud (EC2) [5] and Microsoft Azure are examples of platforms that have emerged, offering customers an array of VM instances that can be tailored to their specific preferences and usage, enabling customers to dynamically optimize their computing resources. Despite the enhanced granularity of resource provisioning, static pricing policies are still commonly adopted, which lack market flexibility and efficiency, and pose a threat to the platform's profitability and customer satisfaction.
Thus, creating an effective edge resources market involves addressing various challenges, with optimizing resource allocation and pricing mechanisms being the foremost priorities. The optimization of EC pricing models has gained substantial research attention in recent years. Reference [6] examines double auction-based schemes, while [7] and [2] explore market equilibrium approaches to ensure fair and efficient resource allocation to multiple budget-constrained services. Additionally, [8] presents a pricing framework based on the Stackelberg game theory, and [3] proposes a bilevel optimization model to solve the joint edge resource management and pricing problems. Nevertheless, these works typically assume a static environment and predetermined user demand.
In dynamic environments characterized by online user arrivals, several studies have explored online/dynamic pricing mechanisms in various contexts. For instance, [9] proposes an online mechanism for a crowdsensing system with uncertain task arrivals, while [10] employs a three-stage Stackelberg game approach to address time-varying scenarios with uncertainty and maximize long-term profit. Similarly, the dynamic pricing of cloud resources has been investigated, primarily following the introduction of Amazon EC2 spot instances [5]. Various online pricing mechanisms have been developed, including online VM auction optimization [11], online learning-based marketplace [12], online combinatorial auction [13], and
online auction based on the price function [14]. However, these mechanisms rely on prior knowledge of user demands.
Multi-armed bandit (MAB), an effective online learning and optimization framework with partial feedback, has recently been applied to pricing and resource allocation. Several bandit-based mechanisms have been proposed for general online pricing problems, such as the auction-based combinatorial MAB mechanism in [15] and the bandit-based mechanism for identical items with time-sensitive valuations introduced in [16]. For cloud/edge computing, [17] designs a bandit-based algorithm for cloud resource pricing that allows the purchase of one or multiple instances of a single product. Additionally, [18] employs a MAB model to assign each computational task to a single edge server. It is noteworthy that previous studies have solely focused on either distinct VM types in the same location or identical VM types in different geographic locations. Moreover, they have confined buyers to procuring either a single or several instances of the same product.
**Contributions:** Designing an efficient, _prior-independent_ online pricing mechanism poses significant challenges in balancing the exploration-exploitation trade-off. In response, we propose a new MAB framework for online edge resource pricing. Unlike previous bandit-based online pricing mechanisms in cloud/edge computing, our approach takes into account the interdependence between the computing power of the virtual machines (VMs) and their geographic locations. This is particularly important for delay-sensitive applications, as the buyers' valuations towards the edge resources are influenced by both the VM type and the location of the VM. Furthermore, our model allows buyers to purchase multiple products, unlike existing works that restrict buyers to one product.
We develop two _distribution-free_ algorithms, including the Kullback-Leibler Upper Confidence Bound (UCB) [19] and the min-max optimal strategy (MOSS) [20], for online edge resource pricing. These algorithms operate in real-time and eliminate the need for prior knowledge of the demand distribution and ensure truthfulness as the online posted price is independent of the newly arrived buyer's valuation. We evaluate the performance of our proposed mechanisms by comparing them with traditional bandit algorithms such as Epsilon-Greedy [21], basic UCB [22], and Thompson Sampling [23]. The performance is measured using the concept of regret, which is the difference between the expected reward of the best arm and the expected reward of the selected arms using our pricing strategies. Our goal is to minimize regret and optimize revenue and resource utilization.
The remaining paper is organized as follows. Section II presents the system model and problem formulation. Section III provides the solution approach. Finally, Section IV shows the simulation results, followed by conclusions in Section V.
## II System Model and Problem formulation
### _Edge Resource Allocation and Pricing Problem_
We consider an edge resource allocation problem for an EC system with a platform that manages a set of \(N\) resource-constrained heterogeneous ENs, denoted as \(\mathcal{N}=\{1,\ldots,N\}\), located in different areas, to provide computing resources in the form of virtual machines (VMs). In each EN, there is a set \(\mathcal{M}=\{1,\ldots,M\}\) of \(M\) types of VM instances available to serve users, each with different resource configurations in terms of vCPU, memory, and storage [5]. The platform offers a total of \(MN\) different products, each represented by a tuple \((i,j)\) with \(i\in\mathcal{M}\) and \(j\in\mathcal{N}\). Each tuple represents VM type \(i\) on EN \(j\). By considering both VM types and their physical locations, this approach is particularly useful for buyers with delay-sensitive applications.
We consider a set \(\mathcal{T}=\{1,\ldots,T\}\) of \(T\) buyers that arrive and request edge resources in an online manner. Each buyer \(t\) has unique computing tasks, which may require VMs in specific locations to meet latency requirements. This leads to varying valuations for different products \((i,j)\), which are captured by the valuation function \(v_{i,j}^{t}\). The _valuation function for each buyer is private and unknown to the platform_. When a buyer \(t\) arrives, the EC platform updates the price \(p_{i,j}^{t}\) for each product \((i,j)\), and the buyer determines their demand based on the prices set by the platform. While a buyer can purchase multiple products, they are permitted to buy at most one unit of each product. Each buyer aims to maximize her utility, while the platform's objective is to maximize the total revenue generated from selling resources to buyers.
### _Dynamic Edge Resource Pricing as a MAB Problem_
Our goal is to design online mechanisms that allow the platform to make online decisions with performance guarantees. Since no prior information about the valuation functions of buyers is available to the platform, we approach this problem by casting it as a MAB problem. Specifically, the platform selects a single price vector of the products to offer to a newly arrived buyer at each time \(t\), resembling the act of pulling a single arm from a set of available arms (prices). The outcome, which can either be a purchase or a refusal to buy, is then observed and accompanied by a reward.
In MAB problems, the decision maker is limited to a finite set of choices, known as the set of arms. However, in an online pricing problem, the action space for pricing may be very large or even infinite. To overcome this, we take advantage of the structured nature of the action space, where prices are simply numbers within a fixed interval, and discretize the action space by assuming the existence of a predefined set of discrete product prices at each EN, i.e., we have:
\[p_{i,j}^{t}\in\{p_{i,j}^{t,1},\ldots,p_{i,j}^{t,V}\},\ \forall i,j,t,\]
where \(v\in\{1,\ldots,V\}\) represents different price options \(p_{i,j}^{t,1}<p_{i,j}^{t,2}<\ldots<p_{i,j}^{t,V}\). This assumption is reasonable given that the price options can represent a variety of price levels (e.g., very low price, low price, medium price, high price, very high price). This approach is referred to as pre-adjusted discretization, a concept that has been explored extensively in the literature. In our formulation, we consider a fixed and known set of \(K\) arms, each representing a possible price vector, denoted by \(\mathcal{P}\) with cardinality \(|\mathcal{P}|=K\).
The learning algorithm is run by the monopolist EC platform, which interacts with a set \(\mathcal{T}\) of \(T\) potential buyers whose requests arrive one by one. Time proceeds in \(T\) rounds as buyers arrive, where \(T\) is a finite, known time horizon. In each round \(t\), buyer \(t\) arrives, the algorithm picks an arm \(\mathbf{p}^{t}\in\mathcal{P}\), i.e., a vector of prices \(\mathbf{p}^{t}=(p_{1,1}^{t},\ldots,p_{1,N}^{t},p_{2,1}^{t},\ldots,p_{M,N}^{t})\), and offers at most one unit of each VM \((i,j)\) at price \(p_{i,j}^{t}\) to
buyer \(t\). Recall that each buyer \(t\) has its own valuation \(v^{t}_{i,j}\) for each offered product \((i,j)\). The buyer then chooses to procure a subset of products based on these valuations and leaves. In particular, the buyer purchases product \((i,j)\), i.e., VM type \(i\) at EN \(j\), if \(v^{t}_{i,j}\geq p^{t}_{i,j}\). The valuation function is assumed to be drawn _from a fixed (but unknown) distribution_ over the possible valuation functions, called the demand distribution.
Once the buyer \(t\) decides on the set of products to buy at the offered prices, the platform receives the payment from the buyer, i.e., the reward \(r^{t}\) which is assumed to be bounded in \([0,1]\) (\(r^{t}\in[a,b]\) can be scaled to satisfy the assumed bound). The EC platform then allocates the requested resources to the buyer \(t\), and consumes the amount \(c^{t}_{i,j}\in\{0,1\}\) of product \((i,j)\). This implies:
\[\begin{cases}c^{t}_{i,j}=1&\text{if}\;\;v^{t}_{i,j}\geq p^{t}_{i,j},\\ c^{t}_{i,j}=0&\text{if}\;\;0\leq v^{t}_{i,j}<p^{t}_{i,j}.\end{cases} \tag{1}\]
Then, the utility of buyer \(t\) can be expressed as \(u_{t}=\sum_{i=1}^{M}\sum_{j=1}^{N}(v^{t}_{i,j}-p^{t}_{i,j})c^{t}_{i,j}\). For clarity, we denote the following consumption vector for the resources including \(MN\) products at round \(t\):
\[\mathbf{c}^{t}=(c^{t}_{1,1},\dots,c^{t}_{1,N},c^{t}_{2,1},\dots,c^{t}_{M,N}) \in\{0,1\}^{MN}.\]
We define the outcome vector \((r^{t};\mathbf{c}^{t})\in[0,1]\times\{0,1\}^{MN}\) for round \(t\), which includes the reward \(r^{t}\) and the resource consumption vector \(\mathbf{c}^{t}\). The values of \(r^{t}\) and \(\mathbf{c}^{t}\) are revealed to the platform only after buyer \(t\) makes its decisions at round \(t\).
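To make the buyer model concrete, the following minimal Python sketch (with purely hypothetical numbers for \(M=2\) VM types and \(N=3\) ENs) implements the purchase rule (1) and computes the consumption vector \(\mathbf{c}^{t}\), the platform reward \(r^{t}\), and the buyer utility \(u_{t}\); in the formulation above the reward is additionally rescaled to lie in \([0,1]\).

```python
import numpy as np

def buyer_response(prices, valuations):
    """Purchase rule (1): buyer t buys one unit of product (i, j) iff v_{i,j} >= p_{i,j}.

    `prices` and `valuations` are (M, N) arrays holding p^t_{i,j} and v^t_{i,j}.
    Returns the consumption matrix c^t, the platform reward r^t, and the buyer utility u_t.
    """
    consumption = (valuations >= prices).astype(int)
    reward = float((prices * consumption).sum())              # payment collected by the platform
    utility = float(((valuations - prices) * consumption).sum())
    return consumption, reward, utility

# Hypothetical example: M = 2 VM types, N = 3 edge nodes.
prices = np.array([[0.10, 0.20, 0.15],
                   [0.25, 0.05, 0.30]])
valuations = np.array([[0.20, 0.10, 0.20],    # private to the buyer, unknown to the platform
                       [0.20, 0.10, 0.40]])
c_t, r_t, u_t = buyer_response(prices, valuations)
# c_t = [[1, 0, 1], [0, 1, 1]], r_t = 0.60, u_t = 0.30
```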
The platform aims to determine a "_policy_" that maps past observations \(\big{(}(\mathbf{p}^{1},r^{1}),(\mathbf{p}^{2},r^{2}),\dots,(\mathbf{p}^{t-1},r^{t-1})\big{)}\) to its next decision \(\mathbf{p}^{t}\). The optimal decision is the sequence of price vectors \(\{\mathbf{p}^{t*}:t=1,2,\dots\}\) with maximal expected reward \(\mu^{*}\). The performance of a policy is evaluated using the _regret_, which measures the difference between the expected reward of the best arm and the expected reward of the selected arms. We denote by \(\textit{REG}_{t}\) the regret of the policy with respect to the optimal policy evaluated at time \(t\).
## III Solution Approaches
By leveraging the classical UCB algorithm [22], we develop two online posted-price mechanisms, namely Kullback-Leibler UCB (KL-UCB) and min-max optimal strategy (MOSS). KL-UCB is a model-based technique that focuses on upper-bounding the expected reward, while MOSS is geared towards minimizing the regret. Specifically, the KL-UCB algorithm calculates an upper confidence bound on the expected reward for each arm and selects the arm with the highest bound in each round. MOSS, on the other hand, determines the minimum regret for each arm by considering the worst-case scenario and selects the arm with the minimum regret in each round.
### _KL-UCB Algorithm_
We present the KL-UCB algorithm, a variant of the popular UCB algorithm [22], in the context of dynamic edge resource pricing. We analyze the regret of KL-UCB, which uses the Kullback-Leibler (KL) divergence as a measure of uncertainty and reaches the lower bound of Lai and Robbins [24] in the special case of Bernoulli rewards. For arbitrary bounded rewards, KL-UCB is the only method that satisfies a uniformly better regret bound than the basic UCB policy. First, we define the following empirical reward of price vector \(\mathbf{p}\) at time \(t\):
\[\hat{r}^{t}(\mathbf{p})=\frac{R_{t}}{n_{t}(\mathbf{p})}=\frac{\sum_{\tau=1}^{t }r^{\tau}(\mathbf{p})}{n_{t}(\mathbf{p})},\]
where \(n_{t}(\mathbf{p})\) denotes the number of times the price vector \(\mathbf{p}\) has been played up to time \(t\) and \(R_{t}=\sum_{\tau=1}^{t}r^{\tau}(\mathbf{p})\) denotes the accumulative sum of reward up to time \(t\).
Departing from UCB in [22], KL-UCB utilizes a distinct form of UCB estimates, which results in different regret bounds. The algorithm selects the available price vector \(\mathbf{p}\) with the highest \(\textbf{UCB}^{\text{KL}}_{\mathbf{p},t}\), which is defined as:
\[\textbf{UCB}^{\text{KL}}_{\mathbf{p},t}=\max\left\{\mathbf{q}\in[0,1]:d\left( \hat{r}^{t}(\mathbf{p}),\mathbf{q}\right)n_{t}(\mathbf{p})\leq f(t)\right\},\]
where the level of confidence is set by the exploration function \(f(t)=\log(t)+\gamma\log(\log(t))\). Here, \(d(\cdot,\cdot)\) is the KL divergence between two probability distributions, the estimated reward distribution and the prior distribution. If the inputs are vectors, it is understood as being computed component-wise. For example, KL divergence between Bernoulli distributions of parameters \(u\) and \(v\) is represented by \(d(u,v)=u\log\frac{u}{v}+(1-u)\log\frac{1-u}{1-v}\). It is important to note that \(d(u,v)\) is strictly convex and increasing on the interval \([u,1]\) for any \(u\in[0,1]\). The hyper-parameter \(\gamma\) is chosen to be equal to \(0\) in practice for optimal results. Algorithm 1 provides the pseudocode for the KL-UCB algorithm.
```
1: Try each arm once
2: for \(t=K+1\) to \(T\) (i.e., until resources are exhausted) do
3: pick a price vector \(\mathbf{p}^{t}=\operatorname*{argmax}_{\mathbf{p}\in\mathcal{P}}\textbf{UCB}^{ \text{KL}}_{\mathbf{p},t}\)
4: observe the consumption \(\mathbf{c}^{t}\) and reward \(r^{t}\).
```
**Algorithm 1** Kullback-Leibler UCB (KL-UCB)
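A minimal numerical sketch of the \(\textbf{UCB}^{\text{KL}}_{\mathbf{p},t}\) index used in Algorithm 1, assuming Bernoulli rewards and inverting the increasing map \(q\mapsto d(\hat{r}^{t}(\mathbf{p}),q)\) on \([\hat{r}^{t}(\mathbf{p}),1]\) by bisection; the function names and the fixed number of bisection steps are illustrative choices, not the authors' implementation.

```python
import math

def kl_bernoulli(u, q, eps=1e-12):
    """d(u, q) between Bernoulli distributions, as used by KL-UCB."""
    u = min(max(u, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return u * math.log(u / q) + (1 - u) * math.log((1 - u) / (1 - q))

def klucb_index(mean_reward, n_pulls, t, gamma=0.0, iters=40):
    """Largest q in [mean_reward, 1] with n_t(p) * d(mean_reward, q) <= f(t),
    where f(t) = log(t) + gamma*log(log(t)); found by bisection since
    d(u, .) is increasing on [u, 1]."""
    budget = math.log(t) + gamma * math.log(max(math.log(t), 1e-12))
    lo, hi = mean_reward, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if n_pulls * kl_bernoulli(mean_reward, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```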
Let \(d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})\) be the KL divergence between estimated reward \(\mu_{\mathbf{p}}\) from choosing arm \(\mathbf{p}\) and the maximal expected reward \(\mu_{\mathbf{p}^{*}}\) from choosing the optimal arm \(\mathbf{p}^{*}\). For any integer \(t>0\), we have the following bounds for the number of draws up to time \(t\), \(n_{t}(\mathbf{p})\), of any sub-optimal arm \(\mathbf{p}\) (see [19]):
**Theorem 1**.: _Assume that the distribution of observed reward for the arm \(\mathbf{p}\) belongs to a family \(\{p_{\theta}:\theta\in\Theta_{\mathbf{p}}\}\) of distributions. For any uniformly good policy, \(n_{t}(\mathbf{p})\) is lower bounded by_
\[n_{t}(\mathbf{p})\geq\left(\frac{1}{\inf_{\theta\in\Theta_{\mathbf{p}}:\,\mathbb{E}(p_{\theta})>\mu_{\mathbf{p}^{*}}}d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})}+o(1)\right)\log(t), \tag{2}\]
_where \(\mathbb{E}(p_{\theta})\) is the expectation under \(p_{\theta}\); hence, the regret is lower bounded as follows_
\[\liminf_{t\to\infty}\frac{\mathbb{E}[\textit{REG}_{t}]}{\log(t)}\geq\sum_{\mathbf{p}:\mu_{\mathbf{p}}<\mu_{\mathbf{p}^{*}}}\frac{\mu_{\mathbf{p}^{*}}-\mu_{\mathbf{p}}}{\inf_{\theta\in\Theta_{\mathbf{p}}:\,\mathbb{E}(p_{\theta})>\mu_{\mathbf{p}^{*}}}d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})}. \tag{3}\]
**Remark 1** (Corollary 3 of [19]).: _The KL-UCB pricing scheme is asymptotically optimal if \(r^{t}\) follows a Bernoulli distribution [24]:_
\[n_{t}(\mathbf{p})\geq\left(\frac{1}{d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})}+o(1 )\right)\log(t),\]
_with a probability tending to \(1\)._
**Theorem 2** (Theorem 2 of [19]).: _For the KL-UCB algorithm, let \(\gamma=3\) in Algorithm 1, \(n_{t}(\mathbf{p})\) is upper-bounded by_
\[\mathbb{E}[n_{t}(\mathbf{p})]\!\leq\!\frac{\log(t)}{d(\mu_{\mathbf{p}},\mu_{ \mathbf{p}^{*}})}(1+\epsilon)\!+\!C_{1}\log(\log(t))\!+\!\frac{C_{2}(\epsilon) }{t^{b(\epsilon)}} \tag{4}\]
_where \(C_{1}\) is a positive constant and \(C_{2}(\epsilon)\) and \(b(\epsilon)\) denote positive function of \(\epsilon>0\). Hence,_
\[\limsup_{t\to\infty}\frac{\mathbb{E}[\textit{REG}_{t}]}{\log(t)}\leq\sum_{\mathbf{p}:\mu_{\mathbf{p}}<\mu_{\mathbf{p}^{*}}}\frac{\mu_{\mathbf{p}^{*}}-\mu_{\mathbf{p}}}{d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})}. \tag{5}\]
Compared to the UCB algorithm [22], KL-UCB has a strictly better theoretical guarantee since, by Pinsker's inequality, the divergence satisfies \(d(\mu_{\mathbf{p}},\mu_{\mathbf{p}^{*}})\geq 2(\mu_{\mathbf{p}}-\mu_{\mathbf{p}^{*}})^{2}\). This superiority has also been observed in simulations. KL-UCB can easily be adapted to handle other reward distributions by choosing a proper divergence function \(d(\cdot,\cdot)\); for exponential rewards, the divergence should be \(d(u,v)=\frac{v}{u}-1-\log(\frac{u}{v})\).
### _Min-max optimal strategy (MOSS)_
In this section, we present the MOSS algorithm, which has been proven to achieve the optimal distribution-free regret of order \(\sqrt{TK}\) for stochastic bandits [20]. MOSS utilizes a distinct UCB based on the empirical mean reward, defined as follows:
\[\text{UCB}^{\text{MOSS}}_{\mathbf{p},t}=\hat{r}^{t}(\mathbf{p})+\sqrt{\frac{ \max\left(\log(\frac{T}{Kn_{t}(\mathbf{p})}),0\right)}{n_{t}(\mathbf{p})}}, \tag{6}\]
where \(\hat{r}^{t}(\mathbf{p})\) refers to the empirical reward of arm \(\mathbf{p}\) at time \(t\) and \(n_{t}(\mathbf{p})\) represents the number of times arm \(\mathbf{p}\) has been played. The platform chooses the price vector that maximizes \(\text{UCB}^{\text{MOSS}}_{\mathbf{p},t}\) at each time step. Algorithm 2 provides the pseudocode for the MOSS algorithm.
```
1: Try each arm once
2: for \(t=K+1\) to \(T\) (i.e., until resources are exhausted) do
3: pick a price vector \(\mathbf{p}^{t}=\operatorname*{argmax}_{\mathbf{p}\in\mathcal{P}}\text{UCB}^{\text{MOSS}}_{\mathbf{p},t}\)
4: observe the consumption \(\mathbf{c}^{t}\) and reward \(r^{t}\).
```
**Algorithm 2** Min-max optimal strategy in the stochastic case (MOSS)
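For comparison, a sketch of the \(\text{UCB}^{\text{MOSS}}_{\mathbf{p},t}\) index from Eq. (6) and the corresponding arm selection in Algorithm 2; again the function names are illustrative.

```python
import math

def moss_index(mean_reward, n_pulls, horizon, n_arms):
    """UCB^MOSS from Eq. (6): empirical mean plus a bonus that vanishes once
    an arm has been pulled more than T/K times."""
    bonus = math.sqrt(max(math.log(horizon / (n_arms * n_pulls)), 0.0) / n_pulls)
    return mean_reward + bonus

def select_arm(means, pulls, horizon):
    """Pick the price vector (arm index) maximizing the MOSS index."""
    K = len(means)
    return max(range(K), key=lambda k: moss_index(means[k], pulls[k], horizon, K))
```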
The following result states the bound for the regret of the distribution-free MOSS policy.
**Theorem 3** (Theorem 5 of [20]).: _MOSS satisfies_
\[\sup\text{REG}_{T}\leq 49\sqrt{TK}, \tag{7}\]
_where the supremum is taken over all \(K\)-tuple of probability distributions on \([0,1]\)._
## IV Simulation
In this section, we evaluate the effectiveness of the proposed mechanisms by comparing them with traditional bandit algorithms, namely the Epsilon-Greedy [21], UCB [22], and Thompson Sampling (TS) algorithms. We present these algorithms in our technical report [25]. Note that the proposed mechanisms are distribution-free and do not require prior knowledge of the buyers' valuations. Thus, to assess the performance of the other algorithms, we simulate buyer valuations as they make decisions based on their preferences for heterogeneous edge resources. These preferences may be influenced by factors such as network delay and computational requirements. The platform updates the resource prices by observing and learning from the historical data and offers a take-it-or-leave-it price to each new buyer that arrives in an online fashion. Then, the newly arrived buyer chooses the product that offers the highest value for her money to maximize her utility. In our simulation, we consider \(N=3\) ENs and \(M=3\) types of VM. The platform interacts with \(T=100000\) buyers and has \(K=20\) pricing options [5] to choose from in each interaction. We model the environment in which each buyer's valuation (\(v_{t}\)) is independently and identically generated from a truncated distribution. We consider the following distributions:
1. **Uniform**: \(U[0,1]\).
2. **Gaussian:** with mean \(\mu=0.2\) and variance \(\sigma=0.2\).
3. **Exponential:** with mean \(\mu=\frac{1}{\lambda}=2\).
We run \(1000\) episodes for each distribution setting and compute the cumulative reward and cumulative regret. The results are depicted in Figure 1, Figure 2, and Figure 3.
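The sketch below mirrors this setup: valuations are drawn from one of the three distributions above and truncated to \([0,1]\), and a generic `policy` object (standing in for KL-UCB, MOSS, or the baselines) is queried each round. The interpretation of the Gaussian parameter as a standard deviation \(\sigma=0.2\) and the episode skeleton are assumptions made for illustration, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_valuations(dist, size):
    """Buyer valuations truncated to [0, 1] for the three settings above."""
    if dist == "uniform":
        v = rng.uniform(0.0, 1.0, size)
    elif dist == "gaussian":
        v = rng.normal(loc=0.2, scale=0.2, size=size)
    elif dist == "exponential":
        v = rng.exponential(scale=2.0, size=size)
    else:
        raise ValueError(dist)
    return np.clip(v, 0.0, 1.0)

def run_episode(policy, arms, T=100_000, dist="uniform", M=3, N=3):
    """One episode: T buyers arrive, the platform posts the price vector chosen
    by the bandit policy, observes the reward, and updates its statistics."""
    cum_reward = 0.0
    for t in range(1, T + 1):
        k = policy.select(t)                      # hypothetical policy interface
        prices = np.asarray(arms[k]).reshape(M, N)
        v = sample_valuations(dist, (M, N))
        c = (v >= prices).astype(int)
        r = float((prices * c).sum()) / (M * N)
        policy.update(k, r)                       # hypothetical policy interface
        cum_reward += r
    return cum_reward
```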
The performance evaluation reveals that KL-UCB and MOSS display similar levels of performance in terms of regret and cumulative reward. The arm selections of these two algorithms are also comparable, as illustrated in Figure 3(b). Notably, our proposed algorithms outperform traditional algorithms, such as EG and basic UCB, in both regret and cumulative reward. This outcome validates the theoretical results previously established. While KL-UCB exhibits better regret performance compared to MOSS, as evidenced in Figures 1(b) and 2(b), MOSS proves to be more computationally efficient, as shown in Table I. This aligns with MOSS's design, which is intended to handle MAB problems with numerous arms and high dimensions. In contrast, KL-UCB incurs a higher computational cost due to the need to update the UCB for each arm at every time step. This process involves solving an optimization problem and computing the KL divergence between the estimated and prior distributions, making it more complex than MOSS's simpler sampling and updating methods.
Based on our simulation results, we have found that the TS algorithm performs optimally when dealing with uniform and Gaussian distributions. This is due to the simplicity and well-established mathematical representation of these distributions. Therefore, the TS algorithm is able to effectively model the uncertainty of the reward distribution and make informed decisions regarding the exploration-exploitation trade-off. However, when the buyer's valuation is drawn from an exponential distribution, as shown in Figure 3(a), the performance of the TS algorithm is the poorest compared to the other algorithms tested. This can be attributed to the heavier tails of exponential distributions, which tend to concentrate the expected rewards in a small number of arms, making it more difficult for the TS algorithm to determine the best arm to play, as shown in Figure 3(b). In such scenarios, alternative algorithms such as EG, UCB, MOSS, or KL-UCB have been found to perform better as they adopt different strategies to handle the exploration-exploitation trade-off.
## V Conclusion and future work
In this paper, we presented two novel real-time online pricing mechanisms for allocating heterogeneous edge resources without requiring prior knowledge of demand distribution. To capture the preferences of delay-sensitive buyers, our proposed MAB model takes into consideration both the VM types and their geographic locations, while allowing multi-product purchases. The proposed algorithms with performance guarantees demonstrate efficient resource allocation and maximization of profit for the platform. The numerical results indicate the superiority of our proposed online mechanisms over the benchmark schemes derived from the traditional bandit literature.
|
2307.12120 | Quantum Money from Abelian Group Actions | We give a construction of public key quantum money, and even a strengthened
version called quantum lightning, from abelian group actions, which can in turn
be constructed from suitable isogenies over elliptic curves. We prove security
in the generic group model for group actions under a plausible computational
assumption, and develop a general toolkit for proving quantum security in this
model. Along the way, we explore knowledge assumptions and algebraic group
actions in the quantum setting, finding significant limitations of these
assumptions/models compared to generic group actions. | Mark Zhandry | 2023-07-22T16:39:48Z | http://arxiv.org/abs/2307.12120v4 | # Quantum Money from Abelian Group Actions
###### Abstract
We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.
## 1 Introduction
Quantum money, first envisioned by Wiesner [20], is a system of money where banknotes are quantum states. By the no-cloning theorem, such banknotes cannot be copied, leading to un-counterfeitable currency. A critical feature of quantum money, identified by [1], is _public verification_, allowing anyone to verify banknotes while only the mint can create new ones. Such public key quantum money is an important central object in the study of quantum protocols, but unfortunately convincing constructions have remained elusive. See Section 1.5 for a more thorough discussion of prior work in the area.
This Work.We construct public key quantum money from abelian group actions, which can be instantiated by suitable isogenies over ordinary elliptic curves. Group actions, and the isogenies they abstract, are one of the leading contenders for post-quantum secure cryptosystems. Our construction could plausibly even be quantum lightning, a strengthening of quantum money with additional applications. Our construction is arguably the first time group actions have been used to solve a classically-impossible cryptographic task that could not already be solved using other standard tools like LWE. Our construction is sketched in Section 1.1 below, and given in detail in Section 3.
While our main construction can be instantiated on a clean abelian group action -- often referred to as an "effective group action" (EGA) -- many isogeny-based group actions diverge from this convenient abstraction. We therefore provide an alternative candidate scheme which can be instantiated on so-called "restricted effective group actions" (REGAs); see Section 6 for details. We prove the quantum lightning security of our protocols in the generic group action model, under a new but natural strengthening of the discrete log assumption on group actions. Note that generic group actions cannot be used to give unconditional quantum hardness results, so some additional computational assumption is necessary. In order to prove our result, we develop a new toolkit for
quantum generic group action proofs; see Section 4. We believe ours is the first proof of security in the generic group action model.
Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions. Specifically, unlike the classical setting where knowledge assumptions typically hold unconditionally against generic attacks, we explain why such statements likely do not hold quantumly. In the specific case of group actions, we indeed show an efficient generic attack on an analog of the "knowledge of exponent" assumption. This potentially casts doubt on quantum knowledge assumptions in general. We do give a more complex definition that avoids our attack, but it is unclear if the assumption is sound and more analysis is needed. For completeness, we give an alternative proof of security for our construction under this new knowledge assumption.
We also discuss an algebraic model for group actions, which can be seen as a variant of the knowledge of exponent assumption. Unlike the classical setting where algebraic models live "between" the fully generic and standard models, we find that the algebraic group action model is likely incomparable to the generic group action model, and security proofs in the model are potentially problematic. As these issues do not appear for generic group actions, we therefore propose that generic group actions are the preferred idealized model for analyzing cryptosystems. See Section 5 for details.
We conclude in Section 7 with a discussion of possible generalizations and relation to approaches for building quantum money from LWE.
### Our Construction
Abelian Group Actions.We will use additive group notation for abelian groups. An abelian group action consists of an abelian group \(\mathbb{G}\) and a set \(\mathcal{X}\), such that \(\mathbb{G}\) "acts" on \(\mathcal{X}\) through the binary relation \(*:\mathbb{G}\times\mathcal{X}\to\mathcal{X}\) with the property that \(g*(h*x)=(g+h)*x\) for all \(g,h\in\mathbb{G},x\in\mathcal{X}\). We will also assume a _regular_ group action, which means that for every \(x\in\mathcal{X}\), the map \(g\mapsto g*x\) is a bijection.
The main group actions used in cryptography are those arising from isogenies over elliptic curves. For example, see [12, 13, 14, 15]. Group action cryptosystems rely at a minimum on the assumed hardness of discrete logarithms: given \(x,y=g*x\in\mathcal{X}\), finding \(g\). For isogeny-based actions, this corresponds to the hard problem of computing isogenies between elliptic curves. Other hard problems are possible, such as analogs of computational/decisional Diffie-Hellman, and more.
The QFT.Our quantum money scheme will utilize the quantum Fourier transform (QFT) over general abelian groups. This is a quantum procedure that maps
\[|g\rangle\mapsto\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{h\in\mathbb{G}}\chi(g,h)|h \rangle\enspace.\]
Here, \(\chi\) is some potentially complex phase term. In the case of \(\mathbb{G}\) being the additive group \(\mathbb{Z}_{N}\), \(\chi(g,h)\) is defined as \(e^{i2\pi gh/N}\), with a slightly more complicated definition for non-cyclic groups1. The main property we need from \(\chi\) (besides making the QFT unitary) is that it is _bilinear_, in the sense that \(\chi(g,h_{1}+h_{2})=\chi(g,h_{1})\cdot\chi(g,h_{2})\). It is also symmetric: \(\chi(g,h)=\chi(h,g)\).
Footnote 1: Remember that the group operation is \(+\), so \(gh\) in the exponent is not the group operation, but instead multiplication in the ring \(\mathbb{Z}_{N}\).
Our Quantum Money Scheme.Our quantum money scheme is as follows; see Section 3 for additional details.
* Gen: initialize a register in the state \(\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}|g\rangle\), which can be computed by applying the QFT to \(|0\rangle\). Let \(x\in\mathcal{X}\) be arbitrary. Then by computing the group action in superposition, compute \(\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}|g\rangle|g*x\rangle\). Next, apply the QFT over \(\mathbb{G}\) to the first register. The result is: \[\frac{1}{|\mathbb{G}|}\sum_{g,h\in\mathbb{G}}\chi(g,h)|h\rangle|g*x\rangle=\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{h}|h\rangle|\mathbb{G}^{h}*x\rangle\] Here, \(|\mathbb{G}^{h}*x\rangle\) is the state \(\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}\chi(g,h)|g*x\rangle\). Note that \(|\mathbb{G}^{h}*x\rangle\) is, up to an overall phase, independent of \(x\). Now measure \(h\), in which case the second register collapses to \(|\mathbb{G}^{h}*x\rangle\). Output \(h\) as the serial number, and \(|\mathbb{G}^{h}*x\rangle\) as the money state.
* To verify a banknote \(\$\), choose a random \(u\in\mathbb{G}\), and initialize a new qubit with \((|0\rangle+|1\rangle)/\sqrt{2}\). Then apply the controlled group action \(|b,y\rangle\mapsto\begin{cases}|0,y\rangle&\text{ if }b=0\\ |1,u*y\rangle&\text{ if }b=1\end{cases}\). If \(\$\) is the honest banknote state, then the state of the system becomes: \[\frac{1}{\sqrt{2}}\left(|0\rangle+\chi(u,h)^{-1}|1\rangle\right)|\mathbb{G}^{h}*x\rangle\] We can then measure the first qubit in the basis containing \((|0\rangle+\chi(u,h)^{-1}|1\rangle)/\sqrt{2}\), which will accept with probability \(1\) for honest banknote states. We can repeat this process \(\lambda\) times, and accept only if all trials accept. It is possible to show that if all \(\lambda\) trials accept, the resulting state is \(2^{-\Theta(\lambda)}\)-close to the honest banknote state.
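A toy numerical sketch of the mint and verify steps above, instantiated with the trivial action of \(\mathbb{Z}_{N}\) on itself (\(g*x=g+x\bmod N\), \(x=0\)) rather than an isogeny-based action: the banknote is then simply the vector of amplitudes \(\chi(g,h)/\sqrt{N}\), and a single verification trial is the phase test just described. All names and the choice \(N=16\) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                    # toy group G = Z_N acting on X = Z_N by addition, x = 0
chi = lambda g, h: np.exp(2j * np.pi * (g * h) / N)

# Mint: after QFT-ing the group register of sum_g |g>|g*x> and measuring, the
# serial number is some h and the money state has amplitudes chi(g, h)/sqrt(N).
h = int(rng.integers(N))
psi = np.array([chi(g, h) for g in range(N)]) / np.sqrt(N)

def verify_trial(state, serial, u):
    """One verification trial: controlled action by u, then measure the control
    qubit in the basis containing |0> + chi(u, serial)^{-1}|1>. Returns the
    acceptance probability."""
    shifted = np.roll(state, u)                        # amplitude at g*x moves to (g+u)*x
    accepted = (state + chi(u, serial) * shifted) / 2  # projection onto the accepting outcome
    return float(np.vdot(accepted, accepted).real)

assert all(np.isclose(verify_trial(psi, h, u), 1.0) for u in range(N))   # honest notes always pass

# A state with the wrong serial number (h+1) passes a single trial with
# probability 1/2 on average over u, so lambda independent trials reject it w.h.p.
bad = np.array([chi(g, h + 1) for g in range(N)]) / np.sqrt(N)
print(np.mean([verify_trial(bad, h, u) for u in range(N)]))              # = 0.5
```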
An instantiation using REGAs.For some isogeny-based group actions such as CSIDH [1], the operation \(*\) is only efficiently computable for a very small set \(S\subseteq\mathbb{G}\) of group elements. Such group actions are called "restricted effective group actions" (REGAs) [1]. Above, however, we see that we need to compute the group action on all possible elements in \(\mathbb{G}\), both for minting and for verification. We therefore give a variant of the construction above which only uses the ability to compute \(*\) for elements in \(S\). We show that we are still able to sample \(|\mathbb{G}^{h}*x\rangle\), but now the serial number has the form \(\mathbf{A}^{T}h+\mathbf{e}\bmod N\) for a known matrix \(\mathbf{A}\) and a "small" \(e\in\mathbb{Z}^{n}\)2. Under plausible assumptions, the serial number actually hides \(h\)3. We nevertheless show that we can use such a noisy serial number for verification. For details, see Section 6. The security of our alternate scheme is essentially equivalent to the main scheme.
Footnote 2: Here, we are interpreting \(h\) as a vector in \(\mathbb{Z}_{N}^{n}\) for some \(n,N\), which is possible since \(\mathbb{G}\) is abelian.
### The security of our scheme
We do not know how to base the security of our schemes on any standard assumptions on isogenies. However, we are able to prove the security of our scheme in a generic group action model (GGAM), an analog of the generic group model [13, 14] adapted to group actions. Generic models for group actions have been considered previously [11, 1, 12, 13, 14]. However, to the best of our knowledge, ours is the first time the model has been used to prove security against quantum attacks.
The challenge with the quantum GGAM is that the query complexity of computing discrete logarithms is actually polynomial [1]. This means we cannot rely on query complexity alone to justify hardness, and must additionally make computational assumptions. This is in contrast to the classical setting, where the generic group (action) model allows for unconditional proofs of security by analyzing query complexity alone. In fact, most if not all generic group model proofs from the classical setting are unconditional query complexity proofs. This means that proofs in the quantum GGAM will look very different than classical proofs in the GGM/GGAM; in particular, proofs will require a reduction from the underlying hard problem. At the same time, in order to take advantage of the generic oracle setting, it would seem that quantum query complexity arguments are still needed. But a priori, it may not be obvious how to leverage query complexity in any useful way, given the preceding discussion.
Our Framework.In Section 4, we develop a new framework to help in the task of proving quantum hardness results relative to generic group actions. To illustrate our ideas, we first start with the following task. We want to show that discrete logarithms remain hard, even if the adversary is given quantum oracle access to the function that maps \(g*x\) to \((-g)*x\) (for some fixed starting set element \(x\)). This is an important setting in isogeny-based group actions, as these negation queries correspond to computing twists of elliptic curves. We want to prove generic hardness of this problem, assuming only plausible computational assumptions on a group action where such negation queries are _not_ permitted.
Suppose toward contradiction that there was a generic adversary which could utilize negation queries to solve discrete logarithms. Let \((*,\mathbb{G},\mathcal{X})\) be a plain group action where negation queries are not allowed. We will define a new group action \((\star,\mathbb{G},\mathcal{X}^{\prime})\) as follows. First sample a random injection \(\Pi:\mathcal{X}^{2}\to\{0,1\}^{m}\) whose inputs are _pairs_ of set elements. Then define \(\mathcal{X}^{\prime}\) as the image under \(\Pi\) of pairs of the form \((g*x,(-g)*x)\). \(\star\) acts in the natural way: \(g\star\Pi(y,z)=\Pi(g*y,(-g)*z)\).
Our reduction will sample a \(\Pi\)4 and run the generic adversary on the new group action, using its knowledge of \(\Pi\) and its inverse to implement the action \(\star\). Notice now that our reduction also has the ability to compute negations: given \(\Pi(y,z)\) where \(y=g*x\) and \(z=(-g)*x\), the negation of \(\Pi(y,z)\) is exactly the element \(\Pi(z,y)\) obtained by swapping \(y\) and \(z\). Thus, our reduction is able to simulate the negation queries, even though the underlying group action does not support efficient negations. This is our main idea, though there are a couple lingering issues to sort out:
Footnote 4: A random injection is exponentially large and cannot be sampled efficiently. Instead, the reduction will actually efficiently simulate a random injection \(\Pi\) using known techniques. For the purposes of our discussion here, we can ignore this issue.
* The reduction cannot perfectly simulate \((\star,\mathbb{G},\mathcal{X}^{\prime})\). The issue is that there are elements \(\Pi(y,z)\) where \(y,z\) do not have the form \(y=g*x,z=(-g)*x\) for some \(g\). In the group action \((\star,\mathbb{G},\mathcal{X}^{\prime})\), these elements will be identified as invalid set elements. On the other hand,
while our reduction can carry out the correct computation on \(y,z\) of the correct form, it will be unable to distinguish such \(y,z\) from ones of the incorrect form, and will act on these elements even though they are incorrect. As such, there will be elements that are not in \(\mathcal{X}^{\prime}\) that the reduction will nevertheless falsely identify as valid set elements. We resolve this problem by choosing the images of \(\Pi\) to be somewhat sparse, by setting the output length \(m\) sufficiently large. Our reduction only provides the adversary elements corresponding to valid \(y,z\), and we can show, roughly, that the adversary has a negligible chance of computing elements in the image of \(\Pi\) that correspond to invalid \(y,z\). This follows from standard query complexity arguments. Thus, we are able to simulate with negligible error the correct group action \((\star,\mathbb{G},\mathcal{X}^{\prime})\).
* We have not yet specified what problem the reduction actually solves. The problem we would like to solve is the plain discrete logarithm on \((*,\mathbb{G},\mathcal{X})\), where the reduction is given \(g*x\), and must compute \(g\). However, it is unclear what challenge the reduction should give to the adversary. The natural approach is to try to give the adversary \(\Pi(g*x,(-g)*x)\), which is just the discrete log instance relative to \((\star,\mathbb{G},\mathcal{X}^{\prime})\) with the same solution \(g\). However, this requires knowing \((-g)*x\), which is presumably hard to compute given just \(g*x\) (remember that negation queries are not allowed on \((*,\mathbb{G},\mathcal{X})\)). Our solution is to simply use a slight strengthening of discrete logarithms, where the adversary is given \((g*x,(-g)*x)\) and must compute \(g\). Under the assumed hardness of this strengthened discrete log problem (again, in ordinary group actions where negations are presumed hard), we can complete the reduction and prove the generic hardness of discrete logarithms in the presence of negation queries.
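The following toy, purely classical, sketch illustrates the simulation trick: the reduction keeps a lazily sampled injection \(\Pi\) on pairs \((g*x,(-g)*x)\) and answers negation queries by swapping the two components. The modular-addition action, the label length, and the function names are assumptions made only for this illustration; superposition access and the sparseness analysis are ignored.

```python
import secrets

N = 101                                   # toy underlying action: g * x = g + x mod N, with x = 0
pi_table, pi_inverse = {}, {}

def Pi(pair):
    """Lazily sampled random injection on pairs of set elements."""
    if pair not in pi_table:
        label = secrets.token_hex(16)     # fresh sparse label in {0,1}^m
        pi_table[pair], pi_inverse[label] = label, pair
    return pi_table[pair]

def new_element(g):                        # the element obtained by acting with g on Pi(x, -x)
    return Pi((g % N, (-g) % N))

def act(g, label):                         # the action "star" on X': Pi(y, z) -> Pi(g*y, (-g)*z)
    y, z = pi_inverse[label]
    return Pi(((g + y) % N, (-g + z) % N))

def negate(label):                         # simulated negation oracle: just swap the components
    y, z = pi_inverse[label]
    return Pi((z, y))

lbl = new_element(7)
assert negate(lbl) == new_element(-7)      # negation of a set element answered by a swap alone
assert act(3, lbl) == new_element(10)
```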
The security of our money scheme.We now turn to using our framework to prove the security of our quantum money scheme in the GGAM. Inspired by our negation example above, we will simulate a generic group action \((\star,\mathbb{G},\mathcal{X}^{\prime})\) using an injection \(\Pi\) applied to a vector of set elements. Our goal will be to use a quantum lightning adversary relative to \((\star,\mathbb{G},\mathcal{X}^{\prime})\) -- in particular, a pair of identical banknotes with the same serial number -- to break some distinguishing problem relative to \((*,\mathbb{G},\mathcal{X})\). Concretely, our starting assumption gives the adversary \(y=u*x\) for a random \(u\), and then allows the adversary a single quantum query to \(z\mapsto v*x\) for an unknown \(v\), where either \(v\) is random or \(v=2u\). The adversary then has to tell whether \(v=2u\) or not. It is straightforward to prove this assumption is true in the classical GGAM. In fact, it is a quantum analog of the classical group-based problem of distinguishing \(g,g^{a},g^{b}\) from \(g,g^{a},g^{a^{2}}\), a widely used Diffie-Hellman-like assumption.
Our idea is to have \(\mathcal{X}^{\prime}\) be elements of the form \(\Pi(g*x,g*y)\) where \(y=u*x\) is the challenge given by the assumption. Let \(X=\Pi(x,y)\in\mathcal{X}^{\prime}\). Now suppose we are given two copies of the banknote \(|\mathbb{G}^{h}\star X\rangle\) relative to \((\star,\mathbb{G},\mathcal{X}^{\prime})\) for some serial number \(h\). We then observe that, in the case where \(v=2u\), the following process preserves the banknote (up to phase): map \(\Pi(z_{1},z_{2})\) to
\(\Pi(z_{2},v*z_{1})\), where we compute \(v*z_{1}\) from \(z_{1}\) using the challenge oracle. Indeed, if \(v=2u\), then
\[|\mathbb{G}^{h}\star X\rangle =\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}\chi(g,h)|g\star\Pi(x,y)\rangle=\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}\chi(g,h)|\Pi(g*x,g*y)\rangle\] \[\mapsto\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}\chi(g,h)|\Pi(g*y,(g+2u)*x)\rangle=\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g\in\mathbb{G}}\chi(g,h)|\Pi((g+u)*x,(g+2u)*x)\rangle\] \[=\chi(-u,h)\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{g^{\prime}\in\mathbb{G}}\chi(g^{\prime},h)|\Pi(g^{\prime}*x,g^{\prime}*y)\rangle=\chi(-u,h)|\mathbb{G}^{h}\star X\rangle\]
Above, we used the substitution \(g^{\prime}=g+u\).
On the other hand, if \(v\neq 2u\), then the transformation will produce a state whose support is not even on \(\mathcal{X}^{\prime}\). In particular, the transformed state would be orthogonal to the original state. So our reduction will apply the above transformation to one copy of \(|\mathbb{G}^{h}*X\rangle\), leaving the other as is. Then it will perform the SWAP test on the two states. If \(v=2u\), the states will be identical and the SWAP test will accept. If \(v\neq 2u\), the states will be orthogonal, and the swap test will accept only with probability \(1/2\). Thus, we achieve a distinguishing advantage between the two cases, contradicting the assumption.
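For reference, the SWAP test accepts two pure states with probability \(\frac{1}{2}+\frac{1}{2}|\langle\psi|\phi\rangle|^{2}\), which is exactly the \(1\) versus \(1/2\) gap the reduction exploits; a short numerical check (with arbitrary illustrative states):

```python
import numpy as np

def swap_test_accept_prob(psi, phi):
    """Acceptance probability of the SWAP test on pure states psi, phi."""
    return 0.5 + 0.5 * abs(np.vdot(psi, phi)) ** 2

d = 8
e0, e1 = np.eye(d)[0], np.eye(d)[1]
print(swap_test_accept_prob(e0, e0))   # identical states:  1.0
print(swap_test_accept_prob(e0, e1))   # orthogonal states: 0.5
```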
We believe our proof gives convincing evidence that our scheme should be secure on a suitable group action, perhaps even those based on isogenies over elliptic curves. However, our underlying assumption is new, and needs further cryptanalysis. One limitation of our assumption is that it is interactive, requiring a (quantum) oracle query to the challenger. One may hope instead to use a non-interactive assumption. We do not know how to make non-interactive assumptions work, in general. In particular, if we do not have an oracle that can transform the input for us, it seems like we are limited to strategies that only permute the inputs to \(\Pi\). But since the scheme has to be efficient, the inputs to \(\Pi\) can only consist of polynomial-length vectors of set elements. Any permutation on a polynomial-length set must have smooth order. On the other hand, the only permutations on \(\mathcal{X}^{\prime}\) which preserve \(|\mathbb{G}^{h}\star X\rangle\) seem to have order that divides \(|\mathbb{G}|\). Thus, if, say, the order of \(\mathbb{G}\) were a large prime, it does not seem that permuting the inputs to \(\Pi\) alone will be able to preserve \(|\mathbb{G}^{h}\star X\rangle\).
### On Knowledge Assumptions and Algebraic Group Actions
In Section 5, we show a different approach to justifying the security of our scheme, by adapting certain knowledge assumptions [10] to the setting of group actions. Despite some high-level similarities to [10], the underlying details are somewhat different. The advantage of this route is that it gives a standard-model security definition (albeit, a non-standard knowledge definition) rather than a generic model proof.
However, we find significant issues with using knowledge assumptions quantumly, that appear not to have been observed before. In particular, the straightforward way to adapt the knowledge assumptions of [10] to group actions actually results in _false_ assumptions, as we demonstrate. Interestingly, our attack on the assumption is entirely generic. This is quite surprising, as in the classical setting, knowledge assumptions generally trivially hold against generic attacks.
Concretely, we show how to construct a superposition over \(\mathcal{X}\) where the underlying discrete logarithms are hidden. To accomplish this, we observe that any set element \(x\) can be seen as a superposition over all possible banknotes \(|\mathbb{G}^{h}*x\rangle\); the superposition is uniform up to individual
phases. Then we show a procedure to compute, given \(|\mathbb{G}^{h}*x\rangle\), the serial number \(h\). This allows us to apply individual phases to the various banknotes in the superposition. Certain phases will simply map \(x\) to another set element \(y\). But other phases will map \(x\) to a uniform superposition (up to phases) over \(\mathcal{X}\). Call this state \(|\psi\rangle\).
Any meaningful knowledge assumption, and in particular the result of adapting [13] to group actions, would imply that if we were to measure \(|\psi\rangle\) to get a set element \(y\), then we must also "know" \(g\) such that \(y=g*x\). However, measuring \(|\psi\rangle\) simply gives a uniform set element, importantly without any side information about \(y\). As such, under the discrete log assumption, computing such a \(g\) is hard.
We resolve this particular problem by re-framing knowledge assumptions as follows: instead of saying that any algorithm \(A\) which produces a set element \(y\) must know \(g\) such that \(y=g*x\), we say that for any such \(A\) solving some task \(T\), there is another algorithm \(B\) that also solves \(T\) such that \(B\) knows \(g\). Thus, even if the original \(A\) is constructed in such a way that it does not know \(g\), at least \(B\) does, and we can apply any security arguments to \(B\) instead of \(A\). We demonstrate that this assumption, together with an appropriate generalization of the discrete log assumption, is enough to prove the security of our scheme. However, we are somewhat skeptical of our new knowledge assumption, and it certainly needs more cryptanalysis.
Algebraic Group Actions.The Algebraic Group Model (AGM) [16] is an important model for studying group-based cryptosystems. It is considered a refinement of the generic group model, meaning that a proof in the model is "at least as" convincing as a proof in the generic group model, potentially more convincing. A couple of recent works [14, 15] have considered the group action analog, the Algebraic Group Action Model (AGAM). Here, any time an adversary outputs a set element \(y\), it must "explain" \(y\) in terms of one of its input set elements \(x_{1},\ldots,x_{n}\) by providing a group element \(g\) such that \(y=g*x_{i}\).
The AGM can be seen as an idealized model version of the knowledge of exponent assumption, and likewise the AGAM can be seen as an idealized model version of an appropriate knowledge assumption on group actions. After all, a knowledge assumption would say that any time the adversary outputs a \(y\), it must "know" how it derived \(y\) from its inputs. The AGM/AGAM simply require the adversary to output this knowledge.
In Section 5, we explore the AGAM in the presence of quantum attackers. We do not prove any formal results, but discuss why, unfortunately, the quantum AGAM appears problematic. For starters, given our attack on quantum knowledge assumptions, we are skeptical about the soundness of the quantum AGAM. In particular, our attack indicates that it is unlikely that the AGAM is a refinement of the generic group action model; rather they are likely incomparable.
Digging a little deeper, the problem with the AGAM is that it requires the adversary to produce extra information, namely the explanation \(g\) of any output element \(y\). The issue is that if the output is actually a superposition, this \(g\) will be entangled with the superposition, meaning the AGAM adversary's output will actually be a different state than if it did not output \(g\). For example, if an AGAM adversary had to output a banknote \(|\mathbb{G}^{h}*x\rangle\) (say, as part of the quantum lightning experiment), then if it also "explained" the banknote, the entanglement with \(g\) would actually cause the banknote state to fail verification. It is therefore unclear how to interpret such an adversary. Does it actually break the scheme, even if it does not pass verification? In Section 5, we go into more details about this issue as well as pointing out several other issues with the AGAM.
We note that these issues are not present in the generic group action model. Thus, despite
classically being a "worse" model than the algebraic model, we propose for the quantum setting that the generic group action model is actually _preferred_ to the AGAM.
### Further Discussion
In Section 7, we generalize group actions to _quantum_ group actions, which replace classical set elements with quantum states, but otherwise behave mostly the same as standard group actions. We give a simple quantum group action based on the Learning with Errors (LWE) problem [11], where we can actually prove that the discrete log problem is hard under LWE. Despite this promising result, we expect that the LWE-based quantum group action will be of limited use. In particular, if we instantiate our quantum money construction over this group, the construction is _insecure_. The reason is that, in this group action, it is impossible to recognize the quantum states of the set. Our security proof crucially relies on such recognition, since it allows us to characterize states accepted by the verifier. Moreover, without recognition, there is an attack: it is possible to fool the verifier with dishonest banknotes that are different from the honest ones and moreover are clonable, thereby breaking security.
Interestingly, we explain that this failed instantiation is actually _equivalent_ to a folklore approach toward building quantum money from lattices, which has been more-or-less shown impossible to make secure [10, 11]. The key missing piece in getting the folklore approach to work has been how to efficiently verify honest banknotes -- if such verification were possible, the scheme could be readily proven secure. Under our equivalence, this missing piece exactly maps to the problem of recognizing set elements in our quantum group action. For details, see Section 7. We believe this adds to the confidence of our proposal, since in group actions based on isogenies it is possible to recognize set elements, presumably without otherwise compromising hardness.
### Related Work
Public key quantum money.In Wiesner's original scheme, the mint is required to verify banknotes, meaning the mint must be involved in any transaction. The involvement of the mint also leads to potential attacks [12]. Some partial solutions have been proposed, e.g. [13, 14]. The dream solution, however, is known as _public key_ quantum money [1]. Here, anyone can verify the banknote, while only the mint can create them.
Unlike Wiesner's scheme which is well-understood, secure public key quantum money has remained elusive. While there have been many proposals for public key quantum money [1, 2, 1, 1, 1, 15, 16, 17, 18, 19], they mostly either (1) have been subsequently broken (e.g. [1, 2, 1, 15] which were broken by [1, 1, 1, 19, 20, 21]), or (2) rely on new cryptographic building blocks that have received little attention from the cryptographic community (e.g. [1, 1, 16] from problems on knots or quaternion algebras). The two exceptions are:
* Building on a suggestion of [13], [18] proved that quantum money can be built from post-quantum indistinguishability obfuscation (iO). While iO has received considerable attention and even has a convincing _pre-quantum_ instantiation [17], the post-quantum study of iO has been much less thorough. While some post-quantum proposals have been made [1, 1, 19, 20], their post-quantum hardness is not well-understood.
* [17] construct quantum money from isogenies over super-singular elliptic curves. However, there is a crucial missing piece to their proposal, namely generating uniform superpositions over super-singular curves, which is currently unknown how to do. This is closely related to the major open question of obliviously sampling super-singular elliptic curves.
In light of the above, the existence of public key quantum money is largely considered open.
Cryptography from group actions and isogenies.Isogenies were first proposed for use in post-quantum cryptography by Couveignes [14] and Rostovtsev and Stolbunov [15]. Isogenies give a Diffie-Hellman-like structure, but importantly are immune to Shor's algorithm for discrete logarithms [16] due to a more restricted structure. This restricted structure, while helping preserve security against quantum attacks, also makes the design of cryptosystems based on them more complex. Thus, significant effort has gone into building secure classical cryptosystems from isogenies and understanding their post-quantum security (e.g. [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]).
Certain isogenies such as the original proposals of [14, 15] as well as CSIDH and its variants [14, 15] can be abstracted as abelian group actions. However, many other isogenies (such as SIDH [16] and OSIDH [17]) cannot be abstracted as abelian group actions. Even among abelian group actions, we must distinguish between "effective group actions" (EGAs) and _restricted_ EGAs (REGAs). The former satisfies the notion of a clean group action, whereas in the latter, the group action can only be efficiently computed for a certain small set of group elements. CSIDH could plausibly be a EGA at certain concrete security parameters, though asymptotically it only achieves quasi-polynomial security5. Our alternate construction also works on REGAs, which can plausibly be instantiated even asymptotically by CSIDH using a quantum computer6.
Footnote 5: With the state-of-the-art, evaluating CSIDH as an EGA would require time approximately \(2^{\sqrt[3]{n}}\) on a quantum computer, while the best quantum attack is time \(2^{\sqrt{n}}\). For a thorough discussion, see [13]. By setting \(n=\log^{3}(\lambda)\), one gets polynomial-time evaluation and the best attack taking time \(\lambda^{\sqrt{\log(\lambda)}}\).
Footnote 6: In order for CSIDH to be a REGA, one needs to compute the structure of the group. While this is hard classically, it is easy with a quantum computer using Shor’s algorithm [16]. Since we always assume a quantum computer in this work, we can therefore treat CSIDH as a REGA.
While some non-isogeny abelian group actions have been proposed (e.g. [18]), currently all such examples have been broken (e.g. [16]). For this reason, group actions are largely considered synonymous with isogenies, though this may change if more secure group actions are found.
The vast majority of the isogeny and group action literature has focused on post-quantum cryptography -- classical protocols that are immune to quantum attacks. To the best of our knowledge, only two prior works have used isogenies/group actions to build quantum protocols for tasks that are _impossible_ classically. The first is [1], who build a proof of quantumness [14]. We note that proofs of quantumness can also be achieved under several "standard" cryptographic tools, such as LWE [1] or certain assumptions on hash functions [15]. In contrast, no prior quantum money protocol could be based on similar standard building blocks. We also note that [1] currently has no known asymptotic instantiation with better-than-quasi-polynomial security, as it requires a clean group action (EGA). The second quantum protocol based on isogenies is that of [17], who build quantum money from walkable invariants, and propose an instantiation using isogenies over super-singular elliptic curves. However, such isogenies cannot be described as abelian group actions, and even more importantly their proposal is incomplete, as
discussed above. Thus, ours is arguably the first application of group actions or isogenies to obtain classically impossible tasks that could not already be achieved under standard tools.
Relation to [13].Aside from using isogenies, our work has strong conceptual similarities to [13], though also crucial differences that allow us to specify a complete protocol. Here, we give a brief overview of the similarities and differences.
The walkable invariant framework of [13] is very general, but here we describe a special case of it that would apply to certain group actions, in order to illustrate the differences with our scheme. Consider a group action that is _not_ regular, so that the set \(\mathcal{X}\) is partitioned into many distinct orbits. For \(x,y\) in the same orbit there will exist a unique \(g\) such that \(y=g*x\), but for \(x,y\) in different orbits, there will not exist any group element mapping between them. We will also assume the ability to generate a uniform superposition over \(\mathcal{X}\). We finally assume an "invariant", a unique label for each orbit which can be efficiently computed from any element in the orbit.
The minting process generates the uniform superposition over \(\mathcal{X}\) and then measures the invariant, which becomes the serial number. The state then collapses to a uniform superposition over a single orbit, which becomes the banknote. This superposition can then be verified as follows. First check that the banknote has support on the right orbit by re-computing the invariant. Then check that the state is in uniform superposition by checking that the state is preserved under action by random group elements; this is accomplished using an analog of the swap test. [13] prove the security of their scheme under certain assumptions which, when mapped to the group action setting above, correspond to the discrete log assumption and a knowledge assumption very similar to ours.
Unfortunately, there are no known instantiations of suitable group actions for their scheme. They propose using the set of ordinary elliptic curves as the set, the number of points on the curve as the invariant, and orbits being sets of curves with the same number of points. Isogenies between curves are then the action7, which do not change the number of points on the curve. The problem is that for general curves, it is not possible to efficiently compute the action, since the degree will be too high. The action _can_ be computed on smooth-order curves, but these are rare and there is no known way to compute a uniform superposition over such smooth-order curves. For reasons we will not get into here, [13] propose using instead supersingular curves with non-smooth order, but again these are rare and there is no known way to generate a uniform superposition over such curves.
Footnote 7: It is not a proper group action since different orbits will be acted on by different groups.
We resolve the issues with instantiating [13], without needing the ability to compute uniform superpositions over the set. Our key insight is that, if we can compute the group action efficiently (say because we are in an orbit of smooth-order elliptic curves), then this is enough to sample states that _are_ uniform over a given orbit, except for certain phase terms: namely the states \(|\mathbb{G}^{h}*x\rangle\) for uniform \(h\). Then, rather than the serial number indicating which orbit we are in (which is now useless since we are in a single orbit), the serial number is a description of the phase terms, namely \(h\).
### Acknowledgments
We thank Hart Montgomery for many helpful discussions about isogenies.
## 2 Preliminaries
Here we give our notation and definitions. We assume the reader is familiar with the basics of quantum computation.
### Quantum Fourier Transform over Abelian Groups
Let \(\mathbb{G}\) be an abelian group, which we will denote additively. We here define our notation for the quantum Fourier transform over \(\mathbb{G}\). Write \(\mathbb{G}=\mathbb{Z}_{n_{1}}\times\mathbb{Z}_{n_{2}}\times\cdots\times\mathbb{Z}_{n_{k}}\) where \(\mathbb{Z}_{n_{j}}\) are the additive cyclic groups on \(n_{j}\) elements, and associate elements \(g\in\mathbb{G}\) with tuples \(g=(g_{1},\ldots,g_{k})\) where \(g_{j}\in\mathbb{Z}_{n_{j}}\). Then define \(\chi:\mathbb{G}^{2}\to\mathbb{C}\) by
\[\chi_{\mathbb{G}}(g,h)=\prod_{j=1}^{k}e^{i2\pi g_{j}h_{j}/n_{j}}\]
Observe the following:
\[\chi_{\mathbb{G}}(g,h)=\chi_{\mathbb{G}}(h,g),\qquad\chi_{\mathbb{G}}(g_{1}+g_{2},h)=\chi_{\mathbb{G}}(g_{1},h)\times\chi_{\mathbb{G}}(g_{2},h),\] \[\chi_{\mathbb{G}}(-g,h)=\chi_{\mathbb{G}}(g,h)^{-1},\qquad\sum_{g\in\mathbb{G}}\chi_{\mathbb{G}}(g,h)=\begin{cases}|\mathbb{G}|&\text{ if }h=0\\ 0&\text{ if }h\neq 0\end{cases}\]
The quantum Fourier transform (QFT) over \(\mathbb{G}\) is the unitary \(\mathsf{QFT}_{\mathbb{G}}\) defined as
\[\mathsf{QFT}_{\mathbb{G}}|g\rangle=\frac{1}{\sqrt{|\mathbb{G}|}}\sum_{h\in \mathbb{G}}\chi(g,h)|h\rangle\enspace.\]
Observe that \(\mathsf{QFT}_{\mathbb{G}}=\mathsf{QFT}_{\mathbb{Z}_{n_{1}}}\otimes\cdots \otimes\mathsf{QFT}_{\mathbb{Z}_{n_{k}}}\). Therefore, since the standard QFT corresponds to \(\mathsf{QFT}_{\mathbb{Z}_{n_{j}}}\) and can be implemented efficiently, so can \(\mathsf{QFT}_{\mathbb{G}}\).
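A small numerical sketch of this tensor-product structure (the group \(\mathbb{Z}_{2}\times\mathbb{Z}_{3}\times\mathbb{Z}_{4}\) is an arbitrary illustrative choice):

```python
import numpy as np

def qft_cyclic(n):
    """QFT over Z_n: (h, g) entry equal to exp(2*pi*i*g*h/n)/sqrt(n)."""
    g = np.arange(n)
    return np.exp(2j * np.pi * np.outer(g, g) / n) / np.sqrt(n)

def qft_group(ns):
    """QFT over Z_{n1} x ... x Z_{nk} as the Kronecker product of cyclic QFTs."""
    U = np.array([[1.0 + 0j]])
    for n in ns:
        U = np.kron(U, qft_cyclic(n))
    return U

U = qft_group([2, 3, 4])
assert np.allclose(U @ U.conj().T, np.eye(2 * 3 * 4))   # unitary, as required
```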
From this point on, we will only work with a single group, so we will drop the sub-script and simply write \(\chi(g,h),\mathsf{QFT}\), etc.
### Quantum Money and Quantum Lightning
Here we define quantum money and quantum lightning. In the case of quantum money, we focus on _mini-schemes_[1], which are essentially the setting where there is only ever a single valid banknote produced by the mint. As shown in [1], such mini-schemes can be upgraded generically to full quantum money schemes using digital signatures.
Syntax.Both quantum money mini-schemes and quantum lightning share the same syntax:
* \(\mathsf{Gen}(1^{\lambda})\) is a quantum polynomial-time (QPT) algorithm that takes as input the security parameter (written in unary) which samples a classical serial number \(\sigma\) and quantum banknote \(\$\).
* \(\mathsf{Ver}(\sigma,\$)\) takes as input the serial number and a supposed banknote, and either accepts or rejects, denoted by \(1\) and \(0\) respectively.
Correctness.Both quantum money mini-schemes and quantum lightning have the same correctness requirement, namely that valid banknotes produced by \(\mathsf{Gen}\) are accepted by \(\mathsf{Ver}\). Concretely, there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\Pr[\mathsf{Ver}(\sigma,\$)=1:(\sigma,\$)\leftarrow\mathsf{Gen}(1^{\lambda})] \geq 1-\mathsf{negl}(\lambda)\enspace.\]
Security.We now discuss the security requirements, which differ between quantum money and quantum lightning.
**Definition 2.1**.: Consider a QPT adversary \(\mathcal{A}\), which takes as input a serial number \(\sigma\) and banknote \(\$\), and outputs two potentially entangled states \(\$_{1},\$_{2}\), which it tries to pass off as two banknotes. \((\mathsf{Gen},\mathsf{Ver})\) is a secure _quantum money mini-scheme_ if, for all such \(\mathcal{A}\), there exists a negligible \(\mathsf{negl}(\lambda)\) such that the following holds:
\[\Pr\left[\mathsf{Ver}(\sigma,\$_{1})=\mathsf{Ver}(\sigma,\$_{2})=1:\begin{array}{l}(\sigma,\$)\leftarrow\mathsf{Gen}(1^{\lambda})\\ (\$_{1},\$_{2})\leftarrow\mathcal{A}(\sigma,\$)\end{array}\right]\leq\mathsf{negl}(\lambda)\enspace.\]
**Definition 2.2**.: Consider a QPT adversary \(\mathcal{B}\), which takes as input the security parameter \(\lambda\), and outputs a serial number \(\sigma\) and two potentially entangled states \(\$_{1},\$_{2}\), which it tries to pass off as two banknotes. \((\mathsf{Gen},\mathsf{Ver})\) is a secure _quantum lightning_ scheme if, for all such \(\mathcal{B}\), there exists a negligible \(\mathsf{negl}(\lambda)\) such that the following holds:
\[\Pr\left[\mathsf{Ver}(\sigma,\$_{1})=\mathsf{Ver}(\sigma,\$_{2})=1:(\sigma, \$_{1},\$_{2})\leftarrow\mathcal{B}(1^{\lambda})\right]\leq\mathsf{negl}( \lambda)\enspace.\]
Quantum lightning trivially implies quantum money: any quantum money adversary \(\mathcal{A}\) can be converted into a quantum lightning adversary \(\mathcal{B}\) by having \(\mathcal{B}\) run both \(\mathsf{Gen}\) and \(\mathcal{A}\). But quantum lightning is potentially stronger, as it means that even if the serial number is chosen adversarially, it remains hard to devise two valid banknotes. This in particular means there is some security against the mint, which yields a number of additional applications, as discussed by [14].
_Remark 2.3_.: One limitation of quantum lightning as defined above is that it cannot hold against non-uniform attackers with quantum advice, as such attackers could have \(\sigma,\$_{1},\$_{2}\) hard-coded in their advice. The situation is analogous to the case of collision resistance, where unkeyed hash functions cannot be secure against non-uniform attackers. This limitation can be remedied by either insisting on only uniform attackers or attackers with classical advice. Alternatively, one can work in a trusted setup model, where a trusted third party generates a common reference string that is then inputted into \(\mathsf{Gen},\mathsf{Ver}\). A third option is to use the "human ignorance" approach [13], in which we would formalize security proofs as explicitly transforming a quantum lightning adversary into an adversary for some other task, where the latter adversary exists but is presumably unknown to human knowledge. We will largely ignore these issues throughout this work, but occasionally make brief remarks about what the various approaches would look like.
### Group Actions
An (abelian) group action consists of a family of (abelian) groups \(\mathbb{G}=(\mathbb{G}_{\lambda})_{\lambda}\) (written additively), a family of sets \(\mathcal{X}=(\mathcal{X}_{\lambda})_{\lambda}\), and a binary operation \(*:\mathbb{G}_{\lambda}\times\mathcal{X}_{\lambda}\rightarrow\mathcal{X}_{\lambda}\) satisfying the following properties:
* **Identity:** If \(0\in\mathbb{G}_{\lambda}\) is the identity element, then \(0*x=x\) for any \(x\in\mathcal{X}_{\lambda}\).
* **Compatibility:** For all \(g,h\in\mathbb{G}_{\lambda}\) and \(x\in\mathcal{X}_{\lambda}\), \((g+h)*x=g*(h*x)\).
We will additionally require the following properties:
* **Efficiently computable:** There is a QPT procedure Construct which, on input \(1^{\lambda}\), outputs a description of \(\mathbb{G}_{\lambda}\) and an element \(x_{\lambda}\in\mathcal{X}_{\lambda}\). The operation \(*\) is also computable by a QPT algorithm.
* **Efficiently Recognizable:** There is a QPT procedure Recog which recognizes elements in \(\mathcal{X}_{\lambda}\). That is, for any \(\lambda\) and any string \(y\) (not necessarily in \(\mathcal{X}_{\lambda}\)), Recog\((1^{\lambda},y)\) accepts \(y\) with overwhelming probability if \(y\in\mathcal{X}_{\lambda}\), and rejects with overwhelming probability if \(y\notin\mathcal{X}_{\lambda}\).
* **Regular:** For every \(y\in\mathcal{X}_{\lambda}\), there is exactly one \(g\in\mathbb{G}_{\lambda}\) such that \(y=g*x_{\lambda}\).
Cryptographic group actions.At a minimum, a cryptographically useful group action will satisfy the following discrete log assumption:
**Assumption 2.4**.: The _discrete log assumption_ (DLog) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\Pr[\mathcal{A}(g*x_{\lambda})=g:g\leftarrow\mathbb{G}_{\lambda}]\leq\mathsf{ negl}(\lambda)\enspace.\]
We will also consider stronger assumptions. One assumption we consider is the analog of DDH for group actions:
**Assumption 2.5**.: The _decisional Diffie-Hellman assumption_ (DDH) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\left|\Pr\left[\mathcal{A}(a*x_{\lambda},b*x_{\lambda},c*x_{\lambda})=1:a,b,c\leftarrow\mathbb{G}_{\lambda}\right]-\Pr\left[\mathcal{A}(a*x_{\lambda},b*x_{\lambda},(a+b)*x_{\lambda})=1:a,b\leftarrow\mathbb{G}_{\lambda}\right]\right|\leq\mathsf{negl}(\lambda)\enspace.\]
_Remark 2.6_.: For simplicity, we model the group actions as being deterministically computed from the security parameter. We could alternatively imagine the group actions being probabilistic, in which case they would be set up by some probabilistic procedure. The parameters would then be part of a common reference string that is supplied to all parties, including the adversary.
## 3 Our Quantum Lightning Scheme
Here, we give our basic quantum lightning construction, which assumes a cryptographic group action.
**Construction 3.1**.: Let \(\mathsf{Gen},\mathsf{Ver}\) be the following QPT procedures:
* \(\mathsf{Gen}(1^{\lambda})\): Initialize quantum registers \(\mathcal{S}\) (for serial number) and \(\mathcal{M}\) (for money) to states \(|0\rangle_{\mathcal{S}}\) and \(|0\rangle_{\mathcal{M}}\), respectively. Then do the following:
* Apply \(\mathsf{QFT}_{\mathbb{G}_{\lambda}}\) to \(\mathcal{S}\), yielding the joint state \(\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}|g \rangle_{\mathcal{S}}|0\rangle_{\mathcal{M}}\).
* Apply in superposition the map \(|g\rangle_{\mathcal{S}}|y\rangle_{\mathcal{M}}\mapsto|g\rangle_{\mathcal{S}}|y\oplus(g*x_{\lambda})\rangle_{\mathcal{M}}\). The joint state of the system \(\mathcal{S}\otimes\mathcal{M}\) is then \(\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}|g \rangle_{\mathcal{S}}|g*x_{\lambda}\rangle_{\mathcal{M}}\).
* Apply \(\mathsf{QFT}_{\mathbb{G}_{\lambda}}\) to \(\mathcal{S}\) again, yielding \(\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g,h\in\mathbb{G}_{\lambda}}\chi(g,h)|h \rangle_{\mathcal{S}}|g*x_{\lambda}\rangle_{\mathcal{M}}\)
* Measure \(\mathcal{S}\), giving the serial number \(\sigma:=h\). The \(\mathcal{M}\) register then collapses to the banknote \(\$=|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle:=\frac{1}{\sqrt{|\mathbb{G}_ {\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}\chi(g,h)|g*x_{\lambda}\rangle_{ \mathcal{M}}\). Output \((\sigma,\$)\).
* \(\mathsf{Ver}(\sigma,\$):\) First verify that the support of \(\$\) is contained in \(\mathcal{X}_{\lambda}\), by applying the assumed algorithm for recognizing \(\mathcal{X}_{\lambda}\) in superposition. Then repeat the following \(\lambda\) times:
* Initialize a new register \(\mathcal{H}\) to \((|0\rangle_{\mathcal{H}}+|1\rangle_{\mathcal{H}})/\sqrt{2}\).
* Choose a random group element \(u\in\mathbb{G}_{\lambda}\).
* Apply to \(\mathcal{H}\otimes\mathcal{M}\) in superposition the map \[\mathsf{Apply}|b\rangle_{\mathcal{H}}|y\rangle_{\mathcal{M}}\mapsto\begin{cases} |0\rangle_{\mathcal{H}}|y\rangle_{\mathcal{M}}&\text{ if }b=0\\ |1\rangle_{\mathcal{H}}|(-u)*y\rangle_{\mathcal{M}}&\text{ if }b=1\end{cases}\] In the case that \(\$\) is the correct banknote state \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), the result of applying Apply is: \[\frac{1}{\sqrt{2|\mathbb{G}_{\lambda}|}}\left(|0\rangle_{ \mathcal{H}}\sum_{g\in\mathbb{G}_{\lambda}}\chi(g,h)|g*x_{\lambda}\rangle_{ \mathcal{M}}+|1\rangle_{\mathcal{H}}\sum_{g\in\mathbb{G}_{\lambda}}\chi(g,h)| (g-u)*x_{\lambda}\rangle_{\mathcal{M}}\right)\]
* Measure \(\mathcal{H}\) in the basis \(B_{h,u}:=\{(|0\rangle_{\mathcal{H}}+\chi(u,h)|1\rangle_{\mathcal{H}})/\sqrt{2},(|0\rangle_{\mathcal{H}}-\chi(u,h)|1\rangle_{\mathcal{H}})/\sqrt{2}\}\), giving a bit \(b_{u}\in\{0,1\}\). Discard the \(\mathcal{H}\) register. In the case that \(\$\) is the correct banknote state \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), \(b_{u}\) will be \(0\) with probability \(1\), and \(\mathcal{M}\) will be left in the original banknote state. If all the \(b_{u}\) are \(0\) and the support of \(\$\) is contained in \(\mathcal{X}_{\lambda}\), then accept. If any of the \(b_{u}\) are \(1\), or if the support is not contained in \(\mathcal{X}_{\lambda}\), reject. We see that for the correct banknote, \(\mathsf{Ver}\) accepts with probability \(1\).
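To make the construction concrete, the following statevector simulation (our illustration, with \(\mathbb{G}_{\lambda}=\mathbb{Z}_{N}\) for a small \(N\) and the set identified with \(\mathbb{Z}_{N}\) itself) runs \(\mathsf{Gen}\) and the character test inside \(\mathsf{Ver}\). The honest banknote passes every round with probability \(1\); verifying it against a wrong serial number accepts only with probability roughly \(2^{-\lambda}\).

```python
import numpy as np

N = 64            # toy cyclic group Z_N standing in for G_lambda
LAM = 20          # number of verification rounds (lambda)
rng = np.random.default_rng(0)
omega = np.exp(2j * np.pi / N)

def gen():
    """Mint: measuring the QFT'd serial register yields a uniform h, and the
    money register collapses to the banknote with amplitudes chi(g, h)/sqrt(N)."""
    h = int(rng.integers(N))
    alpha = omega ** (np.arange(N) * h) / np.sqrt(N)
    return h, alpha

def ver(h, alpha):
    """Verifier: LAM rounds of the character test from Construction 3.1."""
    alpha = alpha.copy()
    for _ in range(LAM):
        u = int(rng.integers(N))
        chi = omega ** (u * h)
        shifted = np.roll(alpha, -u)            # amplitude on g becomes alpha_{g+u}
        post0 = (alpha + shifted / chi) / 2     # unnormalized state for outcome b_u = 0
        p0 = float(np.linalg.norm(post0) ** 2)
        if rng.random() > p0:                   # outcome b_u = 1: reject
            return False, None
        alpha = post0 / np.sqrt(p0)
    return True, alpha

h, note = gen()
ok, note = ver(h, note)
print("honest banknote accepted:", ok)            # always True
ok_bad, _ = ver((h + 1) % N, note)
print("wrong serial accepted:", ok_bad)           # False except w.p. ~2^-LAM
```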
_Remark 3.2_.: If using a probabilistic setup of the group action, there are two options. The first is to have \(\mathsf{Gen}\) set up the group action, and have the parameters be included in the serial number. The second is to have a trusted third party set up the group action, and publish the parameters in a common reference string (CRS). If the goal is only quantum money security, then the former
option is always possible, since the security experiment uses an honestly generated serial number. If the goal is quantum lightning security, the former option may not be possible, as the adversary computes the serial number; it may be that there are bad choices of parameters for the group action (and hence the CRS inside the serial number) which make it easy to forge banknotes. Therefore, for quantum lightning security, we would expect to use a trusted setup to generate a CRS containing the group action parameters.
### Accepting States of the Verifier
Above we showed that honest banknote states are accepted by the verifier. We now prove that, roughly, honest banknote states are the _only_ states accepted by the verifier, with overwhelming probability.
**Theorem 3.3**.: _Let \(|\psi\rangle\) be a state over \(\mathcal{M}\). Then \(\Pr[\mathsf{Ver}(h,|\psi\rangle)=1]=\|\langle\psi|\mathbb{G}_{\lambda}^{h}*x_ {\lambda}\rangle\|^{2}(1-2^{-\lambda})+2^{-\lambda}\)._
In other words, we can treat \(\mathsf{Ver}(h,|\psi\rangle)\) as projecting onto \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), incurring only a negligible error. The remainder of this subsection is devoted to proving Theorem 3.3.
**Lemma 3.4**.: _For \(h^{\prime}\neq h\), \(\langle\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}|\mathbb{G}_{\lambda}^{h}* x_{\lambda}\rangle=0\)_
Proof.: \[\langle\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}|\mathbb{G}_{ \lambda}^{h}*x_{\lambda}\rangle =\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g,g^{\prime}\in\mathbb{G} _{\lambda}}\chi(g^{\prime},h^{\prime})^{-1}\chi(g,h)\langle g^{\prime}*x_{ \lambda}|g*x_{\lambda}\rangle\] \[=\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g\in\mathbb{G}_{\lambda}} \chi(g,h^{\prime})^{-1}\chi(g,h)=\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g\in \mathbb{G}_{\lambda}}\chi(g,h-h^{\prime})=0\]
Let \(|\psi\rangle\) be a state with support on \(\mathcal{X}\). Since the \(|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\) are orthogonal and the number of \(h^{\prime}\) equals the size of \(\mathcal{X}\), the set \(\{|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\}_{h^{\prime}}\) forms a basis for the set of states with support on \(\mathcal{X}\). We can then write \(|\psi\rangle=\sum_{h^{\prime}}\alpha_{h^{\prime}}|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\) where \(\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}=1\). We then have \(\|\alpha_{h}\|^{2}=\|\langle\psi|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\|^{2}\).
Consider a single iteration of \(\mathsf{Ver}\) on serial number \(h\), which samples a random \(u\), initializes \(\mathcal{H}\) to \((|0\rangle+|1\rangle)/\sqrt{2}\), applies the map \(\mathsf{Apply}\), and then measures \(\mathcal{H}\) in basis \(B_{h,u}\) to get outcome \(b\). Let \(|\psi^{\prime}\rangle\) be the post-measurement state of \(\mathcal{M}\) conditioned on \(b=0\).
**Lemma 3.5**.: _Conditioned on \(u\), \(p:=\Pr[b_{u}=0]=\frac{1}{4}\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\|1+ \chi(u,h-h^{\prime})\|^{2}\), and \(|\psi^{\prime}\rangle=\frac{1}{\sqrt{p}}\sum_{h^{\prime}}\alpha_{h^{\prime}} \frac{1+\chi(u,h-h^{\prime})}{2}|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda} \rangle_{\mathcal{M}}\)._
Proof.: By adapting the correctness proof above, we see that the state after applying \(\mathsf{Apply}\) (but before measurement) is:
\[|\phi\rangle=\sum_{h^{\prime}\in\mathbb{G}_{\lambda}}\alpha_{h^{\prime}}\frac{1 }{\sqrt{2}}\left(|0\rangle_{\mathcal{H}}+\chi(u,h^{\prime})|1\rangle_{\mathcal{ H}}\right)|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle_{\mathcal{M}}\]
Then \(p\) is the length squared of the projection of \(|\phi\rangle\) onto \((|0\rangle_{\mathcal{H}}+\chi(u,h)|1\rangle_{\mathcal{H}})/\sqrt{2}\). Therefore, \(p=\frac{1}{4}\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\|1+\chi(u,h^{\prime})^{-1}\chi(u,h)\|^{2}=\frac{1}{4}\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\|1+\chi(u,h-h^{\prime})\|^{2}\). Before re-normalization, the state of \(\mathcal{M}\) conditioned on \(b=0\) is then \(\sum_{h^{\prime}}\alpha_{h^{\prime}}\frac{1+\chi(u,h-h^{\prime})}{2}|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle_{\mathcal{M}}\). Re-normalization gives \(|\psi^{\prime}\rangle\).
We now iterate, replacing \(\alpha_{h^{\prime}}\) with \(\alpha_{h^{\prime}}\frac{1+\chi(u,h-h^{\prime})}{2}/\sqrt{p}\). This means that after \(\lambda\) trials, conditioned on trial \(i\) using \(u_{i}\) and giving measurement outcome \(b_{i}\), we have that
\[p_{\mathsf{final}}:=\Pr[b_{1}=b_{2}=\cdots=b_{\lambda}=0]=\frac{1}{4^{\lambda}}\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\prod_{i=1}^{\lambda}\|1+\chi(u_{i},h-h^{\prime})\|^{2}\]
We now average over \(u\) to get \(\mathbb{E}[p_{\mathsf{final}}]\), the overall probability that \(\mathsf{Ver}\) accepts \(|\psi\rangle\).
\[\mathbb{E}[p_{\mathsf{final}}] =\frac{1}{(4|\mathbb{G}_{\lambda}|)^{\lambda}}\sum_{h^{\prime},u_{1},\cdots,u_{\lambda}}\|\alpha_{h^{\prime}}\|^{2}\prod_{i=1}^{\lambda}\|1+\chi(u_{i},h-h^{\prime})\|^{2}\] \[=\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\prod_{i=1}^{\lambda}\left(\frac{1}{4|\mathbb{G}_{\lambda}|}\sum_{u}\|1+\chi(u,h-h^{\prime})\|^{2}\right)\] \[=\sum_{h^{\prime}}\|\alpha_{h^{\prime}}\|^{2}\prod_{i=1}^{\lambda}\left(\frac{1}{4|\mathbb{G}_{\lambda}|}\sum_{u}\left(2+\chi(u,h-h^{\prime})+\chi(u,h-h^{\prime})^{-1}\right)\right)\] \[=\|\alpha_{h}\|^{2}+2^{-\lambda}\sum_{h^{\prime}\neq h}\|\alpha_{h^{\prime}}\|^{2}=\|\alpha_{h}\|^{2}+2^{-\lambda}(1-\|\alpha_{h}\|^{2})\] \[=\|\alpha_{h}\|^{2}(1-2^{-\lambda})+2^{-\lambda}\]
This completes the proof of Theorem 3.3.
### Computing the Serial Number
Here, we show that, given a valid banknote \(\$=|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) with unknown serial number \(h\), it is possible to efficiently compute \(h\). This result is not needed for understanding the construction or its security, but will be used in Section 5 to break a certain natural knowledge assumption.
**Theorem 3.6**.: _There exists a QPT algorithm \(\mathsf{Findh}\) and a negligible function \(\mathsf{negl}(\lambda)\) such that \(\mathsf{Findh}\), on input \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), outputs \(h\) with probability at least \(1-\mathsf{negl}(\lambda)\)._
Proof.: Recall from the description of \(\mathsf{Ver}\) that, for a given \(u\) and given \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), we can compute the state \(|\tau_{u,h}\rangle|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) where \(|\tau_{u,h}\rangle:=\frac{1}{\sqrt{2}}\left(|0\rangle_{\mathcal{H}}+\chi(u,h)| 1\rangle_{\mathcal{H}}\right)\). Since this process still gives us \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\), we can repeat the process, computing \(|\tau_{u_{i},h}\rangle\) for many different \(u_{i}\).
A naive solution is to compute many copies of \(|\tau_{u,h}\rangle\) for some \(u\in\mathbb{G}_{\lambda}\), and then do state tomography to recover \(\chi(u,h)\). If \(\mathbb{G}_{\lambda}\) were cyclic and \(u\) a generator, then \(\chi(u,h)\) would uniquely determine \(h\). The problem is that, since \(\mathbb{G}_{\lambda}\) is exponentially large, the distance between \(\chi(u,h)\) as \(h\) varies will be exponentially small. This means doing state tomography to a sufficiently small error to recover \(h\) would require exponentially-many samples and therefore be inefficient. However, by choosing the \(u_{i}\) carefully and being a bit more thoughtful, we can recover \(h\) in polynomial time.
Our strategy will still be to compute many copies of \(|\tau_{u,h}\rangle\) for some \(u\) and do state tomography to recover an estimate \(\hat{\chi}(u,h)\) for \(\chi(u,h)\). In time \(\mathsf{poly}(\lambda,1/\epsilon,\log(1/\delta))\), we can guarantee that \(\Pr[\|\hat{\chi}(u,h)-\chi(u,h)\|<\epsilon]\geq 1-\delta\), for any desired inverse-polynomial \(\epsilon\) and exponentially-small \(\delta\). We then do this for many different carefully chosen \(u\), which allows us to correct the errors arising from tomography, as we now explain.
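As a sanity check on the tomography step (a sketch under our own simplifications, not part of \(\mathsf{Findh}\) itself), the phase \(\theta\) of \(\chi(u,h)=e^{i\theta}\) can be estimated by measuring copies of \(|\tau_{u,h}\rangle\) in the \(X\) and \(Y\) bases, whose outcome probabilities are \((1+\cos\theta)/2\) and \((1+\sin\theta)/2\):

```python
import numpy as np

# Estimating chi(u, h) = e^{i theta} from many copies of
# |tau> = (|0> + chi |1>)/sqrt(2).  Pr[+ outcome] is (1 + cos theta)/2 in the
# X basis and (1 + sin theta)/2 in the Y basis; we simulate the measurement
# statistics directly.  Accuracy is O(1/sqrt(copies)), i.e., poly(1/epsilon).
rng = np.random.default_rng(2)
theta = 2 * np.pi * 0.3137          # the unknown phase to be estimated
copies = 4000

px = (1 + np.cos(theta)) / 2
py = (1 + np.sin(theta)) / 2
cos_hat = 2 * rng.binomial(copies, px) / copies - 1
sin_hat = 2 * rng.binomial(copies, py) / copies - 1
theta_hat = float(np.arctan2(sin_hat, cos_hat))
print(theta, theta_hat)             # agree up to O(1/sqrt(copies)) error
```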
The cyclic case.Suppose \(\mathbb{G}_{\lambda}\) is cyclic, and is therefore isomorphic to the additive group \(\mathbb{Z}_{N}\). In this case, \(\chi(u,h)=e^{i2\pi uh/N}=\omega_{N}^{uh}\), where \(\omega_{N}=e^{i2\pi/N}\).
Now when we do state tomography and recover \(\hat{\chi}(u,h)\), we learn an estimate of \(uh\bmod N\). In more detail, given a real number \(a\) and a positive real number \(R\), we let \(a\bmod R\) denote the unique value of \(a-Rk\) for integer \(k\) that lies in \((-R/2,R/2]\). Since we know \(\|\chi(u,h)\|=1\), we can assume, by normalizing if necessary, that \(\|\hat{\chi}(u,h)\|\) is also \(1\). Therefore, \(\hat{\chi}(u,h)=e^{i\theta}\) for some \(\theta\in(-\pi,\pi]\). Then by the tomography guarantee, we have \(|\ [\theta-(2\pi uh/N)]\bmod 2\pi\ |\leq\epsilon\), or equivalently \(|\ [N\theta/2\pi-uh]\bmod N\ |\leq\epsilon N/2\pi\), except with negligible probability.
This means we reduce the computation of \(h\) to the following classical task: we get to choose arbitrary \(u_{i}\in\mathbb{Z}_{N}\) for \(i=1,\ldots,n\). In response, we learn \(u_{i}h+e_{i}\bmod N\), where \(e_{i}\) is some random variable in \([-\epsilon N/2\pi,\epsilon N/2\pi]\). In vector notation, we can write this as choosing a vector \(\mathbf{u}\in\mathbb{Z}_{N}^{n}\), and receiving \(h\mathbf{u}+\mathbf{e}\bmod N\), where \(\mathbf{e}\) is a vector whose components are independent random variables that are guaranteed to be in \([-\epsilon N/2\pi,\epsilon N/2\pi]\). The goal is to compute \(h\).
This looks very similar to a \(1\)-dimensional version of the LWE problem [10] (or more accurately, bounded distance decoding) except that in our case we get to choose the vector \(\mathbf{u}\) in whatever way we like so as to make the task _easy_. We can then use known techniques to find \(h\). In particular, we can choose \(\mathbf{u}=(1,2,4,8,\cdots,2^{n-1})\) where \(n=\lceil\log_{2}N\rceil\). This is known as the gadget "matrix"9. Importantly, \(\mathbf{u}\) has an efficiently computable "trapdoor". That is, write \(N=\sum_{i=0}^{n-1}2^{i}\times N_{i}\) for bits \(N_{i}\), and let
Footnote 9: In our case the matrix has width \(1\), whereas in general applications the matrix will have many columns.
\[\mathbf{A}=\left(\begin{array}{ccccccccc}2&-1&0&0&0&\cdots&0&0\\ 0&2&-1&0&0&\cdots&0&0\\ 0&0&2&-1&0&\cdots&0&0\\ \vdots&\vdots&\ddots&\ddots&\ddots&\ddots&\vdots&\vdots\\ 0&0&0&0&0&\cdots&2&-1\\ N_{0}&N_{1}&N_{2}&N_{3}&N_{4}&\cdots&N_{n-2}&N_{n-1}\end{array}\right)\]
Then \(\mathbf{A}\) is full rank over the integers, but satisfies \(\mathbf{A}\cdot\mathbf{u}\bmod N=0^{n}\). Set \(\epsilon=\pi/3n\). Thus, given \(\mathbf{v}:=\mathbf{u}h+\mathbf{e}\bmod N\), we can compute
\[\mathbf{A}^{-1}\cdot(\mathbf{A}\cdot\mathbf{v}\bmod N)=\mathbf{A}^{-1}\cdot (\mathbf{A}\cdot\mathbf{e}\bmod N)=\mathbf{A}^{-1}\cdot(\mathbf{A}\cdot \mathbf{e})=\mathbf{e}\enspace.\]
Above, we used the fact that the entries of \(\mathbf{A}\cdot\mathbf{e}\) have absolute value at most \(n\times\max_{i,j}|\mathbf{A}_{i,j}|\times\max_{j}|\mathbf{e}_{j}|=n\times 2 \times\epsilon N/2\pi\leq N/3<N/2\), meaning that reduction mod \(N\) has no effect.
Once we compute \(\mathbf{e}\), we can then compute \(h\mathbf{u}=\mathbf{v}-\mathbf{e}\), and then \(h\) is just the first component.
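The following sketch (ours; a toy modulus, with the tomography errors modeled as bounded random noise) carries out this classical recovery: it forms the gadget vector \(\mathbf{u}\), builds the trapdoor matrix \(\mathbf{A}\), strips the noise by reducing \(\mathbf{A}\cdot\mathbf{v}\) into the centered range \((-N/2,N/2]\), and reads off \(h\).

```python
import numpy as np

# Recovering h from noisy samples v_i = 2^i * h + e_i (mod N) using the
# gadget-vector trapdoor described above.  Toy parameters; the e_i stand in
# for the tomography error and are drawn uniformly from the allowed range.
rng = np.random.default_rng(1)
N = 1009                                 # toy group order
n = int(np.ceil(np.log2(N)))
h = int(rng.integers(N))                 # the unknown "discrete log"

u = np.array([2 ** i for i in range(n)], dtype=np.int64)        # gadget vector
noise_bound = int(N / (2 * np.pi) * (np.pi / (3 * n)))           # = N / (6n)
e = rng.integers(-noise_bound, noise_bound + 1, size=n)
v = (u * h + e) % N                                               # our estimates

# Trapdoor matrix A with A @ u = 0 (mod N): bidiagonal (2, -1) rows plus a
# final row holding the binary expansion of N.
A = np.zeros((n, n), dtype=np.int64)
for i in range(n - 1):
    A[i, i], A[i, i + 1] = 2, -1
A[n - 1] = [(N >> i) & 1 for i in range(n)]

def centered_mod(x, N):
    """Reduce componentwise into (-N/2, N/2]."""
    r = x % N
    return np.where(r > N // 2, r - N, r)

Av = centered_mod(A @ v, N)                       # equals A @ e, since |A @ e| < N/2
e_rec = np.rint(np.linalg.solve(A.astype(float), Av.astype(float))).astype(np.int64)
h_rec = int((v[0] - e_rec[0]) % N)                # first component of v - e is 1 * h
print(h, h_rec, h == h_rec)
```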
The general case. We now consider the case of general groups. Let \(\mathbb{G}_{\lambda}=\mathbb{Z}_{n_{1}}\times\mathbb{Z}_{n_{2}}\times\cdots\times\mathbb{Z}_{n_{k}}\). Write \(h=(h_{1},\cdots,h_{k})\). By choosing \(u=(u_{1},0,\cdots,0)\), the task of computing \(h_{1}\) reduces to the case where \(\mathbb{G}_{\lambda}=\mathbb{Z}_{n_{1}}\), which can be solved via the algorithm above. Likewise, we can compute \(h_{2},\cdots,h_{k}\), and hence \(h\).
## 4 A Quantum Toolkit for Generic Group Actions
Here, we recall a definition of the generic group action model (GGAM), and show how to use it to give quantum security proofs.
A Shoup-style generic group action.There have been several different proposals for how to define generic group actions [14, 15, 16, 17, 18]. Here, we briefly give a definition in the style of Shoup [19]. To help disambiguate between the different models, we will adapt terminology from [14] and refer to ours as the _Random Set Representation_ model.
We first fix a (family of) groups \(\mathbb{G}=(\mathbb{G}_{\lambda})_{\lambda}\). We also fix a length function \(m:\mathbb{Z}\to\mathbb{Z}\) with the property that \(m(\lambda)\geq\log_{2}|\mathbb{G}_{\lambda}|\). We call \(m\) the _label length_. In this model, for a given security parameter \(\lambda\), a random injection \(L:\mathbb{G}_{\lambda}\to\{0,1\}^{m}\) is chosen, where \(m=m(\lambda)\). Think of \(L(g)\) as representing \(g*x_{\lambda}\); we call \(L\) the labeling function. All parties are then given the following:
* As input, all parties receive the string \(L(0)\), where \(0\in\mathbb{G}_{\lambda}\) is the identity. \(L(0)\) represents \(x_{\lambda}\).
* All parties can then make "group action" queries. For classical algorithms, such a query takes the form \((\ell,g)\in\{0,1\}^{m}\times\mathbb{G}_{\lambda}\). The response to the query is \(L(g+L^{-1}(\ell))\); if \(\ell\) is not in the image of \(L\), then the response to the query is \(\bot\). For quantum algorithms, we follow the usual convention for modeling superposition queries to classical functions, and have the query perform the map: \[\sum_{\ell,g,\ell^{\prime}}\alpha_{\ell,g,\ell^{\prime}}|\ell,g,\ell^{\prime} \rangle\mapsto\sum_{\ell,g,\ell^{\prime}}\alpha_{\ell,g,\ell^{\prime}}|\ell, g,\ell^{\prime}\oplus L(g+L^{-1}(\ell))\rangle\] The set \(\mathcal{X}_{\lambda}\) will be interpreted as the image of \(\mathbb{G}_{\lambda}\) under \(L\). Note that group action queries allow for testing membership in \(\mathcal{X}_{\lambda}\): \(\mathcal{X}_{\lambda}\) are exactly the set of strings where the group action query does not output \(\bot\).
We call the oracle above \(\mathsf{GGAM}_{\mathbb{G},m}\). In the classical setting, we usually consider queries to the oracle to have unit cost while computation outside the oracle queries is free. If following this convention, our model essentially corresponds to the model considered in [15]. However, in the quantum setting, considering the query complexity alone is insufficient, as discrete logarithms can be solved in polynomial query complexity [1]. Therefore, it is necessary to consider the total cost of an algorithm as including both the queries (unit cost per query) and the computation outside the queries.
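For concreteness, here is a minimal classical sketch (ours) of this oracle interface for \(\mathbb{G}_{\lambda}=\mathbb{Z}_{N}\): labels are sampled lazily as fresh random strings, a group-action query translates a label back to its group element, adds, and relabels, and unrecognized labels are answered with \(\bot\).

```python
import secrets

class GGAMOracle:
    """Classical, lazily-sampled stand-in for the Random Set Representation
    oracle GGAM_{Z_N, m}.  Each group element receives a fresh random m-bit
    label the first time it is touched; a quantum implementation would instead
    simulate the random injection L as in Lemma 4.2."""

    def __init__(self, N, m):
        assert 2 ** m >= N
        self.N, self.m = N, m
        self._label, self._inv = {}, {}          # g -> label, label -> g

    def _L(self, g):
        if g not in self._label:
            while True:
                ell = secrets.token_hex(self.m // 8)   # fresh m-bit label
                if ell not in self._inv:
                    break
            self._label[g], self._inv[ell] = ell, g
        return self._label[g]

    def start(self):
        """All parties receive L(0), the label representing x_lambda."""
        return self._L(0)

    def query(self, ell, g):
        """Group-action query (ell, g) -> L(g + L^{-1}(ell)), or None for bot.
        (With lazy sampling, only labels produced so far are recognized.)"""
        if ell not in self._inv:
            return None
        return self._L((g + self._inv[ell]) % self.N)

oracle = GGAMOracle(N=101, m=256)
x = oracle.start()
y = oracle.query(x, 5)                    # label of 5 * x_lambda
assert oracle.query(y, 96) == x           # 96 + 5 = 0 mod 101: back to x_lambda
assert oracle.query("junk", 1) is None    # strings outside X_lambda map to bot
```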
### On other Styles of Generic Group Actions
Other styles of generic group action are possible. For example, [14] consider a similar model except where the group itself is also hidden behind an oracle. We might call this the _Random Group, Random Set Representation_ model. It is also possible to consider a version that is akin to Maurer's [13] generic group model, where instead of random labels for every element one only receives handles. This is the kind of model considered in [18, 15]. Following the terminology of [14], this can be called the Type Safe model. We note that it does not make much sense to consider the group as an idealized object while allowing complete access to the set. Since quantum algorithms can solve discrete logarithms, they can essentially "undo" the labelling of the group. This means any such "Random Group Representation" effectively is just a standard-model group action, defeating the purpose of considering an idealized model.
Here, we discuss why these alternate models come with limitations. First we observe that hiding the group behind an oracle puts more idealized constraints on the adversary, and is therefore a less accurate modeling of group actions.
Worse is the case of Type Safe models. In the classical generic group setting, as first proved in [10] and clarified in [11], when it comes to proving security, the Type Safe and Random Representation models can usually be treated as equivalent10. This equivalence would carry over to the classical setting for generic group actions. However, we observe that the equivalence proved in [10, 11] does _not_ hold in the quantum setting. This observation was first made, but not elaborated on, by [11].
Footnote 10: This is not the case when using the models to prove _impossibility_ results, where even classically there is a major difference between the two models.
In more detail, one direction of the equivalence -- converting an adversary in the Type Safe model into one in the Random Representation model -- is trivial, both classically and quantumly. We just use the random labels from the random representation model as the handles for the Type Safe adversary. For the other direction, the classical proof will construct a Type Safe adversary out of a Random Representation adversary by choosing the random labels itself. The challenge is that the Random Representation adversary will expect identical labels on certain related queries, namely if it computes the same element \(g*x_{\lambda}\) in multiple ways. To account for this, the Type Safe adversary maintains a table of all the queries made so far, and the labels generated for those queries. Then if it ever needs to output an element that was already produced, it can use the table to make sure it uses the same label.
In the quantum setting, maintaining this table is problematic, as it requires recording the queries made by the adversary. Quantum queries cannot be recorded without perturbing them, and if the adversary detects any disturbance it may abort and refuse to work. Such an adversary would break the classical reduction. We note that sometimes it is possible to record quantum queries [11], but the recording has to be done in careful ways that limit applications. In particular, such query recording is usually done on random oracles, and there has so far been no techniques for recording queries for complicated structures like group action oracles.
Thus, based on our current understanding, the Random Set Representation model defined above seems to be "at least as good" as any other model for group actions in the quantum setting, and may in fact be "better" than the other models. For this reason, we focus on the Random Set Representation model. We leave exploring the exact relationship between the models as an interesting open question.
Algebraic Group Action Model.In Section 5, we consider a different idealized model called the Algebraic Group Action Model, the quantum and group action version of the classical Algebraic Group Model (AGM) [13]. In the classical world, this model is "between" the Type Safe model and the standard model, in the sense that security in the algebraic model implies security in the Type Safe model (which in turn often implies security in the Random Representation model, per [11]). However, in Section 5, we explain that the quantum analog of this model is actually problematic, and the proof of "between-ness" does not hold quantumly, for similar reasons as to why the equivalence between Random Representation and Type Safe models does not appear to hold quantumly. As such, it seems that the (Random Representation) generic group action model actually _better_ captures available attacks than the algebraic group action model.
### Our Framework for Quantum GGAM Security Proofs.
Challenges with the quantum GGAM. The problem with the quantum GGAM, as observed by [10], is that we cannot hope for unconditional security results, as the discrete logarithm is easy if we only count quantum query complexity. [10] take the approach of instead considering the Algebraic Group Action Model (AGAM). We discuss the pitfalls of this approach in Section 5. Here we instead observe that we can recover a meaningful model by counting both queries and computational cost. However, because we cannot hope to prove unconditional query complexity lower bounds, we must instead resort to making computational assumptions and giving reduction-style arguments. This means arguments in the quantum GGAM will look very different from proofs in the classical GGM. To the best of our knowledge, there have been no prior security proofs in the quantum GGAM. We therefore develop some new tools and techniques for giving such proofs, including a proof of security of our quantum money scheme.
Our Abstract Framework. We first give a very abstract framework, which we will then apply to the GGAM.
Let \(\mathcal{Y}\) be a set, and \(\mathcal{F}\) be a family of functions \(f:\mathcal{Y}\to\mathcal{Y}\). Let \(y_{0}\in\mathcal{Y}\) be a specific starting element in \(\mathcal{Y}\). Consider a random injection \(L:\mathcal{Y}\to\{0,1\}^{m^{\prime}}\), and consider the oracle \(\mathcal{O}\) which maps \(\mathcal{O}(L(y),f)=L(f(y))\); \(\mathcal{O}\) outputs \(\bot\) on any string that is not in the image of \(L\). We will give the adversary \(L(y_{0})\) and also superposition access to \(\mathcal{O}\).
Now consider a set \(\mathcal{Y}^{\prime}\subset\{0,1\}^{s}\), and suppose we have a not-necessarily-random injection \(\Gamma:\mathcal{Y}\to\mathcal{Y}^{\prime}\) (meaning \(2^{s}\geq|\mathcal{Y}|\)). We also have a procedure \(P\) which is able to map \(P(\Gamma(y),f)=\Gamma(f(y))\). However, unlike the oracle \(\mathcal{O}\) considered above, this procedure \(P\) may output values other than \(\bot\) when given inputs that are not in the image of \(\Gamma\). Our goal is to, nevertheless, simulate \(\mathcal{O}\) using \(P\).
Concretely, we will choose a random injection \(\Pi:\{0,1\}^{s}\to\{0,1\}^{m^{\prime}}\), and simulate \(\mathcal{O}\) with the oracle \(\mathcal{O}^{\prime}(\Pi(z),f)=\Pi(P(z,f))\); \(\mathcal{O}^{\prime}\) will output \(\bot\) on any input not in the image of \(\Pi\). We will then give the adversary \(\Pi(\Gamma(y_{0}))\), and quantum query access to \(\mathcal{O}^{\prime}\).
Application to the GGAM.In our case, we will have \(\mathcal{Y}\) be a group \(\mathbb{G}_{\lambda}\). \(\mathcal{F}\) will include for each \(h\in\mathbb{G}_{\lambda}\) the map \(g\mapsto h+g\). The distinguished element \(y_{0}\) is just \(0\in\mathbb{G}_{\lambda}\). In this way, \(\mathcal{O}\) becomes the generic group action oracle, with labeling function \(L\). However, we also include extra operations in \(\mathcal{F}\), the exact operations will depend on the application.
Our goal will be to simulate \(\mathcal{O}\), the generic group action oracle with extra operations, using only a plain group action \((\mathbb{G},\mathcal{X},*)\). \((\mathbb{G},\mathcal{X},*)\) could be a standard-model group action, or perhaps a plain generic group action. We will assume \(\mathcal{X}_{\lambda}\subseteq\{0,1\}^{m}\) for some polynomial \(m=m(\lambda)\). This "base" group action will be the source of hardness. We will therefore make some hopefully simple and mild computational assumptions about \((\mathbb{G},\mathcal{X},*)\), and hope to derive useful hardness results about the expanded group action \(\mathcal{O}\).
To do so, we will let \(\mathcal{Y}^{\prime}=\mathcal{X}_{\lambda}^{\otimes k}\) for some \(k\). We will also choose some integers \(c_{1},\ldots,c_{k}\) whose GCD is \(1\), and starting set elements \(y_{1},\ldots,y_{k}\). Then define \(\Gamma(g)=((c_{1}g)*y_{1},(c_{2}g)*y_{2},\cdots,(c_{k}g)*y_{k})\). Since the GCD of the \(c_{i}\) is \(1\), the map \(\Gamma(g)\) is injective.
For \(f\) corresponding to adding group element \(h\), we can set \(P((z_{1},\ldots,z_{k}),h)=((c_{1}h)*z_{1},\cdots,(c_{k}h)*z_{k})\). Note that this will have the correct effect, as \(P(\Gamma(g),h)=\Gamma(h+g)\). For simulating other functions \(f\in\mathcal{F}\), we will rely on other transformations to the vector \((z_{1},\ldots,z_{k})\), which will depend on the application.
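A classical sketch (ours, with a toy base action of \(\mathbb{Z}_{N}\) on itself by addition) of this simulation pattern: the reduction fixes \(\Gamma\) and \(P\), samples \(\Pi\) lazily, hands out \(\Pi(\Gamma(0))\), and answers group-action queries through \(\mathcal{O}^{\prime}(\Pi(z),h)=\Pi(P(z,h))\).

```python
import secrets

# The simulation framework instantiated with a toy base action of Z_N on
# itself by addition, constants c = (1, -1), and starting elements y = (0, 0).
# The real reductions choose c and y per proof; everything here is illustrative.
N = 97
c = (1, -1)                              # GCD 1, so Gamma is injective
y = (0, 0)

def star(g, x):                          # the "base" group action
    return (g + x) % N

def Gamma(g):
    return tuple(star(cc * g % N, yy) for cc, yy in zip(c, y))

def P(z, h):                             # satisfies P(Gamma(g), h) = Gamma(g + h)
    return tuple(star(cc * h % N, zz) for cc, zz in zip(c, z))

labels = {}                              # lazily sampled random injection Pi
def Pi(z):
    if z not in labels:
        labels[z] = secrets.token_hex(16)
    return labels[z]

def O_prime(ell, h):                     # the simulated generic-group-action oracle
    z = next((zz for zz, ll in labels.items() if ll == ell), None)
    return None if z is None else Pi(P(z, h))

ell0 = Pi(Gamma(0))                      # the label handed to the adversary
ell5 = O_prime(ell0, 5)
assert ell5 == Pi(Gamma(5))              # acting by 5 lands on the label of 5
assert O_prime(ell5, 3) == Pi(Gamma(8))  # compatibility of the simulated action
```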
### Correctness of the Simulation.
**Lemma 4.1**.: _Fix \(y_{0},\mathcal{Y},\mathcal{Y}^{\prime},\Gamma,\mathcal{F}\) as above. Assume \(m^{\prime}\geq s+t\) for some \(t\). Then consider any quantum algorithm \(\mathcal{A}\) which makes \(q\) quantum queries to its oracle. Then:_
\[\left|\Pr\left[\mathcal{A}^{\mathcal{O}}(L(y_{0}))=1\right]-\Pr\left[\mathcal{ A}^{\mathcal{O}^{\prime}}(\Pi(\Gamma(y_{0})))=1\right]\right|<O(q\times 2^{-t/2})\]
_Above, \(L,\Pi\) are random injections, with \(\mathcal{O},\mathcal{O}^{\prime}\) being derived from them as above. The probabilities are over the random choice of \(L,\Pi\) and the randomness of \(\mathcal{A}\). Note that our order of quantifiers allows \(\mathcal{A}\) to depend on \(y_{0},\mathcal{Y},\mathcal{Y}^{\prime},\Gamma,\mathcal{F}\)._
Proof.: We prove security via a sequence of hybrids.
Hybrid 0.This is the case where we run \(\mathcal{A}^{\mathcal{O}}(L(y_{0}))\) where \(L:\mathcal{Y}\to\{0,1\}^{m^{\prime}}\) is uniform random injection. Let \(p_{0}\) be the probability of outputting \(1\).
Hybrid 1. Here, we run \(\mathcal{A}^{\mathcal{O}}(L(y_{0}))\), except that we set \(L\) to be the function \(L(y)=\Pi(\Gamma(y))\), where \(\Pi\) is a random injection. But since \(\Gamma\) is an injection, this means \(L\) is a random injection anyway, so the distribution of \(L\) and hence \(\mathcal{O}\) is identical to **Hybrid 0**. Therefore, if we let \(p_{1}\) be the probability \(\mathcal{A}\) outputs \(1\) in **Hybrid 1**, we have \(p_{1}=p_{0}\). Observe that \(L(y_{0})=\Pi(\Gamma(y_{0}))\).
Hybrid 2. Here, we run \(\mathcal{A}^{\mathcal{O}^{\prime}}(\Pi(\Gamma(y_{0})))\). Let \(p_{2}\) be the probability of outputting \(1\). On all points that \(\mathcal{O}\) accepts, \(\mathcal{O}^{\prime}\) behaves identically. Likewise, on any point that \(\mathcal{O}^{\prime}\) rejects, \(\mathcal{O}\) rejects as well. The only difference between this and **Hybrid 1** is that here, \(\mathcal{O}^{\prime}\) may accept elements that were rejected by \(\mathcal{O}\), namely elements that are in the image of \(\Pi\) but not in the image of \(L=\Pi\circ\Gamma\). We will show that these potential changes are nevertheless undetectable except with small probability.
Consider running \(\mathcal{A}^{\mathcal{O}}(L(y_{0}))\) where \(L(y)=\Pi(\Gamma(y))\) as in **Hybrid 1**. However, we only sample \(\Pi\) on inputs \(z\) that are in the image of \(\Gamma\); for all other inputs \(z\), \(\Pi\) remains unspecified. Observe that **Hybrid 1** never needs to evaluate \(\Pi\) on \(z\) outside of the image of \(\Gamma\), since the oracle \(\mathcal{O}\) will anyway reject in these cases. Let \(S\subseteq\{0,1\}^{m^{\prime}}\) be the set of images of \(\Pi\) sampled so far.
Now imagine simulating the rest of \(\Pi\). Let \(T\subset\{0,1\}^{m^{\prime}}\) be the set of images of \(\Pi\) for \(z\in\mathcal{Y}^{\prime}\) that are not in the image of \(\Gamma\). Observe that \(T\) is a random subset of size \(|\mathcal{Y}^{\prime}|-|\mathcal{Y}|\leq|\mathcal{Y}^{\prime}|\leq 2^{s}\). We now observe that the only points where \(\mathcal{O}\) and \(\mathcal{O}^{\prime}\) differ are on pairs \((\ell,f)\) for \(\ell\in T\): for \(\ell\in S\), the two faithfully compute the same function and are identical, while for \(\ell\notin T\cup S\), both output \(\bot\).
From here, concluding that \(p_{1}\) and \(p_{2}\) are close is a standard argument. The expected query weight of each query in **Hybrid 1** on points \((\ell,f)\) for \(\ell\in T\) is at most \(|T|/2^{m^{\prime}}\leq 2^{-t}\), so the expected total query weight over all \(q\) queries is at most \(q\times 2^{-t}\). Then via standard results in quantum query complexity [1], the difference in acceptance probabilities \(|p_{1}-p_{2}|\) is at most \(O(\sqrt{q^{2}2^{-t}})=O(q\times 2^{-t/2})\). Thus \(|p_{0}-p_{2}|\leq O(q\times 2^{-t/2})\), as desired.
Next, we recall a lemma that shows that random injections can be simulated quantumly:
**Lemma 4.2** ([10]).: _Random injections with quantum query access can be simulated efficiently._
With Lemmas 4.1 and 4.2 in hand, we now turn to security proofs in the GGAM.
### Group Actions with Twists
In group actions based on isogenies, it is possible to compute a "twist", which maps \(g*x_{\lambda}\mapsto(-g)*x_{\lambda}\). It is straightforward to update our notion of group action and generic group action to incorporate twists. Let \(\mathsf{GGAM}^{\pm}_{\mathbb{G},m}\) denote the generic group action with twists relative to group \(\mathbb{G}\) and with label length \(m\). Such twists effectively allow for the dihedral group to act on the set \(\mathcal{X}_{\lambda}\). An important question is whether having this larger (non-abelian) group act on \(\mathcal{X}_{\lambda}\) can be damaging for security. Here, we show that, at least generically, the existence of twists plausibly has little impact on security.
Assumptions with Negation.We consider variants of standard assumptions on group actions where additional "negation" elements are given out. For example:
**Assumption 4.3**.: The _discrete log assumption with negation_ (\(\mathrm{DLog}^{\pm}\)) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\Pr[\mathcal{A}(g*x_{\lambda},(-g)*x_{\lambda})=g:g\leftarrow\mathbb{G}_{ \lambda}]\leq\mathsf{negl}(\lambda)\enspace.\]
**Assumption 4.4**.: The _computational Diffie-Hellman assumption with negation_ (\(\mathrm{CDH}^{\pm}\)) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\Pr\left[\mathcal{A}\left(\begin{subarray}{c}a*x_{\lambda},b*x_{\lambda},\\ (-a)*x_{\lambda},(-b)*x_{\lambda}\end{subarray}\right)=(a+b)*x_{\lambda}:a,b\leftarrow\mathbb{G}_{\lambda}\right]\leq\mathsf{negl}(\lambda)\enspace.\]
**Assumption 4.5**.: The _decisional Diffie-Hellman assumption with negation_ (\(\mathrm{DDH}^{\pm}\)) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\left|\Pr\left[\mathcal{A}\left(\begin{subarray}{c}a*x_{\lambda},b*x_{\lambda},c*x_{\lambda},\\ (-a)*x_{\lambda},(-b)*x_{\lambda},(-c)*x_{\lambda}\end{subarray}\right)=1:a,b,c\leftarrow\mathbb{G}_{\lambda}\right]\right.\] \[\left.-\Pr\left[\mathcal{A}\left(\begin{subarray}{c}a*x_{\lambda},b*x_{\lambda},(a+b)*x_{\lambda},\\ (-a)*x_{\lambda},(-b)*x_{\lambda},(-a-b)*x_{\lambda}\end{subarray}\right)=1:a,b\leftarrow\mathbb{G}_{\lambda}\right]\right|\leq\mathsf{negl}(\lambda)\enspace.\]
Note that the \({}^{\pm}\) versions of DLog, \(\mathrm{CDH}\), \(\mathrm{DDH}\) imply their ordinary counterparts. Moreover, the assumptions are _equivalent_ to the ordinary versions on group actions with twists. Also, note that, while [13] prove the equivalence of ordinary DLog and \(\mathrm{CDH}\), their proof does not necessarily apply to the \({}^{\pm}\) versions, and an equivalence between the \({}^{\pm}\) versions would be incomparable to their result, since it would start from a stronger assumption but also reach a stronger conclusion.
Our Result. We now show that the negation assumptions allow us to lift to security under twists, generically.
**Theorem 4.6**.: _Let \((\mathbb{G},\mathcal{X},*)\) be a group with \(\mathcal{X}\subseteq\{0,1\}^{m}\) such that \(\mathrm{DLog}^{\pm}\) (resp. \(\mathrm{CDH}^{\pm}\), \(\mathrm{DDH}^{\pm}\)) holds. Let \(m^{\prime}\geq 2m+\omega(\log\lambda)\). Then \(\mathrm{DLog}^{\pm}\) (resp. \(\mathrm{CDH}^{\pm}\), \(\mathrm{DDH}^{\pm}\)) hold in \(\mathsf{GGAM}^{\pm}_{\mathbb{G},m^{\prime}}\), the GGAM with twists relative to group \(\mathbb{G}\) and with label length \(m^{\prime}\)._
Proof.: We prove the case of \(\mathrm{DDH}\), the other proofs being nearly identical. Let \(\mathcal{A}^{\mathsf{GGAM}^{\pm}_{\mathbb{G},m^{\prime}}}\) be a supposed adversary for \(\mathrm{DDH}^{\pm}\) in \(\mathsf{GGAM}^{\pm}_{\mathbb{G},m^{\prime}}\), the GGAM with twists and with label length \(m^{\prime}\). Let \(\epsilon\) be the distinguishing advantage of \(\mathcal{A}\), and \(q\) the polynomial number of queries. We construct a new adversary \(\mathcal{B}\) for \(\mathrm{DDH}^{\pm}\) in the group action \((\mathbb{G},\mathcal{X},*)\) as follows.
* \(\mathcal{B}\), on input \(u^{+},v^{+},w^{+},u^{-},v^{-},w^{-}\), will choose a random injective function \(\Pi\) from \(\{0,1\}^{2m}\to\{0,1\}^{m^{\prime}}\). To make \(\mathcal{B}\) efficient, we will actually use Lemma 4.2 to efficiently simulate \(\Pi\). For simplicity in the following proof, we will treat \(\mathcal{B}\) as actually using a true random injection.
* \(\mathcal{B}\) will compute \(X=\Pi(x_{\lambda},x_{\lambda}),U=\Pi(u^{+},u^{-}),V=\Pi(v^{+},v^{-}),W=\Pi(w^{+},w ^{-})\).
* \(\mathcal{B}\) will then run \(\mathcal{A}(X,U,V,W)\)11, simulating its queries as follows:
Footnote 11: Recall that in the definition of DDH, the adversary is only given \(U,V,W\). However, in the generic group action model, we additionally give all parties the starting point \(X\).
* For queries to the group action \((\ell,g)\), \(\mathcal{B}\) simulates the query by computing \((z_{1},z_{2})\leftarrow\Pi^{-1}(\ell)\), and then returning \(\Pi(g*z_{1},(-g)*z_{2})\). For superposition queries, \(\mathcal{B}\) simply runs this computation in superposition. Note that if we let \(\Gamma(g)=(g*x_{\lambda},(-g)*x_{\lambda})\), then \(\mathcal{B}\) simulates these queries exactly as prescribed above in our general framework, for constants \(c_{1}=1,c_{2}=-1\) and \(y_{1}=y_{2}=x_{\lambda}\).
* When \(\mathcal{A}\) makes a twist query on label \(\ell\), \(\mathcal{B}\) computes \((z_{1},z_{2})\leftarrow\Pi^{-1}(\ell)\), and then computes \(\ell^{\prime}=\Pi(z_{2},z_{1})\) and responds with \(\ell^{\prime}\). For superposition queries, \(\mathcal{B}\) simply runs this computation in superposition. Observe that the twist of \(\Pi(\Gamma(g))\) is exactly \(\Pi(\Gamma(-g))\).
* \(\mathcal{B}\) then outputs whatever \(\mathcal{A}\) outputs.
We now prove security via a sequence of hybrids.
Hybrid 0. Here, we run \(\mathcal{A}^{\mathcal{O}}(X,U=a*X,V=b*X,W=c*X)\) for a random injection \(L\), where \(X=L(0)\), \(a,b,c\) are uniform in \(\mathbb{G}_{\lambda}\), and \(*\) denotes the action defined by \(\mathcal{O}\). Let \(p_{0}\) be the probability \(\mathcal{A}\) outputs \(1\).
Hybrid 1. Here, \(\mathcal{B}\) is given \(u^{+},v^{+},w^{+},u^{-},v^{-},w^{-}=a*x_{\lambda},b*x_{\lambda},c*x_{\lambda},(-a)*x_{\lambda},(-b)*x_{\lambda},(-c)*x_{\lambda}\), and simulates \(\mathcal{A}\) as described above. Let \(p_{1}\) be the probability \(\mathcal{A}\) (and hence \(\mathcal{B}\)) outputs \(1\). Observe that \(X,U,V,W=L(0),L(a),L(b),L(c)\), where \(L\) is the implicit labeling function \(L(g)=\Pi(g*x_{\lambda},(-g)*x_{\lambda})\). Since \(\mathcal{B}\) simulates twist queries by mapping \(L(g)\mapsto L(-g)\), \(\mathcal{B}\) correctly simulates the view of \(\mathcal{A}\) in **Hybrid 0**, except that \(\mathcal{O}^{\prime}\) and the twist oracle operate on values \(\Pi(z_{1},z_{2})\) that might not be in the image of \(L\). But we can invoke Lemma 4.1 to conclude that \(|p_{0}-p_{1}|\leq O(q\times 2^{(2m-m^{\prime})/2})=q\times\mathsf{negl}(\lambda)=\mathsf{negl}(\lambda)\).
Hybrid 2. Here, \(\mathcal{B}\) is given \(u^{+},v^{+},w^{+},u^{-},v^{-},w^{-}=a*x_{\lambda},b*x_{\lambda},(a+b)*x_{\lambda},(-a)*x_{\lambda},(-b)*x_{\lambda},(-a-b)*x_{\lambda}\), and simulates \(\mathcal{A}\) as described above. Let \(p_{2}\) be the probability \(\mathcal{A}\) (and hence \(\mathcal{B}\)) outputs \(1\). By Assumption 4.5, \(|p_{1}-p_{2}|\leq\mathsf{negl}(\lambda)\).
Hybrid 3. Now we run \(\mathcal{A}^{\mathcal{O}}(X,U=a*X,V=b*X,W=(a+b)*X)\). Let \(p_{3}\) be the probability \(\mathcal{A}\) outputs \(1\). By an argument similar to the one for going from **Hybrid 0** to **Hybrid 1**, we conclude that \(|p_{2}-p_{3}|\leq\mathsf{negl}(\lambda)\). Piecing everything together, we have that \(\epsilon=|p_{0}-p_{3}|\leq\mathsf{negl}(\lambda)\), thereby proving \(\mathsf{DDH}^{\pm}\) holds in \(\mathsf{GGAM}^{\pm}_{\mathbb{G},m^{\prime}}\).
### Computing Banknotes With Complementary Serial Numbers
Here, we prove that it is hard in the generic group action model to compute two banknotes for our scheme with "complementary" serial numbers that sum to zero.
**Theorem 4.7**.: _Let \((\mathbb{G},\mathcal{X},*)\) be a group with \(\mathcal{X}\subseteq\{0,1\}^{m}\) such that DDH holds (Assumption 2.5). Let \(m^{\prime}\geq 4m+\omega(\log\lambda)\). Let \((\mathsf{Gen}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}},\mathsf{Ver}^{\mathsf{ GGAM}_{\mathbb{G},m^{\prime}}})\) be the quantum money construction from Construction 3.1, using the generic group action \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\). Consider a QPT adversary \(\mathcal{B}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\) making queries to \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\), which takes as input the security parameter \(\lambda\), and outputs a serial number \(h\in\mathbb{G}_{\lambda}\) and two potentially entangled states \(\$_{1},\$_{2}\), which it tries to pass off as two banknotes. For all such \(\mathcal{B}\), there exists a negligible \(\mathsf{negl}(\lambda)\) such that the following holds:_
\[\Pr\left[\mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(h,\$_{1})= \mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(-h,\$_{2})=1:(h,\$_{1}, \$_{2})\leftarrow\mathcal{B}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(1^{ \lambda})\right]\leq\mathsf{negl}(\lambda)\enspace.\]
Notice that the statement above is _almost_ the statement that \((\mathsf{Gen}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}},\mathsf{Ver}^{\mathsf{ GGAM}_{\mathbb{G},m^{\prime}}})\) is a quantum lightning scheme, except that the second banknote is verified with respect to \(-h\) instead of \(h\). Theorem 4.7 is therefore not quite enough to prove the security of our scheme, since it could be the case that it is possible to output many banknotes with the same serial number, even if it is impossible to output two with complementary numbers. We give a different proof below in Section 4.5 based on a stronger assumption which proves our scheme quantum lightning. We use the result here as a warm-up to our later result, which is based on a more complex assumption. Moreover, Theorem 4.7 lets us prove that it is generically hard to output the uniform superposition \(\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}|L(g)\rangle\), which is just the banknote \(|\mathbb{G}_{\lambda}^{0}*L(0)\rangle\) with serial number \(0\). We state and prove this fact before proving Theorem 4.7.
**Corollary 4.8**.: _Let \((\mathbb{G},\mathcal{X},*)\) be a group with \(\mathcal{X}\subseteq\{0,1\}^{m}\) such that DDH holds (Assumption 2.5). Let \(m^{\prime}\geq 4m+\omega(\log\lambda)\). Let \(L\) be the labeling function for the generic group action \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\). Then for any QPT adversary \(\mathcal{A}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\) making queries to \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\) which outputs a state \(\rho\), there exists a negligible \(\mathsf{negl}(\lambda)\) such that \(\langle\mathbb{G}_{\lambda}^{0}*L(0)|\rho|\mathbb{G}_{\lambda}^{0}*L(0)\rangle \leq\mathsf{negl}(\lambda)\)._
Proof.: Consider an adversary \(\mathcal{A}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\) outputting a mixed state \(\rho\) and let \(\epsilon=\langle\mathbb{G}_{\lambda}^{0}*L(0)|\rho|\mathbb{G}_{\lambda}^{0}*L(0)\rangle\). We will assume for simplicity that we can project exactly onto \(|\mathbb{G}_{\lambda}^{0}*L(0)\rangle\); using our verifier from Section 3 introduces negligible error that is easily accounted for. By applying this projection to \(\rho\), we have that \(\mathcal{A}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\) outputs \(|\mathbb{G}_{\lambda}^{0}*L(0)\rangle\) with probability \(\epsilon\). We will therefore assume we have the state \(|\mathbb{G}_{\lambda}^{0}*L(0)\rangle\).
Apply in superposition the map \(|x\rangle\mapsto|x,x\rangle\). Now we have the state
\[\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}|L(g),L( g)\rangle\]
We can equivalently write this state as:
\[\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h\in\mathbb{G}_{\lambda}}|\mathbb{ G}_{\lambda}^{h}*L(0)\rangle|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\]
We therefore apply our algorithm \(\mathsf{Findh}\) from Theorem 3.6 to the first register. The output will be a random serial number \(h\), and the state will collapse to \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\). We output this, which solves the problem in Theorem 4.7. Thus, we conclude that \(\epsilon\) must be negligible.
We now turn to proving Theorem 4.7.
Proof.: Consider an adversary \(\mathcal{B}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\), and define:
\[\epsilon:=\Pr\left[\mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(h, \$_{1})=\mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(-h,\$_{2})=1:(h, \$_{1},\$_{2})\leftarrow\mathcal{B}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(1 ^{\lambda})\right]\]
We will assume that \(\mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(h,\$)\) projects onto the correct banknote \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\); this assumption only introduces a negligible error which is easily accounted for. Therefore, with probability \(\epsilon\), \(\mathcal{B}\) outputs \(h\) and exactly the states \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle,|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\).
We now construct an adversary \(\mathcal{A}\) for DDH on the group action \((\mathbb{G},\mathcal{X},*)\). \(\mathcal{A}\), on input \((u,v,w)\), will choose a random injection \(\Pi:\{0,1\}^{4m}\to\{0,1\}^{m^{\prime}}\). It will then compute \(X=\Pi(x_{\lambda},u,v,w)\). \(\mathcal{A}\) will then run \(\mathcal{B}(X)\), simulating its queries \((\ell,g)\) to the group action as follows: compute \((z_{1},z_{2},z_{3},z_{4})\leftarrow\Pi^{-1}(\ell)\), and then return \(\Pi(g*z_{1},(-g)*z_{2},g*z_{3},(-g)*z_{4})\). For superposition queries, \(\mathcal{A}\) simply runs this computation in superposition. Note that if we let \(\Gamma(g)=(g*x_{\lambda},(-g)*u,g*v,(-g)*w)\), then \(\mathcal{A}\) simulates these queries exactly as prescribed above in our general framework, for constants \(c_{1}=1,c_{2}=-1,c_{3}=1,c_{4}=-1\) and \((y_{1},y_{2},y_{3},y_{4})=(x_{\lambda},u,v,w)\).
Finally, when \(\mathcal{B}\) produces serial number \(h\) and banknotes \(\$_{1},\$_{2}\), \(\mathcal{A}\) does the following:
* Run \(\mathsf{Ver}^{\mathcal{O}^{\prime}}(h,\$_{1})\) and \(\mathsf{Ver}^{\mathcal{O}^{\prime}}(-h,\$_{2})\), answering the queries of \(\mathsf{Ver}\) using the simulated group action oracle. If either run rejects, output a random bit. Otherwise, let \(\$_{1}^{\prime},\$_{2}^{\prime}\) be the resulting states of the verifier.
* In superposition, it applies the following map \(\ell\mapsto\ell^{\prime}\) to \(\$_{2}^{\prime}\):
* First map \(\ell\mapsto\Pi^{-1}(\ell)=(z_{1},z_{2},z_{3},z_{4})\)
* Now map \((z_{1},z_{2},z_{3},z_{4})\mapsto\ell^{\prime}=\Pi(z_{2},z_{1},z_{4},z_{3})\). Let \(\$_{2}^{\prime\prime}\) be the result of this map.
* Apply the swap test to \(\$_{1}^{\prime},\$_{2}^{\prime\prime}\), outputting whatever the swap test outputs.
By applying Lemma 4.1, we can conclude that \(\$_{1},\$_{2}\) are actually superpositions over elements of the form \(L(g)=\Pi(g*x_{\lambda},(-g)*u,g*v,(-g)*w)\) for varying \(g\). Then using our characterization of the accepting states of \(\mathsf{Ver}\), we see that both runs of \(\mathsf{Ver}\) simultaneously accept with probability \(\epsilon\), and in this case \(\$_{1}^{\prime}=|\mathbb{G}_{\lambda}^{h}*L(0)\rangle,\$_{2}^{\prime}=|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\).
We must analyze the effect of the map \(\ell\mapsto\ell^{\prime}\) on \(|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\). We break into two cases:
* \(u=a*x_{\lambda},v=b*x_{\lambda},w=(a+b)*x_{\lambda}\). Let \(\ell=L(g)=\Pi(g*z_{1},(-g)*z_{2},g*z_{3},(-g)*z_{4})=\Pi(g*x_{\lambda},(a-g)*x_{\lambda},(b+g)*x_{\lambda},(a+b-g)*x_{\lambda})\), which maps to \(\ell^{\prime}=\Pi((a-g)*x_{\lambda},g*x_{\lambda},(a+b-g)*x_{\lambda},(b+g)*x_{\lambda})=L(a-g)\). Therefore, \(|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle\) maps to \[|\mathbb{G}_{\lambda}^{-h}*L(0)\rangle =\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(g,-h)|L(g)\rangle\] \[\mapsto\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(g,-h)|L(a-g)\rangle\] \[=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(a-g,-h)|L(g)\rangle\] \[=\chi(a,-h)\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(g,h)|L(g)\rangle\] \[=\chi(a,-h)|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\]
Thus, in this case, \(\mathcal{A}\) obtains two copies of \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\), which the swap test will accept with probability \(1\). Therefore, the probability \(\mathcal{A}\) outputs \(1\) is \(\frac{1}{2}(1-\epsilon)+\epsilon=\frac{1+\epsilon}{2}\).
* \(u=a*x_{\lambda},v=b*x_{\lambda},w=c*x_{\lambda}\) with \(c\neq a+b\). In this case, \(\ell=L(g)=\Pi(g*x_{\lambda},(a-g)*x_{\lambda},(b+g)*x_{\lambda},(c-g)*x_{\lambda})\) maps to \(\ell^{\prime}=\Pi((a-g)*x_{\lambda},g*x_{\lambda},(c-g)*x_{\lambda},(b+g)*x_{\lambda})\). However, \(\ell^{\prime}\) is _not_ equal to \(L(g^{\prime})\) for any \(g^{\prime}\). Indeed, in order for \(\ell^{\prime}=L(g^{\prime})\), we get several equations: \[g^{\prime}=a-g\quad a-g^{\prime}=g\quad b+g^{\prime}=c-g\quad c-g^{\prime}=b+g\] The first two equations require that \(g^{\prime}=a-g\), while the last two require that \(g^{\prime}=c-b-g\neq a-g\). Hence, the state \(\$_{2}^{\prime\prime}\) has disjoint support from the state \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\), and hence is orthogonal to it. Therefore, the swap test will accept with probability exactly \(1/2\). The overall probability \(\mathcal{A}\) outputs \(1\) is therefore exactly \(1/2\).
Thus, we see that \(\mathcal{A}\) has advantage \(\epsilon/2\) in distinguishing DDH, breaking the assumption.
### Security of our Quantum Lightning Scheme
Here, we prove the generic security of our quantum lightning scheme (Construction 3.1). We do not know how to prove security under any standard group action-based assumption. We instead introduce a novel assumption that appears plausible, but needs extra cryptanalysis to be certain.
The Decisional 2x Assumption (D2X). A classical "Diffie-Hellman Exponent" assumption is to distinguish \(g^{a},g^{a^{2}}\) from \(g^{a},g^{b}\) for uniform \(a,b\). The group action equivalent would be to distinguish \(a*x_{\lambda},(2a)*x_{\lambda}\) from \(a*x_{\lambda},b*x_{\lambda}\) for uniform \(a,b\in\mathbb{G}_{\lambda}\). Our assumption builds on this one. However, we need something a bit stronger. In particular, we need not just the set element \((2a)*x_{\lambda}\) or \(b*x_{\lambda}\), but the ability to query on an _arbitrary_ set element \(y\) and receive \((2a)*y\) or \(b*y\). In the classical group setting, this would correspond to receiving \(g^{a}\), and then being able to query the function \(h\mapsto h^{a^{2}}\) or \(h\mapsto h^{b}\).
Note that if we allow arbitrary queries to this oracle, the problem is _easy_ in many cases. In particular, suppose \(\mathbb{G}_{\lambda}\) has odd order \(2t-1\). Then by querying the oracle \(t\) times, we can compute \(y_{1}=(2a)*x_{\lambda},y_{2}=(2a)*y_{1}=(4a)*x_{\lambda},\cdots\), ultimately computing \(y_{t}=(2ta)*x_{\lambda}=a*x_{\lambda}\). On the other hand, if the oracle maps \(y\mapsto b*y\) for a random \(b\), then \(y_{t}=(tb)*x_{\lambda}\neq a*x_{\lambda}\) (except with negligible probability). This allows for distinguishing the two cases.
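A quick numerical check (ours, carried out directly on the exponents) of this repeated-doubling attack:

```python
# When |G| = N = 2t - 1 is odd, t applications of "add 2a" starting from the
# exponent 0 land exactly on a, i.e., y_t = a * x_lambda.
N = 1013                  # odd group order, so N = 2t - 1 with t = 507
t = (N + 1) // 2
a = 345
exponent = 0              # the exponent of x_lambda we currently hold
for _ in range(t):
    exponent = (exponent + 2 * a) % N     # one oracle call: y -> (2a) * y
assert exponent == a      # y_t = (2ta) * x_lambda = a * x_lambda
print("recovered a * x_lambda after", t, "queries")
```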
Therefore, we only allow a _single_ query to the oracle. In this case, a single query does not appear sufficient for breaking the assumption. The adversary, on input \(u=a*x_{\lambda}\), can send \(u\) to the oracle, receiving \((3a)*x_{\lambda}\) or \((a+b)*x_{\lambda}\). Or it can send \(x_{\lambda}\) to the oracle, receiving \((2a)*x_{\lambda}\) or \(b*x_{\lambda}\). It can also act on these elements by known constants, computing either \((2a+c)*x_{\lambda},(3a+d)*x_{\lambda}\), or \((b+c)*x_{\lambda},(a+b+d)*x_{\lambda}\). It can also act on the original element \(u\), and also on \(x_{\lambda}\) by known constants, receiving \((a+e)*x_{\lambda},f*x_{\lambda}\). Intuitively, it seems the only way the adversary can distinguish between these cases is to find constants \(c,d,e,f\) that cause a collision between elements when the oracle acts by \(2a\), but no collision when the oracle acts by \(b\). However, for any constants \(c,d,e,f\), the probability of a collision occurring in either case is negligible. Based on this intuitive argument, it is possible to prove that this assumption is generically hard against _classical_ algorithms. We do not, however, know if there is a clever quantum algorithm that breaks the assumption. However, it seems plausible that there is no such efficient quantum algorithm.
We will also allow the query to be quantum, and for technical reasons, we will use an _in place_ oracle, meaning \(\sum_{g}\alpha_{g}|g*x_{\lambda}\rangle\mapsto\sum_{g}\alpha_{g}|(2a+g)*x_{\lambda}\rangle\), as opposed to using the "standard" oracle which maps \(\sum_{g,y}\alpha_{g,y}|g*x_{\lambda},y\rangle\mapsto\sum_{g,y}\alpha_{g,y}|g*x_{\lambda},y\oplus((g+2a)*x_{\lambda})\rangle\).
**Assumption 4.9**.: The Decisional 2X Assumption with minimal oracle (D2X/min) assumption holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\left|\Pr\left[\mathcal{A}^{M^{1}_{2a}}(a*x_{\lambda})=1:a\leftarrow\mathbb{G}_{\lambda}\right]-\Pr\left[\mathcal{A}^{M^{1}_{b}}(a*x_{\lambda})=1:a,b\leftarrow\mathbb{G}_{\lambda}\right]\right|\leq\mathsf{negl}(\lambda)\enspace.\]
Above, \(M_{c}\) is the in-place (or "minimal") oracle mapping \(y\mapsto c*y\), and \(M^{1}_{c}\) means the adversary can make only a single query to \(M_{c}\).
If we insist on standard oracles, we can instead utilize the following assumption:
**Assumption 4.10**.: The Decisional 2X Assumption with standard oracle (D2X/std) assumption holds on a group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}(\lambda)\) such that
\[\left|\Pr\left[\mathcal{A}^{S^{1}_{2a},S^{1}_{-2a}}(a*x_{\lambda})=1:a\leftarrow\mathbb{G}_{\lambda}\right]-\Pr\left[\mathcal{A}^{S^{1}_{b},S^{1}_{-b}}(a*x_{\lambda})=1:a,b\leftarrow\mathbb{G}_{\lambda}\right]\right|\leq\mathsf{negl}(\lambda)\enspace.\]
Above, \(S_{c}\) is the standard oracle mapping \((y,z)\mapsto(y,z\oplus(c*y))\), and \(S^{1}_{c}\) means the adversary can make only a single query to \(S_{c}\).
The following lemma is straightforward:
**Lemma 4.11**.: _If D2X/std holds on a group action \((\mathbb{G},\mathcal{X},*)\), then so does D2X/min._
Proof.: We simply use the oracles \(S^{1}_{c},S^{1}_{-c}\) to simulate the oracle \(M^{1}_{c}\) in the obvious way.
Our security proof.We now prove the generic security of our quantum lightning scheme.
**Theorem 4.12**.: _Let \((\mathbb{G},\mathcal{X},*)\) be a group with \(\mathcal{X}\subseteq\{0,1\}^{m}\) such that D2X/min holds (Assumption 4.9). Let \(m^{\prime}\geq 2m+\omega(\log\lambda)\). Let \((\mathsf{Gen}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}},\mathsf{Ver}^{\mathsf{ GGAM}_{\mathbb{G},m^{\prime}}})\) be the quantum money construction from Construction 3.1, using the generic group action \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\). Then the quantum money construction is a secure quantum lightning scheme._
Proof.: Consider an adversary \(\mathcal{B}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}\) for quantum lightning security, and let \(\epsilon\) be the probability that \(\mathcal{B}\) wins. We will assume that \(\mathsf{Ver}^{\mathsf{GGAM}_{\mathbb{G},m^{\prime}}}(h,\$)\) projects onto the correct banknote \(|\mathbb{G}^{h}_{\lambda}*L(0)\rangle\); this assumption only introduces a negligible error which is easily accounted for. Therefore, with probability \(\epsilon\), \(\mathcal{B}\) outputs \(h\) and exactly two copies of the state \(|\mathbb{G}^{h}_{\lambda}*L(0)\rangle\).
We now construct an adversary \(\mathcal{A}\) for D2X/min on the group action \((\mathbb{G},\mathcal{X},*)\). \(\mathcal{A}\), on input \(u=a*x_{\lambda}\), will choose a random injection \(\Pi:\{0,1\}^{2m}\rightarrow\{0,1\}^{m^{\prime}}\). It will then compute \(X=\Pi(x_{\lambda},u)\). \(\mathcal{A}\) will then run \(\mathcal{B}(X)\), simulating its queries \((\ell,g)\) to the group action as follows: compute \((z_{1},z_{2})\leftarrow\Pi^{-1}(\ell)\), and then return \(\Pi(g*z_{1},g*z_{2})\). For superposition queries, \(\mathcal{A}\) simply runs this computation in superposition. Note that if we let \(\Gamma(g)=(g*x_{\lambda},g*u)\), then \(\mathcal{A}\) simulates these queries exactly as prescribed above in our general framework, for constants \(c_{1}=c_{2}=1\) and \((y_{1},y_{2})=(x_{\lambda},u)\).
Finally, when \(\mathcal{B}\) produces serial number \(h\) and banknotes \(\$_{1},\$_{2}\), \(\mathcal{A}\) does the following:
* Run \(\mathsf{Ver}^{\mathcal{O}^{\prime}}(h,\$_{1})\) and \(\mathsf{Ver}^{\mathcal{O}^{\prime}}(h,\$_{2})\), answering the queries of \(\mathsf{Ver}\) using the simulated group action oracle. If either run rejects, output a random bit. Otherwise, let \(\$_{1}^{\prime},\$_{2}^{\prime}\) be the resulting states of the verifier.
* In superposition, it applies the following map \(\ell\mapsto\ell^{\prime}\) to \(\$_{2}^{\prime}\):
* First map \(\ell\mapsto\Pi^{-1}(\ell)=(z_{1},z_{2})\)
* Use the oracle \(M_{c}\) from the D2X/min assumption to replace \(z_{1}\) with \(z_{1}^{\prime}=c*z_{1}\), where \(c=2a\) or \(b\).
* Now map \((z_{1}^{\prime},z_{2})\mapsto\ell^{\prime}=\Pi(z_{2},z_{1}^{\prime})\). Let \(\$_{2}^{\prime\prime}\) be the result of this map.
* Apply the swap test to \(\$_{1}^{\prime},\$_{2}^{\prime\prime}\), outputting whatever the swap test outputs.
By applying Lemma 4.1, we can conclude that \(\$_{1},\$_{2}\) are actually superpositions over elements of the form \(L(g)=\Pi(g*z_{1},g*z_{2})\) for varying \(g\). Then using our characterization of the accepting states of \(\mathsf{Ver}\), we see that both runs of \(\mathsf{Ver}\) simultaneously accept with probability \(\epsilon\), and in this case \(\$_{1}^{\prime}=\$_{2}^{\prime}=|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\).
We must analyze the effect of the map \(\ell\mapsto\ell^{\prime}\) on \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\). We break into two cases:
* \(M_{c}\) implements the action \(y\mapsto c*y\) with \(c=2a\). Let \(\ell=L(g)=\Pi(g*z_{1},g*z_{2})=\Pi(g*x_{\lambda},(a+g)*x_{\lambda})\), which maps to \(\ell^{\prime}=\Pi((a+g)*x_{\lambda},(2a+g)*x_{\lambda})=L(a+g)\). Therefore, \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\) maps to \[|\mathbb{G}_{\lambda}^{h}*L(0)\rangle =\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(g,h)|L(g)\rangle\] \[\mapsto\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(g,h)|L(a+g)\rangle\] \[=\chi(a,-h)|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\] Thus, in this case, \(\mathcal{A}\) obtains two copies of \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\), which the swap test will accept with probability \(1\). Therefore, the probability \(\mathcal{A}\) outputs \(1\) is \(\frac{1}{2}(1-\epsilon)+\epsilon=\frac{1+\epsilon}{2}\).
* \(M_{c}\) implements the action \(y\mapsto c*y\) with \(c=b\) for a random \(b\). In this case, \(\ell=L(g)=\Pi(g*x_{\lambda},(a+g)*x_{\lambda})\) maps to \(\ell^{\prime}=\Pi((a+g)*x_{\lambda},(g+b)*x_{\lambda})\). However, \(\ell^{\prime}\) is _not_ equal to \(L(g^{\prime})\) for any \(g^{\prime}\). Indeed, in order for \(\ell^{\prime}=L(g^{\prime})\), we get two equations: \[g^{\prime}=a+g\quad a+g^{\prime}=g+b\] The first equation requires that \(g^{\prime}=a+g\), while the second requires that \(g^{\prime}=g+b-a\), which differs from \(g+a\) unless \(b=2a\), an event of negligible probability. Hence, except in this negligible case, the state \(\$_{2}^{\prime\prime}\) has disjoint support from the state \(|\mathbb{G}_{\lambda}^{h}*L(0)\rangle\), and hence is orthogonal to it. Therefore, the swap test accepts with probability \(1/2\), and the overall probability \(\mathcal{A}\) outputs \(1\) is \(1/2\), up to negligible error.
Thus, we see that \(\mathcal{A}\) has advantage negligibly close to \(\epsilon/2\) in the D2X/min game, breaking Assumption 4.9.
## 5 On Quantum Knowledge Assumptions and Algebraic Adversaries
In this section, we explore knowledge assumptions in the quantum setting, as well as the algebraic model for group actions. We find significant issues with both settings. Nevertheless, we give a second security proof for our quantum lightning scheme (Construction 3.1), this time using knowledge assumptions.
### The Knowledge of Group Element Assumption (KGEA)
Here, we discuss a new assumption that we define, called the Knowledge of Group Element Assumption (KGEA). This is an analog of the classical Knowledge of Exponent Assumption (KEA) [1], but adapted for quantum adversaries and group actions. It can also be seen as an adaptation of the Knowledge of Path assumption of [13], specialized to group actions. Despite coming from plausible origins, however, we will see that the assumption is, in fact, false. This leads to concerns over the more general Knowledge of Path assumption. We give a candidate replacement assumption that avoids our attack, but more cryptanalysis is needed to understand the new assumption.
The Knowledge of Group Element Assumption (KGEA).This assumption states, informally, that any algorithm that produces a set element \(y\) must "know" \(g\) such that \(y=g*x_{\lambda}\). Implicit in this assumption is the requirement that it is hard to obliviously sample set elements; we discuss later how to model security when oblivious sampling is possible. In the classical setting, the KGEA assumption would be formalized as follows:
**Assumption 5.1**.: The _classical knowledge of group element assumption_ (C-KGEA) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if the following is true. For any probabilistic polynomial time (PPT) adversary \(\mathcal{A}\), there exists a PPT "extractor" \(\mathcal{E}\) and a negligible \(\epsilon\) such that:
\[\Pr\left[y\in\mathcal{X}\wedge y\neq g*x_{\lambda}:\begin{subarray}{c}y\gets \mathcal{A}(1^{\lambda};r)\\ g\leftarrow\mathcal{E}(1^{\lambda},r)\end{subarray}\right]\leq\epsilon(\lambda )\enspace.\]
Above, \(r\) are the random coins given to \(\mathcal{A}\), which are also given to \(\mathcal{E}\), and the probability is taken over uniform \(r\) and any additional randomness of \(\mathcal{E}\).
In other words, if \(\mathcal{A}\) outputs any set element, it must "know" how to derive that set element from \(x_{\lambda}\), since it can compute \(g\) such that \(y=g*x_{\lambda}\) using \(\mathcal{E}\) and its random coins. Note that once the random coins are fixed, \(\mathcal{A}\) is deterministic.
As observed by [13], when moving to the quantum setting, the problem with Assumption 5.1 is that quantum algorithms do not have to flip random coins to generate randomness, and instead their output may be a measurement applied to a quantum state, the result being inherently randomized even if the quantum state is fixed. Thus there is no meaningful way to give the same random coins to \(\mathcal{E}\).
The solution used in [13] is to, instead of giving \(\mathcal{E}\) the same inputs as \(\mathcal{A}\), give \(\mathcal{E}\) the remaining state of \(\mathcal{A}\) at the _end_ of the computation. This requires some care, since an algorithm can of course forget any bit of information by simply throwing it away. A more sophisticated way to lose information is to perform other measurements on the state, say measuring in the Fourier basis. The solution in [13] is to require that \(\mathcal{A}\) makes no measurements at all, _except_ for measuring the final output. Note that the Principle of Delayed Measurement implies that it is always possible
without loss of generality to move all measurements to the final output. Then \(\mathcal{E}\) is given both the output and the remaining quantum state of \(\mathcal{A}\), and tries to compute \(g\). Note that in the classical setting, if we restrict to _reversible_ \(\mathcal{A}\), this formulation of giving \(\mathcal{E}\) the final state of \(\mathcal{A}\) is equivalent to giving \(\mathcal{E}\) the randomness, since the randomness can be computed by reversing \(\mathcal{A}\). Similar to how we can assume a quantum \(\mathcal{A}\) makes all its measurements at the end, we can always assume without loss of generality that a classical \(\mathcal{A}\) is reversible. Thus, in the classical setting these two definitions coincide. Adapting to our setting, this approach yields the following assumption:
**Assumption 5.2**.: The _quantum knowledge of group element assumption_ (Q-KGEA) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if the following is true. For any quantum polynomial time (QPT) adversary \(\mathcal{A}\) which performs no measurements except for its final output, there exists a QPT extractor \(\mathcal{E}\) and negligible \(\epsilon\) such that
\[\Pr\left[y\in\mathcal{X}\wedge y\neq g*x_{\lambda}:\begin{subarray}{c}(y,|\psi\rangle)\leftarrow\mathcal{A}(1^{\lambda})\\ g\leftarrow\mathcal{E}(y,|\psi\rangle)\end{subarray}\right]\leq\epsilon(\lambda)\enspace.\]
Above, \(y\) is considered as the output of \(\mathcal{A}\), and the only measurement applied to \(\mathcal{A}\) is the measurement of \(y\) to obtain the output.
Our Attack on Q-KGEA.Here, we show that Q-KGEA is _false_.
**Theorem 5.3**.: _On any group action where the discrete logarithm assumption holds (Assumption 2.4), Q-KGEA (Assumption 5.2) does not hold._
Proof.: Our proof will use the \(\mathsf{Findh}\) algorithm developed in Section 3.2. We first recall the functionality guaranteed by the algorithm. The algorithm takes as input the state \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle=\frac{1}{\sqrt{|\mathbb{G}_{ \lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}\chi(g,h)|g*x_{\lambda}\rangle\), and outputs \(h\), while leaving \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) intact. In other words, it maps \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\mapsto|\mathbb{G}_{\lambda}^{h}* x_{\lambda}\rangle|h\rangle\).
Now, recall that the \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) form a basis. In particular, observe that \(|x_{\lambda}\rangle=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h}|\mathbb{G} _{\lambda}^{h}*x_{\lambda}\rangle\). Therefore, we have that
\[\mathsf{Findh}|x_{\lambda}\rangle=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum _{h}|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle|h\rangle\]
We can now apply an arbitrary \(h\)-dependent phase to the state, and then uncompute \(h\). The result is that we have applied an arbitrary phase to whatever state we started from, but in the Fourier domain of the group. That is, let \(F:\mathbb{G}\mapsto\mathbb{R}\) be an arbitrary function. We can apply the phase \(|h\rangle\mapsto e^{iF(h)}|h\rangle\), and then uncompute \(h\). The result is that \(|x_{\lambda}\rangle\) maps to
\[\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h}e^{iF(h)}|\mathbb{G}_{\lambda} ^{h}*x_{\lambda}\rangle=\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g}|g*x_{\lambda }\rangle\left(\sum_{h}\chi(g,h)e^{iF(h)}\right) \tag{5.1}\]
Now suppose we apply Q-KGEA to the algorithm producing this state. When we measure the register, all we get is a sample of \(|g*x_{\lambda}\rangle\) according to some distribution, with no side information. The Q-KGEA assumption then implies an algorithm \(\mathcal{E}\) which can recover \(g\) just given \(|g*x_{\lambda}\rangle\). Therefore, if we can guarantee that measuring the state in Equation 5.1 gives a uniform choice of \(g\), then \(\mathcal{E}\) must be solving discrete logarithms, breaking Assumption 2.4.
It is not hard to devise a function \(F\) which makes the resulting sample \(g\) uniform. For example, if \(\mathbb{G}=\mathbb{Z}_{N}\) for an odd integer \(N\), we can let \(F(h)=2\pi h^{2}/N\). Then the probability of observing \(g\) is
\[\frac{1}{|\mathbb{G}_{\lambda}|^{2}}\times\left|\sum_{h}e^{i2\pi(gh+h^{2})/N} \right|^{2}=\frac{1}{|\mathbb{G}_{\lambda}|^{2}}\times|\mathbb{G}_{\lambda}|= \frac{1}{|\mathbb{G}_{\lambda}|}\]
as desired, where above we used the fact about quadratic Gauss sums that \(\sum_{h}e^{i2\pi(gh+h^{2})/N}\) has absolute value exactly \(|\mathbb{G}_{\lambda}|^{1/2}\) for odd \(N\).
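As a sanity check of the Gauss-sum calculation, the following toy numerical sketch (our own illustration with a small hypothetical modulus, not part of the proof) verifies that this choice of \(F\) makes every \(g\) equally likely to be observed.

```python
# Numerical check (toy parameters) that F(h) = 2*pi*h^2/N makes the measured
# element g*x uniform: |sum_h exp(2*pi*i*(g*h + h^2)/N)|^2 / N^2 = 1/N.
import cmath

N = 31                                    # small odd modulus (hypothetical)
for g in range(N):
    s = sum(cmath.exp(2j * cmath.pi * (g * h + h * h) / N) for h in range(N))
    prob = abs(s) ** 2 / N ** 2
    assert abs(prob - 1 / N) < 1e-9
print("every g is observed with probability 1/N =", 1 / N)
```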
Our Modified Knowledge Assumption.We propose a simple way to circumvent the attack above. Our basic observation is that, while the attack in Theorem 5.3 allows for obliviously sampling elements in arbitrary group actions, it does not appear useful for actually breaking cryptosystems. After all, all the attack is doing is sampling random set elements, which can anyway be sampled easily by choosing a random group element \(g\) and computing \(g*x_{\lambda}\). Thus, while strictly speaking violating the knowledge assumption, the attack appears useless for actually breaking cryptosystems.
More generally, for "nice" cryptographic games (which we will define shortly), in particular games that only use the group action interface and do not themselves obliviously sample elements, it seems that giving the adversary the ability to obliviously sample elements is no help in breaking the game. We therefore postulate that, for any adversary \(\mathcal{A}\) that wins such a nice game, there is a different adversary \(\mathcal{A}^{\prime}\) for which the KGEA assumption can be applied, yielding an extractor _for that \(\mathcal{A}^{\prime}\)_. Thus, even if the original \(\mathcal{A}\) can obliviously sample elements, we essentially assume that \(\mathcal{A}^{\prime}\) cannot, and therefore \(\mathcal{E}\) is possible. We now make this intuition precise.
We first introduce the notion of generic group action games. Note that we will only be interested in _games_ that are given by generic algorithms; we will always treat the adversary as non-generic.
Briefly, a generic group action game is given by an interactive algorithm ("challenger") \(\mathsf{Ch}\). \(\mathsf{Ch}\) is limited to only performing group action computations that are "generic" and only interacts with the group action through oracles implementing the group action interface. Specifically, a generic algorithm is an oracle-aided algorithm \(\mathcal{B}\) that has access to oracles \(\mathsf{GA}=(\mathsf{Start},\mathsf{Act},\mathsf{Mem})\). Here, \(\mathsf{Start}\) is the oracle that takes as input the empty query, and outputs a string \(\tilde{x}\) representing \(x_{\lambda}\). \(\mathsf{Act}\) is the oracle that takes as input a group element \(g\in\mathbb{G}_{\lambda}\) and a string \(\tilde{y}\) representing a set element \(y\), and outputs a string \(\tilde{z}\) representing \(z=g*y\). Finally, \(\mathsf{Mem}\) is a membership testing oracle, which tests whether a given string \(\tilde{x}\) represents an actual set element. From a generic game, we obtain a standard model game by implementing the oracles \(\mathsf{Start},\mathsf{Act},\mathsf{Mem}\) with the algorithms for an actual group action: \(\mathsf{Start}\) outputs the actual set element \(x_{\lambda}\), \(\mathsf{Act}\) is the group action \(*\), and \(\mathsf{Mem}\) is the membership tester for the set \(\mathcal{X}_{\lambda}\). For a concrete group action \((\mathbb{G},\mathcal{X},*)\), we denote this standard-model game by \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}\).
For any algorithm \(\mathcal{A}\), we say the algorithm \(\delta(\lambda)\)-breaks \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}\) if \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}(1^{\lambda})\) outputs \(1\) with probability at least \(\delta(\lambda)\) when interacting with \(\mathcal{A}\).
We say that \(\mathsf{Ch}\) is one-round if it sends a single classical string to \(\mathcal{A}\), and then receives a single quantum message from \(\mathcal{A}\), before deciding if \(\mathcal{A}\) wins.
We now give our modified KGEA assumption.
**Assumption 5.4**.: The _quantum modified knowledge of group element assumption_ (Q-mKGEA) holds on a group action \((\mathbb{G},\mathcal{X},*)\) if the following is true. Consider a one-round generic group action game \(\mathsf{Ch}\) and any quantum polynomial time (QPT) adversary \(\mathcal{A}\) that \(1-\delta\)-breaks \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}\) for a negligible \(\delta\). Write the message from \(\mathcal{A}\) to \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}\) as \(\rho_{1,2}\), a joint system over two registers \(1,2\). Consider measuring the first register, to obtain a set element \(y\). Denote this as \((y,|\psi\rangle)\leftarrow\mathcal{A}(1^{\lambda})\Leftrightarrow\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}(1^{\lambda})\), and analogously for other adversaries. Then for all such \(\delta,\mathcal{A},\mathsf{Ch}\), there exists another negligible \(\delta^{\prime}\), a QPT \(\mathcal{A}^{\prime}\) that also \(1-\delta^{\prime}\)-breaks \(\mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}\), and moreover there exists a QPT extractor \(\mathcal{E}\) and negligible \(\epsilon\) such that
\[\Pr\left[y\in\mathcal{X}\wedge y\neq g*x_{\lambda}:\begin{subarray}{c}(y,| \psi\rangle)\leftarrow\mathcal{A}^{\prime}(1^{\lambda})\Leftrightarrow \mathsf{Ch}^{(\mathbb{G},\mathcal{X},*)}(1^{\lambda})\\ g\leftarrow\mathcal{E}(y,|\psi\rangle)\end{subarray}\right]\leq\epsilon( \lambda)\enspace.\]
Intuitively, this assumption says that if \(\mathcal{A}\) wins some game, we might not be able to apply the KGEA extractor to it. However, there is some other \(\mathcal{A}^{\prime}\) that also wins the game, and that we _can_ apply the KGEA extractor to.
_Remark 5.5_.: Our solution with Assumption 5.4 also resolves the problem that, for group actions based on isogenies over elliptic curves, it is _classically_ possible to sample certain set elements obliviously, thus violating the plain KGEA assumption. A different remedy used in [13] explicitly assumes a probabilistic classical procedure \(S()\) for obliviously sampling set elements, and modifies the KGEA assumption so that the extractor either outputs (1) an explanation relative to \(x_{\lambda}\) _or_ (2) an explanation relative to some input \(y\) together with the random coins \(r\) that are fed into \(S\) so that \(y=S(r)\). This approach works, but is not robust, in the sense that if another sampling procedure is found, it would contradict even the modified assumption. Moreover, our attack in Theorem 5.3 shows that, when specialized to group actions, even this approach fails, since there is a quantum procedure for sampling elements that has no randomness at all, and therefore cannot be explained. Our solution is robust to new sampling procedures being found as well as our quantum sampler. Nevertheless, more cryptanalysis is needed to understand if the assumption is sound.
### Quantum Lightning Security Using Q-mKGEA
Here, we give an alternative and incomparable proof of security of our quantum lightning construction to the proof given in Section 4. Our proof here does not require generic group actions, but instead requires our Q-mKGEA assumption. Thus, it achieves a trade-off by giving a standard-model justification, but the computational assumption is more suspect.
The Discrete Log Assumption, with Help.We now define a strengthening of the Discrete Log assumption (Assumption 2.4), which allows the adversary limited query access to a computational Diffie Hellman (CDH) oracle.
**Assumption 5.6**.: We say that the _Discrete Log with a single minimal CDH query_ assumption (DLog/1-minCDH) assumption holds if the following is true. For any QPT adversary \(\mathcal{A}\) playing the following game, parameterized by \(\lambda\), there is a negligible \(\epsilon\) such that \(\mathcal{A}\) wins with probability at most \(\epsilon(\lambda)\):
* The challenger, on input \(\lambda\), chooses a random \(g\in\mathbb{G}_{\lambda}\). It sends \(\lambda\) to \(\mathcal{A}\)
* \(\mathcal{A}\) submits a superposition query \(\sum_{y\in\mathcal{X},z\in\{0,1\}^{*}}\alpha_{y,z}|y,z\rangle\). Here, \(y\) is a set element that forms the query, and \(z\) is the internal state of the adversary when making the query. The challenger responds with \(\sum_{y\in\mathcal{X},z\in\{0,1\}^{*}}\alpha_{y,z}|(-g)*y,z\rangle\). (Note that this operation is unitary and efficiently computable since \(y\mapsto(-g)*y\) is efficiently computable and efficiently reversible given \(g\).)
* The challenger sends \(g*x\) to \(\mathcal{A}\).
* \(\mathcal{A}\) outputs a guess \(g^{\prime}\) for \(g\). It wins if \(g^{\prime}=g\).
Note that Assumption 5.6 uses a "minimal" oracle for the CDH oracle, meaning it replaces \(y\) with \((-g)*y\). This is only a possibility because \(y\mapsto(-g)*y\) is reversible; otherwise the query would not be unitary. The minimal oracle, however, is somewhat non-standard. So we here define a slightly different assumption which uses "standard" oracles:
**Assumption 5.7**.: We say that the _Discrete Log with a double standard CDH query_ assumption (DLog/2-stdCDH) assumption holds if the following is true. For any QPT adversary \(\mathcal{A}\) playing the following game, parameterized by \(\lambda\), there is a negligible \(\epsilon\) such that \(\mathcal{A}\) wins with probability at most \(\epsilon(\lambda)\):
* The challenger, on input \(\lambda\), chooses a random \(g\in\mathbb{G}_{\lambda}\). It sends \(\lambda\) to \(\mathcal{A}\).
* \(\mathcal{A}\) submits a superposition query \(\sum_{y\in\mathcal{X},w,z\in\{0,1\}^{*}}\alpha_{y,w,z}|y,w,z\rangle\). Here, \(y\) is a set element that forms the query, \(w\) is a string that forms the response register, and \(z\) is the internal state of the adversary when making the query. The challenger responds with \(\sum_{y\in\mathcal{X},w,z\in\{0,1\}^{*}}\alpha_{y,w,z}|y,w\oplus[(-g)*y],z\rangle\).
* \(\mathcal{A}\) submits a second superposition query \(\sum_{y\in\mathcal{X},w,z\in\{0,1\}^{*}}\alpha_{y,w,z}|y,w,z\rangle\). The challenger responds with \(\sum_{y\in\mathcal{X},w,z\in\{0,1\}^{*}}\alpha_{y,w,z}|y,w\oplus[g*y],z\rangle\).
* The challenger sends \(g*x\) to \(\mathcal{A}\).
* \(\mathcal{A}\) outputs a guess \(g^{\prime}\) for \(g\). It wins if \(g^{\prime}=g\).
**Lemma 5.8**.: _If DLog/2-stdCDH (Assumption 5.7) holds in a group action, then so does DLog/1-minCDH (Assumption 5.6)._
Proof.: Like the proof of Lemma 4.11, Lemma 5.8 follows by using the two standard oracle queries to simulate a single minimal oracle query.
From this point forward, we will use DLog/1-minCDH as our assumption; Lemma 5.8 then shows that we could have instead used DLog/2-stdCDH.
The security proof.We are now ready to formally state and prove security.
**Theorem 5.9**.: _Assuming Q-mKGEA (Assumption 5.4) and DLog/1-minCDH (Assumption 5.6) both hold on a group action \((\mathbb{G},\mathcal{X},*)\), then Construction 3.1 is a quantum lightning scheme._
_Remark 5.10_.: Before proving Theorem 5.9, we briefly discuss how to handle the case of non-uniform attackers, since in this setting quantum lightning is insecure without some modifications. Note that even against non-uniform attackers, DLog/1-minCDH still plausibly holds. However, Q-KGEA certainly does not, as a non-uniform attacker may have a \(y\) hard-coded for which it does not know the discrete log with \(x_{\lambda}\). As discussed in Section 2, there are several possibilities.
* The first is to restrict to non-uniform attackers that only have classical advice. While classical advice does not appear to be useful in breaking Construction 3.1, it still allows for breaking Q-KGEA; thus while our scheme may be secure in this setting, the security proof would be vacuous.
* The second is to use a probabilistically generated group action, and define Q-KGEA and DLog/1-minCDH accordingly. For quantum money security, it would suffice to have Gen create the parameters of the group action and then include them in the serial number, since the serial number is generated honestly. For quantum lightning security, we would instead need the parameters to be generated by a trusted third party and then placed in a common random string (CRS).
* The final option is to use the human ignorance approach [14], where we explicitly state our security theorem as transforming a quantum lightning adversary into a Q-KGEA adversary; while such Q-KGEA adversaries exist in the non-uniform setting without a CRS, they are presumably unknown to human knowledge. As a consequence, a quantum lightning attacker, while existing, would likewise be unknown to human knowledge.
For simplicity, we state and prove Theorem 5.9 in the uniform setting; either probabilistically generating the group action or using human ignorance would require straightforward modifications.
We are now ready to prove Theorem 5.9.
Proof.: Consider a QPT quantum lightning adversary \(\mathcal{A}^{\prime}\) which breaks security with non-negligible success probability \(\epsilon\). Since an adversary can always tell if it succeeded by running \(\mathsf{Ver}\), we can run \(\mathcal{A}^{\prime}\) multiple times to boost the probability of a successful break. In particular, we can run \(\mathcal{A}^{\prime}\) for \(\lambda/\epsilon\) iterations, and except with probability \(2^{-\Theta(\lambda)}\), at least one of the runs will succeed. This allows us to conclude without loss of generality that \(\mathcal{A}^{\prime}\) has success probability \(1-2^{-\Theta(\lambda)}\). We can then invoke Q-mKGEA (Assumption 5.4) to arrive at an adversary \(\mathcal{A}\) which also breaks quantum lightning security with high success probability.
By Theorem 3.3, we know that if \(\mathcal{A}\) outputs a serial number \(h\), the states outputted are exponentially close to two copies of \(|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle\).
For simplicity in the following proof, we will assume the probability of passing verification is actually \(1\); it is straightforward to adapt the proof to the case of negligible error.
Next, we purify \(\mathcal{A}\), and assume that before measurement, \(\mathcal{A}\) outputs a pure state \(|\psi\rangle\). By our assumption that the success probability is \(1\), \(|\psi\rangle\) will have the form
\[|\psi\rangle=\sum_{h}\alpha_{h}|\phi_{h}\rangle|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle=\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{h,g_{1},g_{2}}\alpha_{h}|\phi_{h}\rangle\chi(h,g_{1}+g_{2})|g_{1}*x\rangle_{\mathcal{M}_{1}}|g_{2}*x\rangle_{\mathcal{M}_{2}}\enspace.\]
Above, \(|\phi_{h}\rangle\) are arbitrary normalized states representing whatever state the adversary contains after outputting its banknotes, and \(\sum_{h}\|\alpha_{h}\|^{2}=1\).
Now consider the adversary \(\mathcal{B}\) which first constructs \(|\psi\rangle\), and then measures the register \(\mathcal{M}_{2}\) to obtain \(y_{2}=g_{2}*x\).
**Claim 5.11**.: \(g_{2}\) _is uniform in \(\mathbb{G}\)._
Proof.: Consider additionally measuring \(\mathcal{M}_{1}\) in the basis \(\{|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle\}\). Since this measurement is on a different register than the measurement on \(\mathcal{M}_{2}\), measuring \(\mathcal{M}_{1}\) does not affect the output distribution of \(\mathcal{M}_{2}\) (though the results may be correlated). But the measurement on \(\mathcal{M}_{1}\) determines \(h\), and conditioned on \(h\), \(\mathcal{M}_{2}\) collapses to \(|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle\). Regardless of what \(h\) is, measuring \(|\mathbb{G}^{h}_{\lambda}*x_{\lambda}\rangle\) gives a uniformly random element in \(\mathcal{X}\). Thus, even without measuring \(\mathcal{M}_{1}\), the measurement of \(\mathcal{M}_{2}\) gives a uniform element in \(\mathcal{X}\).
Therefore, after measuring \(\mathcal{M}_{2}\), the state \(|\psi\rangle\) then collapses to
\[|\psi_{g_{2}*x_{\lambda}}\rangle:=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h,g_{1}}\alpha_{h}|\phi_{h}\rangle\chi(h,g_{1}+g_{2})|g_{1}*x\rangle_{\mathcal{M}_{1}}\enspace.\]
**Claim 5.12**.: _There is a QPT procedure \(\mathsf{Map}\) such that \(\mathsf{Map}(g,|\psi_{y}\rangle)=|\psi_{g*y}\rangle\)._
Proof.: Map simply applies the map \(y\mapsto(-g)*y\) to \(\mathcal{M}_{1}\) in superposition. Then we have that:
\[\mathsf{Map}(g,|\psi_{g_{2}*x_{\lambda}}\rangle) =\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h,g_{1}}\alpha_{h}|\phi_{h}\rangle\chi(h,g_{1}+g_{2})|(g_{1}-g)*x\rangle_{\mathcal{M}_{1}}\] \[=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h,g_{1}^{\prime}}\alpha_{h}|\phi_{h}\rangle\chi(h,g_{1}^{\prime}+g+g_{2})|g_{1}^{\prime}*x\rangle_{\mathcal{M}_{1}}=|\psi_{(g+g_{2})*x_{\lambda}}\rangle=|\psi_{g*(g_{2}*x_{\lambda})}\rangle\]
Above we used the change of variables \(g_{1}^{\prime}=g_{1}-g\).
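The following toy numerical check (our own illustration for the cyclic case \(\mathbb{G}=\mathbb{Z}_N\), restricted to a single \(h\) term of the superposition and to hypothetical small parameters) confirms that shifting the \(\mathcal{M}_1\) register by \(-g\) turns \(|\psi_{g_2*x_\lambda}\rangle\) into \(|\psi_{(g+g_2)*x_\lambda}\rangle\).

```python
# Toy check of Claim 5.12 for G = Z_N acting on itself by addition, keeping a
# single h term of the state (hypothetical small parameters).
import numpy as np

N, h, g2, g = 64, 5, 11, 23

def psi(offset):
    """Amplitudes of |psi_{offset * x}> on M_1: chi(h, g1 + offset)."""
    g1 = np.arange(N)
    return np.exp(2j * np.pi * h * (g1 + offset) / N) / np.sqrt(N)

# The map y -> (-g)*y sends the basis state |g1 * x> to |(g1 - g) * x>, i.e.
# the amplitude vector is cyclically rolled by -g.
mapped = np.roll(psi(g2), -g)
assert np.allclose(mapped, psi(g + g2))
print("Map(g, |psi_{g2*x}>) = |psi_{(g+g2)*x}>")
```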
Now we invoke Q-KGEA (Assumption 5.2) on the adversary \(\mathcal{B}\). Since \(\mathcal{B}\) always outputs a valid set element, this means there is another QPT algorithm \(\mathcal{E}\) such that
\[\Pr[\mathcal{E}(g_{2}*x_{\lambda},|\psi_{g_{2}*x_{\lambda}}\rangle)=g_{2}] \geq 1-\mathsf{negl}(\lambda)\]
Above, the probability is over \(g_{2}*x_{\lambda}\), as well as any randomness incurred when executing \(\mathcal{E}\). We note by a simple random self-reduction that we can insist the above probability holds for _all_\(g_{2}*x_{\lambda}\), where the randomness is only over \(\mathcal{E}\). Indeed, given \(|\psi_{g_{2}*x_{\lambda}}\rangle,g_{2}*x_{\lambda}\), we can choose a random \(g\) and compute \(g_{2}^{\prime}*x_{\lambda}\) as \(g*(g_{2}*x_{\lambda})\) where \(g_{2}^{\prime}=g+g_{2}\). Likewise, we can compute \(|\psi_{g_{2}^{\prime}*x_{\lambda}}\rangle\) as \(\mathsf{Map}(g,|\psi_{g_{2}*x_{\lambda}}\rangle)\). This gives a random instance on which to apply \(\mathcal{E}\), giving \(g_{2}^{\prime}\) with probability \(1-\mathsf{negl}(\lambda)\), regardless of \(g_{2}\). Then we can compute \(g_{2}=g_{2}^{\prime}-g\). We thus compute \(g_{2}\) with overwhelming probability, even in the worst case. We will therefore assume without loss of generality that this is the case for \(\mathcal{E}\).
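The rerandomization step can be pictured with the following toy sketch (our own illustration, using the hypothetical action of \(\mathbb{Z}_N\) on itself by addition and a made-up "bad set" of instances; the quantum side state is omitted, since in the actual reduction it is carried along via \(\mathsf{Map}\) as described above): an extractor that only works on most random instances is converted into one that works on every instance.

```python
# Toy sketch of the worst-case-to-average-case rerandomization used above,
# with the hypothetical action of Z_N on Z_N by addition (x_lambda = 0).
import random

N = 10_007
BAD = set(random.sample(range(N), 5))   # instances the average-case extractor misses

def act(g, y):
    return (g + y) % N

def avg_extractor(y):
    """Succeeds on all but a few instances y = g2 * x_lambda (here x_lambda = 0)."""
    return None if y in BAD else y

def worst_case_extractor(y):
    g = random.randrange(N)             # rerandomize the instance
    ans = avg_extractor(act(g, y))      # run the extractor on g * y
    return None if ans is None else (ans - g) % N

y = 3                                   # an arbitrary, possibly worst-case, instance
recovered = worst_case_extractor(y)
assert recovered is None or recovered == y   # fails only with probability |BAD|/N
```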
For simplicity, we will actually assume that the probability is \(1\); it is straightforward to handle the case the probability is negligibly close to \(1\). By the Gentle Measurement Lemma [23], \(\mathcal{E}\) can compute \(g_{2}\) without altering the state \(|\psi_{g_{2}*x}\rangle\). Thus, by combining \(\mathcal{B}\) and \(\mathcal{E}\), we can compute both \(|\psi_{g_{2}*x}\rangle\) and \(g_{2}\) with probability \(1\). We can then compute \(\mathsf{Map}(-g_{2},|\psi_{g_{2}*x_{\lambda}}\rangle)=|\psi_{x_{\lambda}}\rangle\).
We now describe a new algorithm \(\mathcal{C}\) which breaks \(\mathrm{DLog}/1\)-minCDH (Assumption 5.6). \(\mathcal{C}\) works as follows:
* It constructs \(|\psi_{x_{\lambda}}\rangle\) as above.
* It makes its query to the \(\mathrm{DLog}/1\)-minCDH challenger, setting \(\mathcal{M}_{1}\) as the query register. This query simulates the operation \(\mathsf{Map}(g,\cdot)\), where \(g\) is the group element chosen by the challenger. Thus, at the end of the query, \(\mathcal{C}\) has \(|\psi_{g*x_{\lambda}}\rangle\).
* Now upon receiving \(g*x_{\lambda}\) from the challenger, run \(\mathcal{E}(g*x_{\lambda},|\psi_{g*x_{\lambda}}\rangle)\). By the guarantees of \(\mathcal{E}\), the output will be \(g\).
Thus we see that \(\mathcal{C}\) breaks the \(\mathrm{DLog}/1\)-minCDH assumption. This completes the security proof.
### Algebraic Group Actions.
Next we turn to the Algebraic Group Action Model (AGAM), considered by a couple recent works [1, 1]. This is an analog of the Algebraic Group Model (AGM) [13], adapted to group actions and quantum attackers. This model considers algebraic adversaries, which are algorithms where, any time they produce a set element output, must also "explain" the output in terms of the set elements the adversary saw as input. That is, if the algebraic adversary has so far been given set elements \(y_{1},\ldots,y_{\ell}\), when it outputs a new element \(y\), it must also output a group element \(g\in\mathbb{G}_{\lambda}\) and index \(i\) such that \(y=g*y_{i}\).
In the classical world, a common refrain is that the AGM is "between" the generic group model and standard model. As formalized by Zhandry [14], this is true in a particular sense: any "nice" security game that is secure in the standard model is also secure in the AGM, and in turn any nice security game that is secure in the AGM is also secure in the appropriate generic group model. The statements also hold true for group actions, provided we still restrict to the classical world. Here, "nice" comes with some important restrictions. The game must be "single stage", meaning there is only a single adversary interacting with the challenger. Moreover, the game must be a "type safe" game, which for group actions informally means the algorithms can pass set elements around and perform group action computations on them as a black box, but cannot manipulate the individual bits of the set element representations.
We might expect, therefore, that the AGAM is also "between" the GGAM and the standard model quantumly. However, this appears not to be the case, or at least it does not follow from any obvious adaptation of existing work. There are at least three problems.
The first is closely related to the issue with knowledge assumptions explored above. After all, the motivation for the AGAM, following the motivation from the AGM, is that we would expect the only way to output set elements is to actually derive them from existing set elements via the group action, in which case we would seem to know how to explain the new elements in terms of existing elements. In the classical setting, you can indeed show that this is true generically. However, our attack on the Q-KGEA assumption (Theorem 5.3) shows that this is not true quantumly. Namely, it is possible to output a superposition of set element where one does not "know" how to derive those elements from input elements.
For the second issue, consider the security game for our quantum lightning scheme. Recall that the adversary must output some \(h\) along with two copies of \(|\mathbb{G}^{h}*x_{\lambda}\rangle=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}} \sum_{g}\chi(h,g)|g*x_{\lambda}\rangle\). An algebraic adversary would have to "explain" this state, meaning it must output two copies of
\[\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(h,g)|g*x_{\lambda},g\rangle\]
But here, note that if the challenger tries to verify the banknote state, the verification will actually _fail_, since the state is entangled with \(g\). Worse, observe that the state produced by the algebraic adversary is actually trivial to construct for any given \(h\), by first constructing \(\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g}\chi(h,g)|g\rangle\) and then applying the group action operation. Thus, we see that the algebraic adversary can actually trivially produce two copies of the requisite state. This is in contrast to the actual banknote state \(|\mathbb{G}^{h}*x_{\lambda}\rangle\), where it appears only possible to sample actual banknotes for a random \(h\), but not produce a banknote for a given \(h\); indeed the security of our scheme inherently relies on this difficult. That is, the state required of the algebraic adversary is trivial, whereas the state required by a standard-model adversary is presumably hard to construct. This is in contrast to the classical world, where the algebraic adversary's task is always at least as hard as the real-world adversary.
The third issue is the claim that any game which is secure in the classical AGM/AGAM is also secure in the classical GGM/GGAM. This claim, or at least the classical proof of it, does not hold quantumly. This is because the proof relies on the ability to view the adversary's queries to the group/group action oracle and extract information from them. Specifically, in the classical AGAM, the only way the adversary can obtain new set elements is to act on existing elements by querying the group action. By writing down the input set and group element as well as the output group element, we can remember how we derived all set elements. Importantly, for any set element we produce, we can trace that set element back to an input set element, and see that the output
element was obtained via a sequence of actions by group elements on the original input element. By multiplying these group actions together, we can explain the output element in terms of the input set element.
This strategy, however, does not work quantumly. Consider for example the hardness assumption DLog/minCDH (Assumption 5.6). Here, the adversary can query on a superposition \(\sum_{y}\alpha_{y}|y\rangle\) of set elements, and get the resulting superposition obtained by action of a secret group element \((-g)\): \(\sum_{y}\alpha_{y}|(-g)*y\rangle\)
In the AGAM, we would ask the adversary queries on \(\sum_{y}\alpha_{y}|y,\texttt{Explain}_{y}\rangle\), where \(\texttt{Explain}_{y}\) is an explanation of \(y\) in terms of the elements the adversary has seen so far. In the case of DLog/minCDH, the only element seen by the time the adversary must make its query is \(x_{\lambda}\), and so \(\texttt{Explain}_{y}\) is the unique \(h\) such that \(y=h*x_{\lambda}\). Thus, the adversary's query takes the form \(\sum_{h}\alpha_{h*x_{\lambda}}|h*x_{\lambda},h\rangle\). In response, it receives
\[|\phi_{\texttt{AGAM}}\rangle=\sum_{h}\alpha_{h*x_{\lambda}}|(h-g)*x_{\lambda},h\rangle\]
On the other hand, a generic adversary would have just
\[|\phi_{\texttt{GGAM}}\rangle=\sum_{h}\alpha_{h*x_{\lambda}}|(h-g)*x_{\lambda}\rangle\]
While in the classical setting, having the extra information \(h\) about \(y\) does not cause problems (it can just be erased or ignored), this extra information is problematic quantumly. For example, it might be that having \(|\phi_{\texttt{GGAM}}\rangle\) allows for solving some task, whereas having \(\sum_{h}\alpha_{h*x_{\lambda}}|(h-g)*x_{\lambda},h\rangle\) does not. In such a case, we find that the task is hard in the AGAM, despite being easy in the GGAM and even in the standard model. In particular, if we want the AGAM to be "between" the GGAM and standard models, we would need to rule this situation out, meaning we would need a way to map the state \(|\phi_{\texttt{AGAM}}\rangle\) containing the explanation back to the state \(|\phi_{\texttt{GGAM}}\rangle\) without the explanation. This mapping, in general, will be intractable, as it requires un-computing \(h\) from \(|(h-g)*x_{\lambda}\rangle\).
Based on these issues, we see that the AGAM is probably _not_ a reasonable model for quantum attacks, at least when the game is inherently quantum, as with the security of our quantum lightning scheme or with assumptions that allow quantum queries. On the other hand, the model might be reasonable for "classically stated" security games, such as ordinary discrete log or CDH. However, these problems do not arise at all for generic group actions. Therefore, based on this discussion, we posit that generic group actions should be the preferred method for analyzing cryptosystems and security games.
## 6 A Construction for REGAs
In this section, we give a construction for the case where the group action can only be computed efficiently for a small "base" set of group elements. Such group actions are known as "restricted effective group actions" (REGAs).
### Some additional background
Before giving the construction, we here provide some additional background that will be necessary for understanding the construction.
Groups.Let \(\mathbb{G}\) be a group (written additively), and \(N\) an integer such that \(N\times g=0\) for all \(g\in\mathbb{G}\). \(N=\left|\mathbb{G}\right|\) will do. Then \(\mathbb{G}\) is a subgroup of \(\mathbb{Z}_{N}^{n}\) for some positive integer \(n\). Let \(W\) be the set of vectors in \(\mathbb{Z}_{N}^{n}\) such that \(\mathbf{w}\cdot g=0\bmod N\) for all \(g\in\mathbb{G}\). \(W\) is then a group, and we can therefore consider the group \((\mathbb{Z}_{N}^{n})/W\) defined using the equivalence relation \(\sim\), where \(\mathbf{u}_{1}\sim\mathbf{u}_{2}\) if \(\mathbf{u}_{1}-\mathbf{u}_{2}\in W\). \((\mathbb{Z}_{N}^{n})/W\) is isomorphic to \(\mathbb{G}\); let \(\phi:\mathbb{G}\to(\mathbb{Z}_{N}^{n})/W\) be an isomorphism. Note that for \(g\in\mathbb{G}\subseteq\mathbb{Z}_{N}^{n}\) and \(h\in\mathbb{G}\), \(g\cdot\phi(h)\bmod N\) is well-defined by taking any representative \(h^{\prime}\in\phi(h)\) and computing \(g\cdot h^{\prime}\bmod N\).
Under this notation, we can re-define \(\chi(g,h)\) as \(e^{i2\pi g\cdot\phi(h)/N}\), which is equivalent to the definition in Section 2.
We associate \(\mathbb{Z}_{N}\) with the interval \([-\lfloor(N-1)/2\rfloor,\lceil(N-1)/2\rceil]\) in the obvious way, and likewise associate \(\mathbb{Z}_{N}^{n}\) with the hypercube \([-\lfloor(N-1)/2\rfloor,\lceil(N-1)/2\rceil]^{n}\). This gives rise to a notion of norm on \(\mathbb{Z}_{N}^{n}\) by taking the norm in \(\mathbb{Z}^{n}\).
**Lemma 6.1**.: _Let \(\mathbb{G}\) be a subgroup of \(\mathbb{Z}_{N}\). Then the number of elements \(g\in\mathbb{G}\) such that \(|g|\geq N/4\) is exactly \(\left|\mathbb{G}\right|+1-2\lceil\left|\mathbb{G}\right|/4\rceil\). In particular, if \(\mathbb{G}\neq\{0\}\), then there is at least one element \(g\in\mathbb{G}\) with \(|g|\geq N/4\)._
Proof.: First, it suffices to consider \(\left|\mathbb{G}\right|=N\), in other words \(\mathbb{G}=\mathbb{Z}_{N}\): we can then lift to \(N=t\left|\mathbb{G}\right|\), where \(\mathbb{G}\) is embedded into \(\mathbb{Z}_{N}\) by multiplying each element in \(\mathbb{G}\) by \(t\) (where multiplication is over the integers). Since \(N\) is also multiplied by \(t\), this preserves the number of elements with \(|g|\geq N/4\).
When \(\mathbb{G}=\mathbb{Z}_{N}\), we are then simply asking for the number of elements in \([-\lfloor(\left|\mathbb{G}\right|-1)/2\rfloor,\lceil(\left|\mathbb{G}\right|-1)/2\rceil]\) with absolute value at least \(\left|\mathbb{G}\right|/4\). In other words, it is the combined size of the intervals \([\lceil\left|\mathbb{G}\right|/4\rceil,\lceil(\left|\mathbb{G}\right|-1)/2\rceil]\) and \([-\lfloor(\left|\mathbb{G}\right|-1)/2\rfloor,-\lceil\left|\mathbb{G}\right|/4\rceil]\), giving a total of \((\lceil(\left|\mathbb{G}\right|-1)/2\rceil-\lceil\left|\mathbb{G}\right|/4\rceil+1)+(\lfloor(\left|\mathbb{G}\right|-1)/2\rfloor-\lceil\left|\mathbb{G}\right|/4\rceil+1)=\left|\mathbb{G}\right|+1-2\lceil\left|\mathbb{G}\right|/4\rceil\).
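The counting formula can also be checked by brute force; the following toy sketch (our own illustration for small moduli, using the centered representatives defined above) verifies it for every subgroup of \(\mathbb{Z}_N\) with \(N<200\).

```python
# Brute-force check (toy sizes) of Lemma 6.1: for every divisor d of N, the
# order-d subgroup of Z_N has exactly d + 1 - 2*ceil(d/4) elements whose
# centered representative has absolute value at least N/4.
from math import ceil

def centered(v, N):
    """Representative of v mod N in [-floor((N-1)/2), ceil((N-1)/2)]."""
    v %= N
    return v if v <= N // 2 else v - N

for N in range(2, 200):
    for d in range(1, N + 1):
        if N % d:
            continue
        subgroup = [centered(k * (N // d), N) for k in range(d)]
        count = sum(1 for g in subgroup if abs(g) >= N / 4)
        assert count == d + 1 - 2 * ceil(d / 4), (N, d)
print("Lemma 6.1 verified for all subgroups of Z_N, 2 <= N < 200")
```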
**Lemma 6.2**.: _Let \(\mathbf{A}\in\mathbb{Z}_{N}^{n\times m}\) be a matrix. Let \(\mathbb{G}\) be the subgroup of \(\mathbb{Z}_{N}^{n}\) generated by the columns of \(\mathbf{A}\). Let \(B,C\) be positive integers such that \(8BCm<N\). Suppose there is a distribution \(\mathcal{D}\) on \([-B,B]^{m}\) such that \(\mathbf{A}\cdot\mathbf{x}\) for \(\mathbf{x}\leftarrow\mathcal{D}\) is negligibly close to uniform in \(\mathbb{G}\). Then the function \(f:\mathbb{G}\times[-C,C]^{m}\to\mathbb{Z}_{N}^{m}\) given by \(f(g,\mathbf{e})=\mathbf{A}^{T}\cdot\phi(g)+\mathbf{e}\) is injective._
Proof.: Note that \(\mathbf{A}^{T}\cdot\phi(g)\) is well defined since it is independent of the representative of \(\phi(g)\). Consider a potential collision in \(f\): \(\mathbf{A}^{T}\cdot\phi(g_{1})+\mathbf{e}_{1}=\mathbf{A}^{T}\cdot\phi(g_{2})+\mathbf{e}_{2}\). By subtracting, this gives a non-zero pair \((g=g_{1}-g_{2},\mathbf{e}=\mathbf{e}_{1}-\mathbf{e}_{2})\) with \(\mathbf{e}\in[-2C,2C]^{m}\) such that \(\mathbf{A}^{T}\cdot\phi(g)+\mathbf{e}=0\), or equivalently \(\mathbf{A}^{T}\cdot\phi(g)=-\mathbf{e}\); note that \(g\neq 0\), since \(g=0\) would force \(\mathbf{e}=0\). Now consider sampling \(\mathbf{x}\leftarrow\mathcal{D}\), meaning \(\mathbf{u}=\mathbf{A}\cdot\mathbf{x}\) is negligibly close to uniform in \(\mathbb{G}\). Then \(\mathbf{u}^{T}\cdot\phi(g)=\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(g)=-\mathbf{x}^{T}\cdot\mathbf{e}\). On one hand, \(\mathbf{u}^{T}\cdot\phi(g)\) is statistically close to uniform in a subgroup \(\mathbb{G}^{\prime}\) of \(\mathbb{Z}_{N}\), and \(\mathbb{G}^{\prime}\) is different from \(\{0\}\) since \(g\neq 0\). By Lemma 6.1, the probability that \(\left|\mathbf{u}^{T}\cdot\phi(g)\right|\geq N/4\) is, up to negligible error, \((\left|\mathbb{G}^{\prime}\right|+1-2\lceil\left|\mathbb{G}^{\prime}\right|/4\rceil)/\left|\mathbb{G}^{\prime}\right|>0\), since \(\left|\mathbb{G}^{\prime}\right|\geq 2\). On the other hand, \(\left|\mathbf{x}^{T}\cdot\mathbf{e}\right|\leq 2mBC<N/4\) always. This means the distributions of \(\mathbf{u}^{T}\cdot\phi(g)\) and \(-\mathbf{x}^{T}\cdot\mathbf{e}\) must be non-negligibly far, a contradiction.
Discrete Gaussians.The _discrete Gaussian distribution_ is the distribution over \(\mathbb{Z}\) defined as:
\[\Pr[x]=\mathcal{D}_{\sigma}(x):=C_{\sigma}e^{-2\pi x^{2}/\sigma^{2}},\]
where \(C_{\sigma}\) is the normalization constant \(C_{\sigma}=\left(\sum_{x\in\mathbb{Z}}e^{-2\pi x^{2}/\sigma^{2}}\right)^{-1}\), so that \(\mathcal{D}_{\sigma}\) defines a probability distribution. We will also define a truncated variant, denoted
\[\mathcal{D}_{\sigma,B}(x):=\begin{cases}C_{\sigma,B}e^{-2\pi x^{2}/\sigma^{2}}&\text{ if }|x|\leq B\\ 0&\text{ otherwise}\end{cases},\]
where again \(C_{\sigma,B}\) is an appropriately defined normalization constant. For large \(B\), we can treat the truncated and un-truncated Gaussians as essentially the same distribution:
**Fact 6.3**.: _For \(\sigma\geq\omega(\sqrt{\log\lambda})\) and \(B\geq\sigma\times\omega(\sqrt{\log\lambda})\), the distributions \(\mathcal{D}_{\sigma}\) and \(\mathcal{D}_{\sigma,B}\) are negligibly close_
For a vector \(\mathbf{r}\in\mathbb{Z}^{m}\), we write \(\mathcal{D}_{\sigma,B}(\mathbf{r})=\prod_{i=1}^{m}\mathcal{D}_{\sigma,B}(r_{i})\).
The _discrete Gaussian superposition_ is the quantum state
\[|\mathcal{D}_{\sigma}\rangle:=\sum_{x\in\mathbb{Z}}\sqrt{\mathcal{D}_{\sigma} (x)}|x\rangle\]
As we will generally need to restrict to finite-precision, we also consider the truncated variant
\[|\mathcal{D}_{\sigma,B}\rangle:=\sum_{x\in[-B,B]}\sqrt{\mathcal{D}_{\sigma,B}( x)}|x\rangle\]
Again, for large enough \(B\), we can treat the truncated and un-truncated Gaussian superpositions as essentially the same state:
**Fact 6.4**.: _For \(\sigma\geq\omega(\sqrt{\log\lambda})\) and \(B\geq\sigma\times\omega(\sqrt{\log\lambda})\), the quantity \(\left\||\mathcal{D}_{\sigma}\rangle-|\mathcal{D}_{\sigma,B}\rangle\right\|\) is negligible._
By adapting classical lattice sampling algorithms, the states \(|\mathcal{D}_{\sigma,B}\rangle\) can be efficiently constructed.
Fourier transform pairs.Fix an integer \(N\). We will associate the set \(\mathbb{Z}_{N}\) with the integers \([-\lfloor(N-1)/2\rfloor,\lceil(N-1)/2\rceil]\). Denote by \(\mathsf{QFT}_{N}\) the Quantum Fourier Transform \(\mathsf{QFT}_{\mathbb{Z}_{N}}\). We now recall some basic facts about quantum Fourier transforms.
\[\mathsf{QFT}_{N}^{m}\sum_{\mathbf{r}\in\mathbb{Z}_{N}^{m}:\mathbf{A}\cdot\mathbf{r}=\mathbf{s}}|\mathbf{r}\rangle =N^{m/2-n}\sum_{\mathbf{t}\in\mathbb{Z}_{N}^{n}}e^{i2\pi\mathbf{t}\cdot\mathbf{s}/N}|\mathbf{A}^{T}\cdot\mathbf{t}\rangle\text{ for }\mathbf{A}\in\mathbb{Z}_{N}^{n\times m}\] \[\mathsf{QFT}_{N}^{m}\sum_{\mathbf{r}}\alpha_{\mathbf{r}}\beta_{\mathbf{r}}|\mathbf{r}\rangle =\frac{1}{N^{m/2}}\sum_{\mathbf{t},\mathbf{u}}\hat{\alpha}_{\mathbf{t}}\hat{\beta}_{\mathbf{u}}|\mathbf{u}+\mathbf{t}\rangle\text{ for }\sum_{\mathbf{t}}\hat{\alpha}_{\mathbf{t}}|\mathbf{t}\rangle=\mathsf{QFT}_{N}^{m}\sum_{\mathbf{r}}\alpha_{\mathbf{r}}|\mathbf{r}\rangle,\ \sum_{\mathbf{u}}\hat{\beta}_{\mathbf{u}}|\mathbf{u}\rangle=\mathsf{QFT}_{N}^{m}\sum_{\mathbf{r}}\beta_{\mathbf{r}}|\mathbf{r}\rangle\] \[\mathsf{QFT}_{N}|\mathcal{D}_{\sigma,\lfloor(N-1)/2\rfloor}\rangle \approx|\mathcal{D}_{N/\sigma,\lfloor(N-1)/2\rfloor}\rangle\text{ for }\sigma\geq\omega(\sqrt{\log\lambda}),\ N\geq\sigma\times\omega(\sqrt{\log\lambda})\]
Above, \(\approx\) means the two states are negligibly close.
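The third rule can be checked numerically. The following sketch (our own illustration with hypothetical toy parameters \(N\) and \(\sigma\)) builds the truncated Gaussian superposition on \(\mathbb{Z}_N\), applies the discrete Fourier transform, and compares the result with the dual Gaussian state.

```python
# Numerical check (toy parameters) that QFT_N |D_sigma> ~ |D_{N/sigma}>: the
# amplitudes sqrt(D_sigma(x)) are proportional to exp(-pi*x^2/sigma^2), and
# their DFT over Z_N is, up to negligible error, the dual Gaussian of width
# N/sigma.  For a real symmetric vector the DFT sign convention is irrelevant.
import numpy as np

N, sigma = 2048, 32.0
x = np.arange(N)
x = np.where(x <= N // 2, x, x - N)            # centered representatives of Z_N

psi = np.exp(-np.pi * x**2 / sigma**2)         # sqrt(D_sigma(x)), unnormalized
psi /= np.linalg.norm(psi)
phi = np.fft.fft(psi) / np.sqrt(N)             # the QFT over Z_N

dual = np.exp(-np.pi * x**2 / (N / sigma)**2)  # sqrt(D_{N/sigma}(x)), unnormalized
dual /= np.linalg.norm(dual)

print("fidelity:", abs(np.vdot(dual, phi)))    # ~= 1
```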
### The Construction
Let \(\mathbb{G}_{\lambda},\mathcal{X}_{\lambda},*\) be a REGA, and \(\mathcal{T}=(g_{1},\ldots,g_{m})\) a set such that \(*\) can be efficiently computed for \(g_{i}\) and \(g_{i}^{-1}\). We can associate \(\mathbb{G}_{\lambda}\) with a subgroup of \(\mathbb{Z}_{N}^{n}\) for some integers \(N,n\). We can likewise associate the list \(\mathcal{T}\) with the matrix \(\mathbf{A}=(g_{1},\cdots,g_{m})\in\mathbb{Z}_{N}^{n\times m}\).
We will make the following assumption about the structure of \(\mathcal{T}\), which is typical in the isogeny literature.
**Assumption 6.5**.: There is a polynomial \(B\) and a distribution \(\mathcal{D}^{*}\) on \([-B,B]^{m}\) such that for \(\mathbf{x}\leftarrow\mathcal{D}^{*}\), \(\sum_{i=1}^{m}x_{i}g_{i}=\mathbf{A}\cdot\mathbf{x}\) is statistically close to a uniform element in \(\mathbb{G}\).
Numerous examples of such \(\mathcal{D}^{*}\) have been proposed, such as discrete Gaussians [14], or uniform vectors in small balls relative to different norms [13, 15].
Let \(C=N/8Bm\), which then satisfies the conditions of Lemma 6.2. Thus, for \(\mathbf{e}\) with entries in \([-C,C]^{m}\), the map \((g,\mathbf{e})\mapsto\mathbf{A}^{T}\cdot\phi(g)+\mathbf{e}\) is injective.
Let \(\sigma\geq 16Bm/\epsilon\times\omega(\sqrt{\log\lambda})\) and \(B^{\prime}\geq\sigma\times\omega(\sqrt{\log\lambda})\) be polynomials. We will assume \(N\geq 2B^{\prime}\), which is always possible since we can take \(N\) to be arbitrarily large. We will also for simplicity assume \(N\) is even. This assumption is not necessary but will simplify some of the analysis, and is moreover without loss of generality since we can always make \(N\) larger by multiplying it by arbitrary factors.
**Construction 6.6**.: \(\mathsf{Gen}(1^{\lambda})\): Initialize quantum registers \(\mathcal{S}\) (for serial number) and \(\mathcal{M}\) (for money) to states \(|\mathcal{D}_{\sigma,B^{\prime}}\rangle_{\mathcal{S}}^{\otimes m}\) and \(|0\rangle_{\mathcal{M}}\), respectively. Then do the following:
* Apply in superposition the map \(|\mathbf{r}\rangle_{\mathcal{S}}|y\rangle_{\mathcal{M}}\mapsto|\mathbf{r}\rangle_{\mathcal{S}}|y\oplus[(\sum_{i=1}^{m}r_{i}g_{i})*x_{\lambda}]\rangle_{\mathcal{M}}\). The joint state of the system \(\mathcal{S}\otimes\mathcal{M}\) is then \[\sum_{\mathbf{r}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{\sigma,B^{\prime}}(\mathbf{r})}|\mathbf{r}\rangle_{\mathcal{S}}|(\sum_{i=1}^{m}r_{i}g_{i})*x_{\lambda}\rangle_{\mathcal{M}}=\sum_{g\in\mathbb{G}_{\lambda}}\left(\sum_{\mathbf{r}\in\mathbb{Z}_{N}^{m}:\mathbf{A}\cdot\mathbf{r}=g}\sqrt{\mathcal{D}_{\sigma,B^{\prime}}(\mathbf{r})}|\mathbf{r}\rangle_{\mathcal{S}}\right)|g*x_{\lambda}\rangle_{\mathcal{M}}\]
* Apply \(\mathsf{QFT}_{\mathbb{Z}_{N}^{m}}\) to \(\mathcal{S}\). Using the QFT rules given above, this yields the state negligibly close to: \[\frac{1}{N^{n}}\sum_{g\in\mathbb{G}_{\lambda}}\left(\sum_{\mathbf{s}\in\mathbb{Z}_{N}^{n},\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{N/\sigma,N/2-1}(\mathbf{e})}e^{i2\pi(g\cdot\mathbf{s})/N}|\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\rangle_{\mathcal{S}}\right)|g*x_{\lambda}\rangle_{\mathcal{M}}\] \[=\frac{1}{|\mathbb{G}_{\lambda}|}\sum_{g\in\mathbb{G}_{\lambda}}\left(\sum_{h\in\mathbb{G}_{\lambda},\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{N/\sigma,N/2-1}(\mathbf{e})}e^{i2\pi(g\cdot\phi(h))/N}|\mathbf{A}^{T}\cdot\phi(h)+\mathbf{e}\rangle_{\mathcal{S}}\right)|g*x_{\lambda}\rangle_{\mathcal{M}}\] \[=\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}\left(\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{h\in\mathbb{G}_{\lambda},\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{N/\sigma,N/2-1}(\mathbf{e})}\chi(g,h)|\mathbf{A}^{T}\cdot\phi(h)+\mathbf{e}\rangle_{\mathcal{S}}\right)|g*x_{\lambda}\rangle_{\mathcal{M}}\]
* Measure \(\mathcal{S}\), giving the serial number \(\mathbf{t}:=\mathbf{A}^{T}\cdot\phi(h)+\mathbf{e}\). \(\mathbf{e}\) is distributed negligibly close to \(\mathcal{D}_{N/\sigma}\), meaning with overwhelming probability each entry is in \([-N/16Bm,N/16Bm]=[-C/2,C/2]\subseteq[-C,C]\). This means, to within negligible error, \(\mathbf{t}\) uniquely determines \(\phi(h)\) and hence \(h\). Therefore, the \(\mathcal{M}\) register then collapses to a state negligibly close to \[\frac{1}{\sqrt{|\mathbb{G}_{\lambda}|}}\sum_{g\in\mathbb{G}_{\lambda}}\chi(g,h)|g*x_{\lambda}\rangle_{\mathcal{M}}=:|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\] Note that \(h\) is unknown. Output \((\mathbf{t},|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle)\).

\(\mathsf{Ver}(\mathbf{t},\$)\): First verify that the support of \(\$\) is contained in \(\mathcal{X}_{\lambda}\), by applying the assumed algorithm for recognizing \(\mathcal{X}_{\lambda}\) in superposition. Then repeat the following \(\lambda\) times:
* Initialize a new register \(\mathcal{H}\) to \((|0\rangle_{\mathcal{H}}+|1\rangle_{\mathcal{H}})/\sqrt{2}\).
* Choose a random element \(\mathbf{x}\leftarrow\mathcal{D}^{*}\).
* Apply to \(\mathcal{H}\otimes\mathcal{M}\) in superposition the map \[\mathsf{Apply}|b\rangle_{\mathcal{H}}|y\rangle_{\mathcal{M}}\mapsto\begin{cases}| 0\rangle_{\mathcal{H}}|y\rangle_{\mathcal{M}}&\text{if }b=0\\ |1\rangle_{\mathcal{H}}|(-\sum_{i}x_{i}g_{i})*y\rangle_{\mathcal{M}}&\text{if }b=1 \end{cases}\] Since the entries of \(\mathbf{x}\) are bounded by \(B\) which is polynomial, this step is efficient.
* Measure \(\mathcal{H}\) in the basis \(B_{\mathbf{t},\mathbf{x}}:=\{(|0\rangle_{\mathcal{H}}+e^{i2\pi\mathbf{x}^{T}\cdot\mathbf{t}/N}|1\rangle_{\mathcal{H}})/\sqrt{2},(|0\rangle_{\mathcal{H}}-e^{i2\pi\mathbf{x}^{T}\cdot\mathbf{t}/N}|1\rangle_{\mathcal{H}})/\sqrt{2}\}\), giving a bit \(b_{u}\in\{0,1\}\). Discard the \(\mathcal{H}\) register.
* Accept if at least a fraction \(7/8\) of the \(b_{u}=0\) and the support of \(\$\) is contained in \(\mathcal{X}_{\lambda}\); otherwise reject.
### Accepting States of the Verifier
We now analyze the correctness of the construction.
**Theorem 6.7**.: _Let \(|\psi\rangle\) be a state over \(\mathcal{M}\), and let \(\mathbf{t}\) be a serial number determining \(h\) as above. Then \(\Pr[\mathsf{Ver}(\mathbf{t},|\psi\rangle)=1]=\|\langle\psi|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\|^{2}\left(1-2^{-\Omega(\sqrt{\lambda})}\right)+2^{-\Omega(\sqrt{\lambda})}\)._
Proof.: For simplicity, we analyze the case of \(|\psi\rangle=|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\), which form a basis for superpositions over \(\mathcal{X}_{\lambda}\). In this case, Theorem 6.7 states that \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) is accepted with probability \(1-2^{-\Omega(\sqrt{\lambda})}\), while \(|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\) for \(h^{\prime}\neq h\) is accepted with probability \(2^{-\Omega(\sqrt{\lambda})}\). By a similar approach as in Theorem 3.3, we can extend the analysis to all states.
If we let \(u=\mathbf{A}\cdot\mathbf{x}=\sum_{i}x_{i}g_{i}\), then by the same analysis as in Construction 3.1, we have that applying Apply to the state \(|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\) results in the state
\[\frac{1}{\sqrt{2}}\left(|0\rangle_{\mathcal{H}}+\chi(u,h^{\prime })|1\rangle_{\mathcal{H}}\right)|\mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\] \[=\frac{1}{\sqrt{2}}\left(|0\rangle_{\mathcal{H}}+e^{i2\pi u\cdot \phi(h^{\prime})/N}|1\rangle_{\mathcal{H}}\right)|\mathbb{G}_{\lambda}^{h^{ \prime}}*x_{\lambda}\rangle\] \[=\frac{1}{\sqrt{2}}\left(|0\rangle_{\mathcal{H}}+e^{i2\pi\mathbf{ x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(h^{\prime})/N}|1\rangle_{\mathcal{H}}\right)| \mathbb{G}_{\lambda}^{h^{\prime}}*x_{\lambda}\rangle\]
Conditioned on sampling \(u\), \(\Pr[b_{u}=0]\) is the inner product squared of \(\left(|0\rangle_{\mathcal{H}}+e^{i2\pi\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot \phi(h^{\prime})/N}|1\rangle_{\mathcal{H}}\right)/\sqrt{2}\), with the basis state \(\left(|0\rangle_{\mathcal{H}}+e^{i2\pi\mathbf{x}\cdot\mathbf{t}/N}|1\rangle_{ \mathcal{H}}\right)/\sqrt{2}\). This is:
\[\Pr[b_{u}=0] =\frac{1}{4}\left\|1+e^{i2\pi(\mathbf{x}^{T}\cdot\mathbf{A}^{T} \cdot\phi(h^{\prime})-\mathbf{x}^{T}\cdot\mathbf{t})/N}\right\|^{2}\] \[=\frac{1}{2}\left(1+\cos\left[2\pi(\mathbf{x}^{T}\cdot\mathbf{A}^{T }\cdot\phi(h^{\prime})-\mathbf{x}^{T}\cdot(\mathbf{A}^{T}\cdot\phi(h)+ \mathbf{e}))/N\right]\right)\] \[=\frac{1}{2}\left(1+\cos\left[2\pi(\mathbf{x}^{T}\cdot\mathbf{A}^{T }\cdot\phi(h^{\prime}-h)+\mathbf{x}^{T}\cdot\mathbf{e})/N\right]\right)\]
In the case \(h=h^{\prime}\), \(\Pr[b_{u}=0]=\frac{1}{2}\left(1+\cos\left[2\pi\mathbf{x}^{T}\cdot\mathbf{e}/N\right]\right)\). We have that \(|2\pi\mathbf{x}^{T}\cdot\mathbf{e}/N|\leq\pi/8\). Using the fact that \(\cos(x)\geq 1-x^{2}/2\), we therefore have that \(\Pr[b_{u}=0]\geq 1-\pi^{2}/256=0.9614\ldots\geq 7/8+\Omega(1)\). Then via standard concentration inequalities, after \(\lambda\) trials, except with probability \(2^{-\Omega(\sqrt{\lambda})}\), at least \(7/8\) of the \(b_{u}\) will be \(0\). Therefore, \(\mathsf{Ver}\) accepts with probability \(1-2^{-\Omega(\sqrt{\lambda})}\).
On the other hand, if \(h\neq h^{\prime}\), then \(\mathbf{x}^{T}\cdot\mathbf{A}^{T}\) is statistically close to uniform in \(\mathbb{G}_{\lambda}\), and so \(\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(h^{\prime}-h)\) is statistically close to uniform in a non-trivial subgroup \(\mathbb{G}^{\prime}\) of \(\mathbb{Z}_{N}\). By Lemma 6.1 and our assumption that \(N\) is even, at least half of the elements of \(\mathbb{Z}_{N}\) are at least \(N/4\) in absolute value. In particular, this means \(\Pr[|\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(h^{\prime}-h)|\geq N/4]\geq 1 /2-\mathsf{negl}\). On the other hand, \(|\mathbf{x}^{T}\cdot\mathbf{e}|\leq N/16\) always. This means \(|\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(h^{\prime}-h)+\mathbf{x}^{T}\cdot \mathbf{e}|\geq N/4-N/16\) with probability at least \(1/2-\mathsf{negl}\). In this case, we can use that \(\cos(\pi/2+x)\leq|x|\) to bound \(\cos\left[2\pi(\mathbf{x}^{T}\cdot\mathbf{A}^{T}\cdot\phi(h^{\prime}-h)+ \mathbf{x}^{T}\cdot\mathbf{e})/N\right]\leq 2\pi/16=\pi/8\), meaning \(\Pr[b_{u}=0]\leq 1/2+\pi/16\). Averaging over all \(u\), we therefore have that \(\Pr[b_{u}=0]\leq\frac{3}{4}+\pi/32+\mathsf{negl}=0.8481\ldots=7/8-\Omega(1)\). Then via standard concentration inequalities, after \(\lambda\) trials, except with probability \(2^{-\Omega(\sqrt{\lambda})}\), fewer than \(7/8\) of the \(b_{u}\) will be \(0\). Therefore, \(\mathsf{Ver}\) accepts with probability \(2^{-\Omega(\sqrt{\lambda})}\).
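To make the averaging step explicit (this is our own arithmetic unpacking the two cases above, not an additional claim from the construction): with probability at most \(1/2+\mathsf{negl}\) the phase term is small and we bound \(\Pr[b_{u}=0]\) trivially by \(1\), while with probability at least \(1/2-\mathsf{negl}\) the bound \(\Pr[b_{u}=0]\leq 1/2+\pi/16\) derived above applies, so

\[\Pr[b_{u}=0]\leq\left(\tfrac{1}{2}+\mathsf{negl}\right)\cdot 1+\left(\tfrac{1}{2}-\mathsf{negl}\right)\left(\tfrac{1}{2}+\tfrac{\pi}{16}\right)\leq\tfrac{3}{4}+\tfrac{\pi}{32}+\mathsf{negl}\approx 0.848<\tfrac{7}{8}=0.875\enspace.\]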
### Security
Here, we state the security of Construction 6.6.
Assumptions. We first need to define slight variants of our assumptions, in order to be consistent with the more limited structure of a REGA. For example, in the ordinary Discrete Log assumption (Assumption 2.4), the challenger computes \(y=g\ast x\) for a random \(g\), and the adversary produces \(g\). But the adversary cannot even tell if it succeeded since it cannot compute the action of \(g\) in general. Instead, the adversary is required to compute, not \(g\) itself, but any short \(\mathbf{x}\) such that \(g=\sum_{i}x_{i}g_{i}\). The adversary can then check that it has a solution by computing the action of \(g\) using its knowledge of \(\mathbf{x}\). We analogously update each of our assumptions to work with the limited ability to compute the group action on REGAs.
As above, let \(\mathbb{G}_{\lambda},\mathcal{X}_{\lambda},\ast\) be a REGA, and \(\mathcal{T}=(g_{1},\ldots,g_{m})\) a set such that \(\ast\) can be efficiently computed for \(g_{i}\) and \(g_{i}^{-1}\). Let \(\mathcal{D}^{\ast},B\) be as in Assumption 6.5.
**Assumption 6.8**.: The _REGA quantum modified knowledge of group element assumption_ (REGA-Q-KGEA) holds on a group action \((\mathbb{G},\mathcal{X},\ast)\) if the following is true. For any quantum polynomial time (QPT) adversary \(\mathcal{A}\) which performs no measurements except for its final output, there exists a polynomial \(C\), a QPT extractor \(\mathcal{E}\) with outputs in \([-C,C]^{m}\), and negligible \(\epsilon\) such that
\[\Pr\left[y\in\mathcal{X}\wedge y\neq g\ast x_{\lambda}:\begin{array}{l}(y,|\psi\rangle)\leftarrow\mathcal{A}(1^{\lambda})\\ \mathbf{x}\leftarrow\mathcal{E}(y,|\psi\rangle)\\ g\leftarrow\sum_{i}x_{i}g_{i}\end{array}\right]\leq\epsilon(\lambda)\enspace.\]
As with the non-REGA Q-KGEA assumption, we expect the REGA-Q-KGEA assumption is likely false. Certainly it is false on group actions with oblivious sampling. However, we note that it is unclear if our attack from Theorem 5.3 can be adapted to REGAs. Nevertheless, to mitigate any risks associated with the plain REGA-Q-KGEA assumption, we can likewise define a _modified_ REGA KGEA assumption (REGA-Q-mKGEA), in the same spirit as Assumption 5.4.
We next define our REGA analog of Assumption 5.6.
**Assumption 6.9**.: We say that the _REGA Discrete Log with a single minimal CDH query_ assumption (REGA-DLog/1-minCDH) assumption holds if the following is true. For any QPT adversary \(\mathcal{A}\) playing the following game, parameterized by \(\lambda\), there is a negligible \(\epsilon\) such that \(\mathcal{A}\) wins with probability at most \(\epsilon(\lambda)\):
* The challenger, on input \(\lambda\), chooses a random \(g\in\mathbb{G}_{\lambda}\). It sends \(\lambda\) to \(\mathcal{A}\).
* \(\mathcal{A}\) submits a superposition query \(\sum_{y\in\mathcal{X},z\in\{0,1\}^{*}}\alpha_{y,z}|y,z\rangle\). Here, \(y\) is a set element that forms the query, and \(z\) is the internal state of the adversary when making the query. The challenger responds with \(\sum_{y\in\mathcal{X},z\in\{0,1\}^{*}}\alpha_{y,z}|(-g)*y,z\rangle\).
* The challenger sends \(g*x\) to \(\mathcal{A}\).
* \(\mathcal{A}\) outputs an \(\mathbf{x}\in\mathbb{Z}^{m}\), encoded in unary. It wins if \(g=\sum_{i}x_{i}g_{i}\).
Note that the challenger in Assumption 6.9 is inefficient on a REGA. However, under Assumption 6.5, the challenger can be made efficient by first sampling \(\mathbf{y}\leftarrow\mathcal{D}^{*}\) and then computing \(g=\sum_{i}y_{i}g_{i}\).
**Theorem 6.10**.: _Assuming REGA-DLog/1-minCDH (Assumption 6.9) and REGA-Q-KGEA (Assumption 6.8) (or more generally, REGA-Q-mKGEA) both hold on a group action \((\mathbb{G},\mathcal{X},*)\), then Construction 6.6 is a quantum lightning scheme. Alternatively, if D2X/min (Assumption 4.9) holds on a group action with \(\mathcal{X}\subseteq\{0,1\}^{m}\), then Construction 6.6 is a quantum lightning scheme in the generic group action model \(\mathsf{GGAM}_{\mathbb{G},m^{\prime}}\) with label length \(m^{\prime}\)._
We only sketch the proof. Like in the proof of Theorems 4.12 and 5.9, we can assume the adversary wins the quantum lightning experiment with probability \(1-\mathsf{negl}(\lambda)\). In order for a supposed note \(\$\) to be accepted relative to serial number \(\mathbf{t}\) with overwhelming probability, \(\mathbf{t}\) must have the form \(\mathbf{t}=\mathbf{A}^{T}\cdot\phi(h)+\mathbf{e}\) for "short" \(\mathbf{e}\), and \(\$\) must be negligibly close to \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\). Therefore, a quantum lightning adversary outputs two copies of \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\) for some \(h\). The security reduction of Theorem 4.12 did not rely on knowing \(h\), just that the adversary outputted two copies of \(|\mathbb{G}_{\lambda}^{h}*x_{\lambda}\rangle\). Hence, a near-identical proof holds for Construction 6.6. The only difference is that when the extractor \(\mathcal{E}\) outputs a group element, it instead outputs a small linear combination of the \(g_{i}\) giving that group element, and then the DLog/1-minCDH adversary uses this small representation to compute the action by that group element.
## 7 Further Discussion
### Quantum Group Actions
Here, we consider a generalization of group actions where set elements are replaced with quantum states.
A quantum (abelian) group action consists of a family of (abelian) groups \(\mathbb{G}=(\mathbb{G}_{\lambda})_{\lambda}\) (written additively), a family \(\mathcal{X}=(\mathcal{X}_{\lambda})_{\lambda}\) of sets \(\mathcal{X}_{\lambda}\) of quantum states over a system \(\mathcal{M}_{\lambda}\), and an operation \(*\). We will require that the states in \(\mathcal{X}_{\lambda}\) are orthogonal. \(*\) is a quantum algorithm that takes as input a group element \(g\in\mathbb{G}_{\lambda}\) and a quantum state \(|\psi\rangle\) over \(\mathcal{M}_{\lambda}\), and outputs another state over \(\mathcal{M}_{\lambda}\). \(*\) satisfies the following properties:
* **Identity:** If \(0\in\mathbb{G}_{\lambda}\) is the identity element, then \(0*|\psi\rangle=|\psi\rangle\) for any \(|\psi\rangle\in\mathcal{X}_{\lambda}\).
* **Compatibility:** For all \(g,h\in\mathbb{G}_{\lambda}\) and \(|\psi\rangle\in\mathcal{X}_{\lambda}\), \((g+h)*|\psi\rangle=g*(h*|\psi\rangle)\).
We can also relax the above properties to only hold to within negligible error, and/or relax the orthogonality requirement to being near-orthogonal. We will additionally require the following properties:
* **Efficiently computable:** There is a pseudodeterministic QPT procedure Construct which, on input \(1^{\lambda}\), outputs a description of \(\mathbb{G}_{\lambda}\) and a specific element \(|\psi_{\lambda}\rangle\in\mathcal{X}_{\lambda}\). The operation \(*\) is also computable by a QPT algorithm.
* **Efficiently Recognizable:** There is a QPT procedure Recog which recognizes elements in \(\mathcal{X}_{\lambda}\). That is, \(\mathsf{Recog}(1^{\lambda},\cdot)\) projects onto the span of the states in \(\mathcal{X}_{\lambda}\).
* **Regular:** For every \(|\phi\rangle\in\mathcal{X}_{\lambda}\), there is exactly one \(g\in\mathbb{G}_{\lambda}\) such that \(|\phi\rangle=g*|\psi_{\lambda}\rangle\).
Again, we can also relax the above properties to only hold to within negligible error.
Cryptographic group actions.At a minimum, a cryptographically useful quantum group action will satisfy the following discrete log assumption:
**Assumption 7.1**.: The _discrete log assumption_ (DLog) holds on a quantum group action \((\mathbb{G},\mathcal{X},*)\) if, for all QPT adversaries \(\mathcal{A}\), there exists a negligible function \(\mathsf{negl}\) such that
\[\Pr[\mathcal{A}(g*|\psi_{\lambda}\rangle)=g:g\leftarrow\mathbb{G}_{\lambda}] \leq\mathsf{negl}(\lambda)\enspace.\]
Note that if we do not insist on orthogonality of the states in \(\mathcal{X}_{\lambda}\), then it is trivial to construct a quantum group action in which DLog holds: simply have all \(|\psi\rangle\in\mathcal{X}_{\lambda}\) be identical, or negligibly close. Then it will be information-theoretically impossible to determine \(g\). Orthogonality essentially says that the group action is classical, except that the basis for the set elements is potentially different than the computational basis.
### Quantum Group Actions From Lattices
Here, we describe a simple quantum group action from lattices.
The group \(\mathbb{G}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma}\) will be set to \(\mathbb{Z}_{N}^{n}\) for some integers \(N,n\). We will fix a short wide matrix \(\mathbf{A}\in\mathbb{Z}_{N}^{n\times m}\); we can think of \(\mathbf{A}\) as being sampled randomly and included in a common reference string. Note that \(\mathbb{G}\) is independent of \(\sigma\), but we include it for notational consistency.
The set \(\mathcal{X}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma}\) will be the set of states \(|\psi_{\mathsf{s}}\rangle=\sum_{\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{ \mathcal{D}_{\sigma,N/2}(\mathbf{e})}|\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\rangle\). In other words, we take the discrete Gaussian vector superposition of some width, and add the vector \(\mathbf{A}^{T}\cdot\mathbf{s}\).
\(\mathbb{G}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma}\) acts on \(\mathcal{X}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma}\) in the following obvious way: \(\mathbf{r}*|\psi_{\mathsf{s}}\rangle=|\psi_{\mathbf{r}+\mathbf{s}}\rangle\), which can be computed by simply adding \(\mathbf{A}^{T}\cdot\mathbf{r}\) in superposition.
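As a quick sanity check (ours, not spelled out in the text), writing the action out explicitly shows why this works: adding \(\mathbf{A}^{T}\cdot\mathbf{r}\) in superposition gives

\[\mathbf{r}*|\psi_{\mathbf{s}}\rangle=\sum_{\mathbf{e}}\sqrt{\mathcal{D}_{\sigma,N/2}(\mathbf{e})}\,|\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}+\mathbf{A}^{T}\cdot\mathbf{r}\rangle=\sum_{\mathbf{e}}\sqrt{\mathcal{D}_{\sigma,N/2}(\mathbf{e})}\,|\mathbf{A}^{T}\cdot(\mathbf{s}+\mathbf{r})+\mathbf{e}\rangle=|\psi_{\mathbf{r}+\mathbf{s}}\rangle\enspace,\]

so the identity and compatibility properties follow directly from the additive structure of \(\mathbb{Z}_{N}^{n}\).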
We have the following theorem:
**Theorem 7.2**.: _Let \(\sigma,\sigma_{0}\) be non-negative real numbers such that \(\sigma/\sigma_{0}\) is super-logarithmic. Assuming the Learning with Errors problem is hard for noise distribution \(\mathcal{D}_{\sigma_{0}}\), discrete logarithms are hard in the group action \((\mathbb{G}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma},\mathcal{ X}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\sigma},*)\)._
Proof.: The learning with errors assumption states that it is hard to compute \(\mathbf{s}\) given \(\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\) with \(\mathbf{e}\) sampled from \(\mathcal{D}_{\sigma_{0}}\). We need to show that it is hard to compute \(\mathbf{s}\) given the analogous superposition over \(\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\), where here \(\mathbf{e}\) comes from the Gaussian superposition \(|\mathcal{D}_{\sigma}\rangle\). The idea is a simple application of noise flooding: given \(\mathbf{u}=\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\), compute the state \(|\psi_{\mathsf{s}}^{\prime}\rangle:=\sum_{\mathbf{e}^{\prime}\in\mathbb{Z}_{N }^{n}}\sqrt{\mathcal{D}_{\sigma,N/2}(\mathbf{e}^{\prime})}|\mathbf{A}^{T} \cdot\mathbf{s}+\mathbf{e}+\mathbf{e}^{\prime}\rangle\). Since \(\sigma/\sigma_{0}\) is super-polynomial, \(\mathbf{e}+\mathbf{e}^{\prime}\) where \(\mathbf{e}^{\prime}\leftarrow\mathcal{D}_{\sigma,N/2}\) is negligibly close to a Gaussian centered at \(0\). Therefore, \(|\psi_{\mathsf{s}}^{\prime}\rangle\) is negligibly close to \(|\psi_{\mathsf{s}}\rangle\). Plugging into a supposed DLog adversary then gives \(\mathbf{s}\), breaking LWE.
Unfortunately, this LWE-based group action is missing a crucial feature: it is not possible to recognize states in \(\mathcal{X}\). In particular, the states in \(\mathcal{X}\) are indistinguishable from states of the form \(\sum_{\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{\sigma,N/2}(\mathbf{e} )}|\mathbf{v}+\mathbf{e}\rangle\), where \(\mathbf{v}\) is an arbitrary vector in \(\mathbb{Z}_{N}^{m}\). As we will see in the next subsection, the inability to recognize \(\mathcal{X}\) will prevent us from using this group action to instantiate our quantum money scheme.
### Relation to Quantum Money Approaches based on Lattices
Here, we see that our quantum money scheme is conceptually related to a folklore approach to building quantum money from lattices. This approach has not been made to work; in our language, the reason is precisely the inability to recognize \(\mathcal{X}_{\text{LWE},\mathbf{N},\mathbf{n},\mathbf{m},\sigma}\).
The approach is the following. Let \(\mathbf{A}\in\mathbb{Z}_{N}^{n\times m}\) be a random short wide matrix over \(\mathbb{Z}_{N}\). To mint a banknote, construct the discrete Gaussian superposition \(|\mathcal{D}_{\sigma}\rangle^{\otimes m}\) in register \(\mathcal{M}\). Then compute and measure \(\mathbf{A}\cdot\mathbf{x}\) applied to \(\mathcal{M}\). The result is a vector \(\mathbf{h}\in\mathbb{Z}_{N}^{n}\), which will be the serial number, and \(\mathcal{M}\) collapses to a superposition \(|\mathfrak{S}_{\mathbf{h}}\rangle\propto\sum_{\mathbf{x}:\mathbf{A}\cdot \mathbf{x}=\mathbf{h}}\sqrt{\mathcal{D}_{\sigma}(\mathbf{x})}|\mathbf{x}\rangle\) of short vectors \(\mathbf{x}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{h}\). This is the banknote. A simple argument shows that it is impossible to construct two copies of \(|\mathfrak{S}_{\mathbf{h}}\rangle\) for the same \(\mathbf{h}\): given such a pair, measure each to get \(\mathbf{x},\mathbf{x}^{\prime}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{A}\cdot\mathbf{x}^{\prime}=\mathbf{h}\). Then subtract to get a short vector \(\mathbf{x}-\mathbf{x}^{\prime}\) such that \(\mathbf{A}\cdot(\mathbf{x}-\mathbf{x}^{\prime})=0^{n}\). We can conclude \(\mathbf{x}-\mathbf{x}^{\prime}\) is non-zero with overwhelming probability, since the measurement of \(|\mathfrak{S}_{\mathbf{h}}\rangle\) has high entropy. Such a non-zero short kernel vector would solve the Short Integer Solution (SIS) problem, which is widely believed to be hard and is the foundation of lattice-based cryptography.
Unfortunately, the above approach is broken. The problem is that there is no way to actually verify banknotes. One can verify that a banknote has support on short vectors with \(\mathbf{A}\cdot\mathbf{x}=\mathbf{h}\), but it is impossible to verify that the banknote is in superposition. If one could solve the Learning with Errors (LWE) problem, it would be possible to verify banknotes as follows: first perform the QFT to the banknote state. If an honest banknote, the QFT will give a state negligibly close to
\[|\mathfrak{S}_{\mathbf{h}}^{\prime}\rangle:=\frac{1}{N^{n/2}}\sum_{\mathbf{s}\in\mathbb{Z}_{N}^{n},\,\mathbf{e}\in\mathbb{Z}_{N}^{m}}\sqrt{\mathcal{D}_{N/\sigma}(\mathbf{e})}e^ {i2\pi\mathbf{h}\cdot\mathbf{s}/N}|\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e} \rangle\enspace. \tag{7.1}\]
The second step is to simply apply the supposed LWE solver to this state in superposition, ensuring that the state has support on vectors of the form \(\mathbf{A}^{T}\cdot\mathbf{s}+\mathbf{e}\) for small \(\mathbf{e}\).
Unfortunately, LWE is likely hard. In fact, it is quantumly equivalent to SIS [10], meaning if one could verify banknotes using an LWE solver, then SIS is easy. Not only does this mean we are reducing from an easy problem, but it would be possible to turn such a SIS algorithm into an attack.
Without the ability to verify that banknotes are in superposition, the attacker can simply measure a banknote to get \(\mathbf{x}\), and then pass off \(|\mathbf{x}\rangle\) as a fake banknote that will pass verification. Since \(\mathbf{x}\) is trivially copied, this would break security. Interestingly, [11] prove that, no matter what efficient verification procedure is used, even if the verification diverged from the LWE-based approach above, this attack works. [10] extend this to a variety of potential schemes based on similar ideas, including a recently proposed instantiation of this approach by [12].
We now see how the above approach is essentially equivalent to our construction of quantum money from group actions, instantiated over our LWE-based quantum group action. The inability to recognize \(\mathcal{X}\) is the reason this instantiation is insecure, despite natural hardness assumptions presumably holding on the group action.
We consider the quantum group action \((\mathbb{G}_{\mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\mathsf{N}/\sigma},\mathcal{X}_{ \mathsf{LWE},\mathsf{N},\mathsf{n},\mathsf{m},\mathsf{N}/\sigma},*)\), where \(\sigma\) is from the folklore construction above. When our construction is applied to this group action, a banknote in our scheme, up to negligible error from truncating discrete Gaussians, is the state \(|\mathfrak{S}_{\mathbf{h}}^{\prime}\rangle\) from Equation 7.1 above, where the serial number is \(\mathbf{h}\). Thus, we see that our quantum money scheme is simply the folklore construction but moved to the Fourier domain. The attack on the folklore construction can therefore easily be mapped to an attack on our scheme: if the adversary is given \(|\mathfrak{S}_{\mathbf{h}}^{\prime}\rangle\), it measures in the Fourier domain (which is the primal domain for the folklore construction) to get a short vector \(\mathbf{x}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{h}\). Then it switches back to the primal domain, giving the state
\[\frac{1}{N^{m/2}}\sum_{\mathbf{u}}e^{i2\pi\mathbf{u}\cdot\mathbf{x}/N}|\mathbf{u}\rangle\]
This is a state that lies outside the span of \(\mathcal{X}\). However, no efficient verification procedure can distinguish it from an honest banknote state.
Two features distinguish isogeny-based group actions from the LWE-based action above. The first is the ability to recognize elements in \(\mathcal{X}\). Suppose it were possible to recognize elements of \(\mathcal{X}\) in the LWE-based action, and we had the verifier check to see if the banknote belonged to the span of the elements in \(\mathcal{X}\). In the language of quantum group actions, this check would prevent the attacker from sending \(\frac{1}{N^{m/2}}\sum_{\mathbf{u}}e^{i2\pi\mathbf{u}\cdot\mathbf{x}/N}|\mathbf{u}\rangle\), which lies outside the span of \(\mathcal{X}\). In the language of the folklore construction, this check would correctly distinguish between an honest banknote and the easily clonable state \(|\mathbf{x}\rangle\) in the attack. If such a check were possible, the proof sketched above would work to base the security of the scheme on SIS. Unfortunately, such a check is computationally intractable under the decision LWE problem, which is equivalent to SIS and most likely hard.
The issue of recognizing set elements is also crucial in our security arguments. Indeed, the first step in our proof was to characterize the states accepted by the verifier, showing that only honest banknote states are accepted. This step in the proof fails in the LWE-based scheme, which would prevent the proof from going through. Thus, even though the scheme based on LWE is broken, it does not contradict our DLog/1-minCDH and Q-KGEA assumptions holding on the LWE-based group action.
The second difference is that, with the LWE-based group action, taking the QFT of money states gives elements with meaningful structure: short vectors \(\mathbf{x}\) such that \(\mathbf{A}\cdot\mathbf{x}=\mathbf{h}\). This structure and its relation to the original money state are what enable the attack. In contrast, taking the QFT of money states over \(\mathcal{X}\) coming from isogenies will give terms with no discernible structure.
We believe the above perspective adds to the confidence in our proposal. Indeed, in the LWE-based scheme, the key missing piece is recognizing set elements; if not for this missing piece the scheme _could_ be proven secure. By switching to group actions based on isogenies, we add the missing piece. The hope is that even though the source of hardness is now from hard problems on isogenies over elliptic curves instead of lattices, by adding the missing piece we can finally obtain a secure scheme.
|
2304.04212 | RISC: Generating Realistic Synthetic Bilingual Insurance Contract | This paper presents RISC, an open-source Python package data generator
(https://github.com/GRAAL-Research/risc). RISC generates look-alike automobile
insurance contracts based on the Quebec regulatory insurance form in French and
English. Insurance contracts are 90 to 100 pages long and use complex legal and
insurance-specific vocabulary for a layperson. Hence, they are a much more
complex class of documents than those in traditional NLP corpora. Therefore, we
introduce RISCBAC, a Realistic Insurance Synthetic Bilingual Automobile
Contract dataset based on the mandatory Quebec car insurance contract. The
dataset comprises 10,000 French and English unannotated insurance contracts.
RISCBAC enables NLP research for unsupervised automatic summarisation, question
answering, text simplification, machine translation and more. Moreover, it can
be further automatically annotated as a dataset for supervised tasks such as
NER | David Beauchemin, Richard Khoury | 2023-04-09T10:42:18Z | http://arxiv.org/abs/2304.04212v1 | # RISC: Generating Realistic Synthetic Bilingual Insurance Contract
###### Abstract
This paper presents RISC, an open-source Python package data generator1. RISC generates look-alike automobile insurance contracts based on the Quebec regulatory insurance form in French and English. Insurance contracts are 90 to 100 pages long and use complex legal and insurance-specific vocabulary for a layperson. Hence, they are a much more complex class of documents than those in traditional NLP corpora. Therefore, we introduce RISCBAC, a Realistic Insurance Synthetic Bilingual Automobile Contract dataset based on the mandatory Quebec car insurance contract. The dataset comprises 10,000 French and English unannotated insurance contracts. RISCBAC enables NLP research for unsupervised automatic summarisation, question answering, text simplification, machine translation and more. Moreover, it can be further automatically annotated as a dataset for supervised tasks such as NER.
Footnote 1: [https://github.com/GRAAL-Research/risc](https://github.com/GRAAL-Research/risc)
Synthetic Data Generation, Bilingual Unsupervised Corpus, Legal NLP, Insurance dataset, Machine Learning
## 1 Introduction
Application of NLP deep learning techniques to specialized domains has seen an increase in interest in recent years [1]. The legal domain is one such domain, known to be complex and hermetic for a layperson [2]. This complexity has real consequences for many individuals and organizations. For example, a Canadian study (in the province of Quebec) has shown that the public register of official court traces (i.e. dockets) of all legal cases lacks intelligibility for most citizens [3; 4]. Moreover, this complexity has raised concerns about assisting the public with fair access to justice and judicial information [5; 6], especially since the COVID pandemic, during which judicial systems have accumulated a backlog of court cases [7; 8].
Even though judiciary systems produce, consume and use massive volumes of textual information [9], they lack technological solutions to increase their efficiency. Moreover, legal documents are known to be complex and lengthy and use specialized vocabulary [1], which raises the technical challenge of developing NLP systems in that domain.
Thus, creating large curated annotated legal corpora has proven to be costly [10; 11]. For example, MAUD, an expert-annotated merger agreement understanding dataset, has been estimated to cost $5 million using the standard hourly fees of specialized lawyers [11]. Despite the challenges, there has understandably been great interest in exploring the use of deep learning techniques such as the Transformer architecture (e.g. GPT-like models) [12; 13] to help process complex legal texts.
Insurance contracts are a particular case of legal documents where documents are relatively standardized, yet they use legal and insurance-specific vocabulary. For example, they use long and wordy sentences to specify a property or life risk coverage. Also, insurance contracts (at least in Canada) use a base form that specifies many exclusions and limited coverage and use appended endorsements to modify the base form. Thus, the overall document, composed of a base form and endorsements, "contradicts" itself and must be interpreted as a whole.
Insurance products can have significant implications for an individual's financial health in the event of a loss. For example, a residential property total loss represents a heavy loss for any individual. This situation has led many governments to establish insurance regulators such as the _Autorite des marches financiers_ (AMF) in the province of Quebec [14]. Moreover, some insurance products are mandatory by law; for example, car civil liability insurance is mandatory in Quebec. Thus, choosing the right product is an essential step for many individuals, yet it is complicated. Regulations usually enforce a professional's advisory role as a legal obligation on insurers to protect the public [14]. However, in recent years, many governments have started authorizing the online sale of insurance products without the intervention of any human agent [14, 15]. This new way of selling insurance has raised concerns for regulatory and professional organizations in their role to protect the public [16, 17]. It has created interest in leveraging new technologies, such as deep learning, to improve (or automate) access to more understandable and personalized information about insurance products. However, no insurance contract corpora are currently available to train machine learning (ML) models to tackle NLP tasks that apply to the insurance field [1].
One of the particularities of insurance contracts is that they include detailed customer personal data such as name, date of birth and address. It is thus challenging to release a public dataset based on actual customer insurance contracts, since the data would have to be anonymized. Moreover, they also include corporate property, namely the premium for a specific customer. Even if insurance contracts could be perfectly anonymized, releasing the premium could expose the insurer to premium reverse engineering from other insurers. For those reasons, in partnership with a Canadian insurance company, we have created a realistic synthetic insurance contract dataset generator, based on our strong field expertise in the insurance domain and using as much real data as possible.
This paper's contributions are twofold: a realistic insurance synthetic contract data generator and a new synthetic automobile insurance contract dataset. The paper is outlined as follows: first, we study the available legal corpora and synthetic dataset generators in Section 2. Then, we propose RISC, an open-source Python package, to generate realistic insurance synthetic contract datasets in Section 3. Finally, in Section 4, we propose a realistic synthetic bilingual automobile insurance contract corpus based on Quebec's car insurance, and we discuss the ML research tasks enabled by this corpus, which is more difficult than traditional NLP corpora.
## 2 Related Work
In recent years, a few legal corpora have been proposed in English, such as LEDGAR [18], CUAD [10], BillSum [19], MAUD [11], and EUR-Lex-Sum [20]. The first, LEDGAR, consists of 100,000 provisions to be classified into provision types (e.g. law compliance). Provisions are the "items" in any contract that constitute the contract's legal speech act. These provisions were extracted from contracts on the U.S. Securities and Exchange Commission (SEC) website, namely contracts between companies. The second, CUAD, is a dataset of 510 annotated contracts also used for classification, but for clause identification instead of provisions. However, these contracts are not insurance contracts but rather general contracts reviewed to assess the rights or obligations of an individual or company. The third, BillSum, consists of 22,218 US Congressional bills and reference summaries for legal text summarization. The dataset is constructed with law bills and not contracts. Nevertheless, it uses similar legal vocabulary, but the variety of law applications (e.g. environment, labour law) makes it of limited use for insurance applications. The fourth is MAUD, an expert-annotated
merger agreement understanding dataset for reading comprehension questions about merger agreements. However, again, the dataset does not transfer well to the insurance domain. Finally, a more recent corpus is EUR-Lex-Sum, a manually curated multi- and cross-lingual document summaries of legal acts from the European Union law platform. It contains up to 1,505 document/summary pairs for 24 languages. Like BillSum, it is constructed with legal acts, thus not insurance documents.
No synthetic corpus of legal documents is available in the literature, nor are any synthetic dataset generators for legal documents. However, creating a synthetic dataset is not a new challenge. Research in many areas, such as finance, healthcare and computer vision, uses synthetic datasets [21]. Synthetic data generation is usually categorized into two distinct categories: process-driven methods and data-driven methods. Process-driven methods generate synthetic data from mathematical models of an underlying physical process, for example, numerical simulations using Monte Carlo. Data-driven methods generate synthetic data from generative models that have been trained on real data [21]. Most recent approaches are data-driven and rely on generative methods using generative adversarial networks (GAN) [21]. GANs consist of two jointly-trained deep neural networks: one generates synthetic data intended to be as similar as possible to the training data, and one tries to discriminate the synthetic data from the true training data. They have proven to be very good at learning high-dimensional, continuous data such as images [21]. However, GAN data generators (or any data-driven approach) usually generate images, numerical values and short texts (i.e. sentences), not long coherent documents such as an insurance contract. Thus, solutions like the DataSynthetizer [22] or Synthetic Data Generation (SDV) [23] Python packages that use generative methods are not well suited to generating long textual data. Neither are other solutions using large language models (LLM) [24]. Indeed, most recent approaches using LLMs as the generative method are applied to documents that are relatively short compared to long insurance contracts (90 to 100 pages). For example, [25] used GPT-2 to generate new "long" documents of more than 280 tokens, using the SST movie reviews dataset to finetune GPT-2. Thus, what is meant there by a "long" document is much shorter than an insurance contract. Also, as [26] stated, for long text, LLMs tend to repeat themselves semantically at the document level, start to lose coherence over sufficiently long passages, contradict themselves and include factual inaccuracies. In other words, for now, LLMs do not show the capability to generate 90 to 100 pages that look like an actual insurance contract and do not include inaccurate information.
## 3. **Realistic Insurance Synthetic Contract Data Generator**
As stated in Section 1, insurance contracts include personal data and corporate intellectual property; for those reasons, it was impossible to publicly release a real insurance contract dataset. Therefore, in partnership with a Canadian insurance company, we propose the "Realistic Insurance Synthetic Contract"2 (RISC) data generator, an open-source Python package to generate realistic insurance synthetic contract datasets. It was developed to be as realistic as possible by being enriched and validated by the insurer's expertise. RISC uses a set of templates, statistical data models, and a synthetic protection generator trained on real insurance data to create synthetic data. As a result, starting from an initial seed, it can generate a deterministic dataset of non-annotated French and English realistic synthetic automobile insurance contracts based on the AMF-approved Quebec form and the insurer documentation. Real insurance contracts are composed of the following parts; thus, the synthetic ones use the same parts:
Footnote 2: [https://github.com/GRAAL-Research/risc](https://github.com/GRAAL-Research/risc)
**Insurer introductory pages**: consists of pages that introduce the insurer (e.g. customer service phone number), table of contents, customer advantages (e.g. privileged rates)
and actions required by customers (e.g. detach and keep insurance certificate). This part is typically 4 to 5 pages long.
**Declaration and disclosure**: consists of details about the insurance contract. Notably, it includes the main driver and vehicle information, contract start and end date, and contract insurance coverage. This part is typically 2 to 3 pages long.
**Quebec Police Form (Q.P.F.)**: consists of the AMF-approved automobile insurance form specifying the insurer's and insured's legal obligations, the inclusions and exclusions of the mandatory liability coverage and of the car property damage coverage, and the general conditions. The regulatory form does not cover all the regulated covered risks. Instead, it offers limited coverage. For example, the form covers the insured car but with depreciation. This part is 34 pages in French and 33 pages in English.
**Quebec endorsements form (Q.E.F.)**: consists of the set of 81 possible clauses added to the contract to increase or decrease the coverage of the base form. For example, an insurance contract can include an endorsement to cover the insured car without depreciation. In other words, endorsements "contradict" the base form text. Endorsements are typically 1 page long, but some can go up to 10 pages.
Figure 1 illustrates RISC's generation procedure (green) to generate a realistic synthetic automobile insurance contract (gray). It uses two components to generate an insurance contract: data generators (blue) and templates (red). First, it uses template-filling templates to ensure the proper generation of the contract structure. Second, it uses two generators designed to populate the templates: a realistic protection generator and a realistic data generator. These data generators produce the synthetic information included in the insurance contract, such as names and addresses. All three components will be discussed in the following sub-sections.
### Templates
To generate realistic insurance contracts, we have manually designed a set of fillable templates along with the generator's synthetic data based on the insurer's expertise. We created various templates in both French and English for all four parts of the insurance contract. Templates were created by manually extracting real insurance contract contents that were not insurance company information (e.g. name of the insurance company) or the insured information data (e.g. name, address, car details). Then, missing information, such as the insured name and car details, was marked as fillable data. The templates for the first two parts of the insurance contract are designed based on the insurer's corporate documentation. However, company-specific information in the documentation was depersonalized by replacing it with fake information that can be customized. For example, the "Insurer Customer Service" phone number can be replaced by any phone number. The templates for the last two parts of the contract are designed from the approved forms available online at the AMF Website [27]. In total, for both languages, we created 29 templates for the first three parts of the insurance contract and 25 for the endorsements. Figure 2 presents an example of a template used by our synthetic generator.
Figure 1: Illustration of RISC procedure to generate a realistic insurance synthetic contract.
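To make the template-filling step concrete, the short Python sketch below shows one way a fillable declaration template could be populated with generated data. It is only an illustration of the idea: the field names and the formatting mechanism are hypothetical and do not reflect RISC's actual template format.

```python
# Illustrative only: hypothetical field names, not RISC's real template markup.
DECLARATION_TEMPLATE = (
    "Named insured: {insured_name}\n"
    "Address: {insured_address}\n"
    "Contract period: from {start_date} to {end_date}\n"
    "Insured vehicle: {vehicle_year} {vehicle_maker} {vehicle_model}\n"
)

def fill_template(template: str, data: dict) -> str:
    """Replace every fillable marker in the template with generated data."""
    return template.format(**data)

example = fill_template(
    DECLARATION_TEMPLATE,
    {
        "insured_name": "Jane Doe",
        "insured_address": "123 Main Street, Quebec City (Quebec)",
        "start_date": "2023-01-15",
        "end_date": "2024-01-15",
        "vehicle_year": 2019,
        "vehicle_maker": "Honda",
        "vehicle_model": "Civic",
    },
)
print(example)
```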
### Realistic Protection Generation
The objective of the realistic protection generator is to generate a set of realistic protections for an insurance contract. The protections can include liabilities (Section A) and property damage (Sections B1 to B4) coverage, and the 81 available endorsements (Q.E.F. section) that increase or decrease insurance coverage. For each protection, a binary value represents whether or not the protection is included. Table 1 presents an example of a set of binary protections. However, these protections are not independent of each other; some build upon others, while some are mutually exclusive. Consequently, based on knowledge from our partner insurance company, we designed a set of rules to constrain how protections can interact with each other, and guarantee that the set generated corresponds to a likely insurance contract. Specifically, a set of protections must comply with the following rules to be realistic:
* It includes the mandatory Section A coverage.
* It does not include Section B1 with any other Section B coverage, since Section B1 is a superset of all the other Section B.
* It does not include both Section B3 and Section B4 since Section B3 is a superset of Section B4.
* It does not include Q.E.F. 41, which removes the deductible on some risks, if the insured has a claim or a driver's license suspension.
* It does not include a Q.E.F. 43, which covers the insured car without depreciation, without any Section B coverage, since Q.E.F. 43 is a replacement value applied to property damage described in Section B.
A rules-based approach enforces these constraints: a candidate set of protections is generated and checked against the rules; if it violates any of them, it is rejected, and the process is repeated until a set of protections that respects all the rules is obtained.
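The rejection loop described above can be sketched as follows. This is our own minimal illustration of the five constraints (reading the Q.E.F. 41 rule as conditional on the insured's claim and suspension record); `sample_protections` is a stand-in for whatever model proposes candidate binary protections, such as the TVAE sampler discussed next.

```python
import random

PROTECTIONS = ["Section A", "Section B1", "Section B2", "Section B3",
               "Section B4", "Q.E.F. 41", "Q.E.F. 43"]  # illustrative subset

def is_realistic(p: dict, has_claim_or_suspension: bool) -> bool:
    """Check a candidate set of binary protections against the rules above."""
    other_b = [p["Section B2"], p["Section B3"], p["Section B4"]]
    if not p["Section A"]:                                  # mandatory liability coverage
        return False
    if p["Section B1"] and any(other_b):                    # B1 is a superset of B2-B4
        return False
    if p["Section B3"] and p["Section B4"]:                 # B3 is a superset of B4
        return False
    if p["Q.E.F. 41"] and has_claim_or_suspension:          # no Q.E.F. 41 after a claim/suspension
        return False
    if p["Q.E.F. 43"] and not (p["Section B1"] or any(other_b)):  # Q.E.F. 43 needs Section B coverage
        return False
    return True

def sample_protections() -> dict:
    """Stand-in proposal distribution; replace with the trained TVAE sampler."""
    return {name: random.randint(0, 1) for name in PROTECTIONS}

def generate_realistic_protections(has_claim_or_suspension: bool = False) -> dict:
    candidate = sample_protections()
    while not is_realistic(candidate, has_claim_or_suspension):
        candidate = sample_protections()
    return candidate

print(generate_realistic_protections())
```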
The insurance company provided us with a real insurance tabular dataset to develop a synthetic protection generator that can generate realistic data. This dataset consists of 266,082 binary protections similar to the one shown in Table 1. However, since insurers are not required to cover all 81 endorsements, our dataset includes only the 26 endorsements covered by our partner. Based on the insurer dataset, on average, an insurance contract (a row) includes 7.24 protections, including the mandatory civil liability, and all the contracts include at least one endorsement. Moreover, as shown in Table 2, there are 1,880 unique combinations of protections (sets of columns), and 75 % of them appear in at most 0.00004 % of the dataset. This means that using the unique combinations' distribution to generate a synthetic protection dataset would be cumbersome due to the many rarely-occurring combinations. Furthermore, such an approach would only generate combinations of protections seen during training. The insurer was also unwilling to share a model to generate a perfect distribution of its risk portfolio. Thus, a look-alike distribution was more suitable for a public dataset.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Section A & Section B1 & Section B2 & Section B3 & Section B4 & Q.E.F. 2 & Q.E.F. 3 & \(\ldots\) & Q.E.F. 48a \\ \hline
1 & 0 & 1 & 1 & 0 & 0 & 1 & \(\ldots\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Example of a set of protections for a Quebec insurance contract, provided by the insurer.
Figure 2: Fillable template example used by RISC to generate insurance contract.
Since the data to generate are composed of numerical values, we have trained a tabular variational autoencoder (TVAE) and a conditional tabular GAN (CTGAN) model using [28]'s approach. The TVAE model uses a modified version of the traditional VAE loss function to adapt to tabular data. The CTGAN model is a conditional GAN for synthetic tabular data generation using mode-specific normalization. The advantage of using these approaches is that they rely on a neural network generative model to capture the relationship between the distributions of a specific protection (a column) and all the other protections. For example, it is common to see a "bundle" of endorsements purchased together, such as Q.E.F. 20a and 27, to cover civil liability for a short-term rental car during a vacation trip. These approaches capture commonly-occurring sets of protections but do not restrict the generative model to combinations seen during training. Therefore, the data will be realistic but will differ slightly from the insurer's portfolio risk.
To train our two models, we use the SDV [23] implementation of the TVAE and CTGAN models. We train each of the aforementioned models using the random initial seed 42 with a batch size of 1,024. The models were trained for 200 epochs using SDV default training parameters for the generator and discriminator dimensions and learning rate. The training was done using the entire dataset since SDV evaluates models by comparing the quality of a synthetic sampled test dataset to the original one. It does so by computing the inverted Kolmogorov-Smirnov (KS) test [29] between the two datasets. We have used a synthetic sampled test dataset of size 300,000. Table 3 shows the averaged metric values for both models. First, these results show that both models achieved high scores on the KS test, but the TVAE model slightly outperformed the CTGAN model. We conducted a z-test on both models' KS test scores to further assess the models' performance. Our z-test null hypothesis is that the pair of models have equal performance, meaning that values smaller than \(-3.290527\) or greater than \(3.290527\) allow us to reject the null hypothesis with \(\alpha=0.001\). A positive value means that the first model (left) performs significantly better than the second (right), and a negative value means the opposite. The z-test value is 70.63, so we can reject the null hypothesis that both models share the same performance; it also means that the TVAE performs significantly better than the CTGAN model. Second, both models create synthetic data with unique combination distributions similar to the insurer dataset. Third, the CTGAN tends to generate nearly double the number of new unique combinations (UC) of sets of protections, with 1,842 of them being entirely new (not seen during training). Conversely, TVAE creates more look-alike protections by generating more sets of protections similar to the real data. Therefore, since TVAE has significantly better performance, is less computationally intensive, easier to use and tends to offer protections more similar to the insurer dataset, we selected this model as the protection generation model.
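A minimal training sketch along these lines is shown below. It assumes SDV's pre-1.0 `sdv.tabular` interface (later SDV releases moved these classes to `sdv.single_table` with slightly different arguments), and the CSV path is a placeholder for the insurer's protection table; CTGAN training is analogous.

```python
import pandas as pd
from sdv.tabular import TVAE  # pre-1.0 SDV API; newer releases expose sdv.single_table instead

# Binary protection table in the shape of Table 1 (placeholder file name).
real_protections = pd.read_csv("insurer_protections.csv")

# TVAE trained on the full dataset with the settings reported above.
model = TVAE(epochs=200, batch_size=1024)
model.fit(real_protections)

# Sample a 300,000-row synthetic test set for evaluation
# (inverted KS test, unique-combination statistics).
synthetic_protections = model.sample(num_rows=300_000)
synthetic_protections.to_csv("synthetic_protections.csv", index=False)
```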
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Unique combination & Average UC & UC frequency & UC frequency & Maximum UC \\ (UC) & frequency (\%) & median (\%) & 75-quartile (\%) & frequency (\%) \\ \hline
1,880 & 0.00053 & 0.00001 & 0.00004 & 0.12872 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Distribution of the unique combinations of the insurer protection dataset.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & \begin{tabular}{c} Inverted \\ KS test \\ \end{tabular} & \begin{tabular}{c} Unique \\ combination \\ (UC) \\ \end{tabular} & \begin{tabular}{c} New \\ combination \\ (UC) \\ \end{tabular} & \begin{tabular}{c} Average UC \\ frequency \\ (\%) \\ \end{tabular} & \begin{tabular}{c} UC frequency \\ median \\ (\%) \\ \end{tabular} & \begin{tabular}{c} UC frequency \\ 75-quartile \\ (\%) \\ \end{tabular} & \begin{tabular}{c} Maximum UC \\ frequency \\ (\%) \\ \end{tabular} \\ \hline Insurer data & - & 1,880 & - & 0.00053 & 0.00001 & 0.00004 & 0.12872 \\ TVAE & 0.9964 & 1,605 & 535 & 0.00062 & 0.00001 & 0.00007 & 0.12689 \\ CTGAN & 0.9746 & 2,912 & 1,842 & 0.00034 & 0.00001 & 0.00003 & 0.11602 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Distribution analysis of the unique combination of the synthetic protection generator.
### Realistic Data Generation
The objective of the realistic data generator is to generate a set of data similar to those in a real insurance contract. However, since most of these data include personal information such as date of birth, address, car details, and driving record, it is impossible to use real data to develop a synthetic data generator due to confidentiality concerns, unlike the realistic protection generator. Hence, using our and the insurer's expertise, we have selected a mix of preset statistic generators available in the literature and crafted stochastic generators to compose the realistic data generator; they are listed below:
**Insured personal information**: For most of the insured person's data, such as the name, address, date of birth, unique client ID, and association rebate, we have used the Python Faker library [30]. It uses preset data to sample fake data randomly. For example, to generate names, Faker uses presets of first and last names and samples from both presets to create a completely fake name. For the sex, we have used stochastic sampling using realistic distribution parameters based on the driver population presented in the 2021 SAAQ road safety record [31].
**Insured driving information**: For the insured person's driving information, namely the number of claims in the past five years and the number of driving suspensions, we have used stochastic sampling using realistic distribution parameters based on the past eleven years' GAA Quebec's claims data [32] and the 2019 SAAQ driver suspension data [33]. We have chosen the 2019 SAAQ driver suspension data to avoid the COVID restrictions of 2020-2021, when license suspensions significantly dropped due to reduced opportunities to drive (and thus to be caught in a driving infraction by police and receive a suspension).
**Protections coverage amount**: For the protection coverage amounts of the liability coverage and the property damage deductible, we have used stochastic sampling using realistic distribution parameters based on the insurer's expertise.
**Vehicle information**: To generate the vehicle data (e.g. year, maker, model, motor type (e.g. electric) and financing institution details), we use the Python Faker library. For the purchase condition, we use a stochastic sampling using realistic distribution parameters based on the 2022 Statistics Canada quarterly new motor vehicle registrations [34] and 2021 SAAQ road safety record [31].
**Contract information**: The contract starting date is generated using the Python Faker library in the range of up to one year before the generation date. For the contract premium details per protection, we use stochastic sampling from realistic distribution parameters based on the insurer's expertise and the 2021 GAA premium statistics [35].
In order to reduce the complexity of the data generation process, we also designed the system to only generate data for one-year contracts of new customers that cover a single insured person on a single car. These represent the most common type of car insurance contract. However, this limitation can easily be removed if a more general insurance dataset needs to be generated.
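The sketch below illustrates this mix of Faker presets and stochastic sampling. The locale, field names and distribution weights are placeholders chosen for the example; the actual parameters in RISC come from the insurer's expertise and the SAAQ/GAA statistics cited above.

```python
import random
from faker import Faker

fake = Faker("fr_CA")  # Quebec-flavoured names and addresses (illustrative locale choice)

def generate_insured_record(seed: int = 42) -> dict:
    Faker.seed(seed)
    random.seed(seed)
    return {
        "name": fake.name(),
        "address": fake.address().replace("\n", ", "),
        "date_of_birth": fake.date_of_birth(minimum_age=16, maximum_age=90),
        "client_id": fake.uuid4(),
        # Stochastic sampling with illustrative weights (not the real parameters).
        "sex": random.choices(["F", "M"], weights=[0.48, 0.52])[0],
        "claims_last_5_years": random.choices([0, 1, 2], weights=[0.80, 0.15, 0.05])[0],
        "contract_start": fake.date_between(start_date="-1y", end_date="today"),
    }

print(generate_insured_record())
```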
## 4 Realistic Insurance Synthetic Bilingual Automobile Contract Dataset
We created the Realistic Insurance Synthetic Bilingual Automobile Contract (RISCBAC) dataset3 using RISC to enable ML research in the insurance field. It consists of 10,000 French and English realistic synthetic automobile insurance contracts. The dataset is generated using the initial seed 42 for each language. As a result, the contracts in both datasets have the same protections and data.
### Datasets Analysis
Table 4 presents some key statistics of the French and English RISCBAC lower-cased datasets and of the legal corpora introduced in Section 2. For the legal corpora, we have used their official version on the "HuggingFace Datasets Hub"4, except for LEDGAR, which was not available. Instead, we have used LEDGAR's official clean version available online5. For each of these corpora, depending on the dataset type (i.e. the task), we kept only the "(_column name_)" written below the dataset name shown in Table 4. For example, for the BillSum dataset, we only kept the "_text_" column, thus excluding the "_summary_" and "_title_" from the statistics. All statistics were computed using SpaCy [36], and they excluded new line (\(\backslash\)n), whitespace, punctuation and some special characters (\(<\), \(>\), \(\mid\) and \(\$\)), and numeric character tokens. We will first analyze the English and French RISCBAC datasets in the following two sub-sections and then compare them with other legal corpora using Table 4.
Footnote 4: [https://huggingface.co/datasets](https://huggingface.co/datasets)
Footnote 5: [https://drive.switch.ch/index.php/s/j9S0GRMAbGZKa1A](https://drive.switch.ch/index.php/s/j9S0GRMAbGZKa1A)
#### 4.1.1. **RISCBAC Datasets Comparison**
First, we can see in Table 4 that the datasets in both languages share a relatively similar number of tokens and lexical words (LW) (i.e. non-stopwords), with French having only 11% more tokens than English. Second, the vocabulary size is relatively small since all insurance contracts share the same base contract and only vary in endorsements and data (e.g. insured name and address). However, we note that English has 66 % more vocabulary than French. Third, documents are long; they include, on average, 1,071 and 996 sentences in 98 and 95 pages. Fourth, we can see that the documents are complex. They are, on average, composed of wordy sentences (25 tokens long). For example, the UK government's best writing practices policy stated that official publications should not use sentences of more than 25 words and should use an average of 14 words [37]. Finally, to evaluate the reading complexity level of the contracts, we compute readability scores using the following three frequently used formulas: Flesch-Kincaid [38], Gunning fog index [39] and SMOG [40]. These formulas rate readability on a scale from 0 (hardest) to 100 (easiest). All formulas use slightly different approaches to measure the difficulty level. We can see that the two contract datasets score near the minimum on all three metrics, making them very complicated to read.
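To illustrate how such statistics can be computed, the sketch below uses spaCy for tokenization, as in the paper, and the `textstat` package for the readability formulas; the latter is our assumption, since the paper does not name the readability tool, and the model name and filters are illustrative.

```python
import spacy
import textstat

nlp = spacy.load("en_core_web_sm")  # use a French pipeline (e.g. fr_core_news_sm) for the French split
SPECIAL_CHARS = {"<", ">", "|", "$"}

def document_statistics(text: str) -> dict:
    doc = nlp(text.lower())
    tokens = [t for t in doc
              if not (t.is_space or t.is_punct or t.like_num or t.text in SPECIAL_CHARS)]
    lexical_words = [t for t in tokens if not t.is_stop]
    n_sentences = sum(1 for _ in doc.sents)
    return {
        "n_tokens": len(tokens),
        "n_lexical_words": len(lexical_words),
        "n_sentences": n_sentences,
        "avg_sentence_length": len(tokens) / max(n_sentences, 1),
        # Readability formulas referenced in the text, computed with textstat (our assumption).
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "gunning_fog": textstat.gunning_fog(text),
        "smog": textstat.smog_index(text),
    }
```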
#### 4.1.2 **RISCBAC Comparison With Other Legal Corpora**
Referring again to Table 4, the RISCBAC datasets contain much longer documents than any other dataset, with nearly double the number of tokens and 150 % more sentences per document compared to the second-longest documents, found in EUR-Lex-Sum. On the other hand, RISCBAC sentences are among the shortest in the table, nearly five times shorter than the longest ones, found in MAUD, and RISCBAC has the lowest lexical richness. Despite this, RISCBAC documents achieve the lowest Flesch-Kincaid readability score, demonstrating that insurance contracts are longer and more complicated to read than other legal documents. These results highlight how insurance contracts are a very different and much more complex type of document than those found in traditional NLP corpora and even legal NLP corpora.
### Research using RISCBAC
In this section, we discuss ML NLP tasks that can be performed on the RISCBAC dataset and those tasks that require additional work on the dataset before it can be used.
The documents generated can be used for research on unsupervised automatic text summarization [41], unsupervised question answering [42] and unsupervised information retrieval [43], unsupervised legal text simplification [44], unsupervised machine translation [45], text anonymization [46], and coreference resolution of clauses [47; 48]. In addition, they could also be used as a low-resource dataset for meta-learning tasks [49]. The unique features of insurance contracts make our RISCBAC dataset particularly interesting for these tasks compared to other available datasets. Working with such lengthy documents is challenging due to the computing limitations of current state-of-the-art deep learning methods such as the Transformer [50]. Furthermore, as stated in Section 1, insurance contracts "contradict" themselves between the base form and the endorsements. As a result, tasks such as summarization, information retrieval and question answering become more challenging. Few works focus on handling contradictions in sentences [51], and even fewer in documents, with most of them focusing on misinformation detection [52] or multi-document contradictions [53]. The contradictions found in our dataset are of a different and much more challenging nature.
Furthermore, the RISCBAC dataset can also be used for research on tasks such as legal named entity recognition (NER) [54], supervised machine translation [45], supervised coreference document resolution [55] and contract element extraction [56]. However, doing so will require further annotation of the dataset. Annotations must be provided and validated for each specific task to use the corpus to train supervised ML algorithms. For instance, the NER task would require annotating relevant named entities such as the insured name, address, car details, and named law articles and contract items (e.g. Item 3, Civil Code Art. 2). Supervised machine translation would require a pre-processing text alignment step [57]. Supervised coreference document resolution would require manual or semi-manual annotation of portions of a document referring to other portions of the insurance contract. Finally, contract element extraction would require manual annotation of the relevant elements, similar to the NER data, but also including contract elements such as items and clauses.
## 5 Conclusion
This paper presented RISC, an open-source Python package we created to generate realistic synthetic insurance contracts. It is designed to mimic Quebec's automobile insurance contracts. We also presented RISCBAC, a realistic bilingual synthetic automobile insurance contract dataset. The dataset currently comprises 10,000 French and English synthetic automobile insurance contracts in .txt format. Both contributions are designed to enable
\begin{table}
\begin{tabular}{l|c c|c c c c|c c} \hline \hline & \multicolumn{2}{c|}{RISCBAC} & LEDGAR & CUAD & BillSum & MAUD & \multicolumn{2}{c}{EUR-Lex-Sum} \\ & French & English & (provision) & (context) & (text) & (text) & French & English \\ \hline Number of documents & 10,000 & 10,000 & 846,274 & 26,632 & 23,455 & 39,231 & 1,505 & 1,504 \\ Vocabulary size & 19,159 & 31,869 & 79,582 & 38,722 & 120,683 & 6,130 & 226,558 & 218,835 \\ Avg number of tokens & 26,869.85 & 24,198.49 & 122.45 & 9,092.28 & 1,721.22 & 450.99 & 14,484.40 & 12,636.66 \\ Avg number of LW & 13,109.94 & 12,968.63 & 59.24 & 4,932.46 & 707.94 & 231.19 & 7,388.66 & 7,132.57 \\ Avg number of sentences & 1,070.88 & 996.35 & 2.11 & 264.52 & 52.36 & 4.04 & 714.47 & 399.68 \\ Avg sentence length & 25.09 & 24.40 & 63.67 & 36.43 & 26.46 & 163.89 & 60.40 & 45.38 \\ Avg sentence length (LW) & 12.34 & 13.13 & 30.71 & 19.82 & 14.72 & 83.69 & 30.19 & 25.15 \\ Avg number of pages & 98.05 & 95.05 & N/A & N/A & N/A & N/A & N/A & N/A \\ Lexical richness & 0,00014 & 0,00024 & 0,00158 & 0,00029 & 0,00725 & 0,00065 & 0,02034 & 0,02037 \\ Avg Flesch-Kincaid score & 11.73 & 13.77 & 25.60 & 16.40 & 15.76 & 61.77 & 19.45 & 19.58 \\ Avg Gunning fog score & 10.81 & 10.47 & 27.65 & 15.04 & 14.98 & 63.09 & 18.74 & 17.42 \\ Avg SMOG score & 14.18 & 15.97 & 6.82 & 16.65 & 16.73 & 15.32 & 17.86 & 19.42 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Aggregate statistics of the RISCBAC datasets and of the legal corpora introduced in Section 2.
NLP experiments applied to insurance documents, a very different and much more difficult class of documents than those in traditional NLP corpora.
To continue our work, we aim to extend the type of insurance documents RISC can generate to include residential property and collective insurance. Unlike automotive insurance contracts, these contracts do not have a mandatory regulated form in Quebec and Canada, but rather a variable "standard form" and, moreover, are primarily proprietary documents. We also aim to include an automatic annotation step of named entities during the RISC generation process.
## Acknowledgements
This research was made possible thanks to the support of a Canadian insurance company, NSERC research grant RDCPJ 537198-18 and FRQNT doctoral research grant. We wish to thank the reviewers for their comments regarding our work.
|
2306.12209 | The Directed Uniform Hamilton-Waterloo Problem Involving Even Cycle
Sizes | In this paper, factorizations of the complete symmetric digraph $K_v^*$ into
uniform factors consisting of directed even cycle factors are studied as a
generalization of the undirected Hamilton-Waterloo Problem. It is shown, with a
few possible exceptions, that $K_v^*$ can be factorized into two nonisomorphic
factors, where these factors are uniform factors of $K_v^*$ involving $K_2^*$
or directed $m$-cycles, and directed $m$-cycles or $2m$-cycles for even $m$. | Fatih Yetgin, Uğur Odabaşı, Sibel Özkan | 2023-06-21T12:07:25Z | http://arxiv.org/abs/2306.12209v1 | # The directed uniform Hamilton-Waterloo problem involving even cycle sizes
###### Abstract.
In this paper, factorizations of the complete symmetric digraph \(K_{v}^{*}\) into uniform factors consisting of directed even cycle factors are studied as a generalization of the undirected Hamilton-Waterloo Problem. It is shown, with a few possible exceptions, that \(K_{v}^{*}\) can be factorized into two nonisomorphic factors, where these factors are uniform factors of \(K_{v}^{*}\) involving \(K_{2}^{*}\) or directed \(m\)-cycles, and directed \(m\)-cycles or \(2m\)-cycles for even \(m\).
Key words and phrases:The Directed Hamilton-Waterloo Problem, 2-factorizations, directed cycle factorizations 2010 Mathematics Subject Classification: 05C51,05C70
## 1. Introduction
In this paper, edges and arcs are denoted by using curly braces and parentheses, respectively. Throughout this paper, we denote by \(K_{(x:y)}\) a complete equipartite graph having \(y\) parts of size \(x\) each. Also, for a simple graph \(G\), we use \(G^{*}\) to denote the symmetric digraph with vertex set \(V(G^{*})=V(G)\) and arc set \(E(G^{*})=\bigcup_{\{x,y\}\in E(G)}\{(x,y),(y,x)\}\). Hence, \(K_{v}^{*}\) and \(K_{(x:y)}^{*}\) respectively denote the complete symmetric digraph of order \(v\) and the complete symmetric equipartite digraph with \(y\) parts of size \(x\). We also use \((x,y)^{*}\) to denote the double arc which consists of \((x,y)\) and \((y,x)\).
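As a small illustration of this notation (our own sketch, not from the paper), \(G^{*}\) can be obtained from an edge list by replacing every edge with its double arc:

```python
def symmetric_digraph(edges):
    """Arc set of G*: each edge {x, y} of G is replaced by the double arc (x, y)*,
    i.e. the pair of arcs (x, y) and (y, x)."""
    return {(x, y) for u, v in edges for x, y in ((u, v), (v, u))}

# Example: K_4 has 6 edges, so K_4* has 12 arcs.
K4_edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
assert len(symmetric_digraph(K4_edges)) == 12
```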
A \(k\)-factor of a graph \(G\) is a \(k\)-regular spanning subgraph of \(G\). A \(k\)-factorization of a graph \(G\) is a partition of the edge set of \(G\) into \(k\)-factors; in other words, it is a decomposition of the edge set of \(G\) into edge-disjoint \(k\)-factors. It is easy to see that a \(2\)-factor consists of a cycle or a union of vertex-disjoint cycles. There are two well-studied \(2\)-factorization problems. The Oberwolfach Problem asks for the existence of a decomposition of the complete graph \(K_{v}\) into \(2\)-factors, each isomorphic to a given \(2\)-factor \(F\). The uniform version of the Oberwolfach Problem, in which there is only one type of cycle in the factor \(F\), has been mostly solved, see [4, 5, 22]. In the Hamilton-Waterloo Problem, there are two types of \(2\)-factors. The uniform version of the Hamilton-Waterloo Problem asks for a \(2\)-factorization of \(K_{v}\) (or, for even \(v\), a \(2\)-factorization of \(K_{v}-I\)) in which \(r\) of its \(2\)-factors consist of only \(m\)-cycles and the remaining \(s\) of its \(2\)-factors consist of only \(n\)-cycles; we will denote it by \(\operatorname{HWP}(v;m^{r},n^{s})\).
In Section 3, we will concentrate on finding solutions to \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) for even \(m\) with \(r+s=v-1\); such a solution is denoted as a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization of \(K_{v}^{*}\). In Section 4, we will concentrate on solving \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) for even \(m\) with \(r+s=v-1\). Here are our main results.
**Theorem 3**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 4\) be even. Then, \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) has a solution if \(m|v\), \(r+s=v-1\), \(s\neq 1\), \((r,v)\neq(0,6)\), \((m,r,v)\neq(4,0,4)\), and one of the following conditions holds;_
1. \(m>4\)_,_ \(s\neq 3\) _and_ \(m\equiv 0\pmod{4}\)_,_
2. \(m>4\)_,_ \(\frac{v}{m}\) _is even,_ \(s\neq 3\) _and_ \(m\equiv 2\pmod{4}\)_,_
3. \(m=4\) _and_ \(v\equiv 0,8,16\pmod{24}\)_,_
4. \(m=4\)_,_ \(v\equiv 12\pmod{24}\) _and_ \(s\notin\{3,5\}\)_,_
5. \(m=4\)_,_ \(v\equiv 4,20\pmod{24}\) _and_ \(r\) _is odd._
**Theorem 4**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 4\) be even. Then, \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) has a solution if and only if \(m|v\), \(r+s=v-1\) and \(v\geq 4\) except for \((s,v,m)\in\{(0,4,4),(0,6,3),(5,6,6)\}\), and except possibly when \(s\in\{1,3\}\)._
## 2. Preliminary Results
Let \(G\) be a graph and \(G_{0},G_{1},\ldots,G_{k-1}\) be \(k\) vertex disjoint copies of \(G\) with \(v_{i}\in V\left(G_{i}\right)\) for each \(v\in V(G)\). Let \(G[k]\) denote the graph with vertex set \(V(G[k])=V\left(G_{0}\right)\cup V\left(G_{1}\right)\cup\ldots\cup V\left(G_{k-1}\right)\) and edge set \(E(G[k])=\{\{u_{i},v_{j}\}:\{u,v\}\in\,E(G)\text{ and }0\leq i,j\leq k-1\}\). It is easy to see that there is an \(H[k]\)-factorization of \(G[k]\) if the graph \(G\) has an \(H\)-factorization.
Haggkvist used \(G[2]\) to build \(2\)-factorizations that include even cycles [21].
**Lemma 5** (Haggkvist Lemma).: _Let \(G\) be a path or a cycle with \(n\) edges and let \(H\) be a 2-regular graph on \(2n\) vertices all of whose components are even cycles. Then \(G[2]\cong G^{\prime}\oplus G^{\prime\prime}\) where \(G^{\prime}\cong G^{\prime\prime}\cong H\). Therefore, \(G[2]\) has an \(H\)-decomposition._
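To illustrate the two notions just introduced, the following short Python sketch (ours, for illustration only) builds \(C_{3}[2]\) and checks one concrete instance of Lemma 5 with \(H=C_{6}\): the graph \(C_{3}[2]\) decomposes into two edge-disjoint \(6\)-cycles.

```python
from itertools import product

def blow_up_edges(edges, k):
    """Edge set of G[k]: {u_i, v_j} for every edge {u, v} of G and 0 <= i, j < k."""
    return {frozenset({(u, i), (v, j)})
            for u, v in edges for i, j in product(range(k), repeat=2)}

def cycle_edges(cycle):
    return {frozenset({cycle[i], cycle[(i + 1) % len(cycle)]}) for i in range(len(cycle))}

C3 = [(0, 1), (1, 2), (2, 0)]
G2 = blow_up_edges(C3, 2)                       # C_3[2] has 3 * 4 = 12 edges

# Two edge-disjoint 6-cycles covering C_3[2] (one instance of H = C_6 in Lemma 5).
H1 = cycle_edges([(0, 0), (1, 0), (2, 0), (1, 1), (0, 1), (2, 1)])
H2 = cycle_edges([(0, 0), (2, 0), (0, 1), (1, 0), (2, 1), (1, 1)])

assert H1 <= G2 and H2 <= G2 and H1.isdisjoint(H2) and H1 | H2 == G2
```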
If \(G_{1}\) and \(G_{2}\) are two edge-disjoint graphs with \(V(G_{1})=V(G_{2})\), then we use \(G_{1}\oplus G_{2}\) to denote the graph on the same vertex set with \(E\left(G_{1}\oplus G_{2}\right)=E\left(G_{1}\right)\cup E\left(G_{2}\right)\). We will denote the vertex disjoint union of \(\alpha\) copies of \(G\) by \(\alpha G\).
The above definitions can be extended to cover digraphs. Let D be a digraph and \(D_{0},D_{1},\ldots,D_{k-1}\) be \(k\) vertex disjoint copies of \(D\) with \(v_{i}\in V\left(D_{i}\right)\) for each \(v\in V(D)\). Then, \(D[k]\) has the vertex set \(V(D[k])=V\left(D_{0}\right)\cup V\left(D_{1}\right)\cup\cdots\cup V\left(D_{ k-1}\right)\) and arc set \(E(D[k])=\{(u_{i},v_{j}):(u,v)\in\,E(D)\text{ and }0\leq i,j\leq k-1\}\).
The following proposition, which is useful for transferring the results of undirected graphs to digraph and symmetric digraph, states that if we have an \(H\)-factorization of the undirected graph \(G\), then using this factorization an \(H^{*}\)-factorization of \(G^{*}\) can be obtained.
**Proposition 6**.: Let \(G\) be a graph and \(H\) be a subgraph of \(G\). If \(G\) has an \(H\)-factorization, then \(G^{*}\) has an \(H^{*}\)-factorization.
It is known that \(K_{2x}\) has a \(1\)-factorization [28]. Therefore, as a natural consequence of Proposition 6, the following proposition can be stated.
**Proposition 7**.: The complete symmetric digraph \(K_{2x}^{*}\) has a \(K_{2}^{*}\)-factorization for every integer \(x\geq 1\).
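For concreteness, Proposition 7 can be realized by the classical circle (round-robin) method; the sketch below (our own, not from the paper) builds a \(1\)-factorization of \(K_{n}\) for even \(n\), which by Proposition 6 yields a \(K_{2}^{*}\)-factorization of \(K_{n}^{*}\).

```python
def one_factorization(n):
    """Circle-method 1-factorization of K_n (n even): factor i pairs vertex n-1 with i,
    and pairs i+j with i-j (mod n-1) for j = 1, ..., n//2 - 1."""
    assert n % 2 == 0
    factors = [[frozenset({n - 1, i})]
               + [frozenset({(i + j) % (n - 1), (i - j) % (n - 1)}) for j in range(1, n // 2)]
               for i in range(n - 1)]
    all_edges = [e for f in factors for e in f]
    assert len(set(all_edges)) == len(all_edges) == n * (n - 1) // 2   # each edge exactly once
    return factors

one_factorization(8)   # 7 perfect matchings of K_8
```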
The following result of Liu on equipartite graphs has been helpful in solving the Oberwolfach and Hamilton-Waterloo problems. We will use this result to obtain a \(\overrightarrow{C}_{m}\)-factorization of \(K_{(x:y)}^{*}\).
**Theorem 8**.: _[_26_]_ _The complete equipartite graph \(K_{(x:y)}\) has a \(C_{m}\)-factorization for \(m\geq 3\) and \(x\geq 2\) if and only if \(m|xy\), \(x(y-1)\) is even, \(m\) is even if \(y=2\) and \((x,y,m)\neq(2,3,3),(6,3,3),(2,6,3),(6,2,6)\)._
The necessary and sufficient condition for the existence of a \(1\)-factorization of a complete equipartite graph \(K_{(x:y)}\) is given by Hoffman and Rodger [23].
**Theorem 9**.: _[_23_]_ _The complete equipartite graph \(K_{(x:y)}\) has a \(1\)-factorization if and only if \(xy\) is even._
By Proposition 6 and Theorem 9, we can say that \(K_{(x:y)}^{*}\) has a \(K_{2}^{*}\)-factorization for even \(xy\).
**Lemma 10**.: _The complete symmetric equipartite digraph \(K_{(x:y)}^{*}\) has a \(K_{2}^{*}\)-factorization if and only if \(xy\) is even._
We will also use the following two well-known results of Walecki.
**Lemma 11**.: _[_27_]_ _For all odd \(m\geq 3\), \(K_{m}\) decomposes into \(\left(\frac{m-1}{2}\right)\) Hamilton cycles._
**Lemma 12**.: _[_27_]_ _For all even \(m\geq 4\), \(K_{m}-F_{m}\) has a Hamilton cycle decomposition with prescribed cycles \(\{C,\sigma\left(C\right),\sigma^{2}\left(C\right),\ldots,\sigma^{\frac{m-4}{2}}\left(C\right)\}\) for some permutation \(\sigma\) of \(\{0,1,\ldots,m-1\}\), where \(C=(0,1,\ldots,m-1)\) and \(E\left(F_{m}\right)=\{\{0,m/2\},\{i,m-i\}:1\leq i\leq(m/2)-1\}\)._
Lemmata 13 and 14 show the existence of the \(\{C_{m}^{r},C_{2m}^{s}\}\)-factorization of the \(C_{m}[2]\) and \((C\oplus F_{m})[2]\) for \(r+s\in\{2,3\}\). They will be used to find a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization of the \(C_{m}^{*}[2]\) for \(r\in\{0,2,4\}\), \(r+s=4\) and a \(\overrightarrow{C}_{2m}\)-factorization of \(C^{*}[2]\oplus F_{m}^{*}[2]\) where \(C^{*}\) is the symmetric version of the \(C\) defined in Lemma 12. Also, we use \(\Gamma_{m}\) and \(\Gamma_{m}^{*}\) to denote \(C[2]\oplus F_{m}[2]\) and \(C^{*}[2]\oplus F_{m}^{*}[2]\), respectively, for the rest of the paper.
**Lemma 13**.: _[_30_]_ _Let \(m\) be an integer with \(m\geq 3\). Then \(C_{m}[2]\) has a \(\{C_{m}^{r},C_{2m}^{s}\}\)-factorization for nonnegative integers \(r\) and \(s\) with \(r+s=2\) except when \(m\) is odd and \(r=2\), and except possibly when \(m\) is even and \(r=1\)._
**Lemma 14**.: _[_30_]_ _Let \(m\geq 4\) be an even integer and \(\Gamma_{m}\) where \(C=(0,1,\ldots,\)\(m-1)\) is an \(m\)-cycle and \(F_{m}\) is a \(1\)-factor of \(K_{m}\) with \(E\left(F_{m}\right)=\{\{0,m/2\},\{i,\)\(m-i\}:1\leq i\leq(m/2)-1\}\). Then \(\Gamma_{m}[2]\) has a
1. \(C_{2m}\)_-factorization,_
2. \(C_{m}\)_-factorization when_ \(m\equiv 0\pmod{4}\)_, and_
3. \(\left\{C_{m}^{2},C_{2m}^{1}\right\}\)_-factorization when_ \(m\equiv 2\pmod{4}\)_._
**Lemma 15**.: _[_11_]_ _Let \(m\geq 4\) be an even integer and \(x\) be a positive integer. Then, \(K_{(\frac{mx}{2}:2)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization._
**Theorem 16**.: _[_10_]_ _The complete symmetric equipartite digraph \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{3}\)-factorization if and only if \(3|xy\) and \((x,y)\neq(1,6)\) with possible exceptions \((x,y)=(x,6)\), where \(x\notin\{m:m\) is divisible by a prime less than \(17\}\)._
The following theorem presents a solution for the Directed Hamilton-Waterloo problem for small even cycle factors. It will also help us in solving \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) in Section 4, when \(m=4\).
**Theorem 17**.: _[_31_]_ _For nonnegative integers \(r\) and \(s\), \(\mathrm{HWP}^{*}(v;m^{r},n^{s})\) has a solution for \((m,n)\in\{(4,6),(4,8),(4,12),(4,16),(6,12),(8,16)\}\) if and only if \(r+s=v-1\) and \(\mathrm{lcm}(m,n)|v\)._
Let \(A\) be a finite additive group and let \(S\) be a subset of \(A\), where \(S\) does not contain the identity of \(A\). The Directed Cayley graph \(\overrightarrow{X}(A;S)\) on \(A\) with connection set \(S\) is a digraph with \(V(\overrightarrow{X}(A;S))=A\) and \(E(\overrightarrow{X}(A;S))=\{(x,y):x,y\in A,y-x\in S\}\).
Let \(G\) be a digraph and \(R(G)\) denote the digraph on the same vertex set as \(G\) but the arcs are taken in opposite directions.
Let \(m\) be an even integer and the vertex set of \(K_{2m}^{*}\) be \(\mathbb{Z}_{2m}\). Let \(I_{2m}^{*}\) be a \(K_{2}^{*}\)-factor of \(K_{2m}^{*}\) with \(E\left(I_{2m}^{*}\right)=\{(i,m+i)^{*}:0\leq i\leq m-1\}\) and define the bijective function \(f:\mathbb{Z}_{2m}\rightarrow\mathbb{Z}_{2}\times\mathbb{Z}_{m}\) with
\[f(i)=\begin{cases}(0,i)&\text{ if }i<m,\\ (1,i-m)&\text{ if }i\geq m.\end{cases}\]
Then, \(E\left(I_{2m}^{*}\right)\) can be restated as a set \(\left\{\left((0,i),(1,i)\right)^{*}:0\leq i\leq m-1\right\}\) on \(\mathbb{Z}_{2}\times\mathbb{Z}_{m}\) using this bijective function. We will represent \(C_{m}^{*}[2]\) and \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) as the directed Cayley graphs \(\overrightarrow{X}\big{(}\mathbb{Z}_{2}\times\mathbb{Z}_{m},S\big{)}\) and \(\overrightarrow{X}\big{(}\mathbb{Z}_{2}\times\mathbb{Z}_{m},\)\(S\cup\{(1,0)\}\big{)}\) where \(S=\{(0,1),(1,1),(0,-1),(1,-1)\}\).
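As a quick illustration of this representation (our own sketch, not part of the paper), one can generate \(\overrightarrow{X}\big{(}\mathbb{Z}_{2}\times\mathbb{Z}_{m},S\big{)}\) and confirm that it has the \(8m\) arcs and out-degree \(4\) expected of \(C_{m}^{*}[2]\):

```python
from itertools import product

def cayley_digraph(m, S):
    """Arcs of the directed Cayley graph X(Z_2 x Z_m; S): (x, y) is an arc whenever
    y - x lies in S (first coordinate mod 2, second coordinate mod m)."""
    V = list(product(range(2), range(m)))
    return {((a, b), ((a + s0) % 2, (b + s1) % m)) for (a, b) in V for (s0, s1) in S}

m = 6
S = [(0, 1), (1, 1), (0, m - 1), (1, m - 1)]         # (0,-1) and (1,-1) written mod m
arcs = cayley_digraph(m, S)
assert len(arcs) == 8 * m                             # C_m[2] has 4m edges, so C_m*[2] has 8m arcs
assert all(sum(1 for x, _ in arcs if x == v) == 4     # every vertex has out-degree 4
           for v in product(range(2), range(m)))
```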
Also, we define a factor \(F_{m}^{*}\) as a \(K_{2}^{*}\)-factor of \(K_{m}^{*}\) with \(E\left(F_{m}^{*}\right)=\{(0,m/2)^{*},\)\((i,m-i)^{*}:1\leq i\leq(m/2)-1\}\). The arc set of \(F_{m}^{*}\) which is denoted by \(E\left(F_{m}^{*}\right)\), can be expressed as \(\left\{\left((0,0),(0,m/2)\right)^{*},\left((0,i),(0,m-i)\right)^{*}:1\leq i \leq(m/2)-1\right\}\) using above bijective function. So, \(\Gamma_{m}\) can be represented as the directed Cayley graph \(\overrightarrow{X}\big{(}\mathbb{Z}_{2}\times\mathbb{Z}_{m},\)\(\bigcup_{i=1}^{(m/2)-1}\{(0,m-2i),(0,2i-m),(1,m-2i),(1,2i-m)\}\cup S\cup\{(0,m/2),(0,-m/2),(1,m/2),(1,- m/2)\}\big{)}\).
Using Proposition 6, Lemmata 11 and 12, we will obtain a \(\{(C_{\frac{m}{2}}^{*}[2])^{\frac{m-2}{4}},\)\(I_{m}^{*}\}\)-factorization and a \(\{(C_{\frac{m}{2}}^{*}[2])^{\frac{m-8}{4}},I_{m}^{*},\Gamma_{\frac{m}{2}}^{*}\}\)-factorization of \(K_{m}^{*}\) depending on whether \(m\equiv 0\,\text{ or }\,2\pmod{4}\), then use these factorizations to obtain
a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization of \(K_{mx}^{*}\). Since, \(I_{m}^{*}\) and \(F_{\frac{m}{2}}^{*}\) do not contain any \(m\)-cycles, we will factorize \(C_{\frac{m}{2}}^{*}[2]\oplus I_{m}^{*}\) and \(\Gamma_{\frac{m}{2}}^{*}\) into \(K_{2}^{*}\)-factors and \(\overrightarrow{C}_{m}\)-factors. Furthermore, it is necessary to have a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization of \(C_{m}^{*}[2]\) in order to factorize \(K_{mx}^{*}\) into \(K_{2}^{*}\)-factors and \(\overrightarrow{C}_{m}\)-factors.
**Lemma 18**.: _Let \(m\geq 4\) be an integer. Then \(C_{m}^{*}[2]\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,2,4\}\) and \(r+s=4\)._
Proof.: First, note that \(C_{m}[2]\) has a decomposition into two \(C_{2m}\)-factors by Haggkvist Lemma and each \(C_{2m}\)-factor has a decomposition into two \(1\)-factors.
**Case 1 (\(r=4\))** Decompose \(C_{m}[2]\) into four \(1\)-factors by using \(C_{2m}\)-factors. Then a \(K_{2}^{*}\)-factorization of \(C_{m}^{*}[2]\) is obtained by Proposition 6.
**Case 2 (\(r=2\))** Decompose \(C_{m}[2]\) into one \(C_{2m}\) and two \(1\)-factors. By using Proposition 6, we get a \(\{(K_{2}^{*})^{2},C_{2m}^{*}\}\)-factorization of \(C_{m}^{*}[2]\) and also \(C_{2m}^{*}\) has a \(\overrightarrow{C}_{2m}\)-factorization with two \(\overrightarrow{C}_{2m}\)-factors. So, we obtain a \(\{(K_{2}^{*})^{2},\overrightarrow{C}_{2m}^{2}\}\)-factorization of \(C_{m}^{*}[2]\).
**Case 3 (\(r=0\))** Obtain a \(C_{2m}^{*}\)-factorization of \(C_{m}^{*}[2]\) by Proposition 6. Since \(C_{2m}^{*}\) has a \(\overrightarrow{C}_{2m}\)-factorization with two \(\overrightarrow{C}_{2m}\)-factors, \(C_{m}^{*}[2]\) has a \(\overrightarrow{C}_{2m}\)-factorization.
Since \(I_{2m}^{*}\) and \(F_{m}^{*}\) are \(K_{2}^{*}\)-factors, the following result can be derived from Lemma 18.
**Corollary 19**.: Let \(m\geq 4\) be an even integer. Then \(\Gamma_{m}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,2,4,6\}\) with \(r+s=6\).
Proof.: \(F_{m}^{*}[2]\) decomposes into two \(K_{2}^{*}\)-factors. So, \(\Gamma_{m}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization for \(r\in\{2,4,6\}\) with \(r+s=6\) by Lemma 18. Also, \(\Gamma_{m}^{*}\) has a \(\overrightarrow{C}_{2m}\)-factorization by Lemma 14 and Proposition 6.
The following lemma is quite useful in solving the Directed Hamilton-Waterloo Problem for \(n=2\) and even \(m\) when the values of \(r\) are even.
**Lemma 20**.: _Let \(m\geq 5\) be an integer. Then \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,1,3,5\}\) and \(r+s=5\)._
Proof.: The cases \(r\in\{1,3,5\}\) can be directly obtained from Lemma 18.
When \(r=0\), we will examine the problem in two cases; \(m\) is odd or even.
**Case 1 (odd \(m\geq 5\))**
Define five directed \(2m\)-cycles in \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) as follows: \(\overrightarrow{C}_{2m}^{(0)}=(v_{0},v_{1},\dots,\)\(v_{2m-1})\) where \(v_{i}=(\lfloor\frac{i}{m}\rfloor,i)\), \(\overrightarrow{C}_{2m}^{(1)}=(u_{0},u_{1},\dots,u_{2m-1})\) where
\[u_{2i}=\begin{cases}(0,2i)&\text{ if }0\leq i\leq\frac{m-1}{2},\\ (0,-2i-1)&\text{ if }\quad\frac{m+1}{2}\leq i\leq m-1,\end{cases}\]
and
\[u_{2i+1}=\begin{cases}(1,2i+1)&\text{ if }0\leq i\leq\frac{m-3}{2},\\ (1,-2i-2)&\text{ if }\quad\frac{m-1}{2}\leq i\leq m-1.\end{cases}\]
\(\overrightarrow{C}_{2m}^{(2)}=(x_{0},x_{1},\ldots,x_{2m-1})\) where
\[x_{i}=\begin{cases}(0,m-\lfloor\frac{i}{2}\rfloor)&\text{ if }i\equiv 0,3\pmod{4} \\ (1,m-\lfloor\frac{i}{2}\rfloor)&\text{ if }i\equiv 1,2\pmod{4}\end{cases}\text{ for }0\leq i\leq 2m-3,\]
and \(x_{2m-2}=(1,1)\), \(x_{2m-1}=(0,1)\). Also, \(\overrightarrow{C}_{2m}^{(3)}=(y_{0},y_{1},\ldots,y_{2m-1})\) where
\[y_{i}=u_{i}+(1,2)\;\;\text{for }\;0\leq i\leq m-3\;\;and\;\;m+2\leq i\leq 2m-1,\]
\[y_{m-2}=(1,0),\;\;y_{m-1}=(0,1),\;\;y_{m}=(1,1),\;\;y_{m+1}=(0,0).\]
Finally, \(\overrightarrow{C}_{2m}^{(4)}=(C_{m}^{*}[2]\oplus I_{2m}^{*})-\bigoplus_{i=0 }^{3}\overrightarrow{C}_{2m}^{(i)}\). Then, \(\{\overrightarrow{C}_{2m}^{(0)},\overrightarrow{C}_{2m}^{(1)},\overrightarrow {C}_{2m}^{(2)},\overrightarrow{C}_{2m}^{(3)},\)\(\overrightarrow{C}_{2m}^{(4)}\}\) is a \(\overrightarrow{C}_{2m}\)-factorization of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
**Case 2 (even \(m\geq 6\))**
Let \(\overrightarrow{C}_{2m}^{(0)}\) be the same as in Case 1 and define the directed \(2m\)-cycles in \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) as follows:
\(\overrightarrow{C}_{2m}^{(1)}=(x_{0},x_{1},\ldots,x_{2m-1})\) where \(x_{0}=(0,0)\) and
\[x_{i}=\begin{cases}\big{(}0,m-\lfloor\frac{i+2}{2}\rfloor\big{)}&\text{ if }i\equiv 1,2\pmod{4}\\ (1,m-\lfloor\frac{i+2}{2}\rfloor+1)&\text{ if }i\equiv 0,3\pmod{4}\end{cases} \text{ for }\;1\leq i\leq 2m-8,\]
and \(x_{2m-6+2i}=(0,3-i)\) for \(0\leq i\leq 2\) and \(x_{2m-7+2i}=(1,3-i)\) for \(0\leq i\leq 3\). Also, \(\overrightarrow{C}_{2m}^{(2)}=(u_{0},u_{1},\ldots,u_{2m-1})\) where \(u_{0}=(0,0)\), \(u_{1}=(1,0)\), \(u_{2}=(0,m-1)\) and
\[u_{i}=\left\{\begin{array}{ll}\big{(}0,m-\lfloor\frac{i-1}{2}\rfloor-1\big{)} &\text{ if }i\equiv 0,1\pmod{4}\\ \big{(}1,m-\lfloor\frac{i-1}{2}\rfloor\big{)}&\text{ if }i\equiv 2,3\pmod{4} \end{array}\right.\qquad\text{for }\;3\leq i\leq 2m-9,\]
\(u_{2m-8+j}=\left\{\begin{array}{ll}(0,4-\lfloor\frac{j}{2}\rfloor)\text{ if }j\equiv 0,2\pmod{4}\\ (1,4-\lfloor\frac{j}{2}\rfloor)\text{ if }j\equiv 1,3\pmod{4}\end{array}\right.\) for \(0\leq j\leq 7\), and when \(m=6\), \(u_{3}=(1,5)\) and we only use above piecewise function. \(\overrightarrow{C}_{2m}^{(3)}=(y_{0},y_{1},\ldots,y_{2m-1})\) where \(y_{2i+2}=(0,m-i)\) for \(1\leq i\leq m-4\), \(y_{2i+1}=(1,m-i)\) for \(1\leq i\leq m-3\), \(y_{0}=(0,0)\), \(y_{1}=(1,1)\), \(y_{2}=(1,0)\), \(y_{2m-4}=(1,2)\), \(y_{2m-3}=(0,3)\), \(y_{2m-2}=(0,2)\) and \(y_{2m-1}=(0,1)\). \(\overrightarrow{C}_{2m}^{(4)}=(z_{0},z_{1}\ldots,z_{2m-1})\) where \(z_{9+2i}=(0,4+i)\) for \(1\leq i\leq m-5\), \(z_{10+2i}=(1,4+i)\) for \(0\leq i\leq m-6\), \(z_{0}=(0,0)\), \(z_{1}=(1,m-1)\), \(z_{2}=(1,0)\), \(z_{3}=(0,1)\), \(z_{4}=(1,2)\), \(z_{5}=(1,1)\), \(z_{6}=(0,2)\), \(z_{7}=(1,3)\), \(z_{8}=(0,4)\), \(z_{9}=(0,3)\). Then \(\left\{\overrightarrow{C}_{2m}^{(0)},\overrightarrow{C}_{2m}^{(1)}, \overrightarrow{C}_{2m}^{(2)},\overrightarrow{C}_{2m}^{(3)},\overrightarrow{C}_{2m}^ {(4)}\right\}\) is a \(\overrightarrow{C}_{2m}\)-factorization of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
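Explicit constructions such as the one above are easy to double-check mechanically for a fixed small \(m\). The routine below is our own generic sketch (the cycles themselves still have to be typed in from the formulas above); it verifies that a proposed family of factors is a \(\overrightarrow{C}_{\ell}\)-factorization of a given arc set.

```python
def is_cycle_factorization(vertices, arcs, factors, ell):
    """Check that `factors` is a directed C_ell-factorization of the digraph with arc set
    `arcs`: every factor is a spanning union of vertex-disjoint directed ell-cycles
    (each cycle given as a tuple of vertices), and every arc is used exactly once."""
    used = []
    for factor in factors:
        visited = [v for cycle in factor for v in cycle]
        if sorted(visited) != sorted(vertices):            # spanning, vertex-disjoint cycles
            return False
        if any(len(cycle) != ell for cycle in factor):
            return False
        used += [(c[i], c[(i + 1) % ell]) for c in factor for i in range(ell)]
    return len(used) == len(set(used)) and set(used) == set(arcs)
```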
By Lemma 13, we can decompose \(C_{m}[2]\) into two \(C_{m}\)-factors for even \(m\). So, we obtain the following lemma similar to Lemma 18. Also, the following Corollaries are obtained as a result of this lemma.
**Lemma 21**.: _Let \(m\geq 4\) be an even integer. Then \(C_{m}^{*}[2]\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization for \(r\in\{0,2,4\}\) with \(r+s=4\)._
**Corollary 22**.: Let \(m\geq 4\) be an even integer. Then \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization for \(r\in\{1,3,5\}\) with \(r+s=5\).
**Corollary 23**.: Let \(m\geq 4\) be an even integer. Then \(\Gamma_{m}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization for \(r\in\{2,4,6\}\) with \(r+s=6\).
Theorem 8 states that \(K_{(x:y)}\) has a \(C_{m}\)-factorization with a few exceptions. We will use this result to show that \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization. However, some of the exceptions in the undirected version do not exist in the symmetric version. It is shown that there is actually a solution for these exceptions in the symmetric version. As a corollary of Lemma 15, Proposition 6 and Theorems 8 and 16, we can give the following result.
**Lemma 24**.: _The complete symmetric equipartite digraph \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization for \(m\geq 3\) and \(x\geq 2\) if \(m|xy\), \(x(y-1)\) is even, \(m\) is even when \(y=2\)._
Proof.: Let \(m|xy\), \(x(y-1)\) be even, \(m\) be even when \(y=2\), and let \((x,y,m)\notin\{(2,3,3),(6,3,3),(2,6,3),(6,2,6)\}\). By Theorem 8, \(K_{(x:y)}\) has a \(C_{m}\)-factorization and so, \(K_{(x:y)}^{*}\) has a \(C_{m}^{*}\)-factorization by Proposition 6. Since each \(C_{m}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization, \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization. Since \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization for \((x,y,m)\in\{(2,3,3),(6,3,3),(2,6,3),(6,2,6)\}\) by Theorem 16 and Lemma 15, we conclude that \(K_{(x:y)}^{*}\) has a \(\overrightarrow{C}_{m}\)-factorization for \(m\geq 3\) and \(x\geq 2\) if \(m|xy\), \(x(y-1)\) is even, \(m\) is even when \(y=2\).
Recall that \(\Gamma_{m}^{*}\) is \(C^{*}[2]\oplus F_{m}^{*}[2]\).
**Lemma 25**.: \(\Gamma_{m}^{*}\) _has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization for \(m\equiv 2\pmod{4}\) and \(r\in\{1,2,3,4,6\}\) with \(r+s=6\)._
Proof.: The cases \(r\in\{2,4,6\}\) are obtained by Corollary 23.
For \(r=1\), we define the following \(m\)-cycles. \(\overrightarrow{C}_{m}^{(0)}=(v_{0},v_{1},\ldots v_{m-1})\) where \(v_{i}=(0,i)\) for \(0\leq i\leq m-1\).
\(\overrightarrow{C}_{m}^{(1)}=(u_{0},u_{1},\ldots,u_{m-1})\) where \(u_{i}=\begin{cases}(0,i)&\text{ if i is even},\\ (1,i)&\text{ if i is odd}.\end{cases}\)
\(\overrightarrow{C}_{m}^{(2)}=(x_{0},x_{1},\ldots x_{m-1})\) where \(x_{0}=(0,0)\) and for \(1\leq i\leq m-1\)
\[x_{i}=\left\{\begin{array}{ll}\left(\frac{1-(-1)^{i}}{2},\frac{m}{2}-\lfloor\frac{i}{2}\rfloor\right)&\text{if }i\equiv 1,2\pmod{4},\\ \left(\frac{1-(-1)^{i}}{2},\frac{m}{2}+\lfloor\frac{i}{2}\rfloor\right)&\text{if }i\equiv 0,3\pmod{4}.\end{array}\right.\]
Let's choose the factor \(F_{0}\) as isomorphic to \(F_{m}^{*}\oplus(F_{m}^{*}+(1,0))\); then \(F_{0}\) is a \(K_{2}^{*}\)-factor. Using the above \(m\)-cycles, we obtain the following \(m\)-cycle factors: \(F_{1}=\overrightarrow{C}_{m}^{(0)}\cup(\overrightarrow{C}_{m}^{(0)}+(1,0))\), \(F_{2}=R\left(F_{1}\right)\), \(F_{3}=\overrightarrow{C}_{m}^{(1)}\cup\)
\(R(\overrightarrow{C}_{m}^{(1)}+(1,0))\), \(F_{4}=\overrightarrow{C}_{m}^{(2)}\cup R(\overrightarrow{C}_{m}^{(2)}+(1,0))\) and \(F_{5}=\Gamma_{m}^{*}-\bigoplus_{i=0}^{4}F_{i}\). Then, \(\{F_{0},F_{1},F_{2},F_{3},F_{4},F_{5}\}\) is a \(\{(K_{2}^{*})^{1},\overrightarrow{C}_{m}^{5}\}\)-factorization of \(\Gamma_{m}^{*}\).
For \(r=3\), \(F_{1}\oplus F_{2}\) is a \(C_{m}^{*}\)-factor of \(\Gamma_{m}^{*}\) and has a factorization into two \(K_{2}^{*}\)-factors of \(\Gamma_{m}^{*}\) say \(F_{1}^{{}^{\prime}}\) and \(F_{2}^{{}^{\prime}}\). Then \(\left\{F_{0},F_{1}^{{}^{\prime}},F_{2}^{{}^{\prime}},F_{3},F_{4},F_{5}\right\}\) is a \(\{(K_{2}^{*})^{3},\overrightarrow{C}_{m}^{3}\}\)-factorization of \(\Gamma_{m}^{*}\).
## 3. Solutions to \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\)
Now, we can give solutions to the Directed Hamilton-Waterloo Problem for \(K_{2}^{*}\) and \(\overrightarrow{C}_{m}\) when \(m\) is even.
**Theorem 26**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 6\) be even. Then, \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) has a solution if and only if \(m|v\), \(r+s=v-1\) and \(v\geq 6\) except for \(s=1\) and \((r,v)=(0,6)\), and except possibly when at least one of the following conditions holds;_
1. \(s=3\) _and_ \(m\equiv 0\pmod{4}\)_,_
2. \(s=3\)_,_ \(m\equiv 2\pmod{4}\) _and_ \(\frac{v}{m}\) _is odd._
Proof.: Take \((v-2)\) disjoint \(K_{2}^{*}\)-factors of \(K_{v}^{*}\), say \(H_{1}^{*},H_{2}^{*},\ldots,H_{v-2}^{*}\). It is obvious that \(K_{v}^{*}-(H_{1}^{*}\oplus H_{2}^{*}\oplus\cdots\oplus H_{v-2}^{*})\) is a \(K_{2}^{*}\)-factor in \(K_{v}^{*}\). Thus, there is no \(\{(K_{2}^{*})^{v-2},\overrightarrow{C}_{m}^{1}\}\)-factorization of \(K_{v}^{*}\). So, we may assume \(s\neq 1\).
Since \(\mathrm{HWP}^{*}(v;n^{r},m^{s})\) has a solution for \(r=0\) except for \((v,m)=(6,6)\) by Theorem 1, we may assume that \(r\geq 1\).
Let \(v=mx\) for a positive integer \(x\). Partition the vertices of \(K_{mx}^{*}\) into \(2x\) sets of size \(\frac{m}{2}\), represent each part of \(\frac{m}{2}\) vertices in \(K_{mx}^{*}\) with a single vertex and represent all double arcs between sets of size \(\frac{m}{2}\) as a single double arc, to get a \(K_{2x}^{*}\). By Proposition 7, \(K_{2x}^{*}\) has a decomposition into \((2x-1)\)\(K_{2}^{*}\)-factors. Then, construct a \(K_{m}^{*}\)-factor of \(K_{mx}^{*}\) from one of the \(K_{2}^{*}\)-factors, and a \(K_{(\frac{m}{2}:2)}^{*}\)-factor of \(K_{mx}^{*}\) from each of the remaining \((2x-2)\)\(K_{2}^{*}\)-factors. Then, \(K_{mx}^{*}\) can be factorized into a \(K_{m}^{*}\)-factor and \((2x-2)\)\(K_{(\frac{m}{2}:2)}^{*}\)-factors.
\(K_{(\frac{m}{2}:2)}^{*}\) decomposes into \(\frac{m}{2}\)\(K_{2}^{*}\)-factors or \(\frac{m}{2}\)\(\overrightarrow{C}_{m}\)-factors by Lemmata 10 and 15, respectively. So, we must decompose \(K_{m}^{*}\) into \(K_{2}^{*}\)-factors and \(\overrightarrow{C}_{m}\)-factors.
**Case 1 (odd \(r\))**
By Lemma 12, factorize \(K_{m}\) into a \(F_{m}\)-factor and \((\frac{m-2}{2})\)\(C_{m}\)-factors. So, \(K_{m}^{*}\) can be factorized into a \(F_{m}^{*}\)-factor and \((\frac{m-2}{2})\)\(C_{m}^{*}\)-factors by Proposition 6.
Since \(C_{m}^{*}\) can be decomposed into two \(K_{2}^{*}\)-factors or two \(\overrightarrow{C}_{m}\)-factors for even \(m\), \(K_{m}^{*}\) has a \(\{(K_{2}^{*})^{2r_{1}+1},\overrightarrow{C}_{m}^{2s_{1}}\}\)-factorization where \(r_{1}+s_{1}=\frac{m-2}{2}\).
Since \(K_{mx}^{*}\) has a \(\{K_{m}^{*},(K_{(\frac{m}{2}:2)}^{(2x-2)})\}\)-factorization, placing a \(K_{2}^{*}\)-factorization on \(r_{0}\) of the \(K_{(\frac{m}{2}:2)}^{*}\) factors for \(r_{0}\) even and \(0\leq r_{0}\leq 2x-2\), a \(\overrightarrow{C}_{m}\)-factorization on \(s_{0}\) of the \(K_{(\frac{m}{2}:2)}^{*}\) where \(r_{0}+s_{0}=2x-2\), and taking a \(\{(K_{2}^{*})^{2r_{1}+1},\overrightarrow{C}_{m}^{2s_{1}}\}\)-factorization of \(K_{m}^{*}\) give a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization
of \(K_{mx}^{*}\) where \(r=\frac{m}{2}r_{0}+2r_{1}+1\) and \(s=\frac{m}{2}s_{0}+2s_{1}\) with \(r+s=\frac{m}{2}(r_{0}+s_{0})+2(r_{1}+s_{1})+1=mx-1=v-1\).
Since any nonnegative odd integer \(1\leq r\leq mx-1\) can be written as \(r=\frac{m}{2}r_{0}+2r_{1}+1\) for integers \(0\leq r_{0}\leq 2x-2\) and \(0\leq r_{1}\leq\frac{m-2}{2}\), a solution to \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) exists for each odd \(r\geq 1\) and \(s\geq 1\) satisfying \(r+s=mx-1=v-1\).
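The counting claim in the last sentence can be confirmed by brute force for small parameters; the following sketch (ours) checks that every odd \(r\in[1,mx-1]\) has the form \(\frac{m}{2}r_{0}+2r_{1}+1\) with \(0\leq r_{0}\leq 2x-2\) and \(0\leq r_{1}\leq\frac{m-2}{2}\).

```python
def odd_values_covered(m, x):
    reachable = {(m // 2) * r0 + 2 * r1 + 1
                 for r0 in range(2 * x - 1)          # r0 = 0, ..., 2x-2
                 for r1 in range(m // 2)}            # r1 = 0, ..., (m-2)/2
    return set(range(1, m * x, 2)) <= reachable      # all odd r in [1, mx-1] are reachable

assert all(odd_values_covered(m, x) for m in (6, 8, 10, 14) for x in (1, 2, 3, 5))
```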
**Case 2 (even \(r\))**
**(a)** Assume \(m\equiv 0\pmod{4}\). So, \(\frac{m}{2}\) is even. Each \(K_{(\frac{m}{2}:2)}^{*}\) decomposes into \(\frac{m}{2}\)\(K_{2}^{*}\)-factors or \(\frac{m}{2}\)\(\overrightarrow{C}_{m}\)-factors. So, we need a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{m}^{s}\}\)-factorization of \(K_{m}^{*}\) for even \(r\).
Also, \(K_{\frac{m}{2}}^{*}\) can be factorized as \(\bigoplus_{i=1}^{\frac{m-8}{4}}C_{i}^{*}\oplus\Gamma_{\frac{m}{2}}^{*}\) where each \(C_{i}^{*}\) is isomorphic to \(C_{\frac{m}{2}}^{*}\). Then, \(K_{\frac{m}{2}}^{*}[2]\cong\bigoplus_{i=1}^{\frac{m-8}{4}}C_{i}^{*}[2]\oplus \Gamma_{\frac{m}{2}}^{*}[2]\). Also, \(K_{m}^{*}\) is isomorphic to \(K_{\frac{m}{2}}^{*}[2]\oplus I_{m}^{*}\). Therefore, \(K_{m}^{*}\) has a \(\{(C_{\frac{m}{2}}^{*}[2])^{\frac{m-12}{4}},C_{\frac{m}{2}}^{*}\oplus I_{m}^{ *},\Gamma_{\frac{m}{2}}^{*}\}\)-factorization. By Lemma 18, each of \(\frac{m-12}{4}\)\(C_{\frac{m}{2}}^{*}[2]\)-factors has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{m}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2,4\}\) and \(r_{0}+s_{0}=4\). By Lemma 20, \(C_{\frac{m}{2}}^{*}[2]\oplus I_{m}^{*}\) has a \(\{(K_{2}^{*})^{r_{1}},\overrightarrow{C}_{m}^{s_{1}}\}\)-factorization for \(r_{1}\in\{0,1,3,5\}\) and \(r_{1}+s_{1}=5\). By Corollary 19, \(\Gamma_{\frac{m}{2}}^{*}\) has a \(\{(K_{2}^{*})^{r_{2}},\overrightarrow{C}_{m}^{s_{2}}\}\)-factorization for even \(m\) and \(r_{2}\in\{0,2,4,6\}\) with \(r_{2}+s_{2}=6\). Those factorizations give a \(\{(K_{2}^{*})^{r^{\prime}},\overrightarrow{C}_{m}^{s^{\prime}}\}\)-factorization of \(K_{m}^{*}\) where \(r^{\prime}=(\frac{m-12}{4})r_{0}+r_{1}+r_{2}\) and \(s^{\prime}=(\frac{m-12}{4})s_{0}+s_{1}+s_{2}\) satisfying \(r^{\prime}+s^{\prime}=(\frac{m-12}{4})4+5+6=m-1\) with \(0\leq r^{\prime},s^{\prime}\leq m-1\). If we choose \(r_{1}=0\), we obtain a \(\{(K_{2}^{*})^{r^{\prime}},\overrightarrow{C}_{m}^{s^{\prime}}\}\)-factorization of \(K_{m}^{*}\) for even \(r^{\prime}\).
Placing a \(K_{2}^{*}\)-factorization on \(r^{\prime\prime}\) of the \(K_{(\frac{m}{2}:2)}^{*}\)-factors for \(0\leq r^{\prime\prime}\leq 2x-2\), a \(\overrightarrow{C}_{m}\)-factorization on \(s^{\prime\prime}\) of the \(K_{(\frac{m}{2}:2)}^{*}\) for \(r^{\prime\prime}+s^{\prime\prime}=2x-2\), and taking a \(\{(K_{2}^{*})^{r^{\prime}},\overrightarrow{C}_{m}^{s^{\prime}}\}\)-factorization of \(K_{m}^{*}\) give a \(\{(K_{2}^{*})^{\frac{m}{2}r^{\prime\prime}+r^{\prime}},\overrightarrow{C}_{m} ^{\frac{m}{2}r^{\prime\prime}+s^{\prime}}\}\)-factorization of \(K_{mx}^{*}\) where \(\frac{m}{2}r^{\prime\prime}+r^{\prime}\) is even. It can be seen that \(r^{\prime}=m-4\) cannot be obtained for the possible values of \(r_{0},r_{1}\) and \(r_{2}\) from the above factorizations.
Since any even integer \(1\leq r\leq mx-1\) can be written as \(r=\frac{m}{2}r^{\prime\prime}+r^{\prime}\) except for \(r=mx-4\) and for integers \(r^{\prime}\in[0,m-1]\backslash\{m-4\}\) and \(0\leq r^{\prime\prime}\leq 2x-2\), a solution to \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) exists for each even \(r\geq 2\) except possibly \(r=mx-4=v-4\) and \(s\geq 1\) satisfying \(r+s=v-1\).
**(b)** Assume \(m\equiv 2\pmod{4}\).
By Lemma 11, factorize \(K_{n}\) into \((\frac{n-1}{2})\)\(C_{n}\)-factors for odd \(n\), and get a \(C_{n}^{*}\)-factorization of \(K_{n}^{*}\) by Proposition 6. Also, \(K_{m}^{*}\) can be factorized as \(K_{\frac{m}{2}}^{*}[2]\oplus I_{m}^{*}\). Since \(\frac{m}{2}\) is odd, \(K_{m}^{*}\) has a \(\{(C_{\frac{m}{2}}^{*}[2])^{\frac{m-2}{4}},I_{m}^{*}\}\)-factorization. By Lemma 18, each of \(C_{\frac{m}{2}}^{*}[2]\)-factors has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{m}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2,4\}\) and \(r_{0}+s_{0}=4\). By Lemma 20, \(C_{\frac{m}{2}}^{*}[2]\oplus I_{m}^{*}\) has \(\{(K_{2}^{*})^{r_{1}},\overrightarrow{C}_{m}^{s_{1}}\}\)-factorization for \(r_{1}\in\{0,1,3,5\}\) and \(r_{1}+s_{1}=5\).
Those factorizations give a \(\{(K_{2}^{*})^{r_{2}},\overrightarrow{C}_{m}^{s_{2}}\}\)-factorization of \(K_{m}^{*}\) for \(r_{2}=\frac{m-6}{4}r_{0}+r_{1}\) and \(s_{2}=\frac{m-6}{4}s_{0}+s_{1}\) with \(r_{2}+s_{2}=m-1\).
Placing a \(K_{2}^{*}\)-factorization on \(r^{\prime}\) of the \(K_{(\frac{m}{2}:2)}^{*}\) factors for \(0\leq r^{\prime}\leq 2x-2\) where we choose \(r^{\prime}\) is even, a \(\overrightarrow{C}_{m}\)-factorization on \(s^{\prime}\) of the \(K_{(\frac{m}{2}:2)}^{*}\) with \(r^{\prime}+s^{\prime}=2x-2\), and taking a \(\{(K_{2}^{*})^{r_{2}},\overrightarrow{C}_{m}^{s_{2}}\}\)-factorization of \(K_{m}^{*}\) give a \(\{(K_{2}^{*})^{\frac{m}{2}r^{\prime}+r_{2}},\overrightarrow{C}_{m}^{\frac{m}{ 2}s^{\prime}+s_{2}}\}\)-factorization of \(K_{mx}^{*}\) where \(r=\frac{m}{2}r^{\prime}+r_{2}\) and \(s=\frac{m}{2}s^{\prime}+s_{2}\). Also, we obtain the requested even integer \(r\in[1,mx-1]\) except for \(r=mx-4\), from the sum of \(\frac{m}{2}r^{\prime}\) and \(r_{2}\) for integers \(0\leq r^{\prime}\leq 2x-2\) and \(r_{2}\in[0,m-1]\backslash\{m-4\}\). So, a solution to \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) exists for even \(r\geq 2\) except possibly \(r=mx-4=v-4\) and odd \(s\geq 1\) satisfying \(r+s=v-1\).
If \(x\) is even, say \(x=2t\), factorize \(K_{mx}^{*}\) into a \(K_{2m}^{*}\)-factor and \((2t-2)\)\(K_{(m:2)}^{*}\)-factors. \(K_{(m:2)}^{*}\) has a \(K_{2}^{*}\)-factorization with \(m\)\(K_{2}^{*}\)-factors and a \(\overrightarrow{C}_{m}\)-factorization with \(m\)\(\overrightarrow{C}_{m}\)-factors by Lemmata 10 and 15, respectively. So, we must decompose \(K_{2m}^{*}\) into \(K_{2}^{*}\)-factors and \(\overrightarrow{C}_{m}\)-factors. As before, \(K_{2m}^{*}\) can be factorized as \(K_{m}^{*}[2]\oplus I_{2m}^{*}\). So, \(K_{2m}^{*}\) has a \(\{(C_{m}^{*}[2])^{\frac{m-4}{2}},I_{2m}^{*},\Gamma_{m}^{*}\}\)-factorization. By Lemma 21, each of \(C_{m}^{*}[2]\)-factors has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{m}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2,4\}\) and \(r_{0}+s_{0}=4\). By Corollary 22, \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) has a \(\{(K_{2}^{*})^{r_{1}},\overrightarrow{C}_{m}^{s_{1}}\}\)-factorization for \(r_{1}\in\{1,3,5\}\) and \(r_{1}+s_{1}=5\). By Lemma 25, \(\Gamma_{m}^{*}\) has a \(\{(K_{2}^{*})^{r_{2}},\overrightarrow{C}_{m}^{s_{2}}\}\)-factorization for \(m\equiv 2\pmod{4}\) and \(r_{2}\in\{1,2,3,4,6\}\) with \(r_{2}+s_{2}=6\). Using these factorizations, we obtain a solution to the problem for \(r=2mt-4=mx-4\) when \(m\equiv 2\pmod{4}\) and even \(x\). So, \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) has a solution for \(r=v-4\) and even \(\frac{v}{m}\) when \(m\equiv 2\pmod{4}\).
**Lemma 27**.: \(C_{4}^{*}[2]\oplus I_{8}^{*}\) _has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization for \(r\in\{0,1,2,3,5\}\) with \(r+s=5\)._
Proof.: Let \(V(C_{4}^{*}[2]\oplus I_{8}^{*})=\mathbb{Z}_{8}\) and we define the following \(\overrightarrow{C}_{4}\)-factorization of \(C_{4}^{*}[2]\oplus I_{8}^{*}\);
\(\mathcal{F}=\big\{\big[(0,1,2,3),(4,5,6,7)\big],\big[(0,3,2,1),(4,7,6,5)\big],\big[(0,5,1,4),(2,7,3,6)\big],\big[(0,4,3,7),(1,5,2,6)\big],\big[(0,7,2,5),(1,6,3,4)\big]\big\}\)
For \(r=2\), we define the following \(\overrightarrow{C}_{4}\)-factors of \(C_{4}^{*}[2]\oplus I_{8}^{*}\)
\(F_{1}=[(0,1,2,3),(4,5,6,7)],\)\(F_{2}=[(0,3,6,5),(1,4,7,2)],F_{3}=[(0,5,4,1),(2,7,6,3)]\)
and the following two \(K_{2}^{*}\)-factors,
\(F_{4}=[(0,4)^{*},(1,5)^{*},(2,6)^{*},(3,7)^{*}],\)\(F_{5}=[(0,7)^{*},(1,6)^{*},(2,5)^{*},(3,4)^{*}]\)
Therefore \(C_{4}^{*}[2]\oplus I_{8}^{*}\) has two \(K_{2}^{*}\)-factors and three \(\overrightarrow{C}_{4}\)-factors.
The remaining cases are obtained from Corollary 22 for \(m=4\).
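As a sanity check (ours, not part of the paper), the \(\overrightarrow{C}_{4}\)-factorization \(\mathcal{F}\) listed above for the case \(r=0\) can be verified mechanically: its five factors use every arc of \(C_{4}^{*}[2]\oplus I_{8}^{*}\) exactly once.

```python
from itertools import product

# Vertex i and i+4 are the two copies of vertex i of C_4 (i = 0, 1, 2, 3).
arcs = set()
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:                 # edges of C_4
    for a, b in product((u, u + 4), (v, v + 4)):              # arcs of C_4*[2]
        arcs |= {(a, b), (b, a)}
for i in range(4):                                            # arcs of I_8*
    arcs |= {(i, i + 4), (i + 4, i)}

factors = [[(0, 1, 2, 3), (4, 5, 6, 7)], [(0, 3, 2, 1), (4, 7, 6, 5)],
           [(0, 5, 1, 4), (2, 7, 3, 6)], [(0, 4, 3, 7), (1, 5, 2, 6)],
           [(0, 7, 2, 5), (1, 6, 3, 4)]]

used = []
for factor in factors:
    assert sorted(v for cyc in factor for v in cyc) == list(range(8))   # each factor is spanning
    used += [(c[i], c[(i + 1) % 4]) for c in factor for i in range(4)]

assert len(used) == len(set(used)) and set(used) == arcs                 # every arc used exactly once
```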
**Lemma 28**.: \(K_{12}^{*}\) _has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization for \(r\in\{0,1,2,3,4,5,7,9,\)\(11\}\) with \(r+s=11\)._
Proof.: The cases \(r=0\) and \(r=11\) are obtained by Theorem 17 and Proposition 7, respectively. Since a solution to \(OP(m^{\frac{v}{m}})\) exists except for \((v,m)=(6,3)\) and \((12,3)\), in particular \(K_{12}-I\) has a \(C_{4}\)-factorization, where \(I\) is a \(1\)-factor of \(K_{12}\). By Proposition 6, \(K_{12}^{*}\) can be factorized into five \(C_{4}^{*}\)-factors and one \(I^{*}\)-factor, which is a \(K_{2}^{*}\)-factor of \(K_{12}^{*}\). Also, \(C_{4}^{*}\) has a \(\overrightarrow{C}_{4}\)-factorization and a \(K_{2}^{*}\)-factorization. So, we obtain a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization of \(K_{12}^{*}\) for \(r\in\{1,3,5,7,9\}\) with \(r+s=11\).
Let \(V(K_{12}^{*})\) be \(\mathbb{Z}_{12}\), and define the following factorizations of \(K_{12}^{*}\) for \(r=2,4\), respectively.
\(\mathcal{F}_{1}=\Big{\{}[(0,6)^{*},(1,7)^{*},(2,8)^{*},(3,9)^{*},(4,10)^{*},( 5,11)^{*}],[(0,10)^{*},(4,6)^{*},(1,5)^{*},(7,11)^{*},\\ (2,9)^{*},(3,8)^{*}],[(0,1,2,3),(4,5,6,7),(8,9,10,11)],[(0,2,1,4),(3,5,7,6),(8,11,10,9)],\\ [(0,3,1,8),(2,4,11,6),(5,9,7,10)],[(0,4,2,11),(1,6,8,10),(3,7,9,5)],[(0,5,8,7),\\ (1,3,4,9),(2,10,6,11)],[(0,7,5,2),(1,10,8,4),(3,6,9,11)],[(0,8,6,1),(2,5,10,7),\\ (3,11,9,4)],[(0,9,6,5),(1,11,4,8),(2,7,3,10)],[(0,11,1,9),(2,6,10,3),(4,7,8,5)] \Big{\}},\]
\(\mathcal{F}_{2}=\Big{\{}[(0,6)^{*},(1,7)^{*},(2,8)^{*},(3,9)^{*},(4,10)^{*},(5,11)^{*}],[(0,10)^{*},(4,6)^{*},(1,5)^{*},(7,11)^{*},\\ (2,9)^{*},(3,8)^{*}],[(0,8)^{*},(2,6)^{*},(1,10)^{*},(4,7)^{*},(3,11)^{*},(5,9)^{*}],[(0,1)^{*},(2,3)^{*},(4,5)^{*},\\ (6,7)^{*},(8,9)^{*},(10,11)^{*}],[(0,2,1,3),(4,8,11,9),(5,7,10,6)],[(0,3,10,5),(1,8,6,11),\\ (2,4,9,7)],[(0,4,11,2),(1,6,10,9),(3,5,8,7)],[(0,5,6,9),(1,2,11,4),(3,7,8,10 )],\\ [(0,7,9,11),(1,4,3,6),(2,10,8,5)],[(0,9,10,7),(1,11,6,8),(2,5,3,4)],[(0,11,8,4 ),\\ (1,9,6,3),(2,7,5,10)]\Big{\}}.\]
So, \(K_{12}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization for \(r\in\{0,1,2,3,4,5,7,\ 9,11\}\) with \(r+s=11\).
**Lemma 29**.: \(K_{(4:3)}^{*}\) _has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization for \(r\in\{0,1,2,4,6,8\}\) with \(r+s=8\)._
Proof.: The cases \(r=0\) and \(r=8\) are obtained by Lemmata 24 and 10, respectively. By Theorem 8, \(K_{(4:3)}\) has a \(C_{4}\)-factorization and so, \(K_{(4:3)}^{*}\) has a \(C_{4}^{*}\)-factorization by Proposition 6. Since \(C_{4}^{*}\) has a \(K_{2}^{*}\)-factorization and a \(\overrightarrow{C}_{4}\)-factorization, \(K_{(4:3)}^{*}\) can be factorized into two \(K_{2}^{*}\)-factors and six \(\overrightarrow{C}_{4}\)-factors. Similarly, a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization of \(K_{(4:3)}^{*}\) is obtained for \(r\in\{4,6\}\) with \(r+s=8\).
Finally, let the vertex set of \(K_{(4:3)}^{*}\) be \(\mathbb{Z}_{12}\), and define the following factorization of \(K_{(4:3)}^{*}\) for \(r=1\),
\(\mathcal{F}_{1}=\Big{\{}[(0,4,2,5),(1,8,3,11),(6,9,7,10)],[(0,5,1,7),(2,9,4,11),(3,8,6,10)],[(0,7,1,9),\\ (2,4,3,10),(5,11,6,8)],[(0,8,1,10),(2,7,3,5),(4,9,6,11)],[(0,9,2,11),(1,5,3,6),\\ (4,10,7,8)],[(0,10,4,8),(1,11,5,9),(2,6,3,7)],[(0,11,3,4),(1,6,2,10),(5,8,7,9)],[(0,6) ^{*},\\ (1,4)^{*},(2,8)^{*},(3,9)^{*},(5,10)^{*},(7,11)^{*}]\Big{\}}.\)
In Theorem 26, we have given the necessary and sufficient conditions for the existence of a solution for \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) for even \(m\geq 6\). The construction in Theorem 26 is not valid when \(m=4\), therefore we also examine the case of \(m=4\) in the following theorem.
**Theorem 30**.: _Let \(r\), \(s\) be nonnegative integers. Then, \(\mathrm{HWP}^{*}(v;2^{r},4^{s})\) has a solution if and only if \(r+s=v-1\) except for \(s=1\) or \((r,v)=(0,4)\), and except possibly when at least one of the following conditions holds;_
1. \(r\geq 2\) _even and_ \(v\equiv 4,20\pmod{24}\)_,_
2. \(s\in\{3,5\}\) _and_ \(v\equiv 12\pmod{24}\)_._
Proof.: If we remove \((v-2)\) disjoint \(K_{2}^{*}\)-factors from \(K_{v}^{*}\), then the remaining factor must be a \(K_{2}^{*}\)-factor of \(K_{v}^{*}\). Thus, there is no \(\{(K_{2}^{*})^{v-2},\overrightarrow{C}_{4}^{1}\}\)-factorization of \(K_{v}^{*}\). So, we may assume \(s\neq 1\).
Since \(\mathrm{HWP}^{*}(v;n^{r},m^{s})\) has a solution for \(r=0\) except for \((v,m)=(4,4)\) by Theorem 1, \(\mathrm{HWP}^{*}(4;2^{r},4^{s})\) has no solution for \(r=0\). So, we may assume that \(r\geq 1\).
**Case 1 (\(v\equiv 0\pmod{8}\))**
Let \(v=8k\) for a positive integer \(k\). Note that \(K_{8k}^{*}\) can be factorized as \(K_{4k}^{*}[2]\oplus I_{8k}^{*}\). Also, \(K_{4k}^{*}[2]\) can be factorized into \(C_{4}^{*}[2]\)-factors and a \(K_{2}^{*}[2]\)-factor. The graph \(kC_{4}^{*}[2]\oplus I_{8k}^{*}\) can be considered as a \((C_{4}^{*}[2]\oplus I_{8}^{*})\)-factor in \(K_{8k}^{*}\). So, \(K_{8k}^{*}\) has a \(\{(C_{4}^{*}[2])^{2k-1},I_{8}^{*},K_{2}^{*}[2]\}\)-factorization. Also, \(C_{4}^{*}[2]\) has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{4}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2,4\}\) where \(r_{0}+s_{0}=4\) by Lemma 21. Since \(K_{2}^{*}[2]=C_{4}^{*}\), \(K_{2}^{*}[2]\) has a \(\{(K_{2}^{*})^{r_{1}},\overrightarrow{C}_{4}^{s_{1}}\}\)-factorization for \(r_{1}\in\{0,2\}\) and \(r_{1}+s_{1}=2\). \(C_{4}^{*}[2]\oplus I_{8}^{*}\) has a \(\{(K_{2}^{*})^{r_{2}},\overrightarrow{C}_{4}^{s_{2}}\}\)-factorization for \(r_{2}\in\{0,1,2,3,5\}\) where \(r_{2}+s_{2}=5\) by Lemma 27. These factorizations give a \(\left\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\right\}\)-factorization of \(K_{8k}^{*}\) for \(r\neq 8k-2\) with \(r+s=8k-1\).
Then, \(\mathrm{HWP}^{*}(v;2^{r},4^{s})\) has a solution for \(r+s=v-1\), \(s\neq 1\) and \(v\equiv 0\pmod{8}\).
**Case 2 (\(v\equiv 4\pmod{8}\))**
Let \(v=8k+4\) for a nonnegative integer \(k\).
**(a)** Assume \(r\) is odd. Partition the vertices of \(K_{8k+4}^{*}\) into \(4k+2\) sets of size \(2\), represent each set of \(2\) vertices in \(K_{8k+4}^{*}\) with a single vertex and represent all double arcs between sets of size \(2\) as a single double arc, to get a \(K_{4k+2}^{*}\). By Proposition 7, \(K_{4k+2}^{*}\) has a decomposition into \(4k+1\)\(K_{2}^{*}\)-factors. Construct a \(K_{4}^{*}\)-factor from one of the \(K_{2}^{*}\)-factors and a \(K_{(2:2)}^{*}\)-factor from each of the remaining \(4k\)\(K_{2}^{*}\)-factors. Then, factorize \(K_{8k+4}^{*}\) into a \(K_{4}^{*}\)-factor and \((4k)\)\(K_{(2:2)}^{*}\)-factors. \(K_{4}^{*}\) has a decomposition into one \(K_{2}^{*}\)-factor and two \(\overrightarrow{C}_{4}\)-factors, or into three \(K_{2}^{*}\)-factors, and \(K_{(2:2)}^{*}\) has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{4}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2\}\) satisfying \(r_{0}+s_{0}=2\). So, \(K_{8k+4}^{*}\) has a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization for odd \(r\). Therefore, \(\mathrm{HWP}^{*}(v;2^{r},4^{s})\) has a solution for odd \(r\) and \(v\equiv 4\pmod{8}\).
**(b)** Assume r is even, and also let \(k\equiv 1\pmod{3}\). Then, we have \(v=24l+12\) for some nonnegative integer \(l\).
Representing each part of \(4\) vertices in \(K_{24l+12}^{*}\) with a single vertex and all double arcs between parts of size \(4\) as a single double arc, we have a \(K_{6l+3}^{*}\). Since a Kirkman triple system exists for orders \(6l+3\), we have a
\(C_{3}\)-factorization of \(K_{6l+3}\). Then, a \(C_{3}^{*}\)-factorization of \(K_{6l+3}^{*}\) is obtained by Proposition 6.
Construct a \(K_{12}^{*}\)-factor from one of the \(C_{3}^{*}\)-factors and a \(K_{(4:3)}^{*}\)-factor from each of the remaining \(3l\)\(C_{3}^{*}\)-factors. Then, get a \(\{K_{12}^{*},(K_{(4:3)}^{*})^{3l}\}\)-factorization of \(K_{24l+12}^{*}\). By Lemma 28, \(K_{12}^{*}\) has a \(\{(K_{2}^{*})^{r_{0}},\overrightarrow{C}_{4}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,1,2,3,4,5,7,9,11\}\) with \(r_{0}+s_{0}=11\). Also, \(K_{(4:3)}^{*}\) has a \(\{(K_{2}^{*})^{r_{1}},\overrightarrow{C}_{4}^{s_{1}}\}\)-factorization by Lemma 29 for \(r_{1}\in\{0,1,2,4,6,8\}\) with \(r_{1}+s_{1}=8\). Those factorizations give a \(\{(K_{2}^{*})^{r},\overrightarrow{C}_{4}^{s}\}\)-factorization of \(K_{24l+12}^{*}\) where \(r=r_{0}+ar_{1}\) and \(s=s_{0}+bs_{1}\) satisfying \(r+s=24l+11=v-1\) with \(1\leq r,s\leq v-1\) and \(a+b=3l\). We obtain the requested even \(r\in[0,v-1]\) except for \(r=v-6\) and \(r=v-4\), from the sum of \(r_{0}\) and \(ar_{1}\). Then, \(\text{HWP}^{*}(v;2^{r},4^{s})\) has a solution for \(r+s=v-1\), \(s\notin\{3,5\}\) and \(v\equiv 12\pmod{24}\).
## 4. Solutions to \(\text{HWP}^{*}(v;m^{r},(2m)^{s})\)
In this section, we prove that for even \(m\), a solution to \(\text{HWP}^{*}(v;m^{r},(2m)^{s})\) exists for \(r+s=v-1\), except possibly when \(s\in\{1,3\}\).
Firstly, factorize \(K_{2mx}^{*}\) into a \(K_{2m}^{*}\)-factor and \((2x-2)\)\(K_{(m:2)}^{*}\)-factors. \(K_{(m:2)}^{*}\) has a \(\{\vec{C}_{m}^{r},\vec{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,m\}\) and \(r+s=m\). Using Lemma 12 and Proposition 6, a \(\{(C_{m}^{*}[2])^{\frac{m-4}{2}},I_{2m}^{*},\Gamma_{m}^{*}\}\)-factorization of \(K_{2m}^{*}\) is also obtained. Therefore, in order to factorize \(K_{2mx}^{*}\) into \(\vec{C}_{m}\)-factors and \(\vec{C}_{2m}\)-factors, \(\Gamma_{m}^{*}\), \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) and \(C_{m}^{*}[2]\) must be factorized into \(\vec{C}_{m}\)-factors and \(\vec{C}_{2m}\)-factors. The following lemmata examine the existence of a \(\{\vec{C}_{m}^{r},\vec{C}_{2m}^{s}\}\)-factorization of these graphs for \(r+s\in\{4,5,6\}\).
**Lemma 31**.: _Let \(m\geq 4\) be an even integer. Then \(\Gamma_{m}^{*}\) has a \(\{\vec{C}_{m}^{r},\vec{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,6\}\) and \(r+s=6\)._
Proof.: **Case 1 (\(r=0\))** By Lemma 14\((i)\) and Proposition 6, \(\Gamma_{m}^{*}\) has a \(\vec{C}_{2m}\)-factorization.
**Case 2 (\(r=6\))** By Lemma 14\((ii)\) and Proposition 6, \(\Gamma_{m}^{*}\) has a \(\vec{C}_{m}\)-factorization for \(m\equiv 0\pmod{4}\).
When \(m\equiv 2\pmod{4}\), define the following \(m\)-cycles, and let \(\vec{C}_{m}^{(0)}\) and \(\vec{C}_{m}^{(1)}\) be the cycles \(\vec{C}_{m}^{(0)}\) and \(\vec{C}_{m}^{(2)}\), respectively, defined in the proof of Lemma 25.
\[\vec{C}_{m}^{(2)}=(u_{0},u_{1},\ldots,u_{m-1})\,\text{ where }u_{i}=\begin{cases}(1,m-1-i)&\text{ if }0 \leq i\leq\frac{m}{2},\\ (0,m-1-i)&\text{ if }\frac{m}{2}+1\leq i\leq m-1.\end{cases}\]
\(\vec{C}_{m}^{(3)}=(y_{0},y_{1},\ldots y_{m-1})\) where \(y_{0}=(0,0)\), \(y_{1}=(0,\frac{m}{2})\), \(y_{2}=(1,\frac{m}{2}+1)\), \(y_{3}=(1,\frac{m}{2}-1)\) and
\[y_{i}=\left\{\begin{array}{ll}(1,\frac{m}{2}+(-1)^{i+1}\lfloor\frac{i}{2}\rfloor)&\text{if }i\equiv 0,1\pmod{4}\\ (0,\frac{m}{2}+(-1)^{i}\lfloor\frac{i}{2}\rfloor)&\text{if }i\equiv 2,3\pmod{4}\end{array}\right.\quad\text{for }4\leq i\leq m-1.\]
Using the above \(m\)-cycles, we obtain the following \(m\)-cycle factors: \(F_{0}=\vec{C}_{m}^{(0)}\cup(\vec{C}_{m}^{(0)}+(1,0))\), \(F_{1}=\vec{C}_{m}^{(1)}\cup R(\vec{C}_{m}^{(1)}+(1,0))\), \(F_{2}=R\left(F_{1}\right)\), \(F_{3}=\)
\(\vec{C}_{m}^{(2)}\oplus(\vec{C}_{m}^{(2)}+(1,0))\), \(F_{4}=\vec{C}_{m}^{(3)}\cup(\vec{C}_{m}^{(3)}+(1,0))\) and \(F_{5}=\left(\Gamma_{m}^{*}\right)-\bigoplus_{i=0}^{4}F_{i}\). Then, \(\{F_{0},F_{1},F_{2},F_{3}\), \(F_{4},F_{5}\}\) is a \(\vec{C}_{m}\)-factorization of \(\Gamma_{m}^{*}\). So, \(\Gamma_{m}^{*}\) has a \(\vec{C}_{m}\)-factorization for even \(m\geq 4\).
**Lemma 32**.: _Let \(m\geq 4\) be an even integer. Then \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) has a \(\{\vec{C}_{m}^{r},\vec{C}_{2m}^{s}\}\)-factorization for \(r\in\{1,3\}\) and \(r+s=5\)._
Proof.: **Case 1 (\(\boldsymbol{r=1}\))** Let \(\vec{C}_{m}^{(0)}=(v_{0},v_{1},\ldots,v_{m-1})\) be a directed \(m\)-cycle of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\), where \(v_{i}=(0,i)\) for \(0\leq i\leq m-1\), and it can be checked that \(F_{1}=\vec{C}_{m}^{(0)}\cup(\vec{C}_{m}^{(0)}+(1,0))\) is a directed \(m\)-cycle factor of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\). Also, let \(\vec{C}_{2m}^{(1)}=(u_{0},u_{1},\ldots,u_{2m-1})\) be a directed \(2m\)-cycle of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\), where \(u_{2i}=(0,i)\), and \(u_{2i+1}=(1,i)\) for \(0\leq i\leq m-1\). Similarly, it can be checked that \(F_{2}=\vec{C}_{2m}^{(1)}\) and \(F_{3}=\vec{C}_{2m}^{(1)}+(1,0)\) are arc disjoint directed \(2m\)-cycle factors of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
Let \(\vec{C}_{2m}^{(2)}=(x_{0},x_{1},\ldots,x_{2m-1})\) be a directed \(2m\)-cycle of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\), where \(x_{0}=(0,0)\), \(x_{m}=(1,0)\), \(x_{i+1}=(0,m-1-i)\) for \(0\leq i\leq m-2\) and \(x_{j+1+m}=(1,m-1-j)\) for \(0\leq j\leq m-2\).
\(F_{4}=\vec{C}_{2m}^{(2)}\) and \(F_{5}=(C_{m}^{*}[2]\oplus I_{2m}^{*})-\bigoplus_{i=0}^{4}F_{i}\) are arc disjoint directed \(2m\)-cycle factors of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\). Then, \(\{F_{1},F_{2},F_{3},F_{4},F_{5}\}\) is a \(\{\vec{C}_{m}^{1},\vec{C}_{2m}^{4}\}\)-factorization of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
**Case 2 (\(\boldsymbol{r=3}\))** Let \(F_{1}\), \(F_{2}\) and \(F_{3}\) be the same as in Case 1. Using the arcs of \(F_{4}\bigcup F_{5}\), we obtain two new \(\vec{C}_{m}\)-factors.
\(F_{4}^{{}^{\prime}}=R(F_{1})\) is a \(\vec{C}_{m}\)-factor of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\). Let \(\vec{C}=(y_{0},y_{1},\ldots,y_{m-1})\) be a directed \(m\)-cycle of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\), where
\[y_{i}=\begin{cases}(0,i)&if\;\;i\;is\;\;even\\ (1,i)&if\;\;i\;is\;\;odd\end{cases}\;\;\text{for}\;\;0\leq i\leq m-1.\]
It can be checked that \(F_{5}^{{}^{\prime}}=R(\vec{C})\cup R(\vec{C}+(1,0))\) is a directed \(m\)-cycle factor of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
So, \(\left\{F_{1},F_{2},F_{3},F_{4}^{{}^{\prime}},F_{5}^{{}^{\prime}}\right\}\) is a \(\{\vec{C}_{m}^{3},\vec{C}_{2m}^{2}\}\)-factorization of \(C_{m}^{*}[2]\oplus I_{2m}^{*}\).
**Lemma 33**.: _Let \(m\geq 4\) be an even integer. Then \(C_{m}^{*}[2]\) has a \(\{\vec{C}_{m}^{r},\vec{C}_{2m}^{s}\}\)-factorization for \(r\in\{0,2,4\}\) and \(r+s=4\)._
Proof.: The cases \(r\in\{0,4\}\) are obtained by lemmata 18 and 21. Let \(\vec{C}_{2m}^{(1)}=(u_{0},u_{1},\ldots,u_{2m-1})\) be a directed \(2m\)-cycle of \(C_{m}^{*}[2]\), where
\[u_{i}=\begin{cases}(0,i)&if\;\;0\leq i\leq m-1,\\ (1,i)&if\;\;m\leq i\leq 2m-1.\end{cases}\]
And it can be checked that \(F_{1}=\vec{C}_{2m}^{(1)}\) is a \(\vec{C}_{2m}\)-factor of \(C_{m}^{*}[2]\). Let \(\vec{C}_{2m}^{(2)}=(v_{0},v_{1},\ldots,v_{2m-1})\) be a directed \(2m\)-cycle of \(C_{m}^{*}[2]\), where
\[v_{i}=\begin{cases}u_{i}&if\;\;i\;\;is\;\;even,\\ u_{i}+(1,0)&if\;\;i\;\;is\;\;odd.\end{cases}\]
\(F_{2}=\overrightarrow{C}_{2m}^{(2)}\) is a \(\overrightarrow{C}_{2m}\)-factor of \(C_{m}^{*}[2]\). Then, \(\left\{F_{1},F_{2},F_{4}^{{}^{\prime}},F_{5}^{{}^{\prime}}\right\}\) is a \(\{\overrightarrow{C}_{m}^{2},\overrightarrow{C}_{2m}^{2}\}\)-factorization of \(C_{m}^{*}[2]\) where \(F_{4}^{{}^{\prime}}\) and \(F_{5}^{{}^{\prime}}\) are the same factors in Lemma 32.
**Theorem 34**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 4\) be even. Then, \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) has a solution if and only if \(m|v\), \(r+s=v-1\) and \(v\geq 4\) except for \((s,v,m)\in\{(0,4,4),(0,6,3),(5,6,6)\}\), and except possibly when \(s\in\{1,3\}\)._
Proof.: By Theorem 17, \(\mathrm{HWP}^{*}(v;4^{r},8^{s})\) has a solution for \(r+s=v-1\), so we may assume that \(m\geq 6\). Furthermore, by Theorem 1, a solution to the \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) exists for \(r=s=0\) and except for \((s,v,m)\in\{(0,4,4)\), \((0,6,3),(5,6,6)\}\).
Factorize \(K_{2mx}^{*}\) into a \(K_{2m}^{*}\)-factor and \((2x-2)\)\(K_{(m:2)}^{*}\)-factors. \(K_{(m:2)}^{*}\) decomposes into \(m\)\(\overrightarrow{C}_{m}\)-factors or \(m\)\(\overrightarrow{C}_{2m}\)-factors by Lemma 24. So, \(K_{2m}^{*}\) must be decomposed into \(\overrightarrow{C}_{m}\)-factors and \(\overrightarrow{C}_{2m}\)-factors. As before, \(K_{2m}^{*}\) can be factorized as \(K_{m}^{*}[2]\oplus I_{2m}^{*}\). So, \(K_{2m}^{*}\) has a \(\{(C_{m}^{*}[2])^{\frac{m-4}{2}},I_{2m}^{*},\Gamma_{m}^{*}\}\)-factorization. By Lemma 33, each of \(C_{m}^{*}[2]\)-factors has a \(\{\overrightarrow{C}_{m}^{r_{0}},\overrightarrow{C}_{2m}^{s_{0}}\}\)-factorization for \(r_{0}\in\{0,2,4\}\) and \(r_{0}+s_{0}=4\). By lemmata 32 and 20, \(C_{m}^{*}[2]\oplus I_{2m}^{*}\) has a \(\{\overrightarrow{C}_{m}^{r_{1}},\overrightarrow{C}_{2m}^{s_{1}}\}\)-factorization for \(r_{1}\in\{0,1,3\}\) and \(r_{1}+s_{1}=5\). By Lemma 31, \(\Gamma_{m}^{*}\) has a \(\{\overrightarrow{C}_{m}^{r_{2}},\overrightarrow{C}_{2m}^{s_{2}}\}\)-factorization for \(r_{2}\in\{0,6\}\) with \(r_{2}+s_{2}=6\). Those factorizations give a \(\{\overrightarrow{C}_{m}^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization of \(K_{2m}^{*}\) where \(r=(\frac{m-6}{2})r_{0}+r_{1}+r_{2}\) and \(s=(\frac{m-6}{2})s_{0}+s_{1}+s_{2}\) satisfying \(r+s=(\frac{m-6}{2})4+5+6=2m-1\) with \(0\leq r,s\leq 2m-1\) and \(s\notin\{1,3\}\).
Placing a \(\overrightarrow{C}_{m}\)-factorization on \(r^{\prime}\) of the \(K_{(m:2)}^{*}\)-factors for \(0\leq r^{\prime}\leq 2x-2\), a \(\overrightarrow{C}_{2m}\)-factorization on \(s^{\prime}\) of the \(K_{(m:2)}^{*}\)-factors for \(r^{\prime}+s^{\prime}=2x-2\), and taking a \(\{\overrightarrow{C}_{m}^{r},\overrightarrow{C}_{2m}^{s}\}\)-factorization of \(K_{2m}^{*}\) give a \(\{\overrightarrow{C}_{m}^{mr^{\prime}+r},\overrightarrow{C}_{2m}^{ms^{\prime}+s}\}\)-factorization of \(K_{2mx}^{*}\). Then, \(\mathrm{HWP}^{*}(v;m^{r},(2m)^{s})\) has a solution except possibly when \(s\in\{1,3\}\).
## 5. Conclusions
Combining the results of Theorems 26 and 30, we obtain one of the main results of this paper.
**Theorem 35**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 4\) be even. Then, \(\mathrm{HWP}^{*}(v;2^{r},m^{s})\) has a solution if \(m|v\), \(r+s=v-1\), \(s\neq 1\), \((r,v)\neq(0,6)\), \((m,r,v)\neq(4,0,4)\), and one of the following conditions holds;_
1. \(m>4\)_,_ \(s\neq 3\) _and_ \(m\equiv 0\pmod{4}\)_,_
2. \(m>4\)_,_ \(\frac{v}{m}\) _is even,_ \(s\neq 3\) _and_ \(m\equiv 2\pmod{4}\)_,_
3. \(m=4\) _and_ \(v\equiv 0,8,16\pmod{24}\)_,_
4. \(m=4\)_,_ \(v\equiv 12\pmod{24}\) _and_ \(s\notin\{3,5\}\)_,_
5. \(m=4\)_,_ \(v\equiv 4,20\pmod{24}\) _and_ \(r\) _is odd._
In this paper we also show that \(K^{*}_{2mx}\) has a \(\{\overrightarrow{C}^{r}_{m},\overrightarrow{C}^{s}_{2m}\}\)-factorization for \(r+s=2mx-1\), which means that the solution of \(\operatorname{HWP}^{*}(2mx;m^{r},(2m)^{s})\) exists.
**Theorem 36**.: _Let \(r\), \(s\) be nonnegative integers, and let \(m\geq 4\) be even. Then, \(\operatorname{HWP}^{*}(v;m^{r},(2m)^{s})\) has a solution if and only if \(m|v\), \(r+s=v-1\) and \(v\geq 4\) except for \((s,v,m)\in\{(0,4,4),(0,6,3),(5,6,6)\}\), and except possibly when \(s\in\{1,3\}\)._
Actually, since \(\operatorname{HWP}^{*}(v;2^{r},m^{s})\) has a solution with a few possible exceptions, this result can be extended to \(m\geq 2\).
|
2310.17497 | Branching Particle Systems with Mutually Catalytic Interactions | We study a continuous time Mutually Catalytic Branching model on the
$\mathbb{Z}^{d}$. The model describes the behavior of two different populations
of particles, performing random walk on the lattice in the presence of
branching, that is, each particle dies at a certain rate and is replaced by a
random number of offsprings. The branching rate of a particle in one population
is proportional to the number of particles of another population at the same
site. We study the long time behavior for this model, in particular,
coexistence and non-coexistence of two populations in the long run. Finally, we
construct a sequence of renormalized processes and use duality techniques to
investigate its limiting behavior. | Alexandra Jamchi Fugenfirov, Leonid Mytnik | 2023-10-26T15:52:40Z | http://arxiv.org/abs/2310.17497v1 | # Branching particle systems with mutually catalytic interactions
###### Abstract.
We study a continuous-time mutually catalytic branching model on \(\mathbb{Z}^{d}\). The model describes the behavior of two different populations of particles, performing random walks on the lattice in the presence of branching, that is, each particle dies at a certain rate and is replaced by a random number of offspring. The branching rate of a particle in one population is proportional to the number of particles of the other population at the same site. We study the long time behavior of this model, in particular, coexistence and non-coexistence of the two populations in the long run. Finally, we construct a sequence of renormalized processes and use duality techniques to investigate its limiting behavior.
## 1. Introduction
### Background and motivation
In the last four decades there has been a lot of interest in spatial branching models. These models include branching random walks, branching Brownian motion, super-processes and so on. During the last three decades branching models with interactions were studied very extensively, both on the level of continuous state models and of particle models. Below we give a partial list of branching models with interactions that have been studied in the literature.
Models with catalytic branching, where one population catalyzes another, were studied in [11], [27], [12]. For measure-valued diffusions with mutually catalytic branching, see Dawson-Perkins [14] and Dawson et al. [10, 13, 9]. Models with symbiotic branching -- these are models with a correlation between the branching laws of the two populations -- were investigated in [18], [2], [21], [4], [3], [20], among others. Infinite rate branching models were introduced in [23], [26] and studied later in [24], [15], [16], [17], [25]. In addition, various particle models were introduced in [1], [22].
Let us say a few words about the mutually catalytic branching model in the continuous state setting introduced in [14].
Dawson and Perkins in [14] constructed the model with \(\mathbb{Z}^{d}\) being the space of sites and \((u,v)\in\mathbb{R}_{+}^{\mathbb{Z}^{d}}\times\mathbb{R}_{+}^{\mathbb{Z}^{d}}\) a pair of populations which undergo random migration and continuous state mutually catalytic branching. The random migration is described by a \(\mathbb{Z}^{d}\)-valued Markov chain with associated \(Q\)-matrix \(Q=(q_{ij})\). The branching rate of one population at a site is proportional to the mass of the other population at that site. The system is modeled by the following infinite system of stochastic differential equations:
\[\left\{\begin{array}{ll}u_{t}(x)=u_{0}(x)+\int_{0}^{t}u_{s}Q(x)ds+\int_{0}^{t }\sqrt{\gamma u_{s}(x)v_{s}(x)}dB_{s}^{x},&t\geq 0,\ x\in\mathbb{Z}^{d},\\ v_{t}(x)=v_{0}(x)+\int_{0}^{t}v_{s}Q(x)ds+\int_{0}^{t}\sqrt{\widehat{\gamma}u_ {s}(x)v_{s}(x)}dW_{s}^{x},&t\geq 0,\ x\in\mathbb{Z}^{d},\end{array}\right. \tag{1.1}\]
In what follows \(\mathcal{L}(\cdot)\) denotes the law of a random variable or a process.
It is shown in [6] that in dimensions \(d\geq 3\) the sequence of \(D_{n}\)-processes, with suitably rescaled time, is tight and converges to a diffusion.
**Theorem 1.1**.: _(Theorem 1(a) in [6]) Let \(d\geq 3\), and let \(Q\) be a generator of a simple random walk on \(\mathbb{Z}^{d}\). Assume that_
\[u_{0}^{n}(i)=\theta_{1},v_{0}^{n}(i)=\theta_{2},\ \forall i\in\Lambda_{n}.\]
_Then_
\[\mathcal{L}\left(D_{n}\left(U_{\beta_{n}(t)}^{n},V_{\beta_{n}(t)}^{n}\right)_ {t\geq 0}\right)\longrightarrow\mathcal{L}\left(\left(X_{t},Y_{t}\right)_{t \geq 0}\right),\ as\,n\rightarrow\infty,\]
_where \(\left(X_{t},Y_{t}\right)_{t\geq 0}\) is the unique weak solution for the following system of stochastic differential equations_
\[\begin{cases}dX_{t}&=\sqrt{\widehat{\gamma}X_{t}Y_{t}}dw^{1}(t),\ \ t\geq 0,\\ dY_{t}&=\sqrt{\widehat{\gamma}X_{t}Y_{t}}dw^{2}(t),\ \ t\geq 0\end{cases}\]
_with initial conditions \(\left(X_{0},Y_{0}\right)=\bar{\theta}\), where \(w^{1},\,w^{2}\) are two independent standard Brownian motions._
In this paper we consider the Dawson-Perkins mutually catalytic model for _particle systems_ and study its properties. Particle models with interactions have been considered earlier by many authors; a partial list of examples follows.
M. Birkner in [1] studies a system of particles performing random walks on \(\mathbb{Z}^{d}\) and performing branching; rate of branching of any particle on a site depends on the number of other particles at the same site (this is the "catalytic" effect). Birkner introduces a formal construction of such processes, via solutions of certain stochastic equations, proves existence and uniqueness theorem for these equations, and studies properties of the processes. Under suitable assumptions he proves the existence of an equilibrium distribution for shift-invariant initial conditions. He also studies survival and extinction of the process in the long run. Note that the construction of the process in [1] is motivated by the construction of Ligget-Spitzer in [28].
Among many other works where branching particle systems with catalysts were studied, we can mention [22] and [27]. For example, Kesten and Sidoravicius in [22] investigate the survival/extinction of two particle populations A and B. The particles of both populations perform independent random walks. The B-particles perform a branching random walk, with a birth rate of new particles proportional to the number of A-particles located at the same site as the branching B-particle. It is shown that for a certain choice of parameters the system becomes (locally) extinct in all dimensions.
In [27] catalytic discrete state branching processes with immigration are defined as strong solutions of stochastic integral equations. Z. Li and C. Ma in [27] prove limit theorems for these processes.
In this paper we consider two interacting populations; more precisely, we construct the so-called mutually catalytic branching particle model and study its long time behavior and the corresponding finite system scheme.
### Paper overview
In Section 2 the model is formally constructed and the main results are stated. Sections 3--6 are devoted to the proofs of our results.
**Acknowledgements.** LM is supported in part by ISF grants No. 1704/18 and 1985/22.
## 2. Our Model and Main Results
### Description of the Model
Let us define the following interactive particle system. We consider two populations on a countable set of sites \(S\subset\mathbb{Z}^{d}\), where particles in both populations move as independent Markov chains on \(S\) with rate of jumps \(\kappa>0\), and transition jump probabilities
\[p_{x,y}=p_{y,x},\,\,\,x,y\in S.\]
They also undergo branching events. In order to define our model formally we are following the ideas of [1].
Let \(\left\{\nu_{k}\right\}_{k\geq 0}\) be the branching law. Suppose that \(Z\) is a random variable distributed according to \(\nu\). We assume that the branching law is critical and has a finite variance:
\[\mathbb{E}(Z)=\sum_{k\geq 0}k\nu_{k}=1,\,\,\,Var(Z)=\sum_{k\geq 0}(k-1)^{2}\nu_{ k}=\sigma^{2}<\infty. \tag{2.1}\]
The pair of processes \((\xi,\eta)\) describes the time evolution of the following "particle" model. Between branching events the particles in the \(\xi\) and \(\eta\) populations move as independent Markov chains on \(S\) with jump rate \(\kappa\) and transition probabilities \(p_{xy}\), \(x,y\in S\). Fix some \(\gamma>0\). The "infinitesimal" rate of a branching event for a particle from population \(\xi\) at site \(x\) at time \(t\) equals \(\gamma\eta_{t}(x)\); similarly, the "infinitesimal" rate of a branching event for a particle from population \(\eta\) at site \(x\) at time \(t\) equals \(\gamma\xi_{t}(x)\). When a "branching event" occurs, a particle dies and is replaced by a random number of offspring distributed according to the law \(\left\{\nu_{k}\right\}_{k\geq 0}\), independently of the history of the process. To define the process formally, as a solution to a system of equations, we need more notation. Note that the construction of the process follows the steps in [1].
The Markov chain is defined in the following way. Let \((W_{t},P)\) be a continuous time \(S\)-valued Markov chain and set \(p_{t}(j,k)=P(W_{t}=k|W_{0}=j)\) as its transition probabilities. Let \(Q=(q_{j,k})\) denote the associated \(Q\)-matrix; that is, \(q_{j,k}\) is the jump rate from \(j\) to \(k\) (for \(j\neq k\)) and \(q_{j,j}=-\sum_{k\neq j}q_{j,k}>-\infty\). We will assume that the \(Q\)-matrix is symmetric (\(q_{x,y}=q_{y,x}\)). Define the Green function for every \(x,y\in S\):
\[g_{t}(x,y)=\int\limits_{0}^{t}p_{s}(x,y)ds. \tag{2.2}\]
Note that if our motion process is a symmetric random walk on \(S\), then, with a certain abuse of notation, \(g_{t}(x,y)=g_{t}(y,x)=g_{t}(x-y)\); in particular, \(g_{t}(x,x)=g_{t}(0)\).
Let \(P_{t}f(j)=\sum_{k}p_{t}(j,k)f(k)\) be the semigroup associated with the random walk \(W\), and let \(Qf(k)=\sum_{j}q_{k,j}f(j)\) be its generator.
_Remark 2.1_.: Clearly \(g_{\infty}(0)<\infty\) means that \(W\) is transient, and \(g_{\infty}(0)=\infty\) implies that \(W\) is recurrent.
Let \(\mathcal{F}=\left(\mathcal{F}_{t}\right)_{t\geq 0}\) be a (right-continuous, complete) natural filtration. In what follows, when we call a process a martingale, we mean that it is an \(\mathcal{F}_{t}\)-martingale.
Let
\[\{N_{x,y}^{RW_{\xi}}\}_{x,y\in S},\ \ \{N_{x,y}^{RW_{\eta}}\}_{x,y\in S},\ \ \{N_{x,k}^{br_{\xi}}\}_{x\in S,k\in\mathbb{Z}_{+}},\ \ \{N_{x,k}^{br_{\eta}}\}_{x\in S,k\in \mathbb{Z}_{+}}\]
denote independent Poisson point processes on \(\mathbb{R}_{+}\times\mathbb{R}_{+}\). We assume that, for any \(x,y\in S,\ x\neq y\) both Poisson point processes \(N_{x,y}^{RW_{\xi}}\) and \(N_{x,y}^{RW_{\eta}}\) have intensity measure \(\kappa p_{x,y}ds\otimes du\). Similarly, we assume that, for any \(x\in S,\ k\in\mathbb{Z}_{+}\) both Poisson point processes \(N_{x,k}^{br_{\xi}}\) and \(N_{x,k}^{br_{\eta}}\) have intensity measure \(\nu_{k}ds\otimes du\). We assume that the above Poisson processes are \(\mathcal{F}\)-adapted in the "time" component.
Now we are going to define the pair of processes \((\xi_{t},\eta_{t})_{t\geq 0}\) such that \((\xi_{t},\eta_{t})\in\mathbb{N}_{0}^{S}\times\mathbb{N}_{0}^{S}\). For any \(x\in S\), \(\xi_{t}(x)\) counts the number of particles from the first population at site \(x\) at time \(t\). Similarly, for any \(x\in S\), \(\eta_{t}(x)\) counts number of particles from the second population at site \(x\) at time \(t\).
Now we are ready to describe \((\xi_{t},\eta_{t})_{t\geq 0}\) formally as a solution of the following system of equations:
\[\xi_{t}(x)= \xi_{0}(x)+\sum_{y\neq x}\left\{\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}1_{\{\xi_{s-}(y)\geq u\}}N_{y,x}^{RW_{\xi}}(dsdu)-\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}1_{\{\xi_{s-}(x)\geq u\}}N_{x,y}^{RW_{\xi}}(dsdu)\right\}\] \[+\sum_{k\geq 0}\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\eta_{s-}(x)\xi_{s-}(x)\geq u\}}N_{x,k}^{br_{\xi}}(dsdu)\;,\;t\geq 0,\,x\in S,\] \[\eta_{t}(x)= \eta_{0}(x)+\sum_{y\neq x}\left\{\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}1_{\{\eta_{s-}(y)\geq u\}}N_{y,x}^{RW_{\eta}}(dsdu)-\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}1_{\{\eta_{s-}(x)\geq u\}}N_{x,y}^{RW_{\eta}}(dsdu)\right\}\] \[+\sum_{k\geq 0}\int\limits_{0}^{t}\int\limits_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\xi_{s-}(x)\eta_{s-}(x)\geq u\}}N_{x,k}^{br_{\eta}}(dsdu)\;,\;t\geq 0,\,x\in S. \tag{2.3}\]
Why do these equations actually describe our processes? The first sum on the right-hand side of the equations for \(\xi\) and \(\eta\) describes the random walks of the particles, and the second sum describes their branching. The first integrals in the first sums describe the particles jumping to site \(x\) from the other sites \(y\neq x\). The second integrals in the first sums describe the particles that leave site \(x\). The last integral describes the death of a particle at site \(x\) and the birth of its \(k\) offspring, so after such an event the number of particles at the site changes by \((k-1)\). The branching events at site \(x\) happen with infinitesimal rate proportional to the product of the numbers of particles of the two populations at site \(x\).
**Definition 2.2**.: The process \((\xi_{t},\eta_{t})\) solving (2.3) is called a _mutually catalytic branching process_ with initial conditions \((\xi_{0},\eta_{0})\).
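To make the dynamics encoded in (2.3) concrete, the following minimal simulation sketch (our own illustration, not part of the construction above) runs the mutually catalytic branching particle system on a one-dimensional torus of \(L\) sites with the binary critical branching law \(\nu_{0}=\nu_{2}=1/2\) (so \(\sigma^{2}=1\)); the function name `simulate` and all numerical parameter values are arbitrary choices made for illustration only. Particles jump at rate \(\kappa\), and a particle of one population at a site branches at rate \(\gamma\) times the number of particles of the other population at that site.

```python
import numpy as np

def simulate(L=10, T=5.0, kappa=1.0, gamma=0.5, v0=2, u0=2, seed=0):
    """Gillespie-type simulation of the mutually catalytic branching particle
    system on the one-dimensional torus {0, ..., L-1}.
    Branching law: nu_0 = nu_2 = 1/2 (critical, sigma^2 = 1)."""
    rng = np.random.default_rng(seed)
    xi = np.full(L, v0, dtype=int)    # first population, xi_t(x)
    eta = np.full(L, u0, dtype=int)   # second population, eta_t(x)
    t = 0.0
    while t < T:
        walk_rate = kappa * (xi.sum() + eta.sum())           # every particle jumps at rate kappa
        branch_rate = 2.0 * gamma * float((xi * eta).sum())  # gamma*xi(x)*eta(x) for each population
        total = walk_rate + branch_rate
        if total == 0:
            break                                            # no particles left
        t += rng.exponential(1.0 / total)
        if rng.random() < walk_rate / total:
            # migration: pick a uniform particle, move it to a uniform neighbour
            pop = xi if rng.random() < xi.sum() / (xi.sum() + eta.sum()) else eta
            x = rng.choice(L, p=pop / pop.sum())
            y = (x + rng.choice([-1, 1])) % L
            pop[x] -= 1
            pop[y] += 1
        else:
            # branching: site chosen proportionally to xi(x)*eta(x);
            # the branching particle dies and leaves 0 or 2 offspring
            w = (xi * eta).astype(float)
            x = rng.choice(L, p=w / w.sum())
            pop = xi if rng.random() < 0.5 else eta
            pop[x] += rng.choice([-1, 1])                    # net change k-1, with k in {0, 2}
    return xi, eta

xi, eta = simulate()
print("final configurations:", xi, eta)
```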
### Main Results
We start with stating the result on the existence and uniqueness of the solution for the system of equation (2.3). This implies that the process we described
in the introduction does exist and is defined uniquely via the solution to (2.3). In the next theorem, we formulate the result for finite initial conditions, i.e., each population has a finite number of particles at the initial time (\(t=0\)). First, we introduce another piece of notation. Fix a reference function \(\lambda:S\longrightarrow[0,\infty)\) such that \(\sum_{x\in S}\lambda_{x}=1\) and
\[\sum_{y\in S}p_{xy}\lambda_{y}\leq M\lambda_{x},\ \ \forall x\in S, \tag{2.4}\]
for some \(M>0\). For a standard way of choosing \(\lambda\) see [28]. Now, define
\[E_{fin}=\left\{f:S\longrightarrow\mathbb{N}_{0}|\sum_{x\in S}f(x)<\infty \right\}.\]
In addition define the space of functions \(E_{1}\) :
\[E_{1}=\left\{f:S\longrightarrow\mathbb{N}_{0}|\sum_{x\in S}\lambda_{x}f(x)< \infty\right\}.\]
Also define the space of functions on \(S\) with finite "second moments":
\[E_{2}=\left\{f:S\longrightarrow\mathbb{N}_{0}|\sum_{x\in S}\lambda_{x}\,|f(x) |^{2}<\infty\right\}.\]
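For concreteness, we note one admissible choice of the reference function \(\lambda\) introduced above (this particular choice is our illustration; for the general construction see [28]): when the motion is the nearest neighbor random walk on \(S=\mathbb{Z}^{d}\) one may take
\[\lambda_{x}=3^{-d}2^{-|x|_{1}},\qquad x\in\mathbb{Z}^{d},\]
since \(\sum_{x\in\mathbb{Z}^{d}}2^{-|x|_{1}}=3^{d}\) and every \(y\) with \(p_{xy}>0\) satisfies \(|y|_{1}\geq|x|_{1}-1\), so that \(\sum_{y}p_{xy}\lambda_{y}\leq 2\lambda_{x}\), i.e., (2.4) holds with \(M=2\).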
**Theorem 2.3**.: _Let \(S\subset\mathbb{Z}^{d}\). a) For any initial conditions \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\) there is a unique strong solution \((\xi_{t},\eta_{t})_{t\geq 0}\) to (2.3), taking values in \(E_{fin}\times E_{fin}\)._
_b) The solution \(\{(\xi_{t},\eta_{t}),\ t\geq 0\}\) to (2.3) is a Markov process._
It is possible to generalize the result to some infinite mass initial conditions case but since this is not the goal of this paper it will be done elsewhere.
**Convention:** We say that the motion process for the _mutually catalytic branching process_ on \(S=\mathbb{Z}^{d}\) is the nearest neighbor random walk if
\[p_{x,y}=\frac{1}{2d}\ \,\text{for}\ y=x\pm e_{i},\]
for \(e_{i}\) a unit vector in an axis direction, \(i=1,...,d\).
Let \((\xi,\eta)\) be the process constructed in Theorem 2.3 with finite initial conditions. Denote
\[X^{1}_{t}=\left\langle\xi_{t},\mathbf{1}\right\rangle,\ \ X^{2}_{t}=\left\langle \eta_{t},\mathbf{1}\right\rangle,\ \ t\geq 0.\]
That is, \(X^{1}\) is the total mass process of \(\xi\), and \(X^{2}\) is the total mass process of \(\eta\). Clearly, by construction, \(X^{1}\) and \(X^{2}\) are non-negative local martingales, and hence by the martingale convergence theorem the following a.s. limits exist:
\[X^{1}_{\infty}=\lim_{t\rightarrow\infty}X^{1}_{t},\ \ X^{2}_{\infty}=\lim_{t \rightarrow\infty}X^{2}_{t}.\]
Now we are ready to give a definition of coexistence or non-coexistence.
**Definition 2.4**.: Let \((\xi,\eta)\) be a unique strong solution to (2.3) with \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\). We say that _coexistence is possible_ for \((\xi,\eta)\) if \(\mathbb{P}(X^{1}_{\infty}X^{2}_{\infty}>0)>0\). We say that _coexistence is impossible_ for \((\xi,\eta)\) if \(\mathbb{P}(X^{1}_{\infty}X^{2}_{\infty}>0)=0\).
We will prove that in the case of finite initial conditions, with the motion process being the nearest neighbor random walk, coexistence is possible if and only if the random walk is transient. Recall that the nearest neighbor random walk is recurrent in dimensions \(d=1,2\) and transient in dimensions \(d\geq 3\). Then we have the following theorem.
**Theorem 2.5**.: _Let \(S=\mathbb{Z}^{d}\) and assume that the motion process is the nearest neighbor random walk. Let \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\)._
_(a) If \(d\geq 3\), then coexistence of types is possible._
_(b) If \(d\leq 2\), then coexistence of types is impossible._
The proof is simple and is based on the following observation: if there is a finite number of particles and the motion is recurrent, the particles will meet an infinite number of times, and eventually one of the populations dies out, due to the criticality of the branching mechanism. On the other hand, if the motion is transient, there exists a finite time after which the particles of different populations never meet, and hence there is a positive probability of survival of both populations.
Finally we are interested in a finite system scheme. We construct a system of renormalized processes started from an exhausting sequence of finite subsets of \(\mathbb{Z}^{d}\), \(\Lambda_{n}\subset\mathbb{Z}^{d}\). The duality techniques will be used to investigate its limiting behavior.
Define
\[\Lambda_{n} = \left\{x\in\mathbb{Z}^{d}\,|\,\forall i=1,...,d,|x_{i}|\leq n \right\}\subseteq\mathbb{Z}^{d},\] \[|\Lambda_{n}| = (2n+1)^{d}.\]
**Convention:** Let \(S=\Lambda_{n}\). We say that the motion process is the nearest neighbor random walk on \(\Lambda_{n}\) if its transition jump probabilities are given by
\[p_{x,y}^{n}=p_{0,y-x}^{n}=\begin{cases}\frac{1}{2d},&\text{if }|x-y|=1,\\ 0,&\text{otherwise},\end{cases}\]
where "\(y-x\)" is the difference on the torus \(\Lambda_{n}\).
Fix \(\bar{\theta}=(\theta_{1},\theta_{2})\) with \(\theta_{1},\theta_{2}\) positive integers. Set \(\bar{\boldsymbol{\theta}}=(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2})\), where \(\boldsymbol{\theta}_{i}=(\theta_{i},\theta_{i},...)\in\mathbb{N}_{0}^{\Lambda_{n}}\), \(i=1,2\). Let \((\xi_{t},\eta_{t})_{t\geq 0}\) be the mutually catalytic branching process with initial conditions \((\xi_{0},\eta_{0})=\bar{\boldsymbol{\theta}}\), site space \(S=\Lambda_{n}\), and motion process the nearest neighbor walk on \(\Lambda_{n}\).
Set
\[\boldsymbol{\xi}_{t}^{n}=\sum_{j\in\Lambda_{n}}\xi_{t}(j),\quad\boldsymbol{\eta}_{t}^{n}=\sum_{j\in\Lambda_{n}}\eta_{t}(j).\]
We define the following time change:
\[\beta_{n}(t)=|\Lambda_{n}|\,t,\,\,\,t\geq 0.\]
Our goal is to identify the limiting distribution of
\[\frac{1}{|\Lambda_{n}|}\left(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{ \eta}_{\beta_{n}(t)}^{n}\right),\]
as \(n\to\infty\), for all \(t\geq 0\).
**Theorem 2.6**.: _Let \(d\geq 3\), and assume that \(\gamma\sigma^{2}<\frac{1}{\sqrt{2\cdot 3^{5}}g_{\infty}(0)}\). Then for any \(T>0\), we have_
\[\mathcal{L}\left(\frac{1}{|\Lambda_{n}|}\left(\boldsymbol{\xi}_{\beta_{n}(T)}^ {n},\boldsymbol{\eta}_{\beta_{n}(T)}^{n}\right)\right)\longrightarrow\mathcal{ L}\left(X_{T},Y_{T}\right),\;\text{as }n\rightarrow\infty,\]
_where \(\left(X_{t},Y_{t}\right)_{t\geq 0}\) is a solution of the following system of stochastic differential equations_
\[\begin{cases}dX_{t}=\sqrt{\gamma\sigma^{2}X_{t}Y_{t}}dw^{1}(t),&t\geq 0,\\ dY_{t}=\sqrt{\gamma\sigma^{2}X_{t}Y_{t}}dw^{2}(t),&t\geq 0,\end{cases} \tag{2.5}\]
_with initial conditions \(\left(X_{0},Y_{0}\right)=\bar{\theta}\), where \(w^{1},\,w^{2}\) are two independent standard Brownian motions._
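For illustration only, the limiting diffusion (2.5) can be simulated with a standard Euler--Maruyama scheme; the sketch below (our own, with arbitrary parameter values and the function name `euler_maruyama` chosen for this example) keeps the coordinates non-negative by truncation, which is a crude but adequate device here since the diffusion coefficient \(\sqrt{\gamma\sigma^{2}X_{t}Y_{t}}\) vanishes on the boundary.

```python
import numpy as np

def euler_maruyama(theta1, theta2, gamma_sigma2, T=10.0, dt=1e-3, seed=1):
    """Euler-Maruyama sketch for the limiting system (2.5):
        dX = sqrt(gamma*sigma^2*X*Y) dw^1,   dY = sqrt(gamma*sigma^2*X*Y) dw^2,
    with independent Brownian motions w^1, w^2 and X_0 = theta1, Y_0 = theta2."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    X = np.empty(n + 1)
    Y = np.empty(n + 1)
    X[0], Y[0] = theta1, theta2
    for i in range(n):
        vol = np.sqrt(gamma_sigma2 * X[i] * Y[i])
        dW1, dW2 = rng.normal(0.0, np.sqrt(dt), size=2)
        X[i + 1] = max(X[i] + vol * dW1, 0.0)   # truncate at zero; the noise vanishes there anyway
        Y[i + 1] = max(Y[i] + vol * dW2, 0.0)
    return X, Y

X, Y = euler_maruyama(theta1=2.0, theta2=3.0, gamma_sigma2=0.5)
print(X[-1], Y[-1])   # both coordinates are martingales, so their means stay near theta1, theta2
```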
The above result is similar to, although a bit weaker than, the result in Theorem 1 of [6], where a finite system scheme for the system of continuous stochastic differential equations (SDEs) is studied. The proof of Theorem 2.6 is based on a duality principle for our particle system and the result for stochastic differential equations in [6]. In fact, let us mention that the self-duality property (which is well known for processes solving equations of type (1.3)) does not hold for our mutually catalytic branching particle model. Thus, we use the so-called approximating duality technique to prove Theorem 2.6. The approximating duality technique was used in the past to resolve a number of weak uniqueness problems (see, e.g., [29], [31]). We believe that using approximate duality to prove limit theorems is novel and that this technique is of independent interest.
Let us note that it would be very interesting to extend the above results. First of all, it would be nice to address the question of coexistence/non-coexistence for more general motions and infinite mass initial conditions. As for extending the results of Theorem 2.6, we would be interested to check what happens in the case of large \(\gamma\), to prove a "functional convergence" result as in [6], and to investigate the system's behavior in the case of recurrent motion, that is, in dimensions \(d=1,2\). We plan to address these problems in the future.
## 3. Existence and Uniqueness. Proof of Theorem 2.3
This section is devoted to the proof of Theorem 2.3. Note that our proofs follow closely the argument of Birkner [1] with a suitable adaptation to the two-type case.
Recall that we chose a reference function \(\lambda:S\mapsto[0,\infty)\) with \(\sum_{x\in S}\lambda_{x}=1\) and satisfying (2.4).
For \(m\in\mathbb{N}\), define \(L^{m}(\lambda)\)-norm of \((x,y)\in\mathbb{Z}^{S}\times\mathbb{Z}^{S}\):
\[\left\|(x,y)\right\|_{\lambda,m}:=\left(\sum_{i\in S}\lambda_{i}\left(|x(i)|^{ m}+|y(i)|^{m}\right)\right)^{1/m}.\]
Similarly, for any \(x,y,z,w\in\mathbb{Z}^{S}\), with some abuse of notation, we define
\[\left\|(x,y,z,w)\right\|_{\lambda,m}:=\left(\sum_{i\in S}\lambda_{i}\left(|x( i)|^{m}+|y(i)|^{m}+|z(i)|^{m}+|w(i)|^{m}\right)\right)^{1/m}.\]
For any metric space \(D\) with metric \(d\), let \(\operatorname{Lip}(D)\) denote a set of Lipschitz functions on \(D.\) We say that \(f:D\rightarrow\mathbb{R}\) is in \(\operatorname{Lip}(D)\) if and only if there exists a positive constant \(C\in\mathbb{R}_{+}\) such that for any \(x,y\in D\), \(|f(x)-f(y)|\leq Cd(x,y)\).
Theorem 2.3 follows immediately from the next lemma.
**Lemma 3.1**.: _a) For any initial conditions \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\) there is a unique strong solution \((\xi_{t},\eta_{t})_{t\geq 0}\) to (2.3), taking values in \(E_{fin}\times E_{fin}\)._
_b) The solution \(\{(\xi_{t},\eta_{t}),\ t\geq 0\}\) to (2.3) is a Markov process._
_c) Let \(m\in\mathbb{N}\). If \(\sum_{k}k^{m}\nu_{k}<\infty\), there exists a constant \(C_{m}\) such that_
\[\mathbb{E}\left[\|(\xi_{t},\eta_{t})\|_{\lambda,m}^{m}\right]\leq\exp(C_{m}t) \left\|(\xi_{0},\eta_{0})\right\|_{\lambda,m}^{m}. \tag{3.1}\]
_d) Let \(L^{(2)}\) denote the following operator_
\[L^{(2)}f(\xi,\eta) = \kappa\sum_{x,y\in S}\xi(x)p_{xy}\left(f(\xi^{x\to y},\eta)-f( \xi,\eta)\right)\] \[+\kappa\sum_{x,y\in S}\eta(x)p_{xy}\left(f(\xi,\eta^{x\to y})-f( \xi,\eta)\right)\] \[+\sum_{x\in S}\gamma\xi(x)\eta(x)\sum_{k\geq 0}\nu_{k}\left(f(\xi+(k -1)\delta_{x},\eta)-f(\xi,\eta)\right)\] \[+\sum_{x\in S}\gamma\xi(x)\eta(x)\sum_{k\geq 0}\nu_{k}\left(f(\xi, \eta+(k-1)\delta_{x})-f(\xi,\eta)\right),\]
_where \(\xi^{x\to y}=\xi+\delta_{y}-\delta_{x}\), i.e. \(\xi^{x\to y}(x)=\xi(x)-1\), \(\xi^{x\to y}(y)=\xi(y)+1\) and \(\xi^{x\to y}(z)=\xi(z)\) for all \(z\in S\) and \(z\neq x,y\)._
_For \(f\in\operatorname{Lip}(E_{1}\times E_{1}),\ \ (\xi_{0},\eta_{0})\in E_{fin} \times E_{fin}\),_
\[M^{f}(t):=f(\xi_{t},\eta_{t})-f(\xi_{0},\eta_{0})-\int\limits_{0}^{t}L^{(2)}f( \xi_{s},\eta_{s})ds,\]
_is a martingale. Moreover, if \(f\in\operatorname{Lip}(\mathbb{R}_{+}\times E_{1}\times E_{1})\) and there is constant \(C^{*}\) such that_
\[\left|\frac{\partial}{\partial s}f(s,x,y)\right|\leq C^{*}\left\|(x,y)\right\| _{\lambda,2}^{2} \tag{3.2}\]
_for any \(x,y\in E_{2}\), then_
\[N^{f}(t):=f(t,\xi_{t},\eta_{t})-f(0,\xi_{0},\eta_{0})-\int\limits_{0}^{t} \left[L^{(2)}f(s,\xi_{s},\eta_{s})+\frac{\partial}{\partial s}f(s,\xi_{s}, \eta_{s})\right]ds,\]
_is also a martingale._
Proof.: Note that in our proof we follow ideas of the proof of Lemma 1 in Birkner [1].
**a)**\(\{N_{x,y}^{RW_{\xi}}\}_{x,y\in S},\ \{N_{x,y}^{RW_{\eta}}\}_{x,y\in S},\ \{N_{x,k}^{br_{\xi}}\}_{x\in S,k\in\mathbb{Z}_{+}},\ \{N_{x,k}^{br_{\eta}}\}_{x\in S,k\in\mathbb{Z}_{+}}\) are a collection of independent Poisson point processes. Therefore with probability 1 there is no more than one jump simultaneously. Then we can define a stopping time \(T_{1}\) -- the first time a jump happens. Then \((\xi_{t},\eta_{t})=(\xi_{0},\eta_{0})\) for \(t\in[0,T_{1})\). In the same way we define a sequence of stopping times: \(0=T_{0}<T_{1}<T_{2}<\cdots<\infty\). Note that at each interval \([T_{i},T_{i+1})\) there are only a finite number of Poisson processes involved. Our process \((\xi_{t},\eta_{t})\) is constant on the intervals \([T_{i},T_{i+1})\). In order to show that this construction defines the process properly (that is, it does not explode in finite time), it is enough to show that \(\lim_{n\to\infty}T_{n}=\infty\),
almost surely. To this end define \(M_{n}=\sum_{x\in S}\left(\xi_{T_{n}}(x)+\eta_{T_{n}}(x)\right)\). \(M_{n}\) denotes the total number of particles in both populations at time \(T_{n}\). Since the branching mechanism is critical (see (2.1)), it is easy to see that \(\left\{M_{n}\right\}_{n\geq 0}\) is a non-negative martingale.
Indeed, suppose that \(T_{n}\) is a stopping time defined by a "random walk" jump, that is, by a jump of \(R_{t}^{\xi}=\sum_{x,y}N_{x,y}^{RW_{\xi}}\) or \(R_{t}^{\eta}=\sum_{x,y}N_{x,y}^{RW_{\eta}}\). In that case the total number of particles does not change (this can also be readily seen from our equation (2.3)) and we have \(M_{n}=M_{n-1}\). Alternatively, \(T_{n}\) can originate from the "branching", that is, from a jump of one of the processes \(B_{t}^{\xi}=\sum_{x,k}N_{x,k}^{br_{\xi}}\) or \(B_{t}^{\eta}=\sum_{x,k}N_{x,k}^{br_{\eta}}\). In this case one can easily get that
\[\mathbb{E}\left(M_{n}\,\middle|\,M_{0},\ldots,M_{n-1}\right)=M_{n-1}+\mathbb{E}\left(Z-1\right)=M_{n-1},\]
where \(Z\) is distributed according to the branching law \(\nu\).
Therefore, by the well known martingale convergence theorem, \(\sup_{n\geq 1}M_{n}<\infty\) almost surely (Theorem 1.6.4 in [32]). This implies that \(\sup_{n}T_{n}=\infty\) almost surely: on the event \(\{\sup_{n}M_{n}\leq K\}\) the total jump rate of the system is bounded (by \(\kappa K\) for the migration part and by \(\gamma K^{2}\) for the branching part), so the jump times cannot accumulate in finite time; letting \(K\to\infty\) gives the claim.
Now let us turn to the proof of uniqueness. Let \((\tilde{\xi},\tilde{\eta})_{t}\) be another solution to (2.3) starting from the same initial conditions \(\tilde{\xi}_{0}=\xi^{0},\ \tilde{\eta}_{0}=\eta^{0}\). We see from (2.3) that \(\tilde{\xi}_{t}(x)=\xi_{t}(x),\ \tilde{\eta}_{t}(x)=\eta_{t}(x)\) for all \(x\in S\) and \(t\in[0,T_{1})\), and also that \(\tilde{\xi}_{T_{1}}=\xi_{T_{1}},\ \tilde{\eta}_{T_{1}}=\eta_{T_{1}}\). Then, by induction, \((\xi,\eta)\) and \((\tilde{\xi},\tilde{\eta})\) agree on \([T_{n},T_{n+1})\) for all \(n\in\mathbb{N}\).
**b)** Poisson processes have independent and stationary increments. Therefore, by construction described in **(a)**, we can immediately see that the distribution of \((\xi_{t+h},\eta_{t+h})\), given \(\mathcal{F}_{t}\), depends only on \((\xi_{t},\eta_{t})\), and hence the process \((\xi_{t},\eta_{t})_{t\geq 0}\) is Markov.
**c)** Now we will show that \((\xi_{t},\eta_{t})_{t\geq 0}\) satisfies (3.1). If \(f\) is bounded measurable or a Lipschitz function on \(E_{1}\times E_{1}\), then \(M_{t}^{f}\) is a local martingale (see Ito's formula II.5.1 in [32] and discussion about compensation after Definition II.3.3 in [32]). A natural localizing sequence of stopping times is given by
\[T_{n}:=\inf\left\{t\geq 0\,:\begin{array}{c}\sum_{x\in S}\left(\xi_{t}(x)+ \eta_{t}(x)\right)>n\,\mbox{or}\,\mbox{there}\,\mbox{is}\,\,y\notin S_{n}\\ \mbox{such}\,\mbox{that}\,\xi_{t}(y)>0\,\mbox{or}\,\eta_{t}(y)>0\end{array} \right\},\]
where \(S_{n}\nearrow S\) and \(S_{n}\) is a finite set for all \(n\). Choose arbitrarily \(m\in\mathbb{N}\) such that
\[\sum_{k}k^{m}\nu_{k}<\infty.\]
By our assumptions, this clearly holds for \(m=1,\,2\). Define
\[\psi_{m}(\xi,\eta):=\|(\xi,\eta)\|_{\lambda,m}^{m}\,.\]
Then, by the above, \(\left\{M_{t\wedge T_{n}}^{\psi_{m}}\right\}\) is a martingale. Let \(\xi\), \(\eta\) be such that \(\psi_{m}(\xi,\eta)<\infty\). Applying the operator \(L^{(2)}\) to \(\psi_{m}(\xi,\eta)\) we get
\[L^{(2)}\psi_{m}(\xi,\eta) = \kappa\sum_{x,y}\xi(x)p_{xy}\left\{\lambda_{y}\left((\xi(y)+1)^{m}-\xi(y)^{m}\right)\right.\] \[\left.+\lambda_{x}\left((\xi(x)-1)^{m}-\xi(x)^{m}\right)\right\}\] \[+\kappa\sum_{x,y}\eta(x)p_{xy}\left\{\lambda_{y}\left((\eta(y)+1)^{m}-\eta(y)^{m}\right)\right.\] \[\left.+\lambda_{x}\left((\eta(x)-1)^{m}-\eta(x)^{m}\right)\right\}\]
\[+\sum_{x}\gamma\xi(x)\eta(x)\sum_{k}\nu_{k}\lambda_{x}\left\{(\xi(x)+k-1)^{m}-\xi(x)^{m}\right\}\] \[+\sum_{x}\gamma\xi(x)\eta(x)\sum_{k}\nu_{k}\lambda_{x}\left\{(\eta(x)+k-1)^{m}-\eta(x)^{m}\right\}\] \[= \kappa\sum_{x}\lambda_{x}\xi(x)\sum_{j=1}^{m}{m\choose j}(-1)^{j}\xi(x)^{m-j}\] \[+\kappa\sum_{x,y}\xi(x)p_{xy}\lambda_{y}\sum_{j=1}^{m}{m\choose j}\xi(y)^{m-j}\] \[+\kappa\sum_{x}\lambda_{x}\eta(x)\sum_{j=1}^{m}{m\choose j}(-1)^{j}\eta(x)^{m-j}\] \[+\kappa\sum_{x,y}\eta(x)p_{xy}\lambda_{y}\sum_{j=1}^{m}{m\choose j}\eta(y)^{m-j}\] \[+\sum_{x}\gamma\xi(x)\eta(x)\lambda_{x}\sum_{k}\nu_{k}\sum_{j=2}^{m}{m\choose j}\xi(x)^{m-j}(k-1)^{j}\] \[+\sum_{x}\gamma\xi(x)\eta(x)\lambda_{x}\sum_{k}\nu_{k}\sum_{j=2}^{m}{m\choose j}\eta(x)^{m-j}(k-1)^{j},\]
where in the last equality we used the binomial expansion, the fact that \(\sum_{y}p_{xy}=1\) and our assumption \(\sum_{k\geq 0}(k-1)\nu_{k}=0.\) Now we can estimate
\[\left|L^{(2)}\psi_{m}(\xi,\eta)\right| \leq \kappa\left(\sum_{j=1}^{m}{m\choose j}\right)\left(\psi_{m}(\xi, \eta)+\sum_{x,y}p_{xy}\lambda_{x}\left[\xi(x)\xi(y)^{m-1}+\eta(x)\eta(y)^{m-1} \right]\right)\] \[+\left(3\gamma\sum_{j=2}^{m}{m\choose j}\sum_{k}\nu_{k}(k-1)^{j} \right)\psi_{m}(\xi,\eta)\]
where we used the following simple inequalities: for \(m\geq j\geq 1,\)
\[\xi(x)^{m-j}\leq\xi(x)^{m-1}\]
(recall that \(\xi(x)\) is a non-negative integer number),
\[\xi(x)\eta(x)\leq\xi(x)^{2}+\eta(x)^{2}\]
and
\[\max\left(\xi(x)^{2}\eta(x)^{m-2},\ \xi(x)^{m-2}\eta(x)^{2}\right)\leq\xi(x)^{m }+\eta(x)^{m},\ \mbox{for}\ m\geq 2.\]
Denote
\[c_{m}:=\sum_{j=1}^{m}{m\choose j}=2^{m}-1,\ \ c_{m}^{\prime}:=3\sum_{j=2}^{m}{m \choose j}\sum_{k}\nu_{k}(k-1)^{j}<\infty.\]
Then we get
\[\left|L^{(2)}\psi_{m}(\xi,\eta)\right|\leq \kappa c_{m}\left(\psi_{m}(\xi,\eta)+\sum\limits_{x,y}p_{xy}\lambda_{ x}\left[\xi(x)\xi(y)^{m-1}+\eta(x)\eta(y)^{m-1}\right]\right) \tag{3.3}\] \[+\,c^{\prime}_{m}\psi_{m}(\xi,\eta).\]
Now define new functions \(\tilde{\xi}\) and \(\tilde{\eta}\) as follows:
\[\tilde{\xi}(y):=\sum\limits_{x}\xi_{x}p_{xy},\ \tilde{\eta}(y):=\sum\limits_{x} \eta_{x}p_{xy}.\]
Then we get the following bound on the \(L^{m}(\lambda)\)-norm of \((\tilde{\xi},\tilde{\eta})\):
\[\left\|(\tilde{\xi},\tilde{\eta})\right\|_{\lambda,m}^{m} = \sum\limits_{y}\lambda_{y}\left(\left|\sum\limits_{x}\xi_{x}p_{ xy}\right|^{m}+\left|\sum\limits_{x}\eta_{x}p_{xy}\right|^{m}\right)\] \[= \sum\limits_{y}\lambda_{y}\left(\sum\limits_{z}p_{zy}\right)^{m} \left(\left|\sum\limits_{x}\xi_{x}\frac{p_{xy}}{\sum\nolimits_{z}p_{zy}} \right|^{m}+\left|\sum\limits_{x}\eta_{x}\frac{p_{xy}}{\sum\nolimits_{z}p_{zy} }\right|^{m}\right)\] \[\leq \sum\limits_{y}\lambda_{y}\left(\sum\limits_{z}p_{zy}\right)^{m- 1}\sum\limits_{x}p_{xy}\left(\xi(x)^{m}+\eta(x)^{m}\right)\] \[\leq A^{m-1}M\sum\limits_{x}\lambda_{x}\left(\xi(x)^{m}+\eta(x)^{m} \right)=A^{m-1}M\left\|(\xi,\eta)\right\|_{\lambda,m}^{m},\]
where the inequality in the third line follows from Jensen's inequality, \(A=\sup_{y}\sum_{z}p_{zy}\), and \(M\) satisfies (2.4). This allows us to use the Hölder inequality (with \(p=m,\,q=m/(m-1)\)) on the right-hand side of (3.3) to get
\[\left|L^{(2)}\psi_{m}(\xi,\eta)\right|\leq\left(\kappa c_{m}\left[1+(A^{m-1}M) ^{(m-1)/m}\right]+c^{\prime}_{m}\right)\psi_{m}(\xi,\eta).\]
Now recall that for any \(n\), \(\left(M^{\psi_{m}}(t\wedge T_{n})\right)_{t\geq 0}\) is a martingale. Therefore,
\[\mathbb{E}\left[\psi_{m}(\xi_{t\wedge T_{n}},\eta_{t\wedge T_{n} })\right] = \psi_{m}(\xi^{0},\eta^{0})+\mathbb{E}\left[\int\limits_{0}^{t \wedge T_{n}}L^{(2)}\psi_{m}(\xi_{s},\eta_{s})ds\right]\] \[\leq \psi_{m}(\xi^{0},\eta^{0})+C_{m}\int\limits_{0}^{t}\mathbb{E} \left[\mathbf{1}_{\{s\leq T_{n}\}}\psi_{m}(\xi_{s},\eta_{s})\right]ds\] \[\leq \psi_{m}(\xi^{0},\eta^{0})+C_{m}\int\limits_{0}^{t}\mathbb{E} \left[\psi_{m}(\xi_{s\wedge T_{n}},\eta_{s\wedge T_{n}})\right]ds.\]
Thus, from Gronwall's lemma we get that
\[\mathbb{E}\left[\psi_{m}(\xi_{t\wedge T_{n}},\eta_{t\wedge T_{n}})\right]\leq \exp(C_{m}t)\psi_{m}(\xi^{0},\eta^{0}),\]
uniformly in \(n\). Inequality (3.1) follows from Fatou's lemma by letting \(n\to\ \infty\).
**d)** Let \(f\in\operatorname{Lip}(E_{1}\times E_{1})\). We wish to show that \(M^{f}\) is indeed a martingale. In order to do that, first we show that for any such \(f\) there is a constant \(C=C(\kappa,p,\sigma,\nu,f)\) such that
\[\left|L^{(2)}f(\xi,\eta)\right|\leq C\left\|(\xi,\eta)\right\|_{\lambda,2}^{2} \text{ for all }\xi,\eta\in E_{2}. \tag{3.4}\]
We decompose \(L^{(2)}f(\xi,\eta)\) into two parts corresponding to motion and branching mechanisms:
\[L^{(2)}f(\xi,\eta)=L_{RW}f(\xi,\eta)+L_{br}f(\xi,\eta),\]
where
\[L_{RW}f(\xi,\eta)= \kappa\sum_{x,y\in S}\xi(x)p_{xy}\left(f(\xi^{x\to y},\eta)-f(\xi, \eta)\right)\] \[+\kappa\sum_{x,y\in S}\eta(x)p_{xy}\left(f(\xi,\eta^{x\to y})-f(\xi, \eta)\right),\] \[L_{br}f(\xi,\eta)= \sum_{x\in S}\gamma\xi(x)\eta(x)\sum_{k\geq 0}\nu_{k}\left(f(\xi +(k-1)\delta_{x},\eta)-f(\xi,\eta)\right)\] \[+\sum_{x\in S}\gamma\xi(x)\eta(x)\sum_{k\geq 0}\nu_{k}\left(f(\xi,\eta+(k-1)\delta_{x})-f(\xi,\eta)\right).\]
Using the Lipschitz property of \(f\), we obtain
\[\left|L_{RW}f(\xi,\eta)\right| = \left|\sum_{x,y}\xi(x)p_{xy}\left(f(\xi^{(x,y)},\eta)-f(\xi,\eta)\right)\right.\] \[\left.+\sum_{x,y}\eta(x)p_{xy}\left(f(\xi,\eta^{(x,y)})-f(\xi, \eta)\right)\right|\] \[\leq C_{f}\sum_{x,y}\xi(x)p_{xy}\left\|(\xi^{(x,y)},\eta)-(\xi, \eta)\right\|_{\lambda,1}\] \[+C_{f}\sum_{x,y}\eta(x)p_{xy}\left\|(\xi,\eta^{(x,y)})-(\xi,\eta )\right\|_{\lambda,1}\] \[\leq C_{f}\sum_{x,y}\xi(x)p_{xy}(\lambda_{y}+\lambda_{x})+C_{f}\sum_{ x,y}\eta(x)p_{xy}(\lambda_{y}+\lambda_{x})\] \[\leq C_{f}(M+1)\left\|(\xi,\eta)\right\|_{\lambda,1}\leq C_{f}(M+1) \left\|(\xi,\eta)\right\|_{\lambda,2}^{2}\]
where in the last inequality we used \(\left\|(\xi,\eta)\right\|_{\lambda,1}\leq\left\|(\xi,\eta)\right\|_{\lambda,2} ^{2}\), which holds since the functions \(\left\{\xi(x)\right\}_{x\in S}\) and \(\left\{\eta(x)\right\}_{x\in S}\) are integer valued. Turning to \(L_{br}f(\xi,\eta)\) we get
\[\left|L_{br}f(\xi,\eta)\right| = \left|\sum_{x}\gamma\xi(x)\eta(x)\sum_{k}\nu_{k}\left(\left[f(\xi +(k-1)\delta_{x},\eta)-f(\xi,\eta)\right]\right.\right.\] \[\left.+\left[f(\xi,\eta+(k-1)\delta_{x})-f(\xi,\eta)\right]\right)\] \[\leq C_{f}\sum_{x}\gamma\xi(x)\eta(x)\sum_{k}\nu_{k}\left(2\left\|(k- 1)\delta_{x}\right\|_{\lambda,1}\right)\]
\[= \left(C_{f}\sum_{k}2\nu_{k}|k-1|\right)\sum_{x}\gamma\xi(x)\eta(x)\lambda_{x}\] \[\leq \gamma C_{f}\sum_{k\geq 0}2\nu_{k}|k-1|\cdot\left\|(\xi,\eta)\right\|_{\lambda,2}^{2},\]
where in the last inequality we used the fact that \(\xi(x)\eta(x)\leq\xi(x)^{2}+\eta(x)^{2}\). Thus (3.4) holds with
\[C:=C_{f}(M+1+\gamma\sum_{k\geq 0}2\nu_{k}|k-1|).\]
Consider a bounded \(f\in\operatorname{Lip}(E_{1}\times E_{1})\); then \(M^{f}\) is a local martingale, so for all \(t,h\geq 0\) we have
\[\mathbb{E}\left[\left.M^{f}\left((t+h)\wedge T_{n}\right)\right|\mathcal{F}_{t}\right]=M^{f}\left(t\wedge T_{n}\right).\]
The right-hand side converges to \(M^{f}(t)\) a.s., as \(n\to\infty\). Then
\[\mathbb{E}\left[\left.\int\limits_{0}^{(t+h)\wedge T_{n}}L^{(2)}f(\xi_{s}, \eta_{s})ds\right|\mathcal{F}_{t}\right]\to\mathbb{E}\left[\left.\int\limits _{0}^{t+h}L^{(2)}f(\xi_{s},\eta_{s})ds\right|\mathcal{F}_{t}\right],\ \text{ a.s},\]
as \(n\to\infty\). Here we used again the dominated convergence, since by (3.4)
\[\int\limits_{0}^{(t+h)\wedge T_{n}}L^{(2)}f(\xi_{s},\eta_{s})ds\leq C\int \limits_{0}^{t+h}\left\|(\xi_{s},\eta_{s})\right\|_{\lambda,2}^{2}ds\]
and the expectation on the right-hand side of the above inequality is bounded due to (3.1) and finite initial conditions, which are automatically in \(E_{2}\times E_{2}\). Thus \(M^{f}\) is indeed a martingale in the case of bounded \(f\in\operatorname{Lip}(E_{1}\times E_{1})\).
Next, consider \(f\in\operatorname{Lip}(E_{1}\times E_{1})\) which is non-negative, but not necessarily bounded. Define \(f_{n}(\xi,\eta):=f(\xi,\eta)\wedge n\). Note that \(f_{n}\) is bounded and \(f_{n}\in\operatorname{Lip}(E_{1}\times E_{1})\) with Lipschitz constant \(C_{f_{n}}\leq C_{f}\). As \(n\to\infty\), we have
\[M^{f_{n}}(t)\to M^{f}(t),\ \text{ a.s.}\] \[\mathbb{E}\left[f_{n}(\xi_{t+h},\eta_{t+h})\,|\,\mathcal{F}_{t} \right]\to\mathbb{E}\left[f(\xi_{t+h},\eta_{t+h})\,|\,\mathcal{F}_{t}\right], \ \text{a.s.}\]
by monotone convergence. Observe that \(\left|L^{(2)}f_{n}(\xi,\eta)\right|\leq C\left\|(\xi,\eta)\right\|_{\lambda,2}^ {2}\) uniformly in \(n\). We thus obtain
\[\mathbb{E}\left[\left.\int\limits_{0}^{t+h}L^{(2)}f_{n}(\xi_{s},\eta_{s})ds \right|\mathcal{F}_{t}\right]\to\mathbb{E}\left[\left.\int\limits_{0}^{t+h}L ^{(2)}f(\xi_{s},\eta_{s})ds\right|\mathcal{F}_{t}\right]\]
a.s. as \(n\to\infty\) by the dominated convergence theorem. Therefore \(M^{f}\) is a martingale for non-negative Lipschitz \(f\). For the general case we use the decomposition of \(f\in\operatorname{Lip}(E_{1}\times E_{1})\) as \(f=f^{+}-f^{-}\), where \(f^{+}:=\max(f,0)\) and \(f^{-}:=\max(-f,0)\).
The same proof holds for \(N^{f}\) too, since \(\frac{\partial}{\partial s}f\) is bounded by (3.2).
## 4. Proof of Theorem 2.5
The aim of this section is to prove Theorem 2.5.
Let \((\xi_{t},\eta_{t})\) be the mutually catalytic branching process described in Theorem 2.5, starting at \((\xi_{0},\eta_{0})\) with \(\langle\xi_{0},\mathbf{1}\rangle+\langle\eta_{0},\mathbf{1}\rangle<\infty\). Recall that \(X_{t}^{1}\), \(X_{t}^{2}\) denote the total size of each population at time \(t\):
\[X_{t}^{1}=\sum_{x\in\mathbb{Z}^{d}}\xi_{t}(x)\;\;\text{and}\;\;X_{t}^{2}=\sum_ {x\in\mathbb{Z}^{d}}\eta_{t}(x).\]
### Proof of Theorem 2.5(a) \(-\) Transient case
The proof is simple, and we omit the technical details. The observation is as follows: since the motion of particles is transient and the number of particles in the two populations is finite, there exists almost surely a finite time \(\hat{T}\) such that, if one suppresses the branching, the initial particles of different populations never meet after time \(\hat{T}\). On the other hand, due to the finiteness of the number of particles, the total branching rate in the system is finite, and thus there is a positive probability of the event that in the original particle system there is no branching event until time \(\hat{T}\). On this event, particles of different populations never meet after time \(\hat{T}\), and therefore there is a positive probability of survival of both populations.
### Proof of Theorem 2.5(b) \(-\) Recurrent case
We would like to show that
\[X_{\infty}^{1}\cdot X_{\infty}^{2}=0,\;\;\mathbb{P}-\text{a.s.}\]
First, recall why \(\lim_{t\to\infty}X_{t}^{1}X_{t}^{2}=X_{\infty}^{1}X_{\infty}^{2}\) exists. By Ito's formula it is easy to see that \(\left\{X_{t}^{1}X_{t}^{2}\right\}_{t\geq 0}\) is a non-negative local martingale, and hence a non-negative supermartingale. By the martingale convergence theorem, non-negative supermartingales converge a.s. as time goes to infinity. Hence,
\[\lim_{t\to\infty}X_{t}^{1}X_{t}^{2}=X_{\infty}^{1}X_{\infty}^{2},\;\;\mathbb{ P}-\text{a.s.}\]
Also note that \(\left\{X_{t}^{1}X_{t}^{2}\right\}_{t\geq 0}\) is an integer-valued supermartingale. Therefore there exists a random time \(T_{0}\) such that
\[X_{t}^{1}X_{t}^{2}=X_{\infty}^{1}X_{\infty}^{2}\;\text{for all}\;t\geq T_{0}. \tag{4.1}\]
Now assume that \(X_{\infty}^{1}X_{\infty}^{2}>0\), that is, \(X_{t}^{1}>0\) and \(X_{t}^{2}>0\) for \(t\geq T_{0}\). Since the motion is recurrent, with probability one the two populations "meet" at some site after time \(T_{0}\). Moreover, on the event \(\left\{X_{\infty}^{1}X_{\infty}^{2}>0\right\}\), by recurrence, after time \(T_{0}\) the two populations spend an infinite amount of time "together". Since the branching rate is at least \(\gamma>0\) whenever particles of the two populations occupy the same site, eventually a branching event happens with probability one. However, this contradicts (4.1). Therefore, \(X_{t}^{1}=0\) or \(X_{t}^{2}=0\) for all \(t\geq T_{0}\), that is, one of the populations becomes extinct, and coexistence is not possible.
## 5. Moment computations for \(S=\Lambda_{n}\)
In this section we derive some useful moment estimates for \((\xi_{t},\eta_{t})\) solving (2.3) in the case of \(S=\Lambda_{n}\) for arbitrary \(n\geq 1\) (recall that \(\Lambda_{n}\) is the torus defined in Section 1.1). These estimates will be essential for the proof of Theorem 2.6 in Section 6.
To simplify notation we suppress the dependence on "\(n\)". Throughout the section the motion process for the mutually catalytic process \((\xi_{t},\eta_{t})\) is the nearest neighbor random walk on \(S=\Lambda_{n}\). Its transition semigroup will be denoted by \(\left\{P_{t}\right\}_{t\geq 0}\), its transition densities by \(\left\{p_{t}(\cdot,\cdot)\right\}_{t\geq 0}\), and its \(Q\)-matrix by \(Q\).
**Lemma 5.1**.: _Assume that \(S=\Lambda_{n}\). Let \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\). If \(\phi,\,\psi:S\to\mathbb{R}_{+}\), then_
\[\left\langle\xi_{t},\phi\right\rangle=\left\langle\xi_{0},P_{t}\phi\right\rangle +N_{t}^{\xi}(t,\phi),\quad\left\langle\eta_{t},\psi\right\rangle=\left\langle \eta_{0},P_{t}\psi\right\rangle+N_{t}^{\eta}(t,\psi),\]
_where_
\[N_{s}^{\xi}(t,\phi)= \sum_{x\in S}\left(\sum_{y\neq x}\left\{\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\{\xi_{r-}(y)\geq u\}}N_{y,x}^{RW_{\xi}}(drdu)\right.\right.\] \[\left.\left.-\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\{\xi_{r-}(x)\geq u\}}N_{x,y}^{RW_{\xi}}(drdu)\right\}-\int\limits_{0}^{s}\xi_{r}Q(x)P_{t-r}\phi(x)dr\right)\] \[+\sum_{x\in S}\sum_{k\geq 0}(k-1)\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\left\{\gamma\eta_{r-}(x)\xi_{r-}(x)\geq u\right\}}N_{x,k}^{br_{\xi}}(drdu),\,\,\,s\leq t,\]
_and_
\[N_{s}^{\eta}(t,\phi) = \sum_{x\in S}\left(\sum_{y\neq x}\left\{\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\{\eta_{r-}(y)\geq u\}}N_{y,x}^{RW_{\eta}}(drdu)\right.\right.\] \[\left.\left.-\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\{\eta_{r-}(x)\geq u\}}N_{x,y}^{RW_{\eta}}(drdu)\right\}-\int\limits_{0}^{s}\eta_{r}Q(x)P_{t-r}\phi(x)dr\right)\] \[+\sum_{x\in S}\sum_{k\geq 0}(k-1)\int\limits_{0}^{s}\int\limits_{\mathbb{R}_{+}}P_{t-r}\phi(x)1_{\left\{\gamma\xi_{r-}(x)\eta_{r-}(x)\geq u\right\}}N_{x,k}^{br_{\eta}}(drdu),\,\,\,s\leq t\]
_are orthogonal square-integrable \(\mathcal{F}_{s}\)-martingales on \(s\in[0,t]\) (the series converge in \(L^{2}\) uniformly in \(s\leq t\)) with quadratic variations_
\[\left\langle N_{\cdot}^{\xi}(t,\phi)\right\rangle_{s}= \kappa\sum_{y\in S}\int\limits_{0}^{s}\xi_{r-}(y)\mathbb{E}\left[ \left(P_{t-r}\phi(Z+y)-P_{t-r}\phi(y)\right)^{2}\right]dr+\] \[+\sigma^{2}\gamma\left(\sum_{x\in S}\int\limits_{0}^{s}\left(P_{ t-r}\phi(x)\right)^{2}\xi_{r-}(x)\eta_{r-}(x)dr\right),\]
_and_
\[\left\langle N_{\cdot}^{\eta}(t,\phi)\right\rangle_{s} = \kappa\sum_{y\in S}\int\limits_{0}^{s}\eta_{r-}(y)\mathbb{E}\left[ \left(P_{t-r}\phi(Z+y)-P_{t-r}\phi(y)\right)^{2}\right]dr+\] \[+ \sigma^{2}\gamma\left(\sum_{x\in S}\int\limits_{0}^{s}\left(P_{t- r}\phi(x)\right)^{2}\xi_{r-}(x)\eta_{r-}(x)dr\right).\]
_Here \(Z\) is the random variable distributed as a jump of the nearest neighbor random walk._
Proof.: The proof goes through an application of Lemma 3.1 and Ito's formula to functions of the form \(f(s,\xi_{s},\eta_{s})=\left\langle\xi_{s},P_{t-s}\phi\right\rangle\) and \(f(s,\xi_{s},\eta_{s})=\left\langle\eta_{s},P_{t-s}\phi\right\rangle\). The proof is standard and we leave the details to the enthusiastic reader.
Finally, the orthogonality of the martingales \(N_{\cdot}^{\xi}(t,\varphi)\) and \(N_{\cdot}^{\eta}(t,\psi)\) follows from the independence of the driving Poisson point processes.
**Corollary 5.2**.: _Assume \(S=\Lambda_{n}\). Let \((\xi_{0},\eta_{0})\in E_{fin}\times E_{fin}\). If \(\phi,\psi:S\to\mathbb{R}_{+}\), then_
\[\mathbb{E}\left(\left\langle\xi_{t},\phi\right\rangle\right)=\left\langle\xi_ {0},P_{t}\phi\right\rangle,\ \ \mathbb{E}\left(\left\langle\eta_{t},\psi\right\rangle\right)=\left\langle\eta _{0},P_{t}\psi\right\rangle,\ \forall t\geq 0, \tag{5.1}\]
_and_
\[\mathbb{E}\left(\left\langle\xi_{t},\phi\right\rangle\left\langle\eta_{t}, \psi\right\rangle\right)=\left\langle\xi_{0},P_{t}\phi\right\rangle\left\langle \eta_{0},P_{t}\psi\right\rangle,\ \forall t\geq 0. \tag{5.2}\]
Proof.: For non-negative \(\phi\) and \(\psi\) with finite support, (5.1) follows immediately from Lemma 5.1. For general non-negative \(\phi\) and \(\psi\), (5.1) then follows by the monotone convergence theorem.
As for (5.2), first for \(\phi\) and \(\psi\) with finite support, from Lemma 5.1 we get
\[\mathbb{E}\left(\left\langle\xi_{t},\phi\right\rangle\left\langle \eta_{t},\psi\right\rangle\right) = \mathbb{E}\left(\left[\left\langle\xi_{0},P_{t}\phi\right\rangle+ N_{t}^{\xi}(t,\phi)\right]\left[\left\langle\eta_{0},P_{t}\psi\right\rangle+N_{t}^{ \eta}(t,\psi)\right]\right)\] \[= \left\langle\xi_{0},P_{t}\phi\right\rangle\left\langle\eta_{0},P_ {t}\psi\right\rangle+\left\langle\eta_{0},P_{t}\psi\right\rangle\mathbb{E} \left(N_{t}^{\xi}(t,\phi)\right)\] \[+\left\langle\xi_{0},P_{t}\phi\right\rangle\mathbb{E}\left(N_{t}^ {\eta}(t,\psi)\right)+\mathbb{E}\left(N_{t}^{\xi}(t,\phi)N_{t}^{\eta}(t,\psi)\right)\] \[= \left\langle\xi_{0},P_{t}\phi\right\rangle\left\langle\eta_{0},P_ {t}\psi\right\rangle,\]
where the second and the third terms on the right-hand side are equal to zero since \(N_{t}^{\xi}(t,\phi)\) and \(N_{t}^{\eta}(t,\psi)\) are martingales, and the last term vanishes because of the orthogonality of \(N_{t}^{\xi}(t,\phi)\) and \(N_{t}^{\eta}(t,\psi)\). Then for general non-negative \(\phi\) and \(\psi\) the result follows by the monotone convergence theorem.
Let \(B\subseteq S\) be an arbitrary finite (bounded) subset, and let \(|B|\) denote the number of sites in \(B\).
Now we are ready to compute the expected value and variance of the number of particles (for each population separately) at a site \(x\in S\) and on a set \(B\).
**Lemma 5.3**.: _Assume \(S=\Lambda_{n}\). Let \(\xi_{0}(x)\equiv v,\ \eta_{0}(x)\equiv u\ \,\forall x\in S\). Then,_
\[\mathbb{E}(\xi_{t}(x))=v,\ \ \text{and}\ \ \mathbb{E}(\eta_{t}(x))=u,\ \ \forall x\in S,\,\forall t\geq 0.\]
_Moreover, for any finite \(B\subset S\),_
\[\mathbb{E}(\xi_{t}(B))=v\left|B\right|,\ \ \text{and}\ \ \mathbb{E}(\eta_{t}(B))=u \left|B\right|,\ \ \forall t>0.\]
Proof.: The proof follows easily from Lemma 5.1, and thus is omitted.
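For instance, for the first claim, taking \(\phi=\delta_{x}\) in (5.1) and using that the symmetric nearest neighbor walk on \(\Lambda_{n}\) has a doubly stochastic transition kernel gives

\[\mathbb{E}(\xi_{t}(x))=\left\langle\xi_{0},P_{t}\delta_{x}\right\rangle=v\sum_{y\in S}p_{t}(y,x)=v,\]

and summing over \(x\in B\) yields \(\mathbb{E}(\xi_{t}(B))=v\left|B\right|\); the statements for \(\eta\) are obtained in the same way.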
Before we treat the second moments of \(\xi_{t}(x)\) and \(\eta_{t}(x)\), let us prove a simple technical lemma. Recall that \(g_{t}(\cdot,\cdot)\) is the Green function defined in (2.2) (for the nearest neighbor random walk on \(S=\Lambda_{n}\)). Note that whenever the motion process is the nearest neighbor random walk, we have \(g_{t}(x,y)=g_{t}(x-y)\) (with certain abuse of notation).
**Lemma 5.4**.: _Assume \(S=\Lambda_{n}\), and the motion process is a nearest neighbor random walk on \(S\). For every \(x\in S\):_
\[g_{t}(x)-\frac{1}{2d}\sum_{i=1}^{d}\left[g_{t}(x+e_{i})+g_{t}(x-e_{i})\right]= \frac{1}{\kappa}(\delta_{x,0}-p_{t}(x)),\ \forall t\geq 0,\]
_where \(\delta_{x,y}=1\) iff \(x=y\) and \(\delta_{x,y}=0\) otherwise._
Proof.: The proof follows a standard procedure, using the evolution equation for the transition densities of the continuous-time nearest neighbor random walk, and is therefore omitted.
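For the reader's convenience, here is a sketch of the omitted computation, under the standing assumption (consistent with the factor \(1/\kappa\) in the statement) that the walk jumps at total rate \(\kappa\) to a uniformly chosen nearest neighbor, and with \(g_{t}(x)=\int_{0}^{t}p_{s}(x)ds\) as used in (5.5) below. The Kolmogorov forward equation reads

\[\frac{\partial}{\partial s}p_{s}(x)=\frac{\kappa}{2d}\sum_{i=1}^{d}\left[p_{s}(x+e_{i})+p_{s}(x-e_{i})\right]-\kappa p_{s}(x),\]

and integrating over \(s\in[0,t]\) with \(p_{0}(x)=\delta_{x,0}\) gives

\[p_{t}(x)-\delta_{x,0}=\kappa\left(\frac{1}{2d}\sum_{i=1}^{d}\left[g_{t}(x+e_{i})+g_{t}(x-e_{i})\right]-g_{t}(x)\right),\]

which is the claimed identity after rearranging.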
Now we are ready to handle the second moments of \(\xi_{t}(x)\) and \(\eta_{t}(x)\).
**Lemma 5.5**.: _Assume \(S=\Lambda_{n}\). Let \(\xi_{0}(x)\equiv v\), \(\eta_{0}(x)\equiv u\) for all \(x\in S\). Then, for all \(t\geq 0\),_
\[\mathbb{E}(\xi_{t}(x)^{2})=v^{2}+\frac{1}{2}\sigma^{2}\gamma uvg_{2t}(0)+v(1-p_{2t}(0)), \tag{5.3}\]
_and_
\[\mathbb{E}(\eta_{t}(x)^{2})=u^{2}+\frac{1}{2}\sigma^{2}\gamma uvg_{2t}(0)+u(1- p_{2t}(0)). \tag{5.4}\]
Proof.: We will prove only (5.3), since the proof of (5.4) is the same. Again we use the representation of the process from Lemma 5.1, with \(\phi\left(\cdot\right)=\delta_{x}\left(\cdot\right)\), and use the notation \(\phi_{r}(\cdot)=P_{t-r}\delta_{x}(\cdot)=p_{t-r}(\cdot-x)\).
For \(0\leq s\leq t\) denote
\[N_{s}^{\xi}(y)=N_{s}^{\xi}(t,y)=N_{s}^{\xi}(t,\delta_{y}).\]
\[\mathbb{E}(\xi_{t}(x)^{2}) = \left(P_{t}\xi_{0}(x)\right)^{2}+\mathbb{E}\left(\left(N_{t}^{\xi}(x)\right)^{2}\right)\] \[= v^{2}+\mathbb{E}\left(\left\langle N_{\cdot}^{\xi}(x)\right\rangle_{t}\right)\] \[= v^{2}+v\kappa\sum_{y\in S}\int\limits_{0}^{t}\sum_{j\in S}(\phi_{r}(j)-\phi_{r}(y))^{2}p_{y,j}\,dr\] \[+\sigma^{2}\gamma uv\sum_{y\in S}\int\limits_{0}^{t}\phi_{r}(y)^{2}dr\]
\[=: v^{2}+v\kappa J_{1}(t)+\sigma^{2}\gamma uvJ_{2}(t), t\geq 0\]
where in the last two equalities we used Lemma 5.1, the Fubini theorem, and the fact (see Corollary 5.2) that
\[\mathbb{E}(\xi_{r-}(y)\eta_{r-}(y))=P_{t}\xi_{0}(y)P_{t}\eta_{0}(y).\]
Now we will compute each term separately.
First, let us evaluate \(J_{2}(t)\). Recall that \(\phi_{r}(y)=p_{t-r}(y-x)\). Then we have
\[J_{2}(t)=\int\limits_{0}^{t}\sum\limits_{y\in S}p_{t-r}(y-x)^{2}dr=\int\limits _{0}^{t}p_{2(t-r)}(0)dr=\frac{1}{2}\int\limits_{0}^{2t}p_{\tau}(0)d\tau=0.5g_{2 t}(0), \tag{5.5}\]
where \(p_{s}(x,x)=p_{s}(0,0)=p_{s}(0)\), for all \(x\in S\).
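For completeness, the middle equality in (5.5) is the usual Chapman-Kolmogorov argument combined with the symmetry \(p_{s}(z)=p_{s}(-z)\):

\[\sum_{y\in S}p_{t-r}(y-x)^{2}=\sum_{y\in S}p_{t-r}(x,y)\,p_{t-r}(y,x)=p_{2(t-r)}(x,x)=p_{2(t-r)}(0).\]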
Now we will handle \(J_{1}(t)\):
\[J_{1}(t) =\sum\limits_{y\in S}\int\limits_{0}^{t}\sum\limits_{j\in S}(\phi _{r}(j)-\phi_{r}(y))^{2}p_{y,j}dr\] \[=\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t- r}(j-x)^{2}p_{y,j}dr+\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t- r}(y-x)^{2}p_{y,j}dr\] \[-2\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{ t-r}(j-x)p_{t-r}(y-x)p_{y,j}dr.\]
We will treat each of the three terms above separately. For the first term we have
\[\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t-r}(j,x)^{2}p_ {y,j}dr=\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t-r}(j,x)^{2}dr=0.5g_{2t}(0) \tag{5.6}\]
where the last equality follows as in (5.5).
Similarly we get
\[\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t-r}(y,x)^{2}p_ {y,j}dr=0.5g_{2t}(0). \tag{5.7}\]
Finally it is easy to obtain
\[\sum\limits_{y\in S}\sum\limits_{j\in S}\int\limits_{0}^{t}p_{t-r }(j,x)p_{t-r}(y,x)p_{y,j}dr\\ =0.5\frac{1}{2d}\sum\limits_{i=1}^{d}\left[g_{2t}(e_{i})+g_{2t}(- e_{i})\right]=\frac{1}{2d}\sum\limits_{i=1}^{d}g_{2t}(e_{i}). \tag{5.8}\]
By putting (5.6), (5.7), (5.8) and (5.5) together we have
\[\mathbb{E}(\xi_{t}(x)^{2})=v^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(0)+v\kappa\left(g_{2t}(0)-\frac{1}{2d}\sum_{i=1}^{d}\left[g_{2t}(e_{i})+g_{2t}(-e_{i})\right]\right),\ t\geq 0,\ x\in S. \tag{5.9}\]
Now use (5.9) and Lemma 5.4 with \(x=0\) to get
\[\mathbb{E}(\xi_{t}(x)^{2})=v^{2}+\frac{1}{2}\sigma^{2}\gamma uvg_{2t}(0)+v(1- p_{2t}(0)),\ \ \forall t\geq 0,x\in S.\]
We will also need to evaluate \(\mathbb{E}(\xi_{t}(x)\xi_{t}(y))\) for \(x\neq y\). To this end we will prove the following lemma.
**Lemma 5.6**.: _Assume \(S=\Lambda_{n}\). Let \(x\neq y\), then_
\[\mathbb{E}(\xi_{t}(x)\xi_{t}(y))=v^{2}-vp_{2t}(x-y)+\frac{1}{2}\sigma^{2} \gamma uvg_{2t}(x-y),\ \ \forall t\geq 0,\]
_and_
\[\mathbb{E}(\eta_{t}(x)\eta_{t}(y))=u^{2}-up_{2t}(x-y)+\frac{1}{2}\sigma^{2} \gamma uvg_{2t}(x-y),\ \ \forall t\geq 0.\]
Proof.: The proof goes along the same lines as the proof of Lemma 5.5 and thus is omitted.
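In brief: writing \(\xi_{t}(x)=v+N_{t}^{\xi}(t,\delta_{x})\) and polarizing the quadratic variation from Lemma 5.1, the same computation as in the proof of Lemma 5.5 gives

\[\mathbb{E}(\xi_{t}(x)\xi_{t}(y))=v^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(x-y)+v\kappa\left(g_{2t}(x-y)-\frac{1}{2d}\sum_{i=1}^{d}\left[g_{2t}(x-y+e_{i})+g_{2t}(x-y-e_{i})\right]\right),\]

and applying Lemma 5.4 at the point \(x-y\neq 0\) turns the last bracket into \(-\frac{1}{\kappa}p_{2t}(x-y)\), which yields the stated formula.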
## 6. Proof of Theorem 2.6
Let \((\xi_{t}^{n},\eta_{t}^{n})\) be a pair of processes solving (2.3) with site space \(S=\Lambda_{n}\), where \(N_{x,y}^{RW_{\xi}}\), \(N_{x,y}^{RW_{\eta}}\) are Poisson point processes with intensity measure \(q^{n}(x,y)ds\otimes du\) and \(q^{n}\) is defined by (1.2). Here \(\left\{p_{x,y}^{n}\right\}_{x,y\in\Lambda_{n}}\) are the transition probabilities of the random walk, and \(\left\{P_{t}^{n}\right\}_{t\geq 0}\) is the associated semigroup. In what follows we assume \(d\geq 3\).
Fix \(\theta_{1},\theta_{2}>0\). Assume the following initial conditions for \((\xi_{t}^{n},\eta_{t}^{n})\) :
\[\xi_{0}^{n}(x)=\theta_{1},\quad\eta_{0}^{n}(x)=\theta_{2},\quad\forall x\in\Lambda_{n}.\]
Set
\[\boldsymbol{\xi}_{t}^{n}=\sum_{j\in\Lambda_{n}}\xi_{t}^{n}(j),\quad\boldsymbol{\eta}_{t}^{n}=\sum_{j\in\Lambda_{n}}\eta_{t}^{n}(j).\]
We define the following time change:
\[\beta_{n}(t)=|\Lambda_{n}|\,t,\,\ t\geq 0.\]
Theorem 2.6 identifies the limiting distribution of
\[\frac{1}{|\Lambda_{n}|}\left(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{ \eta}_{\beta_{n}(t)}^{n}\right),\]
as \(n\to\infty\), for all \(t\geq 0\).
In Section 1 we defined a system of Dawson-Perkins processes \((U_{t}^{n},V_{t}^{n})_{t\geq 0}\) on \(\Lambda_{n}\), that solves (1.3). Recall that
\[\mathbf{U}_{t}^{n}=\sum_{i\in\Lambda_{n}}u_{t}^{n}(i),\quad\mathbf{V}_{t}^{n}= \sum_{i\in\Lambda_{n}}v_{t}^{n}(i).\]
The limiting behavior of \((\mathbf{U}_{t}^{n},\mathbf{V}_{t}^{n})_{t\geq 0}\) was studied in [6]; we stated the result in Theorem 1.1.
Theorem 2.6 claims that the limiting behavior of \(\frac{1}{|\Lambda_{n}|}\left(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{\eta}_{\beta_{n}(t)}^{n}\right)\) is similar to that of \(\frac{1}{|\Lambda_{n}|}\left(\mathbf{U}_{\beta_{n}(t)}^{n},\mathbf{V}_{\beta_{n}(t)}^{n}\right)\) for every \(t\geq 0\). As we have mentioned above, in contrast to the Dawson-Perkins processes solving equation (1.3), the useful self-duality property does not hold for our branching particle model. However, we use the so-called approximate duality technique, which allows us to prove Theorem 2.6.
In what follows we will use a periodic sum on \(\Lambda_{n}\): for \(x,y\in\Lambda_{n}\) we have \(x+y=(x+y)\mod\Lambda_{n}\in\Lambda_{n}\).
The next proposition is crucial for the proof of Theorem 2.6.
**Proposition 6.1**.: _Let \((X_{t},Y_{t})_{t\geq 0}\) be the solution to (2.5). Then for all \(a,b\geq 0\),_
\[\lim_{n\to\infty}\mathbb{E}\left(e^{-\frac{1}{|\Lambda_{n}|}( \boldsymbol{\xi}_{\beta_{n}(t)}^{n}+\boldsymbol{\eta}_{\beta_{n}(t)}^{n})(a+b) -i\frac{1}{|\Lambda_{n}|}(\boldsymbol{\xi}_{\beta_{n}(t)}^{n}-\boldsymbol{ \eta}_{\beta_{n}(t)}^{n})(a-b)}\right)\] \[=\mathbb{E}\left(e^{-(X_{t}+Y_{t})(a+b)-i(X_{t}-Y_{t})(a-b)} \right),\]
_for all \(t\geq 0\)._
**Proof of Theorem 2.6.** By easy adaptation of Lemma 2.5 of [30] one gets that the mixed Laplace-Fourier transform
\[\mathbb{E}\left(e^{-(X+Y)(a+b)-i(X-Y)(a-b)}\right),\,\,\,a,b\geq 0,\]
determines the distribution of non-negative two-dimensional random variables \((X,Y)\). Therefore, Theorem 2.6 follows easily from Proposition 6.1 and the properties of weak convergence.
The rest of the section is organized as follows. Section 6.1 is devoted to the proof of Proposition 6.1, and the proof of one of the technical propositions is deferred to Section 6.2.
### Proof of Proposition 6.1
In what follows fix \(T>0\). Let \((\xi_{t}^{n},\eta_{t}^{n})_{t\geq 0}\) be a mutually catalytic branching random walk from Theorem 2.6 (we will refer to it as the "discrete process"). In the proof of the proposition we will use the duality technique introduced in [30]. To this end we will need the following Dawson-Perkins processes:
* Let \((u_{t}^{n},v_{t}^{n})_{t\geq 0}\) be a solution to (1.3), with \(Q\) being the \(Q\)-matrix of the nearest neighbor random walk, and with some initial conditions \((u_{0},v_{0})\).
* For arbitrary \(a,b>0\), the sequence \((\tilde{u}_{t}^{n},\tilde{v}_{t}^{n})_{t\geq 0}\) solving (1.3) with initial conditions (6.1) \[\tilde{u}_{0}^{n}(x)=\frac{a}{|\Lambda_{n}|},\,\,\,\tilde{v}_{0}^{n}(x)=\frac{ b}{|\Lambda_{n}|}\,\,\,\text{for every $x\in\Lambda_{n}$.}\]
In what follows we assume that \((u_{t}^{n},v_{t}^{n})_{t\geq 0}\,,\,(\tilde{u}_{t}^{n},\tilde{v}_{t}^{n})_{t \geq 0}\) and \((\xi_{t}^{n},\eta_{t}^{n})_{t\geq 0}\) are independent. Now let us describe the state spaces for the processes involved in this section.
Similarly to \(E_{fin}\) define \(E_{fin}^{n}=\{f:\Lambda_{n}\longrightarrow\mathbb{N}_{0}\}\), and \(E_{fin,con}^{n}=E_{fin}^{n}\times E_{fin}^{n}\). Clearly, since \(\Lambda_{n}\) is finite, the \(L^{1}\) norm of functions in \(E_{fin}^{n}\) is finite.
First, by Theorem 2.3, the process \((\xi_{t}^{n},\eta_{t}^{n})\) that solves (2.3) with initial conditions \((\xi_{0}^{n},\eta_{0}^{n})=\boldsymbol{\bar{\theta}}\) is an \(E_{fin}^{n}\times E_{fin}^{n}\)-valued process. By our definition (6.1), \((\tilde{u}_{0}^{n},\tilde{v}_{0}^{n})\in E_{fin,con}^{n}\). Moreover, by a simple adaptation of the proof of Theorem 2.2(d) in [14] to our state space \(\Lambda_{n}\), we get
\[(\tilde{u}_{t}^{n},\tilde{v}_{t}^{n})\in E_{fin,con}^{n},\,\,\forall t\geq 0.\]
For \(\left(x,y,z,w\right)\in\mathbb{N}_{0}^{\Lambda_{n}}\times\mathbb{N}_{0}^{\Lambda_ {n}}\times\mathbb{R}_{+}^{\Lambda_{n}}\times\mathbb{R}_{+}^{\Lambda_{n}}\) define
\[H\left(x,y,z,w\right)=e^{-\left\langle x+y,z+w\right\rangle-i\left\langle x-y,z- w\right\rangle},\]
and
\[F_{t,s}^{n} = \mathbb{E}\left[H\left(\xi_{t}^{n},\eta_{t}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\right]\] \[= \mathbb{E}\left[e^{-\left\langle\xi_{t}^{n}+\eta_{t}^{n},\tilde{ u}_{s}^{n}+\tilde{v}_{s}^{n}\right\rangle-i\left\langle\xi_{t}^{n}-\eta_{t}^{n}, \tilde{u}_{s}^{n}-\tilde{v}_{s}^{n}\right\rangle}\right],\]
for \(0\leq s,t\leq\beta_{n}(T)\).
Let us recall the self-duality lemma from [6] (Lemma 4.1 in [6]).
**Lemma 6.2**.: _Let \(\left(u_{0},v_{0}\right),\left(\tilde{u}_{0},\tilde{v}_{0}\right)\in E_{fin, con}^{n}\), where \(\left(u_{t},v_{t}\right)_{t\geq 0},\left(\tilde{u}_{t},\tilde{v}_{t}\right)_{t \geq 0}\) are independent solutions of (1.3). Then_
\[\mathbb{E}\left(H(u_{t},v_{t},\tilde{u}_{0},\tilde{v}_{0})\right)=\mathbb{E} \left(H(u_{0},v_{0},\tilde{u}_{t},\tilde{v}_{t})\right).\]
_Remark 6.3_.: In [6] the above lemma is proved for more general state spaces and initial conditions. The conditions in Lemma 4.1 in [6] hold trivially in our case.
Then we have the following proposition.
**Proposition 6.4**.: _For any \(\theta_{1},\theta_{2}>0\), \(a,b\geq 0\),_
\[\lim_{n\to\infty}\mathbb{E}\left[F_{\beta_{n}(T),0}^{n}\right]=\lim_{n\to \infty}\mathbb{E}\left[F_{0,\beta_{n}(T)}^{n}\right].\]
Proof.: The proof is postponed to the end of this section; it is established via a series of auxiliary results.
Given Proposition 6.4, it is easy to complete the
Proof of Proposition 6.1.: Fix arbitrary \(\theta_{1},\theta_{2}>0\) and \(a,b\geq 0\). For any \(n\geq 1\), let \(\left(u_{t}^{n},v_{t}^{n}\right)_{t\geq 0}\) be the solution to (1.3) with \(Q^{n}\) being a \(Q\)-matrix of the nearest neighbor random walk on \(\Lambda_{n}\), and initial conditions \(\left(u_{0}^{n},v_{0}^{n}\right)=\left(\xi_{0}^{n},\eta_{0}^{n}\right)= \boldsymbol{\bar{\theta}}\). Recall that
\[\boldsymbol{U}_{t}^{n}=\sum_{x\in\Lambda_{n}}u_{t}^{n}(x),\ \boldsymbol{V}_{t}^{n}=\sum_{x\in\Lambda_{n}}v_{t}^{n}(x).\]
Note that
\[\lim_{n\to\infty}\mathbb{E}\left[F_{0,\beta_{n}(T)}^{n}\right] =\lim_{n\to\infty}\mathbb{E}\left(e^{-\left\langle\xi_{0}^{n}+\eta _{0}^{n},\tilde{u}_{\beta_{n}(T)}^{n}+\tilde{v}_{\beta_{n}(T)}^{n}\right\rangle -i\left\langle\xi_{0}^{n}-\eta_{0}^{n},\tilde{u}_{\beta_{n}(T)}^{n}-\tilde{v} _{\beta_{n}(T)}^{n}\right\rangle}\right)\] \[=\lim_{n\to\infty}\mathbb{E}\left(e^{-\left(\mathbf{U}_{\beta_{n }(T)}^{n}+\mathbf{V}_{\beta_{n}(T)}^{n}\right)\frac{1}{|\Lambda_{n}|}(a+b)-i \left(\mathbf{U}_{\beta_{n}(T)}^{n}-\mathbf{V}_{\beta_{n}(T)}^{n}\right) \frac{1}{|\Lambda_{n}|}(a-b)}\right) \tag{6.2}\] \[= \mathbb{E}\left(e^{-\left(X_{T}+Y_{T}\right)(a+b)-i\left(X_{T}-Y_ {T}\right)(a-b)}\right),\]
where the second equality follows by a self-duality relation in Lemma 6.2, and the third equality follows by Theorem 1.1. This means that
\[\lim_{n\to\infty}\mathbb{E}\left(e^{-\left(\boldsymbol{\xi}_{\beta(T)}^{n}+ \boldsymbol{\eta}_{\beta_{n}(T)}^{n}\right)\frac{1}{|\Lambda_{n}|}(a+b)-i( \boldsymbol{\xi}_{\beta_{n}(T)}^{n}-\boldsymbol{\eta}_{\beta_{n}(T)}^{n})\frac {1}{|\Lambda_{n}|}(a-b)}\right)\] \[= \lim_{n\to\infty}\mathbb{E}\left[F_{\beta_{n}(T),0}^{n}\right]= \lim_{n\to\infty}\mathbb{E}\left[F_{0,\beta_{n}(T)}^{n}\right]\] \[= \mathbb{E}\left(e^{-\left(X_{T}+Y_{T}\right)(a+b)-i\left(X_{T}-Y_ {T}\right)(a-b)}\right),\]
where the second equality follows by Proposition 6.4, and the last equality follows by (6.2). This finishes the proof of Proposition 6.1.
To prove Proposition 6.4 we will need other results. First we need Lemma 4.10 from [19].
**Lemma 6.5** ( Lemma 4.10 in [19]).: _Suppose a function \(f(s,t)\) on \([0,\infty)\times[0,\infty)\) is absolutely continuous in \(s\) for each fixed \(t\) and absolutely continuous in \(t\) for each fixed \(s\). Set \((f_{1},f_{2})\equiv\nabla f\), and assume that_
\[\int\limits_{0}^{T}\int\limits_{0}^{T}\left|f_{i}(s,t)\right|dsdt<\infty,\ i=1,2,\ \forall T>0.\]
_Then for almost every \(t\geq 0\),_
\[f\left(t,0\right)-f\left(0,t\right)=\int\limits_{0}^{t}\left(f_{1}\left(s,t-s \right)-f_{2}\left(s,t-s\right)\right)ds. \tag{6.3}\]
We will apply this lemma to the function \(F_{r,s}^{n}=\mathbb{E}\left[H\left(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\right]\). Then we will show that, for \(f(r,s)=F_{r,s}^{n}\) and \(t=\beta_{n}(T)\), the right-hand side of (6.3) tends to \(0\) as \(n\to\infty\).
In order to check the conditions in Lemma 6.5 we will need several lemmas. In the next two lemmas we will derive martingale problems for processes \((\xi_{\cdot}^{n},\eta_{\cdot}^{n})\) and \((\tilde{u}_{\cdot}^{n},\tilde{v}_{\cdot}^{n})\). Recall that \(p_{x,y}^{n}\) are transition jump probabilities of nearest neighbor random walk on \(\Lambda_{n}\).
**Lemma 6.6**.: _For any \((\varphi,\psi)\in E_{fin,con}^{n}\) define_
\[g\left(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi\right)=H\left(\xi_ {s}^{n},\eta_{s}^{n},\varphi,\psi\right)\left\{\kappa\sum\limits_{x,y\in \Lambda_{n}}\xi_{s}^{n}(x)\right.\] \[\times p_{xy}^{n}\left[e^{-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)- i(\varphi(y)-\psi(y)-\varphi(x)+\psi(x))}-1\right]\] \[+\kappa\sum\limits_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^{n} \left[e^{-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+i(\varphi(y)-\psi(y)-\varphi(x )+\psi(x))}-1\right]\] \[+\gamma\sum\limits_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x )\sum\limits_{k\geq 0}\nu_{k}\left[e^{-(k-1)\left(\varphi(x)+\psi(x)+i(\varphi(x)- \psi(x))\right)}-1\right]\] \[\left.+\gamma\sum\limits_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^ {n}(x)\sum\limits_{k\geq 0}\nu_{k}\left[e^{-(k-1)\left(\varphi(x)+\psi(x)-i( \varphi(x)-\psi(x))\right)}-1\right]\right\},\ \forall s\geq 0. \tag{6.4}\]
_Then_
\[H\left(\xi_{t}^{n},\eta_{t}^{n},\varphi,\psi\right)-\int\limits_{0}^{t}g\left( \xi_{s}^{n},\eta_{s}^{n},\varphi,\psi\right)ds,\ \ \forall t\geq 0.\]
_is an \(\left\{\mathcal{F}_{t}^{\xi,\eta}\right\}_{t\geq 0}\)-martingale._
Proof.: The result follows immediately from Lemma 3.1(c).
A similar result holds for the Dawson-Perkins process.
**Lemma 6.7**.: _For any \(\left(\varphi,\psi\right)\in E_{fin,con}^{n}\), define_
\[h\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)= H\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\left\{ \sum\limits_{i\in\Lambda_{n}}\tilde{u}_{s}^{n}Q(i)\left(\varphi(i)+\psi(i)+i \left(\varphi(i)-\psi(i)\right)\right)\right.\] \[+\sum\limits_{i\in\Lambda_{n}}\tilde{v}_{s}^{n}Q(i)\left(\varphi(i )+\psi(i)-i\left(\varphi(i)-\psi(i)\right)\right)\] \[\left.+4\tilde{\gamma}\sum\limits_{i\in\Lambda_{n}}\tilde{u}_{s}^ {n}(i)\tilde{v}_{s}^{n}(i)\varphi(i)\psi(i)\right\},\ \ \forall s\geq 0.\]
_Then_
\[H\left(\varphi,\psi,\tilde{u}_{t}^{n},\tilde{v}_{t}^{n}\right)-\int\limits_{0 }^{t}h\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)ds,\ \ t\geq 0,\]
_is an \(\left\{\mathcal{F}_{t}^{\tilde{u}^{n},\tilde{v}^{n}}\right\}_{t\geq 0}\)-martingale._
Proof.: The result is immediate by Theorem 2.2(c)(iv) in [14], Ito's lemma (Theorem II.5.1 in [32]) and simple algebra.
**Lemma 6.8**.: _For any \(t>0\),_
\[\sup_{0\leq s,r\leq t}\mathbb{E}\left|h(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n})\right|<\infty \tag{6.5}\]
_and_
\[\sup_{0\leq s,r\leq t}\mathbb{E}\left|g(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n})\right|<\infty. \tag{6.6}\]
Proof.: (6.5) is verified in the proof of Theorem 2.4(b) in [14].
Now let us check (6.6). First, by simple algebra it is trivial to see that for any \(z\in\mathbb{R}_{+}\) and \(y\in\mathbb{R}\),
\[\left|e^{-z+iy}-1\right|\leq\left(\left|z\right|+\left|y\right|\right). \tag{6.7}\]
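Indeed, since \(z\geq 0\),

\[\left|e^{-z+iy}-1\right|\leq\left|e^{-z+iy}-e^{iy}\right|+\left|e^{iy}-1\right|=\left(1-e^{-z}\right)+\left|e^{iy}-1\right|\leq\left|z\right|+\left|y\right|,\]

using \(1-e^{-z}\leq z\) and \(\left|e^{iy}-1\right|\leq\left|y\right|\).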
Hence
\[\sup_{0\leq s,r\leq t}\mathbb{E}\left|g(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n})\right|\]
\[\leq \sup_{0\leq s,r\leq t}C\mathbb{E}\left\{\kappa\sum_{x,y\in\Lambda_{n} }\xi_{r}^{n}(x)p_{xy}^{n}\left[\tilde{u}_{s}^{n}(y)+\tilde{v}_{s}^{n}(y)+\tilde{u }_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\right.\] \[+\kappa\sum_{x,y\in\Lambda_{n}}\eta_{r}^{n}(x)p_{xy}^{n}\left[ \tilde{u}_{s}^{n}(y)+\tilde{v}_{s}^{n}(y)+\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n }(x)\right]\] \[+\gamma\sum_{x\in\Lambda_{n}}\xi_{r}^{n}(x)\eta_{r}^{n}(x)\sum_{ k\geq 0}\nu_{k}\left|k-1\right|\left[\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\] \[\left.+\gamma\sum_{x\in\Lambda_{n}}\xi_{r}^{n}(x)\eta_{r}^{n}(x) \sum_{k\geq 0}\nu_{k}\left|k-1\right|\left[\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n }(x)\right]\right\},\]
where \(C>0\) is a constant and the last inequality follows from (6.7). Recall that, by Lemma 5.3, \(\mathbb{E}\left[\xi_{s}^{n}(x)\right]=\theta_{1},\ \mathbb{E}\left[\eta_{s}^{n}(x)\right]=\theta_{2}\) and, by Corollary 5.2, \(\mathbb{E}\left[\xi_{s}^{n}(x)\eta_{s}^{n}(x)\right]=\theta_{1}\theta_{2}\). By Theorem 2.2b(iii) in [14],
\[\mathbb{E}\left[\langle\tilde{u}_{s}^{n},1\rangle\right] = a<\infty,\ \mathbb{E}\left[\langle\tilde{v}_{s}^{n},1\rangle\right]=b< \infty,\ \forall s\geq 0,\]
since initial conditions have a finite mass. Also note that \(\sum_{k\geq 0}\left|k-1\right|\nu_{k}<\infty\). Then (6.6) holds.
Now we are ready to prove the following lemma.
**Lemma 6.9**.: For any \(n\geq 1\), and every \(t>0\),
\[\mathbb{E}\left[H\left(\xi_{t}^{n},\eta_{t}^{n},\tilde{u}_{0}^{n },\tilde{v}_{0}^{n}\right)\right]-\mathbb{E}\left[H\left(\xi_{0}^{n},\eta_{0}^ {n},\tilde{u}_{t}^{n},\tilde{v}_{t}^{n}\right)\right] \tag{6.8}\] \[=\mathbb{E}\left[\int\limits_{0}^{t}\left\{g\left(\xi_{s}^{n}, \eta_{s}^{n},\tilde{u}_{t-s}^{n},\tilde{v}_{t-s}^{n}\right)-h\left(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{t-s}^{n},\tilde{v}_{t-s}^{n}\right)\right\}ds\right].\]
Proof.: By Lemmas 6.6, 6.7, 6.8 we can apply Lemma 6.5 to function
\[F_{r,s}^{n}=\mathbb{E}\left[H\left(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n}, \tilde{v}_{s}^{n}\right)\right],\]
and immediately see that (6.8) holds for almost every \(t>0\). However, again by Lemmas 6.6, 6.7, 6.8 one can see that both left-hand and right-hand sides of (6.8) are continuous in \(t\). Hence (6.8) holds for all \(t>0\).
Define
\[e(T,n)= \mathbb{E}\left[\int\limits_{0}^{\beta_{n}(T)}\left\{g\left(\xi_ {s}^{n},\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s} ^{n}\right)\right.\right.\] \[\left.\left.-h\left(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{\beta_{n} (T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n}\right)ds\right]. \tag{6.9}\]
To finish the proof of Proposition 6.4 we need the following proposition.
**Proposition 6.10**.: \(e(T,n)\to 0\) _as \(n\to\infty\)._
The next subsection is devoted to the proof of the above proposition. Now we are ready to complete the
**Proof of Proposition 6.4.** The proof is immediate by Lemma 6.9 and Proposition 6.10.
### Proof of Proposition 6.10
Fix \(t>0\). For simplicity, denote \(f_{s}=H(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)\). Apply the Taylor expansion to the exponentials inside the sums on the right-hand side of (6.4) to get
\[g(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)\] \[= f_{s}\left\{\kappa\sum_{x,y\in\Lambda_{n}}\xi_{s}^{n}(x)p_{xy}^{n}\right.\] \[\times\left[-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-i\left(\varphi (y)-\psi(y)-\varphi(x)+\psi(x)\right)\right.\] \[+\frac{1}{2}\left(-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-i\left( \varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right)^{2}\] \[\left.+G^{1,1}(\varphi,\psi,x,y)\right]\right\}\] \[+f_{s}\left\{\kappa\sum_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^ {n}\right.\] \[\times\left[-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+i\left( \varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right.\] \[+\left.\frac{1}{2}\left(-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+i \left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right)^{2}\right.\] \[+\left.G^{1,2}(\varphi,\psi,x,y)\right]\] \[+\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\left[ \frac{1}{2}\sigma^{2}\left(\varphi(x)+\psi(x)+i\left(\varphi(x)-\psi(x) \right)\right)^{2}+G^{1,3}(\varphi,\psi,x,x)\right]\] \[\left.\forall s\geq 0.\right.\]
where \(G^{1,m}(\varphi,\psi,x,y)=o\left(\left|\varphi(y)+\varphi(x)\right|^{2}\right)\) for \(m=1,2,3,4\), and we also used our assumption on the branching mechanism:
\[\sum_{k\geq 0}\nu_{k}(k-1)=0\,\text{ and }\,\sum_{k\geq 0}\nu_{k}(k-1)^{2}=\sigma^{2}.\]
We use simple algebra to obtain
\[g(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)= f_{s}\left\{\kappa\sum_{x,y\in\Lambda_{n}}\xi_{s}^{n}(x)p_{xy}^{n}\right.\] \[\times\left[-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-i\left(\varphi (y)-\psi(y)-\varphi(x)+\psi(x)\right)\right.\] \[\left.+2\left(\varphi(x)-\varphi(y)\right)\left(\psi(x)-\psi(y) \right)+i\left(\left(\varphi(x)-\varphi(y)\right)^{2}-\left(\psi(x)-\psi(y) \right)^{2}\right)\right.\] \[\left.+G^{1,1}(\varphi,\psi,x,y)\right]\right\}\] \[+f_{s}\left\{\kappa\sum_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^ {n}\right.\]
\[\times\left[-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+i\left(\varphi(y)- \psi(y)-\varphi(x)+\psi(x)\right)\right.\] \[+2\left(\varphi(x)-\varphi(y)\right)\left(\psi(x)-\psi(y)\right)-i \left(\left(\varphi(x)-\varphi(y)\right)^{2}-\left(\psi(x)-\psi(y)\right)^{2}\right)\] \[+G^{1,2}(\varphi,\psi,x,y)\] \[\left.+\gamma\sum\limits_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^ {n}(x)\left[4\sigma^{2}\varphi(x)\psi(x)+G^{1,3}(\varphi,\psi,x,x)\right] \right\},\ \ \forall s\geq 0.\]
Now, using the above and Lemmas 6.6 and 6.7, we get (recall that \(\tilde{\gamma}=\gamma\sigma^{2}\) and that \(e(T,n)\) is defined in (6.9)):
\[e(T,n)=e_{\xi,RW}(T,n)+e_{\eta,RW}(T,n)+e_{br}(T,n), \tag{6.10}\]
where
\[e_{\xi,RW}(T,n) = \mathbb{E}\int\limits_{0}^{\beta_{n}(T)}f_{s}\left\{\kappa\sum\limits_{x,y\in\Lambda_{n}}2p_{xy}^{n}\xi_{s}^{n}(x)\left(\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)\right.\right.\] \[\left.\left.+i\left[\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}-\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}\right]\right)\right\}ds\] \[e_{\eta,RW}(T,n) = \mathbb{E}\int\limits_{0}^{\beta_{n}(T)}f_{s}\left\{\kappa\sum\limits_{x,y\in\Lambda_{n}}2p_{xy}^{n}\eta_{s}^{n}(x)\left(\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)\right.\right.\] \[\left.\left.-i\left[\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}-\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}\right]\right)\right\}ds\] \[e_{br}(T,n) = \mathbb{E}\int\limits_{0}^{\beta_{n}(T)}f_{s}\sum\limits_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\,o\left(\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)\right)^{2}+\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)\right)^{2}\right)ds,\]
where \(o(z)\) denotes a function for which there exist \(p>1\) and a constant \(C_{p}\) (independent of \(z\)) such that \(\left|o(z)\right|\leq C_{p}\left|z\right|^{p}\).
Now we are going to show that indeed \(e(T,n)\) vanishes, as \(n\to\infty\). Before we start the proof, we need to derive several important moment estimates. First we state the lemma whose proof goes along the same lines as Lemma 2.2 from [5], and thus is omitted.
**Lemma 6.11**.: _Let \(d\geq 3\). Let \(\left(u_{t}^{n},v_{t}^{n}\right)_{t\geq 0}\) be a solution of (1.3), with \(Q^{n}\) being a \(Q\)-matrix of the nearest neighbor random walk on \(\Lambda_{n}\). Assume that initial distribution of \(\left(u_{0}^{n},v_{0}^{n}\right)\) is shift invariant. Then_
_(a) There exists \(p>2\) such that_
\[\sup\limits_{x\in\Lambda_{n}}\sup\limits_{t\geq 0}\mathbb{E}\left|u_{t}^{n}(x) \right|^{p}<\infty,\ \ \sup\limits_{x\in\Lambda_{n}}\sup\limits_{t\geq 0}\mathbb{E}\left|v_{t}^{n}(x) \right|^{p}<\infty.\]
_(b) For any \(T<\infty\) there is a finite constant \(M\) such that_
\[\sup_{n}\sup_{x\in\Lambda_{n}}\sup_{0\leq t\leq\beta_{n}(T)}\mathbb{E}\left(u_{t} ^{n}(x)^{2}\right)\leq M,\ \sup_{n}\sup_{x\in\Lambda_{n}}\sup_{0\leq t\leq\beta_{n}(T)}\mathbb{E}\left(v_{t }^{n}(x)^{2}\right)\leq M.\]
_(c) There exists \(p>2\) such that for any \(T<\infty\)_
\[\sup_{n}\sup_{x\in\Lambda_{n}}\sup_{0\leq t\leq\beta_{n}(T)}\mathbb{E}\left|u_{ t}^{n}(x)\right|^{p}<\infty,\ \sup_{n}\sup_{x\in\Lambda_{n}}\sup_{0\leq t\leq\beta_{n}(T)}\mathbb{E}\left|v_{ t}^{n}(x)\right|^{p}<\infty.\]
**Corollary 6.12**.: _If \(\tilde{\gamma}=\gamma\sigma^{2}<\frac{1}{\sqrt{2\cdot 3^{5}}g_{\infty}(0)}\), then part (c) in the previous lemma holds for \(p=4\), i.e._
\[\sup_{n}\sup_{0\leq t\leq\beta_{n}(t)}\mathbb{E}\left(\left(u_{t}^{n}(x) \right)^{4}\right)<\infty,\ \text{and}\ \sup_{n}\sup_{0\leq t\leq\beta_{n}(t)}\mathbb{E}\left(\left(v_{t}^{n}(x) \right)^{4}\right)<\infty.\]
Proof.: Follows easily from the proof of Lemma 2.2 in [5].
It is clear that for \(d\geq 3\) the nearest neighbor random walk on \(\mathbb{Z}^{d}\) is transient. We next show that the second moments of \((\xi_{t}^{n},\eta_{t}^{n})\) are bounded for \(t\leq\beta_{n}(T)\), uniformly in \(n\). To this end we need the following lemma, which was proved as Lemma 2.1 in [5].
**Lemma 6.13**.: _Assume that \(2\tilde{\gamma}g_{\infty}(0)<1\), where \(\left\{g_{t}(\cdot)\right\}_{t\geq 0}\) is the Green function of the nearest neighbor random walk on \(\mathbb{Z}^{d}\). Denote by \(p_{t}^{n}(i,j)\) the transition probabilities for the symmetric nearest neighbor random walk on domain \(\Lambda_{n}\). Then the following holds._
_a) If \(t_{n}/n^{2}\to\infty\) as \(n\to\infty\), then_
\[\sup_{t\geq t_{n}}\sup_{i,j\in\Lambda_{n}}(2n)^{d}\left|p_{t}^{n}(i,j)-(2n)^{-d}\right|\to 0,\ \text{ as }n\to\infty.\]
_b) If \(d\geq 3\), and \(\lambda>0\), then_
\[\lim_{n\to\infty}\int\limits_{0}^{\infty}e^{-\lambda t/(2n)^{d}}p_{2t}^{n}(i, j)dt=\frac{1}{\lambda}+\int\limits_{0}^{\infty}p_{2t}(i,j)dt.\]
_c) If \(d\geq 3\), and \(T(n)/\left|\Lambda_{n}\right|\to s\in(0,\infty)\) as \(n\to\infty\), then_
\[\lim_{n\to\infty}\int\limits_{0}^{T(n)}p_{2t}^{n}(i,j)dt=\int\limits_{0}^{ \infty}p_{2t}(i,j)dt+s.\]
**Corollary 6.14**.: _For any \(x,y\in\Lambda_{n}\),_
\[\sup_{n}\sup_{t\leq\beta_{n}(T)}\sup_{x,y\in\Lambda_{n}}\mathbb{E}\left(\xi_{t }^{n}(x)\xi_{t}^{n}(y)\right),\ \sup_{n}\sup_{t\leq\beta_{n}(T)}\sup_{x,y\in\Lambda_{n}}\mathbb{E}\left(\eta_ {t}^{n}(x)\eta_{t}^{n}(y)\right)<\infty.\]
Proof.: By Lemmas 5.5 and 5.6 it is enough to show that
\[\sup_{n}\sup_{t\leq\beta_{n}(T)}\sup_{x,y\in\Lambda_{n}}g_{t}^{n}(x,y)<\infty,\]
where \(\left\{g_{t}^{n}(\cdot,\cdot)\right\}_{t\geq 0}\) is the Green function of the symmetric nearest neighbor random walk on \(\Lambda_{n}\). For any \(t\geq 0\), \(x,y\in\Lambda_{n}\), we have
\[g_{t}^{n}(x,y)\leq g_{\beta_{n}(T)}^{n}(x,y)\leq g_{\beta_{n}(T)}^{n}(0,0).\]
By Lemma 6.13(c), \(\sup_{n}g_{\beta_{n}(T)}^{n}(0,0)\) is finite, and we are done.
Since, for \(i,j\in\Lambda_{n}\), \(p_{t}^{n}(i,j)\), \(g_{t}^{n}(i,j)\) are functions of \(i-j\), with some abuse of notation we will sometimes use the notation \(p_{t}^{n}(i-j)\), \(g_{t}^{n}(i-j)\) for \(p_{t}^{n}(i,j)\), \(g_{t}^{n}(i,j)\) respectively.
The proof of the next lemma is simple and thus is omitted.
**Lemma 6.15**.: _For any \(n\in\mathbb{N}\), \(r>0\),_
\[\sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\left( p_{r}^{n}(x_{1}-x_{2})+p_{r}^{n}(y_{1}-y_{2})+p_{r}^{n}(x_{1}-y_{2})+p_{r}^{n}(y_ {1}-x_{2})\right)=4\left|\Lambda_{n}\right|.\]
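For completeness, here is the computation behind Lemma 6.15 for the first of the four terms; the other three are identical after relabeling (using also that the jump kernel is doubly stochastic). Since \(\sum_{y}p_{x,y}^{n}=1\) for every \(x\) and \(\sum_{x_{2}\in\Lambda_{n}}p_{r}^{n}(x_{1}-x_{2})=1\),

\[\sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\,p_{r}^{n}(x_{1}-x_{2})=\sum_{x_{1},x_{2}\in\Lambda_{n}}p_{r}^{n}(x_{1}-x_{2})=\sum_{x_{1}\in\Lambda_{n}}1=\left|\Lambda_{n}\right|.\]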
We start by checking the limiting behavior of \(e_{\xi,RW}(T,n)\) and \(e_{\eta,RW}(T,n)\).
**Lemma 6.16**.: \[\lim_{n\to\infty}e_{\xi,RW}(T,n)=0,\ \ \lim_{n\to\infty}e_{\eta,RW}(T,n)=0.\]
Proof.: Consider the absolute value of the motion part. We will treat \(e_{\xi,RW}(T,n)\); the proof for \(e_{\eta,RW}(T,n)\) is the same.
\[|e_{\xi,RW}(T,n)|\] \[\leq C\kappa\mathbb{E}\left(\int\limits_{0}^{\beta_{n}(T)}|f_{s}|\left\{\left|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\right|\right.\right.\] \[\left.\left.+\left|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)^{2}-\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)^{2}\right)\right|\right\}ds\right)\] \[\leq C\mathbb{E}\left(\int\limits_{0}^{\beta_{n}(T)}J_{1}^{n}(s)ds+\int\limits_{0}^{\beta_{n}(T)}J_{2}^{n}(s)ds\right), \tag{6.11}\]
where
\[J_{1}^{n}(s) = \left|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n} (x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)(\tilde{v}_{s}^{n}(x )-\tilde{v}_{s}^{n}(y))\right|,\] \[J_{2}^{n}(s) = \left|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n} (x)\left((\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y))^{2}-(\tilde{v}_{s}^{n}(x )-\tilde{v}_{s}^{n}(y))^{2}\right)\right|.\]
Let's bound the expected value of \(J_{1}^{n}\):
\[\mathbb{E}(J_{1}^{n}(s)) = \mathbb{E}\left|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n} (T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)(\tilde{v}_ {s}^{n}(x)-\tilde{v}_{s}^{n}(y))\right|\] \[\leq \sqrt{\mathbb{E}\left[\left(\sum_{x,y\in\Lambda_{n}}p_{xy}^{n} \xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y) \right)(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y))\right)^{2}\right]}.\]
Now we will recall the following representation from Theorem 2.2 in [14] with \(\phi=\delta_{x}\):
\[\begin{cases}\tilde{u}_{t}^{n}(x)=P_{t}^{n}\tilde{u}_{0}^{n}(x)+\sum_{z\in\Lambda _{n}}\int_{0}^{t}p_{t-s}^{n}(x-z)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z) \tilde{v}_{s}^{n}(z)}dB_{s}(z),&x\in\Lambda_{n},\\ \tilde{v}_{t}^{n}(x)=P_{t}^{n}\tilde{v}_{0}^{n}(x)+\sum_{z\in\Lambda_{n}}\int_ {0}^{t}p_{t-s}^{n}(x-z)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^ {n}(z)}dW_{s}(z),&x\in\Lambda_{n},\end{cases}\]
to get
\[N_{r}^{t}(x,y):= P_{t-r}^{n}\tilde{u}_{r}^{n}(x)-P_{t-r}^{n}\tilde{u}_{r}^{n}(y)\] \[= P_{t}^{n}\tilde{u}_{0}^{n}(x)-P_{t}^{n}\tilde{u}_{0}^{n}(y)\] \[+\sum_{z\in\Lambda_{n}}\int\limits_{0}^{r}\left(p_{t-s}^{n}(x-z) -p_{t-s}^{n}(y-z)\right)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^ {n}(z)}dB_{s}(z),\,\,\,r\leq t,\]
where the last equality follows from the Chapman-Kolmogorov formula. Similarly, for \(r\leq t\) we get
\[M_{r}^{t}(x,y):= P_{t-r}^{n}\tilde{v}_{t}^{n}(x)-P_{t-r}^{n}\tilde{v}_{t}^{n}(y)\] \[= P_{t}^{n}\tilde{v}_{0}^{n}(x)-P_{t}^{n}\tilde{v}_{0}^{n}(y)\] \[+\sum_{z\in\Lambda_{n}}\int\limits_{0}^{r}\left(p_{t-s}^{n}(x-z) -p_{t-s}^{n}(y-z)\right)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^ {n}(z)}dW_{s}(z)\]
where \(\left\{B_{.}(z)\right\}_{z\in\Lambda_{n}},\left\{W_{.}(z)\right\}_{z\in \Lambda_{n}}\) are orthogonal Brownian motions.
Note that \(\left\{N_{r}^{t}(x,y)\right\}_{0\leq r\leq t}\) and \(\left\{M_{r}^{t}(x,y)\right\}_{0\leq r\leq t}\) are martingales; in addition
\[\begin{array}{l}\tilde{u}_{t}^{n}(x)-\tilde{u}_{t}^{n}(y)=\left.P_{t-r}^{n} \tilde{u}_{r}^{n}(x)-P_{t-r}^{n}\tilde{u}_{r}^{n}(y)\right|_{r=t}=\left.N_{r}^ {t}(x,y)\right|_{r=t},\\ \tilde{v}_{t}^{n}(x)-\tilde{v}_{t}^{n}(y)=\left.P_{t-r}^{n}\tilde{v}_{r}^{n}(x )-P_{t-r}^{n}\tilde{v}_{r}^{n}(y)\right|_{r=t}=\left.M_{r}^{t}(x,y)\right|_{r =t}.\end{array} \tag{6.12}\]
Then by orthogonality of the Brownian motions \(B_{.}(z)\) and \(W_{.}(z)\) for all \(z\in\Lambda_{n}\), and the Ito formula we get
\[\sum_{x,y\in\Lambda_{n}}p_{x,y}^{n}\xi_{\beta_{n}(T)-s}^{n}M_{s}^{s}(x,y)N_{s} ^{s}(x,y)\]
\[= \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left( \tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)- \tilde{v}_{s}^{n}(y)\right)\] \[= \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\sum _{z\in\Lambda_{n}}\int\limits_{0}^{s}\left(P_{s-r}^{n}\tilde{u}_{r}^{n}(x)-P_ {s-r}^{n}\tilde{u}_{r}^{n}(y)\right)\] \[\times\left(p_{s-r}^{n}(x-z)-p_{s-r}^{n}(y-z)\right)\sqrt{\tilde{ u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)}dW_{r}(z)\] \[+\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x) \sum_{z\in\Lambda_{n}}\int\limits_{0}^{s}\left(P_{s-r}^{n}\tilde{v}_{r}^{n}(x) -P_{s-r}^{n}\tilde{v}_{r}^{n}(y)\right)\] \[\times\left(p_{s-r}^{n}(x-z)-p_{s-r}^{n}(y-z)\right)\sqrt{\tilde{ u}_{s}^{n}(z)\tilde{v}_{s}^{n}(z)}dB_{s}(z)\] \[=: \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x) \sum_{z\in\Lambda_{n}}\tilde{I}_{1,1}^{n}(s,x,y,z)\] \[+\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x) \sum_{z\in\Lambda_{n}}\tilde{I}_{1,2}^{n}(s,x,y,z)\] \[=: I_{1,1}^{n}(s)+I_{1,2}^{n}(s).\]
Note that
\[\mathbb{E}\left(J_{1}^{n}(s)\right)\leq C\sqrt{\mathbb{E}\left[\left(I_{1,1}^{n}(s)\right)^{2}+\left(I_{1,2}^{n}(s)\right)^{2}\right]}. \tag{6.13}\]
So let us bound \(\mathbb{E}\left[\left(I_{1,1}^{n}(s)\right)^{2}\right]\): for all \(s\leq\beta_{n}(T)\), we have
\[\mathbb{E}\left[\left(I_{1,1}^{n}(s)\right)^{2}\right]= \sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}\mathbb{E}\left(\xi_{\beta_{ n}(T)-s}^{n}(x_{1})\xi_{\beta_{n}(T)-s}^{n}(x_{2})\right)\] \[\times p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\sum_{z_{1}\in \Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}\mathbb{E}\left[\tilde{I}_{1,1}^{n}(s,x _{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right]. \tag{6.14}\]
Note that for \(z_{1}\neq z_{2}\), \(\tilde{I}_{1,1}^{n}(r,x_{1},y_{1},z_{1})\) and \(\tilde{I}_{1,1}^{n}(r,x_{2},y_{2},z_{2})\) are orthogonal square integrable martingales for \(r\leq s\), and hence
\[\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}\mathbb{E}\left[\tilde{I}_ {1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right]\]
\[=\sum_{z\in\Lambda_{n}}\mathbb{E}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_ {1},z)\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z)\right]\] \[=\sum_{z\in\Lambda_{n}}\mathbb{E}\left[\int\limits_{0}^{s}\left(P_ {s-r}^{n}\tilde{u}_{r}^{n}(x_{1})-P_{s-r}^{n}\tilde{u}_{r}^{n}(y_{1})\right) \left(p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1}-z)\right)\right.\] \[\quad\times\left(P_{s-r}^{n}\tilde{u}_{r}^{n}(x_{2})-P_{s-r}^{n} \tilde{u}_{r}^{n}(y_{2})\right)\left(p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z) \right)\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)dr\right]\] \[=\sum_{z\in\Lambda_{n}}\mathbb{E}\left[\int\limits_{0}^{s}\sum_{ z_{1}\in\Lambda_{n}}\left(p_{s-r}^{n}(x_{1}-z_{1})-p_{s-r}^{n}(y_{1}-z_{1}) \right)\tilde{u}_{r}^{n}(z_{1})\right.\] \[\quad\times\sum_{z_{2}\in\Lambda_{n}}\left(p_{s-r}^{n}(x_{2}-z_{ 2})-p_{s-r}^{n}(y_{2}-z_{2})\right)\tilde{u}_{r}^{n}(z_{2})\tilde{u}_{r}^{n}( z_{2})\] \[\quad\times\left(p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1}-z)\right) \left(p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z)\right)\] \[\quad\times\left.\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)dr\right]\] \[\leq\sum_{z}\int\limits_{0}^{s}\sum_{z_{1}}\sum_{z_{2}}\hat{J}_{ 1,1}(\vec{x},\vec{y},\vec{z},s-r)\mathbb{E}\left[\tilde{u}_{r}^{n}(z_{1}) \tilde{u}_{r}^{n}(z_{2})\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)\right]dr.\]
where \(\vec{x}=(x_{1},x_{2}),\vec{y}=(y_{1},y_{2}),\vec{z}=(z_{1},z_{2},z)\) and
\[\hat{J}_{1,1}(\vec{x},\vec{y},\vec{z},s-r)=\Pi_{i=1,2}\left|p_{s-r}^{n}(x_{i}- z_{i})-p_{s-r}^{n}(y_{i}-z_{i})\right|\left|p_{s-r}^{n}(x_{i}-z)-p_{s-r}^{n}(y_{i} -z)\right|.\]
By Corollary 6.12 and assumption on the initial conditions of \((\tilde{u},\tilde{v})\),
\[\mathbb{E}\left[\tilde{u}_{r}^{n}(z_{1})\tilde{u}_{r}^{n}(z_{2})\tilde{u}_{r} ^{n}(z)\tilde{v}_{r}^{n}(z)\right]\]
is bounded by \(C\left|\Lambda_{n}\right|^{-4}\) uniformly in \(z,z_{1},z_{2}\in\Lambda_{n}\), \(r\leq\beta_{n}(T)\) and \(n\). Therefore,
\[\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}\mathbb{E} \left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2 },z_{2})\right]\] \[\leq C\left|\Lambda_{n}\right|^{-4}\sum_{z}\int\limits_{0}^{s} \sum_{z_{1}}\sum_{z_{2}}\hat{J}_{1,1}(\vec{x},\vec{y},\vec{z},s-r)dr. \tag{6.15}\]
Denote
\[\tilde{J}_{1,1}(\vec{x},\vec{y},s-r) := \sum_{z\in\Lambda_{n}}\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in \Lambda_{n}}\hat{J}_{1,1}(\vec{x},\vec{y},\vec{z},s-r).\]
Now we decompose the term on the right-hand side of (6.15) into two terms
\[C\left|\Lambda_{n}\right|^{-4}\int\limits_{0}^{(s-n^{\delta})_{+}}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)dr+C\left|\Lambda_{n}\right|^{-4}\int\limits_{(s-n^{ \delta})_{+}}^{s}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)dr \tag{6.16}\]
for some \(\delta\in(2,d)\).
By Lemma 6.13(a) we get
\[\lim_{n\to\infty}\sup_{t\geq n^{\delta}}\sup_{z_{1},z_{2}\in\Lambda_{n}}\left|\Lambda_{n}\right|\left|p_{t}^{n}(z_{1},z_{2})-(2n)^{-d}\right|=0,\]
for any \(\delta>2\). This implies that, for any \(\delta>2\), there exists a sequence \(a_{n}=a_{n}(\delta)\), such that
\[\sup_{s\geq n^{\delta}}\sup_{w_{1},w_{2},w_{3}\in\Lambda_{n}}|p_{s}^{n}(w_{1},w_ {2})-p_{s}^{n}(w_{3},w_{2})|\leq\frac{a_{n}}{|\Lambda_{n}|}, \tag{6.17}\]
where \(a_{n}\to 0\), as \(n\to\infty\).
By (6.17) we immediately get
\[\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)\leq|\Lambda_{n}|^{3}\,a_{n}^{4}\,|\Lambda _{n}|^{-4}=|\Lambda_{n}|^{-1}\,a_{n}^{4},\]
for \(s>n^{\delta}\) and \(r\leq s-n^{\delta}\). Hence for \(s\leq\beta_{n}(T)\), we get
\[|\Lambda_{n}|^{-4}\int\limits_{0}^{(s-n^{\delta})_{+}}\tilde{J}_{1,1}(\vec{x}, \vec{y},s-r)dr\leq|\Lambda_{n}|^{-4}\,a_{n}^{4}\,|\Lambda_{n}|^{-1}\int\limits_ {0}^{(s-n^{\delta})_{+}}1dr\leq C\,|\Lambda_{n}|^{-4}\,a_{n}^{4}, \tag{6.18}\]
where the last inequality follows since \(s\leq\beta_{n}(T)=|\Lambda_{n}|\,T\).
Let us treat the second term in (6.16). Note that
\[\sum_{z_{i}\in\Lambda_{n}}|p_{s}^{n}(x_{i}-z_{i})-p_{s}^{n}(y_{i}-z_{i})|\leq \sum_{z_{i}\in\Lambda_{n}}p_{s}^{n}(x_{i}-z_{i})+p_{s}^{n}(y_{i}-z_{i})\leq 2, \forall i=1,2,s\geq 0.\]
Also
\[\sum_{z\in\Lambda_{n}}\left|p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1 }-z)\right|\left|p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z)\right|\\ \leq\sum_{z\in\Lambda_{n}}\left(p_{s-r}^{n}(x_{1}-z)+p_{s-r}^{n}( y_{1}-z)\right)\left(p_{s-r}^{n}(x_{2}-z)+p_{s-r}^{n}(y_{2}-z)\right)\\ =p_{2(s-r)}^{n}(x_{1}-x_{2})+p_{2(s-r)}^{n}(y_{1}-y_{2})+p_{2(s-r) }^{n}(x_{1}-y_{2})+p_{2(s-r)}^{n}(y_{1}-x_{2}),\]
where the last equality follows from the Chapman-Kolmogorov formula. Then
\[C\,|\Lambda_{n}|^{-4}\int\limits_{(s-n^{\delta})_{+}}^{s}\tilde {J}_{1,1}(\vec{x},\vec{y},s-\ r)dr\\ \leq C\,|\Lambda_{n}|^{-4}\int\limits_{0}^{n^{\delta}}\left(p_{2r} ^{n}(x_{1}-x_{2})+p_{2r}^{n}(y_{1}-y_{2})+p_{2r}^{n}(x_{1}-y_{2})+p_{2r}^{n}( y_{1}-x_{2})\right)dr. \tag{6.19}\]
By (6.15), (6.16), (6.18) and (6.19) we get
\[\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}\mathbb{E} \left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{ 2},z_{2})\right]\] \[\leq C\,|\Lambda_{n}|^{-4}\left(a_{n}^{4}+\int\limits_{0}^{n^{\delta}} \left(p_{2r}^{n}(x_{1}-x_{2})+p_{2r}^{n}(y_{1}-y_{2})+p_{2r}^{n}(x_{1}-y_{2})+ p_{2r}^{n}(y_{1}-x_{2})\right)dr\right).\]
Use the above inequality, (6.14) and also Corollary 6.14 and Lemma 6.15 to get
\[\mathbb{E}\left[\left(I_{1,1}^{n}(s)\right)^{2}\right] \leq C\left(\sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}p_{x_{1},y_{1}}^{n }p_{x_{2},y_{2}}^{n}\left|\Lambda_{n}\right|^{-4}a_{n}^{4}+\left|\Lambda_{n} \right|^{-4}n^{\delta}\left|\Lambda_{n}\right|\right)\] \[\leq C\left(\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{ n}\right|^{-3}n^{\delta}\right). \tag{6.20}\]
In the same way we handle \(I_{1,2}^{n}(s)\) and get
\[\mathbb{E}\left[\left(I_{1,2}^{n}(s)\right)^{2}\right] \leq C\left(\sum_{x_{1},y_{1}}\sum_{x_{2},y_{2}}p_{x_{1},y_{1}}^{n }p_{x_{2},y_{2}}^{n}\left|\Lambda_{n}\right|^{-4}a_{n}^{4}+\left|\Lambda_{n} \right|^{-4}n^{\delta}\left|\Lambda_{n}\right|\right)\] \[\leq C\left(\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{ n}\right|^{-3}n^{\delta}\right). \tag{6.21}\]
By (6.20), (6.21) and (6.13), we have
\[\mathbb{E}\left[J_{1}^{n}(s)\right] \leq \sqrt{\mathbb{E}\left[\left(I_{1,1}^{n}(s)\right)^{2}+\left(I_{1,2}^{n}(s)\right)^{2}\right]}\] \[\leq C\sqrt{\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{n} \right|^{-3}n^{\delta}}\] \[\leq C\left|\Lambda_{n}\right|^{-1}a_{n}^{2}+C\left|\Lambda_{n} \right|^{-3/2}n^{\delta/2}.\]
Thus
\[\int\limits_{0}^{\beta_{n}(T)}\mathbb{E}\left(J_{1}^{n}(s)\right)ds \leq C\int\limits_{0}^{\beta_{n}(T)}\left(\left|\Lambda_{n}\right| ^{-1}a_{n}^{2}+\left|\Lambda_{n}\right|^{-3/2}n^{\delta/2}\right)ds\] \[\leq C\left|\Lambda_{n}\right|\left(\left|\Lambda_{n}\right|^{-1} a_{n}^{2}+\left|\Lambda_{n}\right|^{-3/2}n^{\delta/2}\right)\] \[\leq Ca_{n}^{2}+C\left(\frac{n^{\delta}}{\left|\Lambda_{n}\right| }\right)^{1/2}\to 0,\ \ \text{as}\ n\rightarrow\infty, \tag{6.22}\]
where the last convergence holds since \(\delta<d\) and \(\left|\Lambda_{n}\right|=(2n)^{d}\).
Now we are ready to treat \(J_{2}^{n}(s)\) in a similar way.
By the Ito formula,
\[\left(M_{r}^{s}(x,y)\right)^{2}=\int\limits_{0}^{s}M_{r}^{s}(x,y)dM_{r}^{s}(x,y)+\left\langle M_{\cdot}^{s}(x,y)\right\rangle_{s}\]
and
\[\left(N_{r}^{s}(x,y)\right)^{2}=\int\limits_{0}^{s}N_{r}^{s}(x,y)dN_{r}^{s}(x,y)+\left\langle N_{\cdot}^{s}(x,y)\right\rangle_{s}.\]
Note that \(\left\langle M_{\cdot}^{t}(x,y)\right\rangle_{t}=\left\langle N_{\cdot}^{t}(x,y)\right\rangle_{t}\), that \(M_{0}^{t}(x,y)=N_{0}^{t}(x,y)=0\) (the initial conditions \(\tilde{u}_{0}^{n},\tilde{v}_{0}^{n}\) are constant on \(\Lambda_{n}\)), and recall (6.12); therefore
\[J_{2}^{n}(s)=\sum_{x,y\in\Lambda_{n}}\frac{1}{2}p_{x,y}^{n}\xi_{\beta_{n}(T)-s }^{n}\left[\int\limits_{0}^{s}M_{r}^{s}(x,y)dM_{r}^{s}(x,y)-\int\limits_{0}^{s }N_{r}^{s}(x,y)dN_{r}^{s}(x,y)\right].\]
Following the same steps as in the computations for \(J_{1}^{n}(s)\), we get that \(\int_{0}^{\beta_{n}(T)}\mathbb{E}\left(J_{2}^{n}(s)\right)ds\to 0\) as \(n\to\infty\).
Use this, (6.22) and (6.11) to finish the proof.
Now we treat the \(e_{br}(T,n)\) term (see (6.10)). To this end we need the following lemma.
**Lemma 6.17**.: _There exists \(0<\delta<1\) such that the following holds_
\[B_{T}^{n,\delta}:=\mathbb{E}\int\limits_{0}^{\beta_{n}(T)}\sum\limits_{x\in \Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\left(\left(\tilde{u}_{\beta_{n}(T)-s }^{n}(x)\right)^{2+\delta}+\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)\right)^{2+ \delta}\right)ds\longrightarrow 0\]
_as \(n\to\infty\)._
Proof.: Consider the process \((\hat{u}^{n},\hat{v}^{n})\) that solves equations (1.3) with initial conditions
\[\hat{u}_{0}^{n}(x)=|\Lambda_{n}|\tilde{u}_{0}^{n}(x),\ \ \hat{v}_{0}^{n}(x)=| \Lambda_{n}|\tilde{v}_{0}^{n}(x),\ \ \forall x\in\Lambda_{n}.\]
Then for any \(s>0\):
\[\hat{u}_{s}^{n}(x)=|\Lambda_{n}|\tilde{u}_{s}^{n}(x),\ \ \hat{v}_{s}^{n}(x)=| \Lambda_{n}|\tilde{v}_{s}^{n}(x),\ \ \forall x\in\Lambda_{n}.\]
Therefore we can rewrite the \(B_{T}^{n,\delta}\) in the following way
\[B_{T}^{n,\delta} = |\Lambda_{n}|^{-(2+\delta)}\mathbb{E}\int\limits_{0}^{\beta_{n}( T)}\sum\limits_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\left(\left(\hat{u}_{ \beta_{n}(T)-s}^{n}(x)\right)^{2+\delta}+\left(\hat{v}_{\beta_{n}(T)-s}^{n}(x) \right)^{2+\delta}\right)ds\] \[\leq |\Lambda_{n}|^{-\delta}\theta_{1}\theta_{2}\sup\limits_{s\leq \beta_{n}(T)}\frac{1}{|\Lambda_{n}|}\mathbb{E}\left[\sum\limits_{x\in\Lambda_ {n}}\left(\left(\hat{u}_{\beta_{n}(T)-s}^{n}(x)\right)^{2+\delta}+\left(\hat{v }_{\beta_{n}(T)-s}^{n}(x)\right)^{2+\delta}\right)\right]\] \[\leq |\Lambda_{n}|^{-\delta}\theta_{1}\theta_{2}\sup\limits_{x\in \Lambda_{n}}\sup\limits_{s\leq\beta_{n}(T)}\mathbb{E}\left[\left(\left(\hat{u} _{\beta_{n}(T)-s}^{n}(x)\right)^{2+\delta}+\left(\hat{v}_{\beta_{n}(T)-s}^{n}( x)\right)^{2+\delta}\right)\right].\]
Therefore it is enough to show that
\[\sup\limits_{n\in\mathbb{N}}\sup\limits_{x\in\Lambda_{n}}\sup\limits_{s\leq \beta_{n}(T)}\mathbb{E}\left[\left(\left(\hat{u}_{\beta_{n}(T)-s}^{n}(x) \right)^{2+\delta}+\left(\hat{v}_{\beta_{n}(T)-s}^{n}(x)\right)^{2+\delta} \right)\right]<\infty.\]
But this bound was established in Lemma 6.11(c).
**Corollary 6.18**.: \[|e_{br}(T,n)|\longrightarrow 0,\ \ \text{as}\,n\to\infty.\]
Proof.: Fix \(\delta\in(0,1)\) such that
\[B_{T}^{n,\delta}\to 0,\ \ \text{as}\,n\to\infty.\]
It is easy to check (by definition of \(o(z)\)) that there exists \(C_{\delta}\) such that
\[|e_{br}(T,n)|\leq C_{\delta}B_{T}^{n,\delta},\]
and the result follows.
_Remark 6.19_.: The case of large \(\gamma\) is more complicated since the moments may blow up. To treat this case one can try to use an \(x\log x\) moment technique as was done in [6]. We leave this case for future work. |
2305.14328 | Benchmarking LLM-based Machine Translation on Cultural Awareness | Translating cultural-specific content is crucial for effective cross-cultural
communication. However, many MT systems still struggle to translate sentences
containing cultural-specific entities accurately and understandably. Recent
advancements in in-context learning utilize lightweight prompts to guide large
language models (LLMs) in machine translation tasks. Nevertheless, the
effectiveness of this approach in enhancing machine translation with cultural
awareness remains uncertain. To address this gap, we introduce a new data
curation pipeline to construct a culturally relevant parallel corpus, enriched
with annotations of cultural-specific items. Furthermore, we devise a novel
evaluation metric to assess the understandability of translations in a
reference-free manner by GPT-4. We evaluate a variety of neural machine
translation (NMT) and LLM-based MT systems using our dataset. Additionally, we
propose several prompting strategies for LLMs to incorporate external and
internal cultural knowledge into the translation process. Our results
demonstrate that eliciting explanations can significantly enhance the
understandability of cultural-specific entities, especially those without
well-known translations. | Binwei Yao, Ming Jiang, Diyi Yang, Junjie Hu | 2023-05-23T17:56:33Z | http://arxiv.org/abs/2305.14328v2 | # Empowering LLM-based Machine Translation with Cultural Awareness
###### Abstract
Traditional neural machine translation (NMT) systems often fail to translate sentences that contain culturally specific information. Most previous NMT methods have incorporated external cultural knowledge during training, which requires fine-tuning on low-frequency items specific to the culture. Recent in-context learning utilizes lightweight prompts to guide large language models (LLMs) to perform machine translation; however, whether such an approach works for injecting cultural awareness into machine translation remains unclear. To this end, we introduce a new data curation pipeline to construct a culturally relevant parallel corpus, enriched with annotations of cultural-specific entities. Additionally, we design simple but effective prompting strategies to assist this LLM-based translation. Extensive experiments show that our approaches can largely help incorporate cultural knowledge into LLM-based machine translation, outperforming traditional NMT systems in translating cultural-specific sentences.
## 1 Introduction
MT systems have achieved remarkable success in recent years, thanks to multilingual pre-trained language models (Aharoni et al., 2019). However, their translation performance on culturally specific data is still poor, mostly due to the gap between the cultural norms associated with the languages (Akinade et al., 2023; Liebling et al., 2022). Language and culture are highly intertwined (Gee, 2014; Kramsch, 2014); thus, many cultural-specific concepts in various categories (e.g., food, clothing, art, religion) are not frequently used by users from other cultures (Woolford, 1983). As a result, parallel corpora from general domains contain very few culturally specific texts with fine-grained annotations for recognizing cultural-specific content, posing a challenge for training and evaluating MT systems.
One solution to this issue is integrating external cultural knowledge, such as lexicon translation, into MT systems through probability interpolation (Arthur et al., 2016; Khandelwal et al., 2021), data augmentation (Hu et al., 2019), and pre-training (Hu et al., 2022). However, these methods require further fine-tuning of the original NMT models, potentially resulting in new issues such as catastrophic forgetting (Thompson et al., 2019). Recently, a new translation paradigm has emerged, which employs prompts to guide large language models (LLMs) to perform machine translation (Brown et al., 2020) in a zero-shot or few-shot fashion. With this flexible paradigm, cultural knowledge can be seamlessly incorporated into LLM translation prompts. However, LLM translation is also sensitive to different prompting strategies and prone to generating hallucinations (Ji et al., 2023). For example, when prompting ChatGPT with basic instructions, as shown in Figure 1, the produced translations of cultural-specific items are fluent but incorrect. These mistranslations can be challenging to comprehend, as they may involve incorrect word order, like the mistranslated example in Figure 1
with an incorrect film title. Moreover, there is still a lack of systematic comparison between LLM prompting strategies and existing NMT systems for translating culturally relevant texts, partly due to the absence of a richly annotated, culturally sensitive parallel corpus and of reliable automated metrics for evaluation.
To tackle these challenges, we propose a novel data curation pipeline to construct a culturally specific parallel corpus, which serves as a valuable resource for evaluating the cultural awareness of MT systems. Our pipeline starts with constructing a cultural taxonomy that is used to identify culturally relevant texts from Wikipedia and then performs adversarial mining to further select culturally nuanced texts. The resulting parallel corpus contains rich cultural-specific items and their corresponding meta-data, **covering 18 Wikipedia categories across 63 countries**. In addition to the curated corpus, we introduce a set of simple yet effective prompting strategies to incorporate external or internal cultural knowledge into LLM-based machine translation. To faithfully assess the cultural nuance, we propose an automatic evaluation metric, focusing on the translation quality of cultural concepts, and we also perform a fine-grained human evaluation among prompt-based LLM translation and existing NMT systems. Our experimental results demonstrate that prompting LLMs with an external bilingual lexicon consistently enhances the quality of cultural-specific concepts. In summary, our contributions are:
* We construct a cultural-specific parallel corpus with rich cultural-related annotations, which enable a fine-grained evaluation of existing machine translation approaches in terms of their cultural awareness.
* We introduce simple yet effective prompting strategies using _external knowledge_ and _internal knowledge_ for LLM-based machine translation and systematically benchmark their performances on our culturally specific corpus.
* We offer new insights into enhancing the cultural awareness of LLM translation based on our automatic evaluation metric and fine-grained analysis.
## 2 Culturally Relevant Data Construction
In this section, we describe the construction of the culturally relevant dataset for MT. We first construct a cultural taxonomy for Wikipedia data based on the classic translation theory of cultural-specific items (§2.1). This taxonomy enables us to identify fine-grained cultural categories. Using this taxonomy, we construct a parallel corpus for evaluating cultural awareness of translation systems through a three-step pipeline (c.f. Figure 2). In step 1, we collect parallel texts from Wikipedia that focus on describing any cultural-specific items (§2.2). Next, we use an adversarial mining approach to identify hard examples that pose challenges for commercial translation engines (§2.3). Finally, we perform knowledge augmentation for the items by scraping metadata from Wikidata (§2.4).
### Cultural Taxonomy Extraction
Cultural-specific items (CSI) are defined as entity words or phrases that are unique to a specific culture, and are divided into five categories: 1) _ecology_; 2) _material culture_; 3) _social culture_; 4) _organizations, customs, ideas_; 5) _gestures and habits_ (Newmark, 2003). To apply the CSI taxonomy to Wikipedia articles, we manually create a mapping table between the CSI categories and the Wikiproject categories1 (Asthana and Halfaker, 2018) by categorizing 18 culture-related Wikiproject categories into 5 CSI categories. This table allows us to use the corresponding Wikiproject categories to locate CSIs in Wikipedia texts. The mapping table is presented in Table 5 (§A).
Footnote 1: [https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Categories](https://en.wikipedia.org/wiki/Wikipedia:WikiProject_Categories)
### Cultural Parallel Text Collection
To construct a cultural parallel text corpus, we collect public text articles from Wikipedia that cover a wide range of cultural topics. Specifically, we use the bilingual Wikipedia articles translated through Wikipedia's content translate tool2. This tool allows confirmed editors to translate Wikipedia articles from a source language to a target language with a machine translation system. By tracking their editing logs, we obtain the text triples consisting of the original text in a source language, the machine-translated text, and the human post-edited text in a target language. We then use a sentence alignment tool bleu-align3(Sennrich and Volk, 2010) to obtain a sentence-level parallel corpus.
Footnote 2: [https://en.wikipedia.org/wiki/Wikipedia:Content_translation_tool](https://en.wikipedia.org/wiki/Wikipedia:Content_translation_tool)
To identify culturally relevant sentences, we perform entity-linking (Ringgaard et al., 2017) to
identify Wikipedia entities on the source texts, and use Wikiproject classification tool (Asthana and Halfaker, 2018) to classify these entities into cultural categories which are further mapped to our CSI categories using the cultural taxonomy (SS2.1). Finally, we only keep the texts that contain CSIs.
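A minimal sketch of this filtering step is shown below; `link_entities` and `wikiproject_topics` are hypothetical stand-ins for the entity linker and the Wikiproject classifier used in the paper, and `csi_mapping` is the manually built taxonomy table from SS2.1, so the exact interfaces are assumptions rather than the released pipeline.

```python
def find_csis(source_text, csi_mapping):
    """Return (entity, CSI category) pairs for culture-related entities in the text."""
    csis = []
    for entity in link_entities(source_text):          # assumed entity-linking helper
        for topic in wikiproject_topics(entity):        # assumed Wikiproject classifier
            if topic in csi_mapping:                    # keep only culture-related topics
                csis.append((entity, csi_mapping[topic]))
    return csis

def filter_cultural_sentences(parallel_pairs, csi_mapping):
    """Keep only source/target sentence pairs that contain at least one CSI."""
    kept = []
    for source, target in parallel_pairs:
        csis = find_csis(source, csi_mapping)
        if csis:
            kept.append((source, target, csis))
    return kept
```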
### Adversarial Mining on Culturally Nuanced Examples
Inspired by the adversarial benchmarking methods (Kiela et al., 2021), we also collect challenging cultural-specific examples that are frequently mistranslated by commercial MT systems. Our approach involves two steps of adversarial mining. In the first step, we utilize a word-alignment tool awesome-align4(Dou and Neubig, 2021) to align words between the machine-translated and human-edited texts in our corpus, from which we extract the aligned CSI translations. Then, we only retain the text triples where a disagreement exists between the machine-translated and human-edited CSIs, as these instances are likely mistranslated by the MT system. In the second step, we conduct an additional round of adversarial mining using commercial translation systems (e.g., Google Translate and ChatGPT). We employ these systems to translate the source sentences and compare their translations with human-edited translations, specifically focusing on CSI translation accuracy. By the 2-step automatic mining method, we retain a small set of challenging examples from 206,224 data points that continue to pose difficulties for commercial systems.
Footnote 4: [https://github.com/neulab/awesome-align](https://github.com/neulab/awesome-align)
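As a rough illustration of the first mining step, the sketch below assumes a hypothetical `align_csi` helper (a thin wrapper around a word aligner such as awesome-align) that returns the target-side span aligned to a given source CSI; it is not part of the paper's released code.

```python
def mine_hard_examples(triples):
    """Step 1 of adversarial mining: keep triples whose machine-translated CSI
    disagrees with the human post-edited CSI (a likely mistranslation)."""
    hard = []
    for source, machine_translation, post_edit, csis in triples:
        for csi in csis:
            # align_csi(csi, src, tgt) is an assumed helper returning the target span
            # that the word aligner links to the source-side CSI.
            mt_span = align_csi(csi, source, machine_translation)
            pe_span = align_csi(csi, source, post_edit)
            if mt_span != pe_span:   # disagreement on the CSI translation
                hard.append((source, machine_translation, post_edit, csi))
                break
    return hard
```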
### Cultural Knowledge Augmentation
Existing MT studies (Arthur et al., 2016; Hu et al., 2022) have been using external knowledge sources (e.g., Wikidata) to improve named entity translations. To enable future adaptations of these studies on our collected corpus, we parse Wikidata to extract the metadata of CSIs, which include their cultural labels, descriptions, and aliases in multiple languages. Furthermore, we collect information on the country of origin for each CSI and remove sentences containing high-frequency CSIs that do not have an associated origin country. This meticulous approach enabled us to enrich our dataset with supplementary information that can be utilized to evaluate the performance of machine translation models when handling culturally specific content.
### Data Characteristic Summary
Culture is intricately linked with specific regions, and its manifestations can exhibit substantial variations across diverse regions and categories. Therefore, our dataset encompasses cultural-specific elements sourced from a wide array of regions and categories, as shown in Table 1. This inclusive approach allows us to comprehensively evaluate the performance of machine translation models across a broad spectrum of cultural contexts.

| Data | Count |
| --- | --- |
| Parallel sentences | 1,729 |
| Countries (Region) | 63 |
| Wikiproject Categories | 18 |

Table 1: Dataset Statistics

Figure 2: A data curation pipeline for constructing a cultural-specific parallel corpus from Wikipedia.
Figure 3 shows the distribution of categories and regions in our dataset. Regarding the regions, we have classified 63 countries into 6 regions. Among these regions, except for _None_, which represents CSIs without a country of origin mentioned on Wikipedia, CSIs originating from North America (\(29.37\%\)) have the highest representation. For the categories, we classify 18 Wikiproject categories into 5 culture categories. Since the category of _gestures and habits_ has no corresponding Wikiproject categories, we collect data from the remaining 4 categories.
## 3 Cultural Knowledge Prompting
In this section, we focus on machine translation using instruction-tuned LLMs such as ChatGPT, specifically exploring various prompting strategies. Traditionally, a prompt for LLMs consists of a natural language task description accompanied by a few-shot demonstration utilizing examples. Here, we first investigate the language used for crafting the prompt. We then elucidate our strategies for generating demonstration examples from _external knowledge_, which involve employing CSI translation pairs and CSI explanations. Additionally, we delve into several prompting strategies that elicit LLMs' _internal knowledge_. Table 2 shows examples of various prompting strategies.
### Source vs. Target Language Instructions
As LLMs have been pre-trained on massive texts written in multiple languages, their capability of interpreting instructions in different languages has not been comprehensively studied for machine translation. We hypothesize that the language used in the prompt has a significant impact on the language model's understanding of the instructions and the demonstrations, and the target language will bias the model to better leverage the information in the prompt for machine translation tasks. Therefore, we propose to construct prompts written in the source and target languages and compare the LLMs' translation performance of CSIs.
### External CSI Translation (CT)
Next, we investigate the external knowledge for machine translation. In particular, bilingual translation dictionaries play a vital role in the workflow of human translators. Professional translators often compile domain-specific terminology dictionaries, enabling them to maintain consistency while translating the terminologies within a specific domain. Historically, it is non-trivial to integrate the translation dictionary into neural machine translation models Arthur et al. (2016); Hu et al. (2019), compared to phrase-based machine translation systems. However, with the advent of large language models (LLMs) for machine translation, incorporating entity information into prompts has become much more feasible. Here, we assess the impact of incorporating a CSI dictionary within the prompts. Specifically, we incorporate CSIs along with their corresponding translations prior to a basic translation instruction. This methodology allows us to investigate the potential benefits of leveraging a CSI dictionary within the context of LLM-based translation models.
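A minimal sketch of how such a CT prompt can be assembled is given below; the instruction text is shown in English for readability, whereas the best-performing prompts in this paper are written in the target language (SS5.1), and the exact wording and the example CSI pair are illustrative assumptions rather than the paper's verbatim template.

```python
def ct_prompt(source_sentence, csi_lexicon, src_lang="English", tgt_lang="Chinese"):
    """CSI Translation (CT) prompting: prepend dictionary entries to the instruction.

    csi_lexicon: list of (source CSI, reference target translation) pairs, e.g.,
    taken from the Wikidata labels collected in Section 2.4.
    """
    entries = "\n".join(f"{src} -> {tgt}" for src, tgt in csi_lexicon)
    return (
        f"The {tgt_lang} translation of the item is as follows:\n{entries}\n"
        f"Only translate the following {src_lang} text to {tgt_lang}: {source_sentence}"
    )

# Example usage with a hypothetical CSI dictionary entry:
prompt = ct_prompt("He ordered a plate of poutine.", [("poutine", "肉汁奶酪薯条")])
```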
### External CSI Explanation (CE)
CSIs may not have a direct equivalent in the target language's culture when the concepts are not commonly used by the target-language speakers. Therefore, when translating CSIs, a direct translation may not be always available, and it becomes necessary to translate based on the explanation of the CSI to assist the target audience in better understanding the content. To assess the impact of providing explanations on translation performance, we include the CSI description obtained from Wikipedia in the prompt before the basic translation instructions. This enables us to investigate whether offering additional explanations of CSIs can enhance the performance of machine translation.
Figure 3: Data characteristics on regions (**Outside**) and categories (**Inside**).
### Self-Explanation (SE)
In addition to _external knowledge_, we also examine the _internal knowledge_ of LLMs for explaining the meaning of CSIs. Notably, Chain-of-Thought (CoT) prompting has been shown to be effective in eliciting LLMs' internal knowledge for complex reasoning tasks Wei et al. (2022); Kojima et al. (2022). Inspired by this prompting strategy, we treat the explanation of CSIs in a source sentence as intermediate reasoning steps before translating the whole sentence. Therefore, we investigate a zero-shot explanation prompting strategy to elicit LLMs' internal knowledge about CSIs in two steps for machine translation. In the first step, we prompt the LLM to explain the meaning of all CSIs in the source sentence by using a prompt template, i.e., "_Please explain [CSI] in [Source Sentence]._" In the second step, we ask the LLM to translate the whole sentence by combining the LLM's explanation with another prompt instruction, i.e., "_According to your understanding to [CSI], only translate the following [Source Language] text to [Target Language]: [Source Sentence]._" By comparing self-explanation with external CSI explanations (SS3.3), we aim to examine how well LLMs can understand the cultural nuances of the CSIs in the sentence context.
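The two-step flow can be sketched as follows, with `chat` standing in for any chat-completion call (e.g., to ChatGPT) that maps a message history to a reply; the function name and signature are assumptions, not a specific library API, while the prompt templates are the ones quoted above.

```python
def self_explanation_translate(chat, sentence, csis, src_lang="English", tgt_lang="Chinese"):
    """Self-Explanation (SE): explain the CSIs first, then translate conditioned on it."""
    csi_list = ", ".join(csis)
    history = [{"role": "user",
                "content": f"Please explain {csi_list} in {sentence}."}]
    explanation = chat(history)                 # step 1: elicit internal knowledge
    history += [{"role": "assistant", "content": explanation},
                {"role": "user",
                 "content": (f"According to your understanding to {csi_list}, only translate "
                             f"the following {src_lang} text to {tgt_lang}: {sentence}")}]
    return chat(history)                        # step 2: translate the whole sentence
```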
### Self-Correction (SC)
It has been shown that LLMs can correct their own mistakes if instructed to perform generations with specified constraints Ganguli et al. (2023). Building upon this finding, we investigate the potential benefits of self-correction in the translating of CSIs within sentence content. Specifically, to encourage the LLM to identify the target-language culture for native speakers, we design the following instruction: "_Please firstly think about how to ensure that the culturally relevant words in the sentence are translated into [Target Language] words that [Target Language] readers can understand._". In the first step, we prompt the LLM with the basic translation instruction plus the instruction to enable the LLM to generate analysis to achieve the goal. Next, we ask the LLM to translate the whole sentence according to its analysis, i.e., "_According to your analysis, please translate the sentence._" This allows LLM to analyze the translation approach for culturally related content and adjust translation results based on its analysis.
### Self-Ranking (SR)
Finally, as LLMs are probabilistic generative models, they can be prompted to sample different translation outputs. Furthermore, even for multiple semantically equivalent prompts, the word choice in the prompts can have a significant impact on the performance of LLM models in translation tasks. In order to achieve more consistent and reliable results, we examine a self-ranking method Wu et al. (2023) to instruct LLMs to rank their translation outputs. Specifically, we prompt the model to first generate multiple potential translations, i.e., "_Please give [Generated Number] most likely translations, and ensure [CSI] in each result to correspond to different translations_". The Generated Number is a hyper-parameter for the number of translations to generate. Subsequently, we ask the LLM to select the most optimal one according to our predetermined optimization goal for cultural awareness, i.e., "_Please select the best translation result. The translation of the word [CSI] in the sentence should be the closest to its explanation, and the meaning of the word is most likely to be understood by [Target Language] readers_".

Table 2: Example prompts and target outputs for each prompting strategy.
## 4 Experimental Settings
Methods in Comparison:To fully evaluate the efficacy of LLM translations for cultural nuances, we compare the different prompting strategies (SS3) on a tuning-free LLM, as well as an in-house fine-tuned MT model and a commercial MT system.
* **Prompting LLMs**: We examine the different prompting strategies on OpenAI's ChatGPT which is fine-tuned from GPT-3.5 with instructions. We also include a vanilla GPT-3.5 (text-davinci-003) for comparison.
* **Fine-tuned MT**: We fine-tune a pre-trained mBART model (mbart-large-cc25) on 16 million English-Chinese parallel data from WMT 2019, consisting of data from the news commentary v15 and the United Nations parallel corpus v1.0. We use a learning rate of 3e-5, a polynomial decay learning rate scheduler, and 2500 warmup steps to train the mBART model with an Adam optimizer for 100,000 updates.
* **Commercial MT**: We use the Google Translate engine in our comparison.
Automatic Evaluation Metrics: We first evaluate the translation outputs using traditional automatic metrics such as BLEU Papineni et al. (2002), BLEURT Sellam et al. (2020), and COMET Rei et al. (2020). However, as we focus on the translation quality of CSIs, the existing coarse-grained metrics may not capture the subtlety in translating these CSIs. Therefore, we propose a fine-grained evaluation metric called CSI-Match, which first identifies translated CSIs in the system outputs by a word-alignment tool awesome-align5 Dou and Neubig (2021) and uses a fuzzy string match tool FuzzyWuzzy6 to compare against the reference CSI translations from Wikidata by calculating Levenshtein distance Levenshtein et al. (1966).
Footnote 5: [https://github.com/neulab/awesome-align](https://github.com/neulab/awesome-align)
Footnote 6: [https://github.com/seatgeek/fuzzywuzzy](https://github.com/seatgeek/fuzzywuzzy)
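A rough sketch of CSI-Match is given below; the per-sentence aggregation (averaging fuzzy-match ratios over CSIs) is our assumption about the details, and extracting the hypothesis spans with awesome-align is left to an upstream step.

```python
from fuzzywuzzy import fuzz  # fuzzy string matching based on Levenshtein distance

def csi_match(csi_pairs):
    """Score CSI translation quality for one sentence.

    csi_pairs: list of (hypothesis span, reference translation) pairs, where the
    hypothesis span is the target-side text aligned to the source CSI and the
    reference comes from Wikidata labels/aliases.
    Returns a score in [0, 100]; 100 means every CSI matches its reference exactly.
    """
    if not csi_pairs:
        return 0.0
    return sum(fuzz.ratio(hyp, ref) for hyp, ref in csi_pairs) / len(csi_pairs)

# Example: a partially matching CSI translation scores between 0 and 100.
print(csi_match([("poutine fries", "poutine")]))
```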
Human Evaluation Setting: We randomly select 10% of samples from our collected dataset and engage a bilingual annotator, who is also a native Chinese speaker, to assess the accuracy of CSI translations. To facilitate a thorough analysis, we categorize the level of accuracy into the following four distinct groups. We also provide a few examples to educate the annotator before the evaluation.
* **Correct**: The system translation precisely matches the reference translation;
* **Copy**: The system does not translate the CSIs and copy the source string of CSIs in the output;
* **Understandable**: Although not a perfect match, the system translation remains understandable to the native speaker and conveys the key meaning of the source sentence;
* **Wrong**: The translation is entirely incorrect.
## 5 Results and Analysis
Our study of LLM-based MT systems includes four parts. First, inspired by the intricate connection between language and culture, we start exploring the influence of prompting languages on LLM translation performance (SS5.1). Our goal is to investigate if the target language benefits LLM in understanding the target language culture. After identifying the optimal prompting language, we then apply automatic evaluations to compare the LLM translation with existing popularly-used MT systems (SS5.2). To further provide a fine-grained assessment of the translation quality regarding cultural-relevant concepts, we conduct a human evaluation of LLM translation with different prompting strategies and traditional NMT systems (SS5.3). Finally, we show the breakdown results of the best LLM translation strategy on our cultural parallel dataset in terms of cultural regions (SS5.4) and categories (SS5.5). Each part is described in detail below.
### Impacts of Prompting Languages
Figure 4 shows the fine-grained human evaluation of the basic instruction and external knowledge prompting strategies using the source and target languages to construct the prompts. We find that using the target language is consistently better than the source language for constructing prompts in LLM translation performance. This is further validated by the automatic evaluation in Table 6 (SSB). Therefore, we stick to the target language for constructing prompts in the following experiments.
### Overall Automatic Evaluation
Table 3 displays the results of automatic evaluations among various MT systems. We find that Google Translate obviously outperforms mBART and GPT-3.5 on all metrics by a large margin. However, when further fine-tuned with natural language instructions, ChatGPT with basic instruction prompting significantly improves over GPT-3.5, narrowing the gap with Google Translate or even outperforming Google Translate by 1.8 COMET points. Comparing the strategy of using external knowledge in prompts (i.e., CT and CE), we observe that LLMs can easily leverage external knowledge in the target-language prompts, significantly boosting the accuracy of CSI translation. Notably, a direct addition of the CSI translation in CT performs better than the CSI explanation in CE. Lastly, we compare prompting strategies eliciting LLMs' internal knowledge (i.e., SE, SC, SR). We find that SE, SC, and SR all fail to improve over BI. This indicates that ChatGPT itself lacks accurate internal knowledge of low-frequency CSIs and still struggles with understanding the cultural nuances even with the complex prompting strategies examined in this study.
### Human Evaluation on Prompt Strategies
The overview of human evaluation of MT systems is shown in Figure 5. In general, we observe consistent performance differences among MT systems between human and automatic evaluation (SS5.2); e.g., the LLM with the CT prompts achieves the best translation performance in both. Further looking into the different levels of translation quality, we find that external prompting strategies (i.e., CT and CE) significantly outperform Google Translate in translating CSIs, although their performances on traditional automatic evaluation metrics (i.e., BLEU, BLEURT, COMET) are similar.
To examine the correlation between automatic metrics and human evaluation metrics, we compute the coefficients between automatic metrics and human-annotated CSI translation accuracy. Table 4 demonstrates that CSI-Match exhibits the strongest relationship with the human judgment on CSI translation quality, indicating its potential as an automatic metric for evaluating CSI accuracy.
### Performance Across Cultural Regions
In Figure 6(a), we compare the basic prompting strategy (BI) with the external knowledge prompting strategies (CT, CE) and break down the translation accuracy of CSIs (i.e., #Correct/#Total Count) in human evaluation over geographical regions. First, we find that BI performs worst on CSIs in the Asia culture, which may be less represented than Western cultures in the pre-training data of LLMs. Besides, the CT prompting, with the integration of CSI translation, helps the model eliminate differences between geographical regions.

Figure 4: Source vs target (En vs Zh) language prompts (**Up**) and prompt-based LLM translation (**Bottom**).

| Method | BLEU | BLEURT | COMET | CSI-Match |
| --- | --- | --- | --- | --- |
| mBART | 15.0 | 0.441 | 67.9 | 12.0 |
| Google | **32.2** | **0.561** | **77.1** | **16.3** |
| GPT-3.5 | 21.6 | 0.482 | 72.0 | 12.1 |
| **BI** | 26.2 | 0.550 | 78.9 | 29.1 |
| **CT** | **28.7** | **0.569** | **79.5** | **37.5** |
| **CE** | 26.8 | 0.554 | 77.8 | 32.7 |
| **SE** | 21.2 | 0.542 | 75.3 | 26.3 |
| **SC** | 23.7 | 0.540 | 76.6 | 30.2 |
| **SR** | 19.3 | 0.490 | 70.8 | 29.4 |

Table 3: Automatic evaluation on basic MT methods (**Up**) and prompt-based LLM translation (**Bottom**).

| Metrics | Kendall's | Pearson's |
| --- | --- | --- |
| BLEU | 24.5 | 44.4 |
| BLEURT | 45.8 | 44.4 |
| COMET | 54.9 | 50.0 |
| CSI-Match | **81.2** | **72.2** |

Table 4: Coefficients between automatic metrics and human evaluation metric - CSI translation accuracy
### Performance Across Cultural Categories
Finally, we also compare the BI, CT, and CE prompting strategies regarding the CSI translation accuracy in human evaluation across various categories. The results in Figure 6(b) show that the CT prompting strategy outperforms BI in all four categories and underperforms CE only in the social category. Note that the social category has only 10 CSIs, and the performance in this category may be affected by selection randomness.
## 6 Related Works
Cultural-aware machine translation: As languages and cultures are highly intertwined, there is a growing desire to empower cultural awareness of machine translation systems Nita (1986); Ostler (1999); Hershcovich et al. (2022). However, as cultural nuances are subtle, collecting culturally sensitive data Akinade et al. (2023) remains costly and time-consuming. Besides, it is also challenging to perform a human-centered evaluation of the cultural nuances Liebling et al. (2022). Existing studies have proposed strategies to evaluate cultural awareness of traditional MT systems by grounding images Khani et al. (2021), adapting entities Peskov et al. (2021), or targeting dialects Riley et al. (2022). Different from evaluating traditional MT systems, we focus on evaluating the cultural awareness of LLM-based translation.
Figure 5: Overall Human Evaluation

Figure 6: CSI Accuracy (#Correct/#Total Count) in Human Evaluation

LLM-based machine translation: Large language models, such as GPT-3 Brown et al. (2020), have proven effective in machine translation for various high-resource languages Hendy et al. (2023); Jiao et al. (2023). Particularly, a few recent studies have investigated the performance of LLM-based MT, including formality control of translation outputs Garcia and Firat (2022), in-context translation ability during pre-training Shin et al. (2022), and multilingual translation Scao et al. (2022). However, the exploration of LLM-based MT on culturally sensitive texts is still missing.
External knowledge for MT:There have been multiple threads of research efforts on integrating external knowledge such as bilingual translation lexicons for neural machine translation systems, including probability interpolation of lexicons Arthur et al. (2016); Khandelwal et al. (2021), data augmentation by back-translations Hu et al. (2019), decoding with a phrase memory Wang et al. (2017), and pre-training with an entity-based denoising objective Hu et al. (2022). Despite the effectiveness, these methods require further fine-tuning of the original MT systems, while we focus on tuning-free methods for integrating external knowledge for LLM-based MT in this study.
## 7 Conclusion
In this paper, we propose a novel data curation pipeline to construct a culturally sensitive parallel corpus for evaluating the cultural awareness of MT systems. Our corpus provides rich annotations of cultural-specific entities as well as their metadata. We design simple but effective prompting strategies to enhance LLM-based translation. Despite the effectiveness, several challenging questions remain open. First, it is non-trivial to incorporate cultural-specific information beyond a single entity such as discourse information. Besides, our prompting strategies leverage mostly external cultural knowledge in the form of texts. How to leverage multimodal knowledge from images and structured knowledge graphs to resolve cultural ambiguity deserves further investigation. Finally, automatic evaluations of cultural nuances for LLM-based machine translation are also challenging, as LLMs tend to generate lengthy target-language explanations which may be different from translation references in parallel corpora.
## Limitations
Our work takes one step further toward understanding and evaluating the cultural awareness of LLM-based machine translation. We provide a culturally sensitive parallel corpus with rich annotations on cultural-specific items. However, our study is limited to the language pair (i.e., English-Chinese) used in this corpus. We will provide our code repository to facilitate the future adaptation of our pipeline for more language pairs in diverse language families. Besides, as we focus on the evaluation of cultural-specific items in this study, the evaluation of cultural awareness beyond a single entity deserves further investigation.
## Ethical Considerations
Although our study designs a suite of simple but effective prompting strategies to enhance the cultural awareness of LLM-based machine translation, we still observe the weakness of LLM-based translation on cultural concepts in certain regions (e.g., Asia) and hallucinations on low-frequency entities. Potential usage of these LLM translation outputs may still result in misinformation spread. Before deploying our methods to create reliable content such as creating translations of Wikipedia articles, practitioners should ensure another round of human post-editing.
|
2306.15566 | MTFS: a Moving Target Defense-Enabled File System for Malware Mitigation | Ransomware has remained one of the most notorious threats in the
cybersecurity field. Moving Target Defense (MTD) has been proposed as a novel
paradigm for proactive defense. Although various approaches leverage MTD, few
of them rely on the operating system and, specifically, the file system,
thereby making them dependent on other computing devices. Furthermore, existing
ransomware defense techniques merely replicate or detect attacks, without
preventing them. Thus, this paper introduces the MTFS overlay file system and
the design and implementation of three novel MTD techniques implemented on top
of it. One delaying attackers, one trapping recursive directory traversal, and
another one hiding file types. The effectiveness of the techniques are shown in
two experiments. First, it is shown that the techniques can delay and mitigate
ransomware on real IoT devices. Secondly, in a broader scope, the solution was
confronted with 14 ransomware samples, highlighting that it can save 97% of the
files. | Jan von der Assen, Alberto Huertas Celdrán, Rinor Sefa, Gérôme Bovet, Burkhard Stiller | 2023-06-27T15:44:21Z | http://arxiv.org/abs/2306.15566v2 | # _Mtfs_: a Moving Target Defense-Enabled
###### Abstract
Ransomware has remained one of the most notorious threats in the cybersecurity field. Moving Target Defense (MTD) has been proposed as a novel paradigm for proactive defense. Although various approaches leverage MTD, few of them rely on the operating system and, specifically, the file system, thereby making them dependent on other computing devices. Furthermore, existing ransomware defense techniques merely replicate or detect attacks, without preventing them. Thus, this paper introduces the _MTFS_ overlay file system and the design and implementation of three novel MTD techniques implemented on top of it: one delaying attackers, one trapping recursive directory traversal, and another one hiding file types. The effectiveness of the techniques is shown in two experiments. First, it is shown that the techniques can delay and mitigate ransomware on real IoT devices. Secondly, in a broader scope, the solution was confronted with 14 ransomware samples, highlighting that it can save 97% of the files.
Ransomware, Moving Target Defense, IoT
## I Introduction
Malware is one of the most prominent attack vectors in the still highly active threat landscape that today's enterprises find themselves in. The frequency at which malware-based attacks are launched has increased by 358% in 2020 [1]. Among malware-based attacks, ransomware is a frequent threat, leading to devastating impacts. In 2022, 20% of all cybercrime were attributed to ransomware attacks, where the average cost ranges between 1 and 8 Million USD [1, 2].
Fixing the vulnerabilities that are exploited in these security breaches is touted as the most important measure to prevent such threat events. However, the reality with respect to how connected devices are operated and administrated draws a dramatically different picture, with security patches either not being targeted, developed, or deployed. With respect to ransomware, it was observed that many attacks exploit system flaws that were known since many years, some vulnerabilities dating back to 2012 [3]. All of this magnifies the importance of a defense in depth approach, where multiple security control layers are deployed on a device. For example, an industrial IoT device may not allow the modification of the firmware by the operator without voiding the manufacturer's warranty. In this case, there is no option to fix the vulnerability, and the only way to improve security is to add additional defense layers.
One promising paradigm to develop additional security controls is Moving Target Defense (MTD). By creating dynamicity (_i.e.,_ moving) in the elements comprising the attack surface (_i.e.,_ target), the complexity for the attacker is increased, thereby decreasing the likelihood of a threat event being successful (_i.e.,_ defense). However, as identified by a recent survey, MTD for IoT is a promising yet immature defense paradigm. The key limitations identified include the lack of solutions that employ MTD at the operating system (OS) level, since most solutions rely on network-based approaches, a dependency that may not be applicable for every defense and threat model. For example, a ransomware attack may not be mitigated from the network point of view, once encryption keys are delivered and the encryption is ongoing. Furthermore, existing MTD approaches are considered to be of limited maturity, since only few approaches have actually been implemented and tested in real scenarios.
To cover the previous challenges, this paper introduces _MTFS_, a file system specifically created as a platform to home MTD techniques on the OS-level. Secondly, to present the effectiveness of _MTFS_, three deceptive MTD techniques (_i.e.,_ combating directory traversal, file access, and file type identification) are implemented into _MTFS_ to mitigate ransomware breaches in Linux devices. The defense layer provided by _MTFS_ and the three proposed MTD techniques have been evaluated in a real resource-constrained device affected by a ransomware. The selected device is a Raspberry Pi 3, acting as a radio frequency spectrum sensor of the well-known ElectroSense platform. Then, to highlight that the file system and the techniques are portable to other Linux devices, while still being effective, a container-based testbed is specifically created to evaluate the system. With this testbed, _MTFS_ is evaluated against a large set of real-world ransomware samples. The experiments conducted in the IoT device and the testbed showed that the _MTFS_-based MTD approach can successfully operate on a limited number of resources, that the defense model holds up in a realistic deployment scenario, and that it can mitigate numerous real-world ransomware samples.
The remainder of the paper is structured as follows. Section II introduces the MTD field and highlights the lack of works in the described direction. Section III introduces the _MTFS_ file system and the prototypical mitigation techniques, which are evaluated in two experiments presented in Section IV. Finally, Section V summarizes the findings.
## II Background and Related Work
Moving Target Defense (MTD) aims to decrease the attack success probability by increasing the complexity of the attack surface by making it more dynamic. The underlying principle tackles the attack on static targets, which exhibits an asymmetry where attackers have all the time they need to attack a target. A recent review of the MTD field for IoT devices revealed a lack of approaches that do not rely on third-party actors while presenting a mature implementation and evaluation [16]. Aside from concluding on the immaturity of existing approaches, this survey introduced the key design decisions of _What_, _How_, and _When_ to apply a transformation to the attack surface. _What_ refers to the element of the attack surface that will be changed dynamically. _How_ defines a transition strategy (_i.e.,_ making it more diverse, shuffling between sets of parameters, and increasing redundancy) which can be invoked based on a specific interval or event, which constitutes the _When_. As highlighted in Table I, numerous approaches have been presented that all focus on specific attack types, while defining _What_, _When_, and _How_ to alter to mitigate the attack. For all of these approaches, the implementation is crucial to understand their applicability in a real-world setting.
From the attack vector perspective, [4, 5] introduce diversification-based approaches that aim to increase the defense capabilities without specifying a specific attack vector. Interestingly, they are implemented in different layers of the defense host. While [4] focuses on the software stack in the browser, [5] aims to implement execution platform diversity on the hardware and OS-level. [6, 7, 8] all rely on a shuffling strategy. They tackle different attack types such as Denial-of-Service, Botnet, and browser-based ones, but without implementing an approach in the operating system. Specific, application-layer MTD approaches that rely on event-based diversification are [9, 10]. As such, they both address common threat vectors launched against machine learning based systems. Finally, [12, 13, 14] present approaches to combat reconnaissance attacks, while leveraging specific network devices, such as Access Points and Controller Area Network Buses. Out of them, only [12] relies on the operating system. However, in [12], the whole operating system (or distributions of the same operating system) is exchanged. Most closely related to the work at hand, [15] presents an MTD framework that demonstrated how ransomware can be mitigated by shuffling between a set of (real) files, that are created on the application-level in user space.
Beyond the field of MTD, the advantages of a file system approach have been explored more clearly. As shown in Table II, a number of papers explored the suitability of using a file system as a security measure. More specifically, related work in this area demonstrates that cyberattacks, such as ransomware, can be efficiently detected from the file system perspective. Furthermore, some approaches combine this detection with a file recovery component. This reinforces the overall view that file system-based approaches pose an opportunity to implement MTD. In that sense, it can be demonstrated that it is possible to also actively and dynamically mitigate these attacks, and not just detect them and recover from their effects.
In summary, related work in the MTD field does not cover approaches leveraging the file system. In the broader sense, OS-level implementations like [5, 12] have explored the same abstraction level and trust model as this work. However, there are no MTD approaches implemented in the file-system abstraction of an OS, although approaches such as [15] show the applicability of using file sets to combat malware. Since approaches outside the MTD paradigm have shown that the file system is a viable anchor to implement security mechanisms, there is an opportunity to explore this aspect of the operating system for dynamic, proactive defense. This is critical, since the operating system is, in many scenarios, the only dependable area of the attack surface on which to implement MTD without relying on other actors that may not be connected to the threat model.
## III The MTFS Approach
Due to the limitations outlined in the previous section, this paper tries to solve the problem of detecting and mitigating ransomware attacks as close as possible to the target asset (_i.e.,_ the data held by the files), without losing the semantics (_e.g.,_ calling process, operation type) of the attack behavior. For example, implementing such an approach on the block storage abstraction would ignore the semantics of the file abstraction.
To achieve the goals expressed through this problem statement, _MTFS_ stands as a novel file system specifically created to detect and prevent or mitigate malware-based attacks. Fig. 1 presents the architectural elements that underlie _MTFS_, which follow the Linux file system architecture. The key elements added by _MTFS_ are a set of _MTD techniques_ which can be deployed based on _Detectors_, both of which are implemented in an _Overlay File System_ that uses the _FUSE_ library.
To introduce the interaction between these components, _MTFS_ makes use of the FUSE library to receive and respond to system calls, enabling full visibility into all operations carried out on the mount point of the file system. In addition, fine-grained control over how a process is being serviced can be implemented. This is advantageous for multiple reasons. First, this allows analysis of the call, before executing it. Secondly, it allows that a file operation is controlled to decide how to proceed with the call. In contrast to other file-based approaches, damage can theoretically be prevented before it is done. As such, _MTFS_ supports the integration of a set of _Detectors_ and _MTD techniques_. The former are simply a set of strategies that receive for each process the system call specifications. These components can then evaluate in a bespoke manner how to analyze the call, for example, by evaluating them against a set of policies or by classifying them using a Machine-Learning model. _MTD techniques_, on the other hand, are strategies that define specific actions to be invoked. With the architecture presented herein, they can also be invoked by outside components. This is especially useful when considering the plethora of existing classifiers and anomaly detectors that can provide insight from other sources (_e.g.,_ performance metrics, non file system-related system calls). To achieve this integration and to provide users with a simple user space implementation, _MTFS_ uses the FUSE library, which provides client libraries in many high-level programming languages (_e.g.,_ Python, Java, Go).
The remainder of the architecture consists of the Linux file system abstraction. Here, all user space applications, including malicious ones like ransomware, access data through the virtual file system, which is an abstraction present in the Linux kernel. Essentially, this layer abstracts the complexity of different file systems and storage media. Communication between user and kernel space is achieved with system calls, that allow operations (_e.g.,_ OPEN, CLOSE, STAT, READ, WRITE) to be carried out. Based on the files referenced by the file descriptors, the operations can be routed to the underlying implementation of the file system (_e.g.,_ ext4, FAT, FUSE). These implementations then provide access to the actual data.
To demonstrate the feasibility of the proposed file system-based approach and the effectiveness of building a security control within the file system, _MTFS_ was implemented as an overlay file system. Furthermore, three novel MTD techniques were designed and implemented on top of _MTFS_ to enable experiments with real ransomware samples. The overlay file system is a stacked file system that proxies requests to any underlying file system. _Ext4_ is the default file system in the majority of Linux distributions, however, any mountable file system works as an underlay file system. The overlay layer is a file system that forwards both the request and the response from the base file system to the process. In addition to forwarding requests, the three MTD techniques represent the potential modifications that can be carried out in-place. The advantage of such an overlay file system is that all operations can be intercepted and modified at will without inheriting the complexity of a native file system. To implement a prototype, the go-fuse library written in the Go programming language was used.
The first of the integrated techniques (MTD_DELAY) presents the simplest deception mechanism. Here, any requests to the overlay file system are serviced by the underlying file system. However, different ranges of delays can be specified to delay or hinder the attacker. Naturally, this does not completely prevent any attacks by itself. However, combined with another detection or prevention system, it can extend the time available for defense. The mean time to attack (MTTA) is therefore increased, which is an important metric when discussing MTD techniques [22].
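A minimal user-space sketch of the overlay with MTD_DELAY is shown below using Python's fusepy bindings; the paper's prototype uses the go-fuse library in Go, so this is only an illustration of the same idea, and the delay range and the choice of which operations to delay are assumptions.

```python
import os, sys, time, random
from fuse import FUSE, Operations  # fusepy

class MTFSOverlay(Operations):
    """Pass-through overlay: forwards calls to an underlay directory, adding a delay."""
    DELAY = (0.05, 0.25)  # seconds; arbitrary illustrative range for MTD_DELAY

    def __init__(self, root):
        self.root = root

    def _full(self, path):
        return os.path.join(self.root, path.lstrip("/"))

    def getattr(self, path, fh=None):
        st = os.lstat(self._full(path))
        keys = ("st_mode", "st_nlink", "st_size", "st_uid", "st_gid",
                "st_atime", "st_mtime", "st_ctime")
        return {k: getattr(st, k) for k in keys}

    def readdir(self, path, fh):
        return [".", ".."] + os.listdir(self._full(path))

    def open(self, path, flags):
        return os.open(self._full(path), flags)

    def read(self, path, size, offset, fh):
        time.sleep(random.uniform(*self.DELAY))   # MTD_DELAY on reads
        os.lseek(fh, offset, os.SEEK_SET)
        return os.read(fh, size)

    def write(self, path, data, offset, fh):
        time.sleep(random.uniform(*self.DELAY))   # MTD_DELAY on writes
        os.lseek(fh, offset, os.SEEK_SET)
        return os.write(fh, data)

if __name__ == "__main__":
    # Usage: python mtfs_overlay.py <underlay_dir> <mountpoint>
    FUSE(MTFSOverlay(sys.argv[1]), sys.argv[2], foreground=True)
```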
The second MTD technique MTD_INF presents malware samples with an _Infinite Directory Graph_. This is based on the analysis of multiple ransomware samples, which all conduct a depth-first search to recursively find all files in a directory to be encrypted. Here, the technique shuffles between a set of real files and a self-linking directory. If a malware sample invokes the opendir() command, the technique responds with a subset of the underlay file system and a special directory named !, which is the first printable character of the ASCII table. Invoking opendir() on this directory, in turn, yields the directory itself. Thus, conducting a depth-first search on this directory leads to a vicious circle, potentially disrupting or even preventing the encryption process.
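Building on the pass-through sketch above (reusing its imports and the MTFSOverlay class), MTD_INF can be approximated by advertising a self-linking trap directory; the path-mapping logic below is an assumption rather than the paper's implementation, with only the choice of `!` taken from the text.

```python
class MTDInfinite(MTFSOverlay):
    """MTD_INF sketch: every listing also advertises a trap directory '!', and
    listing the trap yields the trap again, so depth-first traversal never ends."""
    TRAP = "!"

    def _is_trap(self, path):
        return path.rstrip("/").split("/")[-1] == self.TRAP

    def _full(self, path):
        # Map any '!' components back to the real parent directory on the underlay.
        parts = [p for p in path.split("/") if p and p != self.TRAP]
        return os.path.join(self.root, *parts)

    def readdir(self, path, fh):
        if self._is_trap(path):
            return [".", "..", self.TRAP]               # the trap contains only itself
        return super().readdir(path, fh) + [self.TRAP]  # advertise the trap everywhere
```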
The third and final technique MTD_SUFFIX is the most elaborated since it inspects responses from the underlay file system and modifies certain patterns. This technique is based on the observation that many ransomware samples adapt their behavior to the file type. [15] demonstrated that it is possible to prevent encryption by changing file suffixes. However, this approach has two limitations. First, changing entries in the real file system is a costly operation. Secondly, malicious software can still detect the type of file by looking at the first bytes (_i.e.,_ magic numbers) of a file. This is where the third technique intercepts the read() system call to randomize this signature. Ransomware, relying on the system to work to achieve encryption, must therefore ignore unknown file types, focusing only on known file types (_e.g.,_ plain text, PDF files).
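In the same style, MTD_SUFFIX can be sketched by scrambling the leading bytes returned by read(); the number of bytes treated as the magic number, and the lack of an allow-list for trusted processes, are assumptions of this illustration.

```python
class MTDSuffix(MTFSOverlay):
    """MTD_SUFFIX sketch: randomize the magic-number bytes so callers cannot
    identify the file type from the file content."""
    MAGIC_LEN = 8  # number of leading bytes to scramble (assumed value)

    def read(self, path, size, offset, fh):
        data = super().read(path, size, offset, fh)
        if data and offset < self.MAGIC_LEN:
            n = min(self.MAGIC_LEN - offset, len(data))
            data = bytes(random.getrandbits(8) for _ in range(n)) + data[n:]
        return data
```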
Fig. 1: User Space Implementation of _MTFS_
The three MTD techniques described here do not implement an intelligence component by themselves, they expect that the malware sample can be detected on the device. Nevertheless, all techniques can be deployed proactively (_i.e.,_ without an event). Still, an existing ML-based classification from [15] was integrated to conduct experiments with the file system. This classification system monitors the host device's behavior with respect to resource consumption, expressed by roughly 80 parameters collected from different event families such as network, memory, file system, CPU, process scheduler, or device drivers [15].
## IV Evaluation
_MTFS_ was assessed in two rounds of experiments focusing on ransomware. First, the performance of the platform was evaluated by deploying it in a real sensor of the ElectroSense crowdsensing platform. Secondly, _MTFS_ was assessed by crafting a tailored, server-based testbed to evaluate as many ransomware samples as possible.
### _Real-World Scenario_
In the real-world scenario, a Raspberry Pi 4 Model B Rev 1.4 was used, running the official ElectroSense image, which enables data collection from the radio equipment [15]. The Raspberry Pi board includes a 64 GB class 10 microSD card with a performance rating of 10. A subset of 1000 files from the govdocs [23] dataset was deployed in the home directory of the device. Two ransomware samples, _RansomwarePoC_ and _DarkRadiation_, were used for testing in the real environment. For each sample, the encryption performance in the real file system and in each of the three MTD techniques was assessed, as shown in Table III. Without any protection, all files are encrypted by the two samples, although _DarkRadiation_ is substantially faster. When mounting the file system, _RansomwarePoC_ was not able to encrypt any files in any of the three cases. For the latter two strategies, the sample crashes, while for the first one it spends significant time trying to traverse the file system. _DarkRadiation_ was able to encrypt files under the delaying strategy, although the encryption duration was prolonged. For the other strategies, no files were lost.
### _Testbed Scenario_
To provide evidence of the effectiveness of _MTFS_ considering heterogeneous ransomware samples, a more efficient testbed is needed that allows experiments to be executed in a highly observable, reproducible, controllable, and explicit thereby accountable manner. Thus, Fig. 2 shows the virtualized testbed developed in this work to execute ransomware samples and security controls.
In this configuration of the testbed, each experiment takes as input an Ubuntu 22.04 base image, and a set of benign files mounted in the home directory. [23] provides a file corpus of various file types. The files are mounted in a flat file system, where each of the twenty folders in the home directory holds a few hundred megabytes of data. The file corpus is implemented as a ZFS dataset, which enables rapid snapshot creation and comparison later on. To run an experiment, a container of the base image is spawned, the ZFS dataset is mounted, and the ransomware sample is executed for a maximum of five minutes. Five minutes was established as a suitable duration after implementing a simplified ransomware, which encrypts files in a consecutive and single-threaded execution. After the experiment is concluded, a snapshot from the ZFS dataset is created and compared on a per-file basis to the initial state of the dataset. This makes it possible to measure accurately, for each file, when and how it was changed (_i.e.,_ modified, moved, deleted, or unmodified). For each malware sample, two experiments can be executed: a baseline experiment where the ransomware is executed without interruption, and one where _MTFS_ is mounted on top of the ZFS dataset. Finally, for each sample, the number of modified bytes per second is computed.

Fig. 2: Architecture for the Ransomware Testbed
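As a rough illustration of the per-file comparison, the sketch below hashes the file tree before and after an experiment; the actual testbed diffs ZFS snapshots, so this is only an approximation of that step, and the labels and byte counting are assumptions.

```python
import hashlib, os

def snapshot_state(root):
    """Map each relative file path to (size, SHA-256 digest) of its content."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            state[os.path.relpath(path, root)] = (os.path.getsize(path), digest)
    return state

def classify_changes(before, after):
    """Label each original file and count the bytes that were modified or removed."""
    labels, changed_bytes = {}, 0
    for rel, (size, digest) in before.items():
        if rel not in after:
            labels[rel] = "deleted_or_moved"
            changed_bytes += size
        elif after[rel][1] != digest:
            labels[rel] = "modified"
            changed_bytes += size
        else:
            labels[rel] = "unmodified"
    return labels, changed_bytes
```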
To gather results, all components of the testbed can be recreated with a single command that creates and provisions a virtual machine. Here, all experiments have been deployed in an Ubuntu 22.04 machine provided by Virtualbox on an Arch Linux host. The host operating system runs an AMD Ryzen 5700G processor running at 4.7 GHz. To ensure that the host storage is not the bottleneck during encryption, the ZFS dataset is executed in a ZFS pool configured as a RAID 1 pool, using two NVMe storage devices (WesternDigital SN550 NVMe SSD). From a file-system perspective, all 20741 files (10460 MB) can be traversed and read in 9 seconds (\(\approx\)10 Gbit/s) and overwritten in 27 seconds (\(\approx\)3.2 Gbit/s).
Finally, the source code and binaries of 24 ransomware families were obtained from open-source platforms and from the malware database _MalwareBazaar_. Each sample is embedded in a script that allows repeatable execution. From the various samples of 24 ransomware families, only 14 samples are runnable in the server, since some samples focus on encrypting volumes from the _ESXi_ hypervisor and others require connection to the C&C server. Three ransomware samples are implemented in the Go Programming language and another as a simple Bash script. Six samples leverage the Python runtime environment and one runs on the Java Virtual Machine. The remaining eleven samples are available as an Executable and Linkable Format (ELF) binary. This category contains samples involved in recent attacks, such as Hive, Blackbasta, Lockbit, or Clop.
After running each sample against two configurations (_i.e.,_ uninterrupted and with _MTFS_), the results shown in Fig. 3 are obtained. Looking at these results, two aspects can be highlighted. The first and more important conclusion that can be drawn is that for all the samples, _MTFS_ is able to protect the majority of the data present in the dataset. The worst performance was achieved by _RansomwarePoC_, which started to encrypt files before the full set of target files was established. Over all samples considered, 97.01% of the bytes were saved by deploying the file system. Importantly, it has to be noted that in the testbed scenario outlined, _MTFS_ operates under conditions where no detection system is available and where files are structured in a flat file system. In that sense, it can be concluded that _MTFS_ can be used as a honeypot to attract ransomware attacks while actively mitigating the encryption behavior. In a second observation, by comparing the datasets before and after deploying the ransomware it can be seen that there are behavioral differences between samples. Most samples differ in terms of encryption rate. Six samples achieved full encryption. Furthermore, the file exploration strategy of different samples varies. Finally, the modification strategy differs since some encrypt the file in place (_i.e.,_ executing only write() system calls), while others do the same with an additional renaming step. For example, _GonnaCry_ adds the _GNNCRY_ file extension to highlight encryption. Other ransomware implementations create a new file based on the encrypted content and delete the old file.
## V Summary and Future Work
This paper introduced _MTFS_, a file system-based platform which serves as an attack mitigation platform for Moving Target Defense on the file-system abstraction. _MTFS_ follows the Linux virtual file system architecture to implement an overlay file system, relying on _FUSE_ to implement the actual file operations in user space. With that, full control over file operations can be obtained. This allows to implement detection and mitigation strategies in a user space application, enabling tight integration with existing detection approaches that rely on other data sources (_e.g.,_ hardware counters, performance metrics). Based on this file system, three file-system-based MTD techniques (MTD_INF, MTD_SUFFIX, and MTD_DELAY) are proposed - one focusing on delaying operations, one presenting a recursive directory graph, and one modifying the magic numbers of files on the fly so that malicious software cannot determine the file type. To highlight the feasibility of a file system-based approach, _MTFS_ was implemented as an overlay file system over existing systems using the Go programming language.
Fig. 3: File Content Encrypted by Various Ransomware Samples
To assess the actual effectiveness, _MTFS_ and its MTD techniques were evaluated in two heterogeneous scenarios. Experiments performed in the first scenario have shown that the techniques can successfully operate on a real IoT device and mitigate two ransomware samples. In the second scenario, _MTFS_ was deployed in a testbed that was specifically created to test ransomware mitigation systems. The testbed enables execution against many ransomware samples, including 14 samples that were found in real-world malware databases. The results show that the file-based approach can successfully save a large amount of the data to be protected. While ransomware is a continuously evolving threat vector, the testbed developed here is the most extensive one in terms of number of samples.
To further develop this work, an ML-based classification system will be investigated. Specifically, it will be analyzed whether it is possible to detect malware-based attacks at the granularity of individual system calls while buffering any modifying operations.
## Acknowledgment
This work has been partially supported by _(a)_ the Swiss Federal Office for Defense Procurement (armasuisse) with the CyberForce project (CYD-C-2020003) and _(b)_ the University of Zurich UZH.
|
2310.16951 | The Teenager's Problem: Efficient Garment Decluttering With Grasp
Optimization | This paper addresses the ''Teenager's Problem'': efficiently removing
scattered garments from a planar surface. As grasping and transporting
individual garments is highly inefficient, we propose analytical policies to
select grasp locations for multiple garments using an overhead camera. Two
classes of methods are considered: depth-based, which use overhead depth data
to find efficient grasps, and segment-based, which use segmentation on the RGB
overhead image (without requiring any depth data); grasp efficiency is measured
by Objects per Transport, which denotes the average number of objects removed
per trip to the laundry basket. Experiments suggest that both depth- and
segment-based methods easily reduce Objects per Transport (OpT) by $20\%$;
furthermore, these approaches complement each other, with combined hybrid
methods yielding improvements of $34\%$. Finally, a method employing
consolidation (with segmentation) is considered, which manipulates the garments
on the work surface to increase OpT; this yields an improvement of $67\%$ over
the baseline, though at a cost of additional physical actions. | Aviv Adler, Ayah Ahmad, Shengyin Wang, Wisdom C. Agboh, Edith Llontop, Tianshuang Qiu, Jeffrey Ichnowski, Mehmet Dogar, Thomas Kollar, Richard Cheng, Ken Goldberg | 2023-10-25T19:36:28Z | http://arxiv.org/abs/2310.16951v1 | # The Teenager's Problem:
###### Abstract
This paper addresses the "Teenager's Problem": efficiently removing scattered garments from a planar surface. As grasping and transporting individual garments is highly inefficient, we propose analytical policies to select grasp locations for multiple garments using an overhead camera. Two classes of methods are considered: _depth-based_, which use overhead depth data to find efficient grasps, and _segment-based_, which use segmentation on the RGB overhead image (without requiring any depth data); grasp efficiency is measured by _Objects per Transport_, which denotes the average number of objects removed per trip to the laundry basket. Experiments suggest that both depth- and segment-based methods easily reduce Objects per Transport (OpT) by \(20\%\); furthermore, these approaches complement each other, with combined _hybrid_ methods yielding improvements of \(34\%\). Finally, a method employing _consolidation_ (with segmentation) is considered, which manipulates the garments on the work surface to increase OpT; this yields an improvement of \(67\%\) over the baseline, though at a cost of additional physical actions.
## I Introduction
We introduce the "Teenager's Problem": removing a large number of scattered garments from a surface (e.g. the floor of a teenager's room, or a work surface) in the shortest time. This problem has applications in hotels, retail dressing rooms, garment manufacturing, and other domains where heaps of garments must be manipulated efficiently.
In this paper, we first formalize the Teenager's Problem and then consider several methods to solve it. Consider Fig. 1 with multiple garments on a work surface. Given an overhead RGB or RGBD image, what robot pick-and-place actions would minimize the total time to remove all of the garments? Removing individual garments, one at a time, would be inefficient. Therefore the robot should use the deformable nature of garments and pick multiple garments at a time.
Given a scene like the one in Fig. 1, one approach is to ignore the separation between individual garments and to treat the whole scene as a homogeneous volume to be removed. This motivates _depth-based_ methods, i.e., methods that use the depth image to infer grasp points that would then remove as much volume as possible. We consider two depth-based methods in this paper. The first method uses _height_ and grasps at the highest point of the scene. The second method estimates a _volume_, by integrating the depth data within a grasp radius, and grasps at the point in the scene that gives the largest estimated volume.
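As a sketch of these two depth-based policies, assuming the depth image has already been converted into a height map over the work surface, and approximating the circular grasp footprint with a square averaging window:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def highest_point_grasp(height_map):
    """Height policy: grasp at the pixel with the largest height above the surface."""
    return np.unravel_index(np.argmax(height_map), height_map.shape)

def max_volume_grasp(height_map, radius_px):
    """Volume policy: grasp where the integrated height within the grasp radius is largest."""
    window = 2 * radius_px + 1
    local_volume = uniform_filter(height_map, size=window)  # mean height ~ volume / area
    return np.unravel_index(np.argmax(local_volume), local_volume.shape)
```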
The depth-based methods are able to identify large heaps and grasp multiple garments at a time, even when some of these garments are completely buried under others and therefore not individually visible to the camera. However, the depth-based methods also miss some good grasps. Particularly, since they do not detect individual garment positions and boundaries, they miss grasp points that have lower height/volume but would still pick multiple garments simultaneously, e.g., points where multiple garment boundaries meet.
Fig. 1: An instance of the _Teenager's Problem_ in the experimental setup; the scale automatically records weight data as experiments are run.

Therefore, a second approach to solving the Teenager's Problem is to distinguish between individual garments and optimize grasps to pick as many of these garments as possible. This motivates _segment-based_ methods, i.e., methods that use the RGB image to segment the individual garments. We consider one such method in this paper. This method uses the Segment Anything Model (SAM) [11] to distinguish the individual garments. Then, given a set of garments and a candidate grasp point, it predicts the probability that those garments will be picked by that grasp. The method then uses these predictions to optimize the grasp to pick the largest number of garments.
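A sketch of this selection step is shown below; `toy_pick_prob` is a stand-in assumption for the per-garment pick-probability model described above (which is not reproduced here), and the candidate grasps are assumed to be given in pixel coordinates.

```python
import numpy as np

def toy_pick_prob(mask, grasp_xy, scale_px=40.0):
    """Toy model: pick probability decays with distance from the grasp to the mask."""
    ys, xs = np.nonzero(mask)
    d = np.min(np.hypot(xs - grasp_xy[0], ys - grasp_xy[1]))
    return float(np.exp(-d / scale_px))

def best_segment_grasp(masks, candidate_grasps, pick_prob=toy_pick_prob):
    """Choose the candidate grasp maximizing the expected number of garments picked."""
    best_grasp, best_expected = None, -1.0
    for grasp in candidate_grasps:
        expected = sum(pick_prob(mask, grasp) for mask in masks)
        if expected > best_expected:
            best_grasp, best_expected = grasp, expected
    return best_grasp, best_expected
```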
While the segment-based method is able to identify grasps that would pick multiple garments simultaneously, it can also miss some good grasps. Particularly, since only the top surface of the garment pile is visible to the camera, garments that are under others are ignored by the segment-based method.
Therefore, in this paper, we also consider a _hybrid_ method that attempts to combine the complementary strengths of the two approaches. The hybrid method, given the current state of the scene, uses either a depth-based method or a segment-based method, depending on the maximum height available.
Finally, we consider methods that make use of _consolidation_ actions, which are movements within the workspace to gather the garments into heaps, before removing them. This improves the efficiency of grasps to transport the objects to the bin at the cost of the time used in consolidation, which can improve efficiency in cases where the removal bin is located far from the work surface.
Experiments suggest that both depth- and segment-based methods significantly improve grasp efficiency, increasing OpT by \(20\%\) over the baseline; furthermore, these approaches complement each other, with combined _hybrid_ methods yielding improvements of \(34\%\); finally, OpT can be further increased, by \(67\%\) over the baseline, by taking additional _consolidation_ actions within the workspace in order to set up extremely efficient transport actions.
We make the following contributions:
* A formalization of the Teenager's Problem.
* Five methods (two depth-based, one segment-based, and two hybrid) to generate effective multi-garment grasps.
* A method that uses heap consolidation along with the segment-based grasp generation method to efficiently solve the Teenager's Problem.
* Physical experiments and data from grasping 2000 garments, that compare the performance of the various methods above, as well as a random baseline.
## II Related work
Our work is related to two lines of work: _manipulation of deformable objects_ and _multi-object manipulation_.
### _Manipulation of Deformable Objects_
Prior work on deformable object manipulation includes folding [3, 8], fabric smoothing [7, 19, 21], bed-making [20], untangling ropes [22], and singulating clothes from a heap [23, 27]. Several works aimed to detect specific features, such as the corners and edges of fabrics, and to identify optimal grasp points [6, 13, 15]. Other techniques employ deep learning to identify successful grasps [5, 12]. Some studies have focused on determining optimal grasp points by considering not only the depth of the cloth but also targeting wrinkles as highly graspable regions [16, 17, 26]. These prior works focus on manipulating a single deformable object at a time or singulating a deformable object from among others. Our work, on the other hand, concentrates on grasping multiple garments simultaneously.
### _Multi-object Manipulation_
Multi-object grasping can improve decluttering efficiency [1]. It has been studied, with analytic methods [28], learning-based methods [2, 4], and with special gripper designs [14]. The focus, however, has remained on rigid objects. Instead, we consider the problem of grasping multiple deformable objects at a time, and using such grasps to efficiently clear a surface.
Multi-object manipulation scenarios can encompass cluttered [10] environments, which can include both deformable and rigid objects [25], however, the goal is again to singulate the objects to grasp them individually. Prior work on manipulating multiple rigid objects used methods such as pushing, stacking, and destacking [1, 9, 18]. In cluttered scenes with multiple rigid objects, one method for determining how, or where, to grasp is by using image segmentation [24], detecting and isolating individual objects in the scene. We also use a segmentation approach but for deformable objects.
## III The Teenager's Problem
We formulate the _Teenager's Problem_ as an abstract framework for studying garment decluttering, where deformable objects rest on a planar work surface. Alongside this cluttered work surface, a fixed target bin is provided, and the goal is to transfer all the garments efficiently from the workspace to the bin in the shortest possible time. This problem fundamentally translates into executing a sequence of pick-and-place motions, which fall into two categories: first, movements from the workspace to the target bin ("transports"), and second, optional movements within the workspace ("rearrangements") to rearrange the garments to simplify future operations. The importance of this distinction is that the target bin may lie at some distance from the workspace; when the bin is far, transports become more expensive relative to rearrangements, which affects the optimal policy. We assume the availability of a predefined pick-and-place method, from a designated pick grasp pose to a designated place grasp pose. Consequently, the central question becomes: for any given scene, which grasp pose should be chosen for picking, and which for placing? Each pair of grasps, encompassing the pick and place actions, is denoted as a "move".
To formalize this problem, let \(n\) garments rest on a 2D surface \([x_{\min},x_{\max}]\times[y_{\min},y_{\max}]\), with the target bin located at the origin \((0,0)\); at any step \(k\), garment \(i\)'s pose is denoted as \(G^{i}(k)\), and the set of all garment poses is denoted \(\mathcal{G}(k)=\{G^{1}(k),\ldots,G^{n}(k)\}\) (starting at \(\mathcal{G}(0)\) at step \(0\)); a garment which has been removed from the work surface is also removed from \(\mathcal{G}(k)\). Let
the \(k\)th _move_ be defined by a _pick_ tuple \(\mathbf{p}_{\text{pick}}(k)=(x_{\text{pick}}(k),y_{\text{pick}}(k),\theta_{\text{pick}}(k))\) (representing the grasp) and a _place_ tuple \(\mathbf{p}_{\text{place}}(k)=(x_{\text{place}}(k),y_{\text{place}}(k),\theta_{\text{place}}(k))\), where \((x_{\text{pick}}(k),y_{\text{pick}}(k))\in[x_{\min},x_{\max}]\times[y_{\min},y_{\max}]\), \((x_{\text{place}}(k),y_{\text{place}}(k))\in[x_{\min},x_{\max}]\times[y_{\min},y_{\max}]\cup\{(0,0)\}\), and \(\theta_{\text{pick}}(k),\theta_{\text{place}}(k)\in[-\pi/2,\pi/2]\) (since parallel-jaw grippers are symmetric). Moves with \((x_{\text{place}}(k),y_{\text{place}}(k))=(0,0)\) represent transports. Note that the cost of transports can be adjusted by moving the workspace relative to the bin.
Each move takes time proportional to the Euclidean travel distance, including the distance from the previous place location to the current pick location, plus a fixed time \(t_{\text{grasp}}\) needed to execute the grasp. The time to execute the \(k\)th move is:
\[T(\mathbf{p}_{\text{pick}}(k),\mathbf{p}_{\text{place}}(k))=\] \[\qquad\|(x_{\text{pick}}(k),y_{\text{pick}}(k))-(x_{\text{place}} (k-1),y_{\text{place}}(k-1))\|_{2}\] \[\qquad+\|(x_{\text{place}}(k),y_{\text{place}}(k))-(x_{\text{pick} }(k),y_{\text{pick}}(k))\|_{2}+t_{\text{grasp}}\]
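For concreteness, a minimal sketch of this cost computation in Python (the value of `t_grasp` below is illustrative, not taken from the experiments):

```python
import numpy as np

def move_time(pick_xy, place_xy, prev_place_xy, t_grasp=1.0):
    """Time of one move: travel from the previous place location to the pick
    location, then from the pick location to the place location, plus a fixed
    grasp-execution time (t_grasp is an illustrative value)."""
    pick, place, prev = (np.asarray(p, dtype=float)
                         for p in (pick_xy, place_xy, prev_place_xy))
    return np.linalg.norm(pick - prev) + np.linalg.norm(place - pick) + t_grasp

# Example: a transport, i.e. a move whose place location is the bin at the origin.
print(move_time(pick_xy=(0.3, 0.4), place_xy=(0.0, 0.0), prev_place_xy=(0.0, 0.0)))  # 2.0
```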
Each move then changes the garment poses according to an update function \(f(\cdot)\): given a state \(\mathcal{G}(k-1)\) and a move \((\mathbf{p}_{\text{pick}}(k),\mathbf{p}_{\text{place}}(k))\),
\[\mathcal{G}(k)=f(\mathcal{G}(k-1),(\mathbf{p}_{\text{pick}}(k),\mathbf{p}_{\text{ place}}(k)))\]
Then, given starting state \(\mathcal{G}(0)\) and gripper location \((x_{\text{place}}(0),y_{\text{place}}(0),\theta_{\text{place}}(0))\) the goal is to find moves \((\mathbf{p}_{\text{pick}}(k),\mathbf{p}_{\text{place}}(k))\) for \(k=1,\ldots,k_{\text{finish}}\) to solve:
\[\text{minimize}\ \sum_{k=1}^{k_{\text{finish}}}T(\mathbf{p}_{\text{pick}}(k ),\mathbf{p}_{\text{place}}(k))\] \[\text{subject to}\ \mathcal{G}(k_{\text{finish}})=\emptyset\]
where \(k_{\text{finish}}\) is also a variable.
However, the above problem formulation does not consider the real-world complications associated with estimating garment poses and properties. Garments are deformable objects capable of adopting complex poses and may obscure one another, leading to partial information about their positions; furthermore, they behave in complex ways, making accurate state update predictions very challenging.
Our investigation of the Teenager's Problem is conducted under the following conditions:
* The scene is captured using a fixed overhead camera.
* The robot has one parallel-jaw gripper.
The central question remains: When presented with an overhead RGBD image of garments scattered on a work surface, what should be the next move? For simplicity, we consider grasps with the gripper in a fully vertical orientation, just above the work surface, aiming to maximize the number of garments gripped simultaneously. Each grasp is defined by a tuple \((x,y,\theta)\), where \((x,y)\) denotes its location on the 2D work surface (referred to as \(\mathcal{X}\)), and \(\theta\) represents the angle of the gripper jaws.
## IV Teenager's Problem Methods
This study explores various strategies for efficiently grasping multiple garments concurrently, categorized broadly into two types: _depth-based_ and _segment-based_ approaches.
All the methods described below use a pre-processing of the RGB pixels to determine the _garment points_, denoted \(\mathcal{X}_{g}\). Since we assume the system knows the color and/or pattern of the background, this is achieved with color thresholding.
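A minimal sketch of this pre-processing step, assuming a uniform background of known color; the background color and tolerance below are illustrative values, not taken from the paper:

```python
import numpy as np

def garment_points(rgb, bg_color=(210, 210, 210), tol=30):
    """Boolean mask of garment pixels: pixels whose color differs from the known
    background color by more than tol in at least one channel."""
    diff = np.abs(rgb.astype(int) - np.array(bg_color)).max(axis=-1)
    return diff > tol

# The set of garment points X_g is then the index set of the True pixels:
# np.argwhere(garment_points(rgb_image))
```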
### _Depth-Based Methods_
Depth-based methods use the depth output of the RGBD overhead camera to select the next grasp. To solve the Teenager's Problem, these methods are used repeatedly until the workspace is clear: at each step, we capture a new depth image of the scene, use one of the methods below to generate a new grasp, and then execute that grasp. We examine two variations:
Fig. 2: An example of the segment-based grasp point selection algorithm (before orientations are chosen). **From left to right:**_(1)_ The original overhead RGB image. _(2)_ The cleaned segmentation \(\mathcal{M}\). _(3)_ The segments expanded by gripper radius to include ‘nearby’ garment points (all grasps must be done on garment points as a sanity check); overlapping regions thus correspond to points that are near multiple segments. _(4)_ The maximal area (points near a maximal set of segments) with the chosen grasp point shown in yellow.
#### IV-A1 Height
This method selects the highest point in the scene and chooses the orientation to be the orientation of the major axis of a local principal component analysis (PCA) around the grasp point.
#### IV-A2 Volume
This method considers the total volume of garments in a disc of radius \(R\) around a candidate grasp point \((x,y)\), which is estimated by summing the heights of all the pixels within that radius, and then selects the point with the largest total volume. As in the Height method, orientation is selected using a local PCA.
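A minimal sketch of the two depth-based rules, assuming the depth image has already been converted into a height map (height above the work surface, zero on the bare surface); the neighborhood size and disc radius below are illustrative values:

```python
import numpy as np
from scipy.signal import fftconvolve

def _local_pca_angle(height, cx, cy, win=25):
    """Orientation of the major axis of the garment pixels around (cx, cy)."""
    patch = height[max(cy - win, 0):cy + win, max(cx - win, 0):cx + win]
    ys, xs = np.nonzero(patch > 0)
    if len(xs) < 2:
        return 0.0
    pts = np.stack([xs - xs.mean(), ys - ys.mean()])
    _, vecs = np.linalg.eigh(np.cov(pts))          # eigenvalues in ascending order
    major = vecs[:, -1]                            # direction of largest variance
    return float(np.arctan2(major[1], major[0]))

def height_grasp(height):
    """Height method: grasp at the highest point of the scene."""
    cy, cx = np.unravel_index(np.argmax(height), height.shape)
    return int(cx), int(cy), _local_pca_angle(height, int(cx), int(cy))

def volume_grasp(height, radius=40):
    """Volume method: grasp where the disc of the given radius holds the most
    estimated volume (sum of heights inside the disc)."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disc = (xx ** 2 + yy ** 2 <= radius ** 2).astype(float)
    volume = fftconvolve(height, disc, mode="same")
    cy, cx = np.unravel_index(np.argmax(volume), volume.shape)
    return int(cx), int(cy), _local_pca_angle(height, int(cx), int(cy))
```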
### _Segment-Based Methods_
The segment-based approach divides the task into _cycles_; a segmentation is generated at the start of each cycle and a sequence of grasps is carried out based on that segmentation (however, an overhead RGB image is still taken between each grasp). Each cycle relies on four subroutines:
(1) Segmentation and cleanup; (2) prediction; (3) grasp selection; (4) execution. Once all the planned moves have been performed the next cycle begins. The steps of a cycle are detailed below.
#### IV-B1 Segmentation and cleanup
The method uses Meta's Segment Anything Model (SAM) [11] with the _vit-b_ weights, and prompts the image as a whole. However, the initial segmentation often contains multiple overlapping segments, gaps, and regions corresponding to the work surface itself. The method then performs redundant segment removal, gap filling, and color thresholding to eliminate segments representing the work surface.
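A sketch of this step using the publicly released `segment-anything` package; the checkpoint path, the overlap threshold, and the cleanup heuristics below are simplifying assumptions (gap filling, for instance, is omitted):

```python
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# The checkpoint path is a placeholder for a local copy of the ViT-B weights.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

def clean_segmentation(rgb, garment_mask, min_overlap=0.5):
    """Run SAM on the whole image, then drop masks that mostly lie on the work
    surface and masks that are mostly covered by an already-kept (larger) mask."""
    raw = mask_generator.generate(rgb)            # dicts with a boolean 'segmentation'
    kept = []
    for m in sorted(raw, key=lambda m: m["area"], reverse=True):
        seg = m["segmentation"]
        if (seg & garment_mask).sum() < min_overlap * seg.sum():
            continue                               # mostly background: discard
        if any((seg & k).sum() > min_overlap * seg.sum() for k in kept):
            continue                               # redundant with a kept segment
        kept.append(seg)
    return kept
```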
#### IV-B2 Prediction
Given a set of segments \(\mathcal{M}\) on the workspace and a proposed grasp point \((x,y,\theta)\), predicting which segments \(M\in\mathcal{M}\) will be removed by the grasp is essential. To achieve this, we designed an analytic predictor. We denote the predictor algorithm as \(p\), which takes a grasp \((x,y,\theta)\) and a segment \(M\) and returns a probability
\[p((x,y,\theta),M)\in\left[0,1\right].\]
The predictor models a grasp \((x,y,\theta)\) as affecting an elliptical region of the workspace, denoted \(E(x,y,\theta)\), which is centered at \((x,y)\) with its major axis oriented along \(\theta\); the major and minor axes lengths (\(r_{1},r_{2}\)) are scaled to be the length and width of the parallel-jaw gripper. The probability that a segment \(M\) is removed by \((x,y,\theta)\) is then estimated as
\[p((x,y,\theta),M)=\frac{\text{area}(E(x,y,\theta)\cap M)}{\text{area}(E(x,y, \theta)\cap M)+b}\]
where \(b\) is a normalization constant. This predictor is intended to capture the following intuition: the gripper directly affects the area under it, and the more any segment falls in that area, the more likely it is to be removed by the grasp (but obviously cannot have probability \(>1\) of being removed). Note that this predictor can never be 100% certain that a given segment will be removed, which is realistic.
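A sketch of this predictor; the half-lengths `r1`, `r2` (standing in for the gripper jaw length and width, in pixels) and the constant `b` are illustrative values:

```python
import numpy as np

def grasp_ellipse(shape, x, y, theta, r1=60.0, r2=25.0):
    """Boolean raster of the elliptical footprint E(x, y, theta), centered at
    (x, y) with major half-length r1 along theta and minor half-length r2."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    dx, dy = xx - x, yy - y
    u = dx * np.cos(theta) + dy * np.sin(theta)     # coordinate along the major axis
    w = -dx * np.sin(theta) + dy * np.cos(theta)    # coordinate along the minor axis
    return (u / r1) ** 2 + (w / r2) ** 2 <= 1.0

def removal_probability(grasp, segment, b=500.0):
    """p((x, y, theta), M) = area(E ∩ M) / (area(E ∩ M) + b), with areas in pixels."""
    x, y, theta = grasp
    overlap = float((grasp_ellipse(segment.shape, x, y, theta) & segment).sum())
    return overlap / (overlap + b)
```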
#### IV-B3 Grasp selection
With a clean segmentation and predictor, the method faces the task of determining which grasps to execute from an infinite number of potential options. To simplify the problem, the method first selects a set of grasp points (excluding orientations) and then selects an orientation for each point.
The grasp point selection algorithm does the following (given a radius \(r>0\) in pixels, and segmentation \(\mathcal{M}\)):
* For each pixel \((x,y)\in\mathcal{X}_{g}\), determine the set \(S(x,y)\subseteq\mathcal{M}\) of segments which are within distance \(r\) from \((x,y)\).
* Construct the set \(\mathcal{S}\) of _maximal sets_\(S(x,y)\), i.e. \(\mathcal{S}\) consists of every \(S(x,y)\) which is not a proper subset of some other \(S(x^{\prime},y^{\prime})\).
* For each maximal set \(S_{i}\in\mathcal{S}\), choose a (uniformly) random \((x_{i},y_{i})\) such that \(S(x_{i},y_{i})=S_{i}\).
* Return \(\{(x_{i},y_{i})\}_{i=1}^{m}\), where \(m\) is the number of maximal sets.
The rationale behind this algorithm is that a grasp point should be near as many segments as possible while not being redundant with the other selected grasp points; moreover, for any grasp point whose set of nearby segments is not maximal, there is another grasp point that is near strictly more segments.
Then the algorithm chooses an orientation for each grasp point using a greedy heuristic. First, it enumerates a list of \(\ell\) equally-spaced orientations in \([-\pi/2,\pi/2]\) and runs the predictor for each orientation on all the segments; the orientation which is predicted to remove the largest number of masks (i.e. the sum of the predicted probability of removal over all the masks) is then chosen for each grasp. For a balance between efficiency and thoroughness, we used \(\ell=6\).
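A sketch of the grasp point selection and of the orientation heuristic; `predictor` stands for any removal-probability predictor (for instance `removal_probability` from the sketch above), and the radius `r` is an illustrative value in pixels:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def select_grasp_points(segments, garment_mask, r=40, seed=0):
    """One representative garment pixel per maximal set of segments lying within
    distance r of the pixel (distances measured via a disc structuring element)."""
    rng = np.random.default_rng(seed)
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    disc = xx ** 2 + yy ** 2 <= r ** 2
    near = [binary_dilation(seg, structure=disc) for seg in segments]
    by_set = {}
    ys, xs = np.nonzero(garment_mask)
    for x, y in zip(xs, ys):
        # which segments are within distance r of this garment pixel
        key = frozenset(i for i, n in enumerate(near) if n[y, x])
        if key:
            by_set.setdefault(key, []).append((int(x), int(y)))
    maximal = [s for s in by_set if not any(s < t for t in by_set)]
    return [by_set[s][rng.integers(len(by_set[s]))] for s in maximal]

def choose_orientation(point, segments, predictor, n_angles=6):
    """Greedy heuristic: the angle whose predicted number of removed segments
    (sum of removal probabilities over all segments) is largest."""
    x, y = point
    angles = np.linspace(-np.pi / 2, np.pi / 2, n_angles, endpoint=False)
    scores = [sum(predictor((x, y, a), seg) for seg in segments) for a in angles]
    return float(angles[int(np.argmax(scores))])
```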
#### IV-B4 Grasp execution
The algorithm then attempts all grasps in sequence, in increasing order of distance to the bin; this is to prevent, as far as possible, dragging garments from disturbing the positions of the garments that remain (which may cause garments to fall off the work surface).
An RGB image is also captured after each transfer to the bin (when the arm is out of frame), although a new segmentation is _not_ generated (until the next cycle). Instead, for each planned grasp remaining, the difference between its current state and the state at the beginning of the cycle (when the segmentation was generated) is estimated using the squared difference between the pixel values within a small square neighborhood around the grasp point; if the difference is too large, the grasp is deemed to be in a different state from when it was planned and is not performed. This ensures that grasps are not performed unless the system knows that the local configuration of the garments is approximately the same as when it was planned.
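A sketch of this re-validation check; the window size and the difference threshold are illustrative values:

```python
import numpy as np

def grasp_still_valid(rgb_now, rgb_at_plan, grasp_xy, window=20, max_sq_diff=1500.0):
    """Compare a small square neighborhood around the planned grasp point in the
    current image with the image taken when the grasp was planned; if the mean
    squared pixel difference is too large, the local configuration has changed
    and the grasp should be skipped."""
    x, y = grasp_xy
    region = np.s_[max(y - window, 0):y + window, max(x - window, 0):x + window]
    diff = rgb_now[region].astype(float) - rgb_at_plan[region].astype(float)
    return float(np.mean(diff ** 2)) <= max_sq_diff
```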
### _Hybrid methods_
One observation borne out by the experiments was that depth-based methods and segment-based methods have different strengths - in particular, depth-based methods excel at picking occluded garments (which generally result in taller piles) while segment-based methods excel at simultaneously picking adjacent garments. This motivated the idea of considering _hybrid_ methods which make use of both depth data and segmentation data.
To take advantage of how these methods complement each other, hybrid methods do the following: given a height threshold, if the tallest pile is taller than the threshold, execute a (single) grasp as given by the depth-based method; if all piles are below the threshold, execute one cycle of the segment-based method.
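A sketch of this dispatch rule; `depth_grasp` and `segment_cycle` stand for a depth-based grasp rule and a segment-based cycle (for instance those sketched above), and the height threshold is an illustrative value:

```python
def hybrid_step(height, rgb, garment_mask, depth_grasp, segment_cycle,
                height_threshold=0.05):
    """One decision of the hybrid policy: a single depth-based grasp if the tallest
    pile exceeds height_threshold (height above the work surface), otherwise one
    full cycle of the segment-based method."""
    if height.max() > height_threshold:
        return [depth_grasp(height)]
    return segment_cycle(rgb, garment_mask)
```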
### _Segment-based method with consolidation_
Another avenue to improving OpT is to first consolidate the garments into large piles for transport to the bin; this can improve overall efficiency in cases where the bin is located at some distance from the work surface, making transports costly relative to manipulations within the workspace. An efficient primitive for consolidation is the _grasp sequence_ where each pick-and-place movement picks up where the last one placed; this both saves on robot movement time (no travel distance to the next pick point) and, ideally, allows the robot to accumulate more garments as it goes before depositing them in the bin.
An extension of the segment-based method above to include consolidations is the following:
1. generate the grasp points as in the no-rearrangement segment-based method;
2. starting from the furthest grasp point, estimate the _expected area_ of grasped garments, and execute the next available grasp which does not exceed a given expected grasped area threshold;
3. if no such grasp exists, transport the currently-held garments to the bin.
Going from the furthest grasp point to the closest follows the intuition that a method using consolidation should consolidate towards the bin since this will always shorten the distance between the grasped garments and the bin, even if some are dropped along the way - and this will also tend to compress them into a smaller space, facilitating later multi-object grasps.
The grasp area threshold corresponds to the intuition that the gripper has a limit to the amount of fabric it can hold and thus trying to accumulate more than that limit in one grasp is counterproductive.
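A simplified sketch of this scheduling loop; the grasps are assumed to be `(x, y, theta)` tuples, `expected_area` maps each grasp to its expected grasped area (for instance, the sum over segments of removal probability times segment area, using the predictor sketch above), and the area limit is an illustrative value:

```python
import numpy as np

def consolidation_plan(grasps, expected_area, bin_xy=(0.0, 0.0), area_limit=40000.0):
    """Process grasps from furthest-from-bin to closest (consolidating towards the
    bin); take the next grasp whose expected area still fits under the limit,
    otherwise transport the currently-held garments to the bin."""
    remaining = sorted(grasps,
                       key=lambda g: -np.hypot(g[0] - bin_xy[0], g[1] - bin_xy[1]))
    plan, held = [], 0.0
    while remaining:
        fits = next((g for g in remaining if held + expected_area[g] <= area_limit), None)
        if fits is None and held == 0.0:
            fits = remaining[0]                    # a single grasp exceeds the limit
        if fits is None:
            plan.append(("transport", bin_xy))     # empty the gripper into the bin
            held = 0.0
            continue
        remaining.remove(fits)
        plan.append(("grasp", fits))
        held += expected_area[fits]
    if held > 0:
        plan.append(("transport", bin_xy))
    return plan
```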
### _Baseline_
Finally, as a baseline, we use the _random_ method, which uniformly randomly selects a garment point \((x,y)\in\mathcal{X}_{g}\) with a uniformly random orientation \(\theta\in[-\pi/2,\pi/2]\), accounting for the gripper's symmetry.
## V Experiments
We tested all algorithms on the test set of 10 garments (see Fig. 3) with 25 sample runs. Each sample run begins with a randomized scene containing all 10 test set garments, and ends when the workspace is cleared of garments.
### _Data collection pipeline_
To run the experiments, we used a semi-autonomous data collection pipeline, in which experimental scene reset, randomization and data recording are done automatically, with the experimenter only needing to correct problems when they arise (for instance, if a garment falls off the work surface, the experimenter must return it for the next sample). The system uses the recorded weight data to automatically notify the experimenter when such a problem occurs, to minimize the amount of human attention necessary for data collection.
The scene is automatically reset in the following way:
1. The robot grasps the bin and empties it over the work surface to deposit the garments, then places the bin back to its original position.
2. The robot executes a sequence of random pick-and-place actions on the surface to shuffle the garments; in our experiments, 10 such moves were performed for every scene reset.
Then the experiment is performed with the selected method, recording at each step the overhead RGBD output, grasp location and orientation, and weight of the garments in the bin. The experiment is paused for 3 seconds after every transport to allow the scale's output to settle.
### _Metrics evaluated_
Our metric for evaluating the methods is _Objects per Transport_ (OpT), which denotes the average number of objects taken during each transport and measures the general effectiveness of the performed grasps. We use OpT (as opposed to Picks Per Hour (PPH)) because OpT directly measures grasp quality, whereas PPH depends heavily on implementation details (particularly computation time), which reduces its reliability as a metric in this setting.
Fig. 3: The test set of 10 garments, representing a variety of different sizes, weights, textures, colors, patterns, flexibility, and garment classes.
For each algorithm tested, OpT was evaluated on all 25 sample runs, which were then averaged to yield the final result and \(95\%\) confidence bounds.
## VI Results
The results of the experiments, given in Table I, show that both depth-based and segment-based methods yield clear improvements (approximately \(20\%\) additional OpT) over the baseline. Furthermore, these approaches complement each other: hybridizing them yielded \(26\%\) and \(34\%\) more OpT (for volume/segment and height/segment respectively) as compared to the baseline. However, it is important to note that only the segment-based method achieves this without the use of depth information, which may not be available on all systems.
It should be noted that while grasp quality is the focus and OpT is the most meaningful metric for this work, the ultimate goal remains improving pick efficiency as measured by PPH. The depth-based methods, which do not perform significant computations to find grasps, improve PPH from \(477\) for the baseline to \(526\) and \(525\) for max-volume and max-height grasps respectively. While the segmentation method registers a slight decrease in PPH (to \(453\)), optimizing the methods' speed may increase the PPH up to a comparable \(523\) (determined by subtracting computation time in the experiments) with no need for depth data.
Finally, at the cost of both computational overhead and additional physical actions, the segmentation with consolidation method improved OpT by \(67\%\) over the baseline.
### _Comparison of Methods_
What are the inherent advantages and disadvantages of the two approaches outlined above?
* Depth-based methods require both RGB and depth images to compute grasps. In contrast, segment-based methods solely rely on RGB images, rendering them suitable for systems lacking depth cameras.
* Segment-based methods often demand more computational resources, as they involve neural network-driven segmentation and subsequent cleanup. To ensure efficient computation without compromising speed, it may be necessary to deploy GPUs or opt for segmentation methods optimized for CPU processing.
* A notable challenge faced by segment-based methods is their limited ability to detect grasps that remove occluded garments. In contrast, depth-based methods more often grasp over occluded garments due to their utilization of depth information, which provides an enhanced perception of garment depth.
* Conversely, segment-based methods explicitly choose grasps to simultaneously capture garments situated closely together, whereas depth-based methods cannot determine which points are in proximity to multiple visible garments.
## VII Conclusion
In this work, we tackle the challenging problem of robotic garment decluttering, by formalizing the Teenager's Problem and developing both depth- and segment-based methods to solve it. We used recent advances in image segmentation [11] to explore an approach that uses it to distinguish garments in the image and find grasps that are likely to capture as many as possible.
### _Dirty Laundry and Future Work_
However, this work has certain limitations and leaves a number of areas open for improvement:
* All the proposed methods rely on accurately separating the garments from the work surface using the RGB image, which is done here via color thresholding. While this was reliable in our experimental setup, a different system may be needed in practice.
* While all the methods considered here grasp at a fixed height above the work surface with a perfectly vertical gripper, the most efficient grasp may not share those characteristics. Additional improvements might be obtained by optimizing the grasp height or angle.
* While the \(67\%\) OpT increase from the segmentation with consolidation method is large, an average of roughly \(4.0\) rearrangement actions were performed per transport saved over the baseline. Nevertheless, such methods may increase efficiency in cases where transports are relatively costly, e.g. when the target bin is far from the workspace.
Extensions such as sorting of clothes (for instance, separating clothing by type or color) are a natural fit for the techniques discussed here, especially the segment-based approach. Additionally, although we present only the analytic grasp predictor, the segment-based method described in Section IV-B is designed to be compatible with any predictor. Thus, a possible direction for future work is to improve the predictor.
|
2305.12981 | Covariance Estimation under Missing Observations and $L_4-L_2$ Moment
Equivalence | We consider the problem of estimating the covariance matrix of a random
vector by observing i.i.d samples and each entry of the sampled vector is
missed with probability $p$. Under the standard $L_4-L_2$ moment equivalence
assumption, we construct the first estimator that simultaneously achieves
optimality with respect to the parameter $p$ and it recovers the optimal
convergence rate for the classical covariance estimation problem when $p=1$ | Pedro Abdalla | 2023-05-22T12:42:05Z | http://arxiv.org/abs/2305.12981v2 | # Covariance Estimation under Missing Observations and \(L_{4}-L_{2}\) Moment Equivalence
###### Abstract
We consider the problem of estimating the covariance matrix of a random vector by observing i.i.d samples and each entry of the sampled vector is missed with probability \(p\). Under the standard \(L_{4}-L_{2}\) moment equivalence assumption, we construct the first estimator that simultaneously achieves optimality with respect to the parameter \(p\) and it recovers the optimal convergence rate for the classical covariance estimation problem when \(p=1\).
## 1 Introduction
High-dimensional covariance estimation is one of the most fundamental problems in the intersection of probability and statistics. On the applied side, it is a fundamental task for PCA or linear regression [30]. On the theoretical side, the non-asymptotic properties of isotropic sample covariance matrices have been extensively studied [2, 19, 29, 28, 27, 10] due to a famous question by Kannan, Lovasz and Simonovits [11] and further generalized to the anisotropic case [15, 14, 1]. Although the sample covariance matrix seems to be the most natural choice of estimator, it performs poorly when the input data does not have strong decay in the tail, in the sense that the convergence rate with respect to the confidence level \(\delta\) is quite slow.
Motivated by this fact, a line of work in robust statistics pioneered by Catoni [5] studied the so-called sub-Gaussian estimators, defined to be estimators that perform as well as the empirical mean under the Gaussian distribution. Many estimators have been proposed for the covariance estimation problem (see [12] for a survey); in particular, there are now sub-Gaussian estimators under minimal assumptions on the data distribution [1, 22].
On the other hand, data may be corrupted by noise. In [17] Lounici, motivated by applications in climate change, gene expression and cosmology, tackled the covariance estimation problem under the so-called missing observations model in which we "miss" each entry of the sampled vector with probability \(p\). We highlight that the missing observations model is a standard notion in the literature that goes beyond the covariance estimation setting, see [9, 16] and the references therein. The goal of this work is to design an estimator that achieves simultaneously the following properties:
* **Missing Observations:** We allow the data to present missing observations and heavy tails. We construct an estimator with minimax optimal convergence rate without assuming any knowledge of \(p\). Remarkably, we show that dependence on \(p\) is universal in the sense that it does not depend on the distribution of the data.
* **Dimension-Free:** The convergence rate scales with the effective rank \(\mathbf{r}(\Sigma)\) rather than the dimension \(d\), \[\mathbf{r}(\Sigma):=\frac{\operatorname{Tr}(\Sigma)}{\|\Sigma\|}.\] This is an important aspect in high-dimensional settings when the dimension \(d\) is at least the sample size \(N\).
* **Heavy-Tails:** We allow the distribution to have heavy tails only requiring the existence of four moments satisfying minimal assumptions. Moreover, the result is as sharp as if the data were Gaussian up to an absolute constant.
We begin with a rigorous definition of the model. We say that the distribution of a zero mean random vector \(X\) satisfies the \(L_{4}-L_{2}\)_norm equivalence_ (_hypercontractivity_) with constant \(\kappa\geqslant 1\) if, for all \(v\in S^{d-1}\),
\[(\mathbb{E}|\langle X,v\rangle|^{4})^{1/4}\leqslant\kappa(\mathbb{E}|\langle X,v\rangle|^{2})^{1/2}.\]
Here we always assume that the data satisfies the \(L_{4}-L_{2}\) equivalence with an absolute constant \(\kappa>0\), i.e., the constant is a fixed real number that does not depend on any other parameter. A huge class of distributions satisfies the norm equivalence assumption above with \(\kappa\) being a small absolute constant. Examples include sub-Gaussian random vectors and sub-exponential random vectors with bounded \(\psi_{\alpha}\) norm, as well as Student's \(t\) distributions with sufficiently many degrees of freedom [20].
We say that the sample \(Y_{1},\ldots,Y_{N}\) is \(p\)-_sparsified_ if it is obtained from the sample \(X_{1},\ldots,X_{N}\) of independent copies of \(X\) by multiplying each entry of the \(X_{i}\)'s by an independent \(0/1\) Bernoulli random variable with mean \(p\). In short, we say that the data is sampled from \(X\odot\mathbf{p}\), where \(\mathbf{p}\in\{0,1\}^{d}\) is a random vector with i.i.d. Bernoulli(\(p\)) entries and \(\odot\) denotes the standard entrywise product. The use of the value zero to represent the missing information is just for convenience; it could be any other value. We present the main result of this manuscript.
**Theorem 1** (Main result).: _Assume that \(X\) is a zero mean random vector in \(\mathbb{R}^{d}\) with covariance matrix \(\Sigma\) satisfying the \(L_{4}-L_{2}\) moment equivalence assumption with an absolute constant \(\kappa\). Fix the confidence level \(\delta\in(0,1)\). Suppose that \(Y_{1},\ldots,Y_{N}\) are i.i.d samples distributed as \(X\odot\mathbf{p}\), where \(\mathbf{p}=(p_{1},\ldots,p_{d})\in\{0,1\}^{d}\) is a random vector with i.i.d Bernoulli entries with parameter \(p\). Then there exists an estimator \(\widehat{\Sigma}(N,\delta)\) depending only on the sample \(Y_{1},\ldots,Y_{N}\) and \(\delta\) such that, with probability at least \(1-\delta\),_
\[\|\widehat{\Sigma}-\Sigma\|\leqslant\frac{C(\kappa)}{p}\|\Sigma\|\left(\sqrt {\frac{r(\Sigma)+\log(1/\delta)}{N}}\right).\]
_Here \(C(\kappa)>0\) is an absolute constant depending only on \(\kappa\)._
Literature Review: We remark that several results for covariance estimation under missing observations have been obtained in the literature, for example [17, 26, 25, 24, 13, 3], but none of the previous results simultaneously scales correctly with the factor \(p\) and recovers a sub-Gaussian estimator as in [1, 22] when \(p=1\), even if we assume that the data is Gaussian. Moreover, the convergence rate is optimal up to an absolute constant: when \(p=1\), a classical result by Lounici and Koltchinskii [14, Theorem 4] states that if \(G_{1},\ldots,G_{N}\) are i.i.d mean zero Gaussian vectors with covariance matrix \(\Sigma\) and \(N\geqslant r(\Sigma)\), then
\[c\|\Sigma\|\sqrt{\frac{r(\Sigma)}{N}}\leqslant\mathbb{E}\left\|\frac{1}{N} \sum_{i=1}^{N}G_{i}\otimes G_{i}-\Sigma\right\|\leqslant C\|\Sigma\|\sqrt{ \frac{r(\Sigma)}{N}}.\]
This shows optimality with respect to the effective rank. They also showed that the expectation is tightly concentrated around the mean, and our quantitative convergence rate with respect to \(\delta\) matches their result up to an absolute constant. This is indeed optimal, see [20, 1] for a more technical discussion, but it should not be a surprise that we cannot obtain a stronger concentration than the Gaussian tail. In addition to this, the dependence with respect to \(p\) is also optimal due to a lower bound from Lounici [17, Theorem 2]. In a nutshell, his result shows that there exist absolute constants \(c_{1},c_{2}>0\) such that
\[\inf_{\widehat{\Sigma}}\sup_{\mathbb{P}}\mathbb{P}\left(\|\widehat{\Sigma}- \Sigma\|\geqslant\frac{c_{1}}{p}\|\Sigma\|\sqrt{\frac{r(\Sigma)}{N}}\right) \geqslant c_{2}.\]
Here the infimum is taken with respect to all estimators that depend only on the data and the supremum is taken over all possible distributions with covariance matrix \(\Sigma\). This implies that our main result captures the optimal dependence with respect to \(p\) as well.
To conclude, we mention that we only focus on the information-theoretic limits, without considering any computational constraint. To the best of our knowledge, there are no computationally efficient estimators for the covariance matrix under heavy tails, even in the case without missing observations.
Proposed Estimator: The starting point for constructing our estimator is the following observation: the expectation of the covariance matrix of \(Y\) scales differently for the diagonal part and the off-diagonal part, in fact
\[\mathbb{E}Y\otimes Y=p\operatorname{Diag}(\Sigma)+p^{2}\operatorname{Off}( \Sigma).\]
We can "invert" the equality above to get the dependence between the true covariance and the data, precisely we have
\[\Sigma=p^{-1}\operatorname{Diag}(\mathbb{E}Y\otimes Y)+p^{-2}\operatorname{ Off}(\mathbb{E}Y\otimes Y).\]
We should replace the unknown term \(\mathbb{E}Y\otimes Y\) by its sample covariance, but this is not enough when we consider heavy tailed data \(X_{1},\dots,X_{N}\) as discussed above. We define the truncation function
\[\psi(x)=\begin{cases}x,&\quad\text{for }x\in[-1,1],\\ \operatorname{sign}(x),&\quad\text{for }|x|>1,\end{cases}\]
to robustify our estimator in each direction of the sphere. The idea is to estimate the matrix through its quadratic form; unfortunately, here we lose the computational tractability of the estimator, as we need to truncate in each direction. Even if we take a net, the number of computations would grow exponentially in the ambient dimension \(d\). We now describe its final form: we estimate the diagonal and off-diagonal parts separately,
\[\widehat{\Sigma_{1}}(\lambda_{1}):=\operatorname*{argmin}_{\Sigma_{1}\in\mathbb{S}_{+}^{d}|\operatorname{Off}(\Sigma_{1})=0}\sup_{v\in S^{d-1}}|v^{T}\Sigma_{1}v-\frac{1}{N\lambda_{1}}\sum_{i=1}^{N}\psi(\lambda_{1}v^{T}\operatorname{Diag}(Y_{i}\otimes Y_{i})v)|,\]
\[\widehat{\Sigma_{2}}(\lambda_{2}):=\operatorname*{argmin}_{\Sigma_{2}\in\mathbb{S}_{+}^{d}|\operatorname{Diag}(\Sigma_{2})=0}\sup_{v\in S^{d-1}}|v^{T}\Sigma_{2}v-\frac{1}{N\lambda_{2}}\sum_{i=1}^{N}\psi(\lambda_{2}v^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})v)|,\]
where \(\mathbb{S}_{+}^{d}\) is the set of \(d\) by \(d\) positive semi-definite matrices and \(S^{d-1}\) denotes the unit sphere in \(\mathbb{R}^{d}\). The final estimator becomes
\[\widehat{\Sigma}=\frac{1}{\widehat{p}}\operatorname{Diag}(\widehat{\Sigma_{ 1}})+\frac{1}{\widehat{p}^{2}}\operatorname{Off}(\widehat{\Sigma_{2}}),\]
where we choose \(\lambda_{1},\lambda_{2}>0\) to be appropriate truncation levels; we postpone specifying their values to the analysis. The term \(\widehat{p}\) is an estimator for the parameter \(p\). The construction of the estimator and its analysis share similarities with the "trimmed covariance" estimator proposed by Zhivotovskiy and the author [1]; however, we need to split into diagonal and off-diagonal parts to take into account the different scalings with \(p\). The main technical difficulty comes from controlling the random quadratic form to get the optimal dependence with respect to \(p\), mainly in the off-diagonal case. A direct approach faces the difficulty that we no longer have a positive semidefinite matrix, so it is hard to capture cancellations. On the other hand, an indirect approach, expressing the off-diagonal part as the total part minus the diagonal part, leads to sub-optimality with respect to \(p\). So we need to carefully balance these two approaches.
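For illustration, a minimal numerical sketch of the debiasing step behind this construction (the naive plug-in version, without the directional truncation, so it does not carry the robustness guarantees of the estimator above):

```python
import numpy as np

def debiased_sample_covariance(Y, p):
    """Rescale the diagonal of the sample second-moment matrix of the sparsified
    data by 1/p and its off-diagonal part by 1/p^2, so that the result equals
    Sigma in expectation."""
    S = Y.T @ Y / Y.shape[0]
    D = np.diag(np.diag(S))
    return D / p + (S - D) / p ** 2

# Sanity check of E[Y ⊗ Y] = p Diag(Sigma) + p^2 Off(Sigma) on synthetic data.
rng = np.random.default_rng(0)
d, N, p = 5, 200_000, 0.7
A = rng.standard_normal((d, d))
Sigma = A @ A.T
X = rng.multivariate_normal(np.zeros(d), Sigma, size=N)
Y = X * rng.binomial(1, p, size=(N, d))            # p-sparsified sample
print(np.abs(debiased_sample_covariance(Y, p) - Sigma).max())  # small for large N
```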
Organization. The rest of the paper is organized as follows: In Section 2 we assume the knowledge of certain parameters to simplify the analysis of the estimator and derive sharp convergence rates. We further systematically relax these assumptions in Section 3 by estimating each quantity separately in each subsection. The last subsection of Section 3 is devoted to the formal construction of the estimator and the proof of the main result.
Notation. Throughout this text \(C,c>0\) denote absolute constants that may change from line to line. For an integer \(N\), we set \([N]=\{1,\ldots,N\}\). For any two functions (or random variables) \(f,g\) defined on some common domain, the notation \(f\lesssim g\) means that there is an absolute constant \(c\) such that \(f\leqslant cg\), and \(f\sim g\) means that \(f\lesssim g\) and \(g\lesssim f\). Let \(\mathbb{S}_{+}^{d}\) denote the set of \(d\) by \(d\) positive semi-definite matrices. The symbols \(\|\cdot\|,\|\cdot\|_{F}\) denote the operator norm and the Frobenius norm of a matrix, respectively. Let \(\mathcal{KL}(\rho,\mu)=\int\log\left(\frac{d\rho}{d\mu}\right)d\rho\) denote the Kullback-Leibler divergence between a pair of measures \(\rho\) and \(\mu\), and we write \(\rho\ll\mu\) to indicate that the measure \(\rho\) is absolutely continuous with respect to the measure \(\mu\). For a vector \(X\in\mathbb{R}^{d}\), the tensor product is defined as \(X\otimes X:=XX^{T}\in\mathbb{R}^{d\times d}\).
## 2 Oracle Estimator
In this section we prove our main result under the assumption that we know the effective rank of the covariance matrix \(r(\Sigma)\), the trace of the covariance matrix \(\operatorname{Tr}(\Sigma)\) and the sparsifying factor \(p\). These assumptions will be further relaxed in the next section by splitting the data and estimating such parameters separately. The goal is to prove the following result.
**Proposition 1**.: _Assume that \(X\) is a mean zero random vector in \(\mathbb{R}^{d}\) with covariance matrix \(\Sigma\) satisfying the \(L_{4}-L_{2}\) moment equivalence assumption. Fix the confidence level \(\delta\in(0,1)\). Suppose that \(Y_{1},\ldots,Y_{N}\) are i.i.d samples from \(X\odot\mathbf{p}\). Then there exists \(\lambda_{1},\lambda_{2}>0\) depending only on \(\operatorname{Tr}(\Sigma),\|\Sigma\|\) and \(p\) such that_
\[\max\{\|p^{-1}\widehat{\Sigma_{1}}(\lambda_{1})-\operatorname{Diag}(\Sigma) \|,\|p^{-2}\widehat{\Sigma_{2}}(\lambda_{2})-\operatorname{Off}(\Sigma)\| \}\leqslant\frac{C(\kappa)}{p}\|\Sigma\|\left(\sqrt{\frac{r(\Sigma)+\log(1/ \delta)}{N}}\right).\]
_Here \(C(\kappa)>0\) is an absolute constant depending only on \(\kappa\)._
Our analysis is based on the variational principle pioneered by O. Catoni [5, 4, 8] and further developed in many applications related to high-dimensional probability and statistics [4, 6, 7, 8, 31, 21]. In most applications of the variational principle, the following lemma is the keystone.
**Lemma 1**.: _Assume that \(X_{i}\) are i.i.d. random variables defined on some measurable space. Let \(\Theta\) be a subset of \(\mathbb{R}^{p}\) for some \(p\geqslant 1\), \(\mu\) be a a fixed distribution on \(\Theta\) and let \(\rho\) be any distribution on \(\Theta\) such that \(\rho\ll\mu\). Then, simultaneously for any such \(\rho\), with probability at least \(1-\delta\),_
\[\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{\rho}f(X_{i},\theta)\leqslant\mathbb{E}_ {\rho}\log(\mathbb{E}_{X}e^{f(X,\theta)})+\frac{\mathcal{KL}(\rho,\mu)+\log(1/ \delta)}{N}.\]
_Here \(\theta\) is distributed according to \(\rho\)._
The proof can be found in [4, 31] and will be omitted. The next lemma is a technical fact that allows us to "convexify" the truncation function \(\psi\). Indeed, it is easy to see that the function \(e^{\psi(x)}\) is bounded by \(1+x+x^{2}\), which is still not convex, but adding a quadratic term makes it convex. This result has already been used in the literature [4, 31, 1].
**Lemma 2**.: _Let \(\psi\) be the truncation function as above and let \(Z\) be a random variable with finite second moment. Then, we have_
\[\psi(\mathbb{E}Z)\leqslant\mathbb{E}\log(1+Z+Z^{2})+\min\left\{1,\frac{ \mathbb{E}Z^{2}}{6}\right\}.\]
_Moreover, for any \(a>0\),_
\[\mathbb{E}\log(1+Z+Z^{2})+a\min\left\{1,\frac{\mathbb{E}Z^{2}}{6}\right\} \leqslant\mathbb{E}\log\left(1+Z+\left(1+\frac{(7+\sqrt{6})(\exp(a)-1)}{6} \right)Z^{2}\right).\]
For completeness, we include a proof at the end of this section. Now we turn to the facts that are specific to the missing observations case. The next result is crucial to get the right dependence on \(p\); we also postpone its proof to the end of this section.
**Lemma 3**.: _Let \(Y\) as above. For every \(v\in S^{d-1}\), we have_
\[\mathbb{E}(v^{T}\operatorname{Diag}(Y\otimes Y)v)^{2}\leqslant 2p\kappa^{4} \|\operatorname{Diag}(\Sigma)\|^{2}\]
_and_
\[\mathbb{E}(v^{T}\operatorname{Off}(Y\otimes Y)v)^{2}\leqslant 4p^{2}\kappa^{4} \|\Sigma\|^{2}.\]
The proof of Proposition 1 consists of applying the variational principle twice, once for the diagonal part and once for the more delicate off-diagonal part.
Proof.: **Diagonal Part**: We start by defining the parameter space of interest, let
\[\Theta=\mathbb{R}^{d}\times\mathbb{R}^{d}.\]
Choose \(\mu\) to be a product of two multivariate Gaussians with mean zero and covariance \(\beta^{-1}I_{d}\), where \(\beta>0\) will be chosen later. For \(v\in S^{d-1}\), let \(\rho_{v}\) be the product of two multivariate Gaussian distributions with mean \(v\) and covariance \(\beta^{-1}I_{d}\). By construction, since \((\theta,\nu)\) is distributed according to \(\rho_{v}\), we have \(\mathbb{E}_{\rho_{v}}(\theta,\nu)=(v,v)\). By the standard formula for the \(\mathcal{KL}\)-divergence between two Gaussian measures [23],
\[\mathcal{KL}(\rho_{v},\mu)=\beta.\]
Fix \(\lambda_{1}>0\), a free parameter to be optimized later. By the first part of Lemma 2 we have,
\[\psi\left(\lambda_{1}v^{T}\operatorname{Diag}(Y\otimes Y)v\right) =\psi\left(\lambda_{1}\mathbb{E}_{\rho_{v}}\theta^{T}\operatorname{Diag}(Y \otimes Y)\nu\right)\] \[\leqslant\mathbb{E}_{\rho_{v}}\log(1+\lambda_{1}\theta^{T} \operatorname{Diag}(Y\otimes Y)\nu+\lambda_{1}^{2}(\theta^{T}\operatorname{ Diag}(Y\otimes Y)\nu)^{2})+R.\]
where \(R:=\min\{1,\lambda_{1}^{2}\mathbb{E}_{\rho_{v}}(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu)^{2}/6\}\). Since \(\mathbb{E}\theta_{i}^{2}=\beta^{-1}+v_{i}^{2}\) for all \(i\in[d]\) and \(\mathbb{E}\theta_{i}\theta_{j}=v_{i}v_{j}\) for \(i\neq j\), we have
\[\mathbb{E}_{\rho_{v}}(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu)^{2}=\mathbb{E}_{\rho_{v}}\left(\sum_{i=1}^{d}\langle Y,e_{i}\rangle^{2}\theta_{i}\nu_{i}\right)^{2}\] \[=\beta^{-2}\sum_{i=1}^{d}\langle Y,e_{i}\rangle^{4}+\sum_{i,j=1}^{d}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle^{2}v_{i}^{2}v_{j}^{2}+2\beta^{-1}\sum_{i=1}^{d}\langle Y,e_{i}\rangle^{4}v_{i}^{2}\] \[=\beta^{-2}\|\operatorname{Diag}(Y\otimes Y)\|_{F}^{2}+(v^{T}\operatorname{Diag}(Y\otimes Y)v)^{2}+2\beta^{-1}\|\operatorname{Diag}(Y\otimes Y)v\|_{2}^{2},\]
By symmetry, \(\mathbb{P}\left(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu\geqslant v^{T} \operatorname{Diag}(Y\otimes Y)v\right)\geqslant\frac{1}{4}\). To see this, observe that it is equal to
\[\mathbb{P}(\langle\operatorname{Diag}(Y\otimes Y)\theta,(\nu-v)\rangle+ \langle\operatorname{Diag}(Y\otimes Y)v,(\theta-v)\rangle\geqslant 0).\]
The second term is positive with probability one half. Conditioning on \(\theta\), the first term is symmetric in \(\nu\) and hence positive with probability at least one half, independently of the sign of the second term. We obtain that both terms are positive simultaneously with probability at least one quarter, and then we can write,
\[\min\{1,\frac{\lambda_{1}^{2}}{6}(v^{T}\operatorname{Diag}(Y\otimes Y)v)^{2}\}\leqslant 4\mathbb{E}_{\rho_{v}}\min\{1,\frac{\lambda_{1}^{2}}{6}(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu)^{2}\}.\]
By the second part of Lemma 2, we have
\[\psi(\lambda_{1}v^{T}\operatorname{Diag}(Y\otimes Y)v)\leqslant\mathbb{E}_{\rho_{v}}\log(1+\lambda_{1}\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu+C_{1}\lambda_{1}^{2}(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu)^{2})+R(Y,\beta),\]
where \(R(Y,\beta):=\min\{1,2\lambda_{1}^{2}\beta^{-1}\|\operatorname{Diag}(Y\otimes Y)v\|_{2}^{2}/6\}+\min\{1,\lambda_{1}^{2}\beta^{-2}\|Y\otimes Y\|_{F}^{2}/6\}\). We first focus on the leading logarithmic term; the remainder \(R(Y,\beta)\) will be handled afterwards. The goal is to apply Lemma 1 to the function \(f\) defined below
\[f(Y,\theta,\nu):=\log(1+\lambda_{1}\theta^{T}\operatorname{Diag}(Y\otimes Y) \nu+C_{1}\lambda_{1}^{2}(\theta^{T}\operatorname{Diag}(Y\otimes Y)\nu)^{2}).\]
Using the numeric inequality \(\log(1+y)\leqslant y\) for \(y\geqslant-1\), Fubini's theorem and Lemma 3, we have
\[\mathbb{E}_{\rho_{v}}\log\mathbb{E}\left(1+\lambda_{1}\theta^{T} \operatorname{Diag}(Y\otimes Y)\nu+C_{1}\lambda_{1}^{2}(\theta^{T}\operatorname {Diag}(Y\otimes Y)\nu)^{2}\right)\] \[\leqslant\mathbb{E}_{\rho_{v}}\mathbb{E}\left(\lambda_{1}\theta^{ T}\operatorname{Diag}(Y\otimes Y)\nu+C_{1}\lambda_{1}^{2}(\theta^{T} \operatorname{Diag}(Y\otimes Y)\nu)^{2}\right)\] \[\leqslant p\lambda_{1}v^{T}\operatorname{Diag}(\Sigma)v+C_{1} \lambda_{1}^{2}(p\beta^{-2}\kappa^{4}\operatorname{Tr}^{2}(\Sigma)+2\beta^{-1 }p\kappa^{4}\|\Sigma\|^{2}+p\kappa^{4}\|\Sigma\|^{2})\]
Now we set \(\beta=r(\Sigma)\) and apply Lemma 1 to obtain that, for all \(v\in S^{d-1}\), with probability at least \(1-\delta\),
\[\frac{1}{N\lambda_{1}}\sum_{i=1}^{N}\psi(\lambda_{1}v^{T}\operatorname{Diag}(Y_{i}\otimes Y_{i})v)\leqslant pv^{T}\operatorname{Diag}(\Sigma)v+C\lambda_{1}p\|\Sigma\|^{2}\kappa^{4}+\sum_{i=1}^{N}\frac{R(Y_{i},\beta)}{N\lambda_{1}}+\frac{r(\Sigma)+\log(1/\delta)}{\lambda_{1}N}\]
Now we bound the third term above. By Bernstein inequality together with the fact that the variance is at most the expectation because the random variables are bounded by one, with probability \(1-\delta\)
\[\frac{1}{\lambda_{1}N}\sum_{i=1}^{N}\min\{1,\beta^{-2}\lambda_{1}^{2}\|Y\otimes Y \|_{F}^{2}/6\}\lesssim\mathbb{E}\beta^{-2}\|Y\otimes Y\|_{F}^{2}+\frac{\log(1/ \delta)}{\lambda_{1}N}\lesssim\lambda_{1}p\kappa^{4}\|\Sigma\|^{2}+\frac{\log (1/\delta)}{\lambda_{1}N}.\]
An analogous computation shows that the same bound holds (up to an absolute constant) for the term \(\min\{1,2\lambda_{1}^{2}\beta^{-1}\|\operatorname{Diag}(Y\otimes Y)v\|_{2}^{2}/6\}\). Finally, we conclude that there exists an absolute constant \(C>0\) such that, with probability at least \(1-\delta\),
\[\frac{1}{N\lambda_{1}}\sum_{i=1}^{N}\psi(\lambda_{1}v^{T}\operatorname{Diag}(Y _{i}\otimes Y_{i})v)\leqslant pv^{T}\operatorname{Diag}(\Sigma)v+C\left(\lambda _{1}p\|\Sigma\|^{2}\kappa^{4}+\frac{r(\Sigma)+\log(1/\delta)}{\lambda_{1}N} \right).\]
We optimize over \(\lambda_{1}>0\), so we choose \(\lambda_{1}=(Np)^{-1/2}\|\Sigma\|^{-1}\kappa^{-2}\sqrt{r(\Sigma)+\log(1/\delta)}\) to obtain that, with probability \(1-\delta\),
\[\frac{1}{N\lambda_{1}}\sum_{i=1}^{N}\psi(\lambda_{1}v^{T}\operatorname{Diag}( Y_{i}\otimes Y_{i})v)\leqslant pv^{T}\operatorname{Diag}(\Sigma)v+C\kappa^{2} \sqrt{p}\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
We repeat the same arguments above for \(\rho_{2,v}\) being a product measure between two Gaussians \(\theta\sim N(v,\beta^{-1}I_{d})\) and \(\nu\sim N(-v,\beta^{-1}I_{d})\). The argument follows exactly the same steps because \(\psi\) is symmetric. So we also obtain that, with probability \(1-\delta\),
\[-\frac{1}{N\lambda_{1}}\sum_{i=1}^{N}\psi(\lambda_{1}v^{T}\operatorname{Diag}( Y_{i}\otimes Y_{i})v)\leqslant-pv^{T}\operatorname{Diag}(\Sigma)v+C\kappa^{2} \sqrt{p}\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
By union bound, we obtain a two-sided bound. Therefore we guarantee with probability at least \(1-\delta\),
\[\|p^{-1}\widehat{\Sigma}_{1}-\operatorname{Diag}(\Sigma)\|\lesssim\frac{1}{ \sqrt{p}}\kappa^{2}\|\Sigma\|\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
**Off-diagonal part**: We now proceed to the second part of the proof to deal with the off-diagonal part. We choose \(\mu\) and \(\rho(v)\) as before and write,
\[\psi\left(\lambda_{2}v^{T}\operatorname{Off}(Y\otimes Y)v\right)=\psi\left(\lambda_{2}\mathbb{E}_{\rho_{v}}\theta^{T}\operatorname{Off}(Y\otimes Y)\nu\right)\leqslant\mathbb{E}_{\rho_{v}}\log(1+\lambda_{2}\theta^{T}\operatorname{Off}(Y\otimes Y)\nu+\lambda_{2}^{2}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2})+R_{2}.\]
where \(R_{2}:=\min\{1,\lambda_{2}^{2}\mathbb{E}_{\rho_{v}}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2}/6\}\). We have to deal with the quadratic form of the off-diagonal part, which requires a more delicate analysis; in fact,
\[\mathbb{E}_{\rho_{v}}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2}= \mathbb{E}_{\rho_{v}}\sum_{i\neq j;k\neq l}\langle Y,e_{i}\rangle\langle Y,e_{ j}\rangle\langle Y,e_{k}\rangle\langle Y,e_{l}\rangle\theta_{i}\nu_{j}\theta_{k}\nu_{l}\]
We need to analyze the term \(\mathbb{E}\theta_{i}\theta_{k}\,\mathbb{E}\nu_{j}\nu_{l}\); the factorization is possible due to the independence between \(\theta\) and \(\nu\). We split into three cases: first, \(k=i\) and \(j=l\); second, \(k=i\) and \(j\neq l\), or \(k\neq i\) and \(j=l\); and finally, \(k\neq i\) and \(j\neq l\). In the first case, the summation becomes
\[\sum_{i\neq j}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle^{2}(\beta^{-1} +v_{i}^{2})(\beta^{-1}+v_{j}^{2}).\]
In the second case, the summation becomes
\[\sum_{i\neq j\neq l}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle\langle Y, e_{l}\rangle(\beta^{-1}+v_{i}^{2})v_{j}v_{l}+\sum_{i\neq j\neq k}\langle Y,e_{i} \rangle\langle Y,e_{j}\rangle^{2}\langle Y,e_{k}\rangle(\beta^{-1}+v_{j}^{2} )v_{i}v_{k}.\]
The third case is simpler,
\[\sum_{i\neq j\neq k\neq l}\langle Y,e_{i}\rangle\langle Y,e_{j}\rangle\langle Y,e_{k}\rangle\langle Y,e_{l}\rangle v_{i}v_{j}v_{k}v_{l}.\]
Observe that summing all terms that do not contain any \(\beta\) factor, we obtain \((v^{T}\operatorname{Off}(Y\otimes Y)v)^{2}\). As before, the goal is to apply Lemma 1 to the function \(f\),
\[f(Y,\theta,\nu):=\log(1+\lambda_{2}\theta^{T}\operatorname{Off}(Y\otimes Y)\nu +C_{2}\lambda_{2}^{2}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2}),\]
where \(C_{2}>0\) is a sufficiently large absolute constant. Using again the numeric inequality \(\log(1+y)\leqslant y\) for \(y\geqslant-1\), Fubini's theorem and Lemma 3, we have
\[\mathbb{E}_{\rho_{v}}\log\mathbb{E}\left(1+\lambda_{2}\theta^{T} \operatorname{Off}(Y\otimes Y)\nu+C_{2}\lambda_{2}^{2}(\theta^{T} \operatorname{Off}(Y\otimes Y)\nu)^{2}\right)\] \[\leqslant\mathbb{E}_{\rho_{v}}\mathbb{E}\left(\lambda_{2}\theta^ {T}\operatorname{Off}(Y\otimes Y)\nu+C_{2}\lambda_{2}^{2}(\theta^{T} \operatorname{Off}(Y\otimes Y)\nu)^{2}\right).\]
The first term is equal to \(p^{2}\lambda_{2}v^{T}\operatorname{Off}(\Sigma)v\). We know that all terms in the expansion of \((\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2}\) without a \(\beta\) factor add up to \((v^{T}\operatorname{Off}(Y\otimes Y)v)^{2}\), whose expectation is at most \(4p^{2}\kappa^{4}\|\Sigma\|^{2}\) by Lemma 3. We bound the terms containing \(\beta\) factors systematically. Using the Cauchy-Schwarz inequality together with the moment equivalence for \(X\), we obtain
\[\beta^{-2}\sum_{i\neq j}\mathbb{E}\langle Y,e_{i}\rangle^{2} \langle Y,e_{j}\rangle^{2}=\beta^{-2}p^{2}\sum_{i\neq j}\mathbb{E}\langle X,e_ {i}\rangle^{2}\langle X,e_{j}\rangle^{2}\leqslant p^{2}\beta^{-2}\kappa^{4} \sum_{i\neq j}\Sigma_{ii}\Sigma_{jj}\] \[\leqslant p^{2}\beta^{-2}\kappa^{4}\operatorname{Tr}^{2}(\Sigma).\]
Similarly, we obtain
\[\mathbb{E}\sum_{i\neq j}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle^{2} (\beta^{-1}+v_{i}^{2})(\beta^{-1}+v_{j}^{2})\lesssim p^{2}\kappa^{4}\beta^{-2 }\operatorname{Tr}^{2}(\Sigma)+\beta^{-1}p^{2}\kappa^{4}\|\Sigma\|\operatorname {Tr}(\Sigma).\]
It remains to analyze
\[\mathbb{E}\sum_{i\neq j\neq l}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j} \rangle\langle Y,e_{l}\rangle\beta^{-1}v_{j}v_{l}+\sum_{i\neq j\neq k}\langle Y,e_{i}\rangle\langle Y,e_{j}\rangle^{2}\langle Y,e_{k}\rangle\beta^{-1}v_{i}v _{k}.\]
We do the computations for the first term; the second follows similarly. We apply Hölder's inequality with conjugate exponents \(4/3\) and \(4\) and use the moment equivalence to obtain that
\[\mathbb{E}\sum_{i\neq j\neq l}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle\langle Y,e_{l}\rangle\beta^{-1}v_{j}v_{l}\leqslant p^{3}\sum_{i \neq j\neq l}\mathbb{E}\langle X,e_{i}\rangle^{2}\langle X,e_{j}\rangle \langle X,e_{l}\rangle\beta^{-1}v_{j}v_{l}\] \[\leqslant p^{3}\sum_{i;j\neq l}\mathbb{E}\langle X,e_{i}\rangle^{2 }\langle X,e_{j}\rangle\langle X,e_{l}\rangle\beta^{-1}v_{j}v_{l}+\left|2p^{3 }\sum_{j\neq l}\mathbb{E}\langle X,e_{j}\rangle^{3}\langle X,e_{l}\rangle \beta^{-1}v_{j}v_{l}\right|\] \[\leqslant p^{3}\mathbb{E}\left[(v^{T}\operatorname{Off}(X\otimes X )v)\left(\sum_{i=1}^{d}\beta^{-1}\langle X,e_{i}\rangle^{2}\right)\right]+2p^ {3}\kappa^{4}\beta^{-1}\sum_{j\neq l}(\Sigma_{ll})^{1/2}(\Sigma_{jj})^{3/2}v_{l }v_{j}\] \[\leqslant p^{3}\left[(v^{T}\operatorname{Off}(X\otimes X)v)\left( \sum_{i=1}^{d}\beta^{-1}\langle X,e_{i}\rangle^{2}\right)\right]+2p^{3}\beta^{ -1}\kappa^{4}\|\Sigma\|\operatorname{Tr}(\Sigma)\] \[\leqslant p^{3}\mathbb{E}(v^{T}X\otimes Xv-v^{T}\operatorname{ Diag}(X\otimes X)v)(\beta^{-1}\operatorname{Tr}(X\otimes X))+2p^{3}\beta^{-1} \kappa^{4}\|\Sigma\|\operatorname{Tr}(\Sigma)\] \[\leqslant p^{3}\mathbb{E}\langle X,v\rangle^{2}\beta^{-1} \operatorname{Tr}(X\otimes X)+2p^{3}\beta^{-1}\kappa^{4}\|\Sigma\| \operatorname{Tr}(\Sigma)\] \[\lesssim p^{3}\kappa^{4}(\|\Sigma\|^{2}+\beta^{-2}\operatorname {Tr}^{2}(\Sigma)+\beta^{-1}\|\Sigma\|\operatorname{Tr}(\Sigma)).\]
The last step follows from the arithmetic-geometric mean inequality. Putting everything together, we conclude that
\[\mathbb{E}\mathbb{E}_{\rho_{v}}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^ {2}\lesssim p^{2}\beta^{-2}\kappa^{4}\operatorname{Tr}^{2}(\Sigma)+p^{2}\kappa ^{4}\|\Sigma\|^{2}+p^{3}\beta^{-1}\kappa^{4}\operatorname{Tr}(\Sigma)\| \Sigma\|.\]
And then we set \(\beta=r(\Sigma)\) and increase the value of \(C_{2}\) to obtain that
\[\mathbb{E}_{\rho_{v}}\log\mathbb{E}\left(1+\lambda_{2}\theta^{T}\operatorname{Off}(Y\otimes Y)\nu+C_{2}\lambda_{2}^{2}(\theta^{T}\operatorname{Off}(Y\otimes Y)\nu)^{2}\right)\leqslant p^{2}\lambda_{2}v^{T}\operatorname{Off}(\Sigma)v+C_{2}\lambda_{2}^{2}p^{2}\kappa^{4}\|\Sigma\|^{2}.\]
Finally we conclude that, there exists an absolute constant \(C_{2}>0\) such that, with probability at least \(1-\delta\),
\[\frac{1}{N\lambda_{2}}\sum_{i=1}^{N}\psi(\lambda_{2}v^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})v)\leqslant p^{2}v^{T}\operatorname{Off}(\Sigma)v+C_{2}\left(\lambda_{2}p^{2}\|\Sigma\|^{2}\kappa^{4}+\sum_{i=1}^{N}\frac{R_{2}(Y_{i})}{N\lambda_{2}}+\frac{r(\Sigma)+\log(1/\delta)}{\lambda_{2}N}\right).\]
By Bernstein's inequality, the remainder terms \(R_{2}(Y_{i})\) are absorbed by the last term in the sum exactly as in the diagonal case. We optimize over \(\lambda_{2}>0\), choosing \(\lambda_{2}=N^{-1/2}(p\|\Sigma\|)^{-1}\kappa^{-2}\sqrt{r(\Sigma)+\log(1/\delta)}\). Then, with probability \(1-\delta\),
\[\frac{1}{N\lambda_{2}}\sum_{i=1}^{N}\psi(\lambda_{2}v^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})v)\leqslant p^{2}v^{T}\operatorname{Off}(\Sigma)v+C\kappa^{2}p\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
We repeat the arguments by changing the mean of \(\nu\) to \(-v\). This gives the other side of the inequality in the same way it was done for the diagonal part. We conclude that, with probability \(1-\delta\),
\[\|p^{-2}\widehat{\Sigma_{2}}(\lambda_{2})-\operatorname{Off}(\Sigma)\|\lesssim\frac{1}{p}\kappa^{2}\|\Sigma\|\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
By the triangle inequality, a union bound, and re-scaling the multiplicative constant in \(\delta\), the estimator \(\widehat{\Sigma}\) satisfies, with probability \(1-\delta\),
\[\|\widehat{\Sigma}-\Sigma\|\lesssim\frac{\kappa^{2}}{p}\|\Sigma\|\sqrt{\frac{r (\Sigma)+\log(1/\delta)}{N}}.\]
To end this section, we prove the technical facts. We start with the proof of Lemma 3.
Proof.: We start with the diagonal case. Observe that
\[\mathbb{E}(v^{T}\operatorname{Diag}(Y\otimes Y)v)^{2}=\mathbb{E} \sum_{i,j=1}^{d}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle^{2}v_{i}^{2} v_{j}^{2}=\sum_{i=1}^{d}\mathbb{E}\langle Y,e_{i}\rangle^{4}v_{i}^{4}+\sum_{i \neq j}^{d}\mathbb{E}\langle Y,e_{i}\rangle^{2}\langle Y,e_{j}\rangle^{2}v_{i }^{2}v_{j}^{2}.\]
The first term is at most \(p\kappa^{4}\sum_{i=1}^{d}\Sigma_{ii}^{2}v_{i}^{4}\leqslant p\kappa^{4}\| \operatorname{Diag}(\Sigma)\|^{2}\). For the second term, we use arithmetic-geometric inequality to obtain that it is at most
\[\frac{p^{2}}{2}\sum_{i\neq j}\mathbb{E}(\langle X,e_{i}\rangle^{4}v_{i}^{2}v_ {j}^{2}+\langle X,e_{j}\rangle^{4}v_{i}^{2}v_{j}^{2})\leqslant p^{2}\kappa^{4} \|\operatorname{Diag}(\Sigma)\|^{2}.\]
For the off-diagonal term, we need to proceed carefully. The most natural idea would be to decompose the off-diagonal matrix into the matrix itself minus the diagonal part, but it leads to suboptimal dependence on \(p\). We first expand it directly,
\[\mathbb{E}(v^{T}\operatorname{Off}(Y\otimes Y)v)^{2}=\sum_{i\neq j ;k\neq l}\mathbb{E}\langle Y,e_{i}\rangle\langle Y,e_{j}\rangle\langle Y,e_{k} \rangle\langle Y,e_{l}\rangle v_{i}v_{j}v_{k}v_{l}\] \[\leqslant p^{2}\sum_{i\neq j;k\neq l}\mathbb{E}\langle X,e_{i} \rangle\langle X,e_{j}\rangle\langle X,e_{k}\rangle\langle X,e_{l}\rangle v_{i} v_{j}v_{k}v_{l}=p^{2}\mathbb{E}(v^{T}\operatorname{Off}(X\otimes X)v)^{2}.\]
The term \(p^{2}\) comes from the observation that at least two indices are distinct in each term inside the sum. Now we can split the off-diagonal term \(\mathbb{E}(v^{T}\operatorname{Off}(X\otimes X)v)^{2}\). It is equal to
\[\mathbb{E}(v^{T}(X\otimes X)v)^{2}+\mathbb{E}(v^{T}\operatorname{Diag}(X \otimes X)v)^{2}-2\mathbb{E}(v^{T}(X\otimes X)v)\mathbb{E}(v^{T}\operatorname{ Diag}(X\otimes X)v)\]
The last term is non-positive because both matrices are positive semidefinite, so we can drop it. The first term is at most \(\kappa^{4}(v^{T}\Sigma v)^{2}\leqslant\kappa^{4}\|\Sigma\|^{2}\) by the moment equivalence assumption. The second term is at most \(\kappa^{4}\|\Sigma\|^{2}\) by the computations above.
We end this section with the proof of Lemma 2.
Proof.: Notice that \(\psi(x)\leqslant\log(1+x+x^{2})\) holds trivially and we add \(x^{2}/6\) to make the latter function convex. Now we can bound \(\psi(\mathbb{E}Z)\leqslant\min\{\log(1+\mathbb{E}Z+\mathbb{E}Z^{2})+\mathbb{E }Z^{2}/6,1\}\). We now apply Jensen's inequality to conclude the proof of the first part. By Taylor series expansion, if \(t\in[0,a]\), then we have the numeric inequality,
\[e^{t}\leqslant 1+\frac{t}{a}\left(\sum_{i=1}^{\infty}\frac{a^{i}}{i!}\right) \leqslant 1+\frac{t}{a}(e^{a}-1).\]
We apply the inequality above to obtain that
\[\mathbb{E}\log(1+Z+Z^{2})+a\mathbb{E}\min\{1,Z^{2}/6\}\] \[=\mathbb{E}\log\left(\left(1+Z+Z^{2}\right)\exp(\min\{a,aZ^{2}/6 \})\right)\] \[\leqslant\mathbb{E}\log\left(\left(1+Z+Z^{2}\right)\left(1+\min\{ 1,Z^{2}/6\}\left(e^{a}-1\right)\right)\right).\]
To get the inequality in the statement, we only need to split into the cases where \(|Z|^{2}/6\) is smaller than one and where it is greater than one.
## 3 Proof of Theorem 1
In the previous section we showed, in Proposition 1, that the proof of the main result boils down to estimating the trace of the covariance matrix, the operator norm, and the sparsifying parameter \(p\). For the trace and the operator norm it is enough to obtain an estimate that holds up to an absolute constant. On the other hand, for the parameter \(p\) we need a more accurate estimator, because we divide the estimator by \(p\), so an estimate that holds only up to constants would introduce a bias.
_Remark 1_.: The best possible convergence rate is at least
\[\|\widehat{\Sigma}-\Sigma\|\leqslant\|\Sigma\|\left(\frac{1}{p}\sqrt{\frac{r( \Sigma)+\log(1/\delta)}{N}}\right).\]
The trivial estimator \(\widehat{\Sigma}=0\) satisfies \(\|\widehat{\Sigma}-\Sigma\|\leqslant\|\Sigma\|\), so in order to have a meaningful result we need \(\left(\frac{1}{p}\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}\right)<1\). Therefore, whenever necessary along the proofs, we may assume that \(N\geqslant C\frac{r(\Sigma)+\log(1/\delta)}{p^{2}}\) for a suitable absolute constant \(C>0\), without further comment.
### Estimation of \(p\)
The idea here is to exploit the proportion of non-zero entries in the observed data. In a standard data set a missing value is not recorded as a zero; we set it to zero for convenience, as we have done throughout the manuscript. Unfortunately, estimating the proportion of missing values becomes problematic if the distribution of the random vector \(X\) has non-trivial mass at zero: in that case we cannot distinguish a zero coming from the distribution from a zero coming from a missing value. We therefore assume that the marginals of \(X\), namely \(\langle X,v\rangle\), do not have mass at zero.
We collect \(Y_{1},\ldots,Y_{N}\) and compute \(Z_{1},\ldots,Z_{N}\), where \(Z_{i}(j)=1\) if and only if \(Y_{i}(j)\neq 0\), and zero otherwise. The goal is to estimate the random variable
\[R(Z):=\frac{1}{d}\|Z\|_{\ell_{1}}\quad\text{because}\quad\mathbb{E}R(Z)=p.\]
**Lemma 4**.: _Let \(Y_{1},\ldots,Y_{N}\) be i.i.d copies of \(X\odot\mathbf{p}\), then there exists an estimator \(\widehat{p}\) depending only on the sample and the confidence level \(\delta\) such that, with probability at least \(1-\delta\),_
\[|\widehat{p}-p|\leqslant Cp\sqrt{\frac{\log(1/\delta)}{N}}.\]
_As an immediate consequence, if \(N\geqslant C\log(1/\delta)\), then (with the same probability guarantee)_
\[\frac{1}{2}p\leqslant\widehat{p}\leqslant\frac{3}{2}p.\]
Before we proceed to the proof, we remark that if we obtain an output of the estimator larger than one, then we can safely estimate \(p\) by one.
Proof.: Following the notation above, we collect \(R(Z_{1}),\ldots,R(Z_{N})\), i.i.d. copies of \(R(Z)\). We invoke a standard sub-Gaussian mean estimator for \(R(Z)\) (e.g., the trimmed mean estimator of [18, Theorem 1]) together with the fact that \(\operatorname{Var}(R(Z))\leqslant p^{2}\) to obtain that, with probability at least \(1-\delta\),
\[|\widehat{p}-\mathbb{E}R(Z)|\leqslant C\sqrt{\frac{\log(1/\delta) \operatorname{Var}R(Z)}{N}}\leqslant Cp\sqrt{\frac{\log(1/\delta)}{N}}.\]
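For concreteness, the sketch below gives a minimal numerical rendering of this estimator. It uses a median-of-means routine as a stand-in for the trimmed-mean estimator of [18, Theorem 1], which is an assumption on our part, as is the layout of the data matrix (missing coordinates stored as exact zeros).

```python
import numpy as np

def median_of_means(samples, n_blocks, rng=np.random.default_rng()):
    # Robust mean estimate: shuffle, split into blocks, average each block, take the median.
    blocks = np.array_split(rng.permutation(samples), n_blocks)
    return float(np.median([block.mean() for block in blocks]))

def estimate_p(Y, delta=0.05):
    # Y: (N, d) array of observations, with missing coordinates stored as exact zeros.
    N, d = Y.shape
    R = (Y != 0).sum(axis=1) / d                      # R(Z_i) = ||Z_i||_{l1} / d for each sample
    n_blocks = min(N, max(1, int(np.ceil(8 * np.log(1.0 / delta)))))
    p_hat = median_of_means(R, n_blocks)
    return min(p_hat, 1.0)                            # outputs above one are truncated to one
```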
### Estimation of the Trace
To simplify the analysis, we can safely assume that \(p\) is known because we can accurately estimate it using the lemma from the previous section. We begin with the estimation of the trace. We have the following expression,
\[p\operatorname{Tr}(\Sigma)=\mathbb{E}\sum_{i=1}^{d}\langle Y,e_{i}\rangle^{2}.\]
To invoke a mean estimator, we need to bound the variance of the random variable on the right-hand side. We have,
\[\mathbb{E}\left(\sum_{i=1}^{d}\langle Y,e_{i}\rangle^{2}\right)^{2}\leqslant p \sum_{i=1}^{d}\mathbb{E}\langle X,e_{i}\rangle^{4}+p^{2}\sum_{i\neq j}\mathbb{ E}\langle X,e_{i}\rangle^{2}\langle X,e_{j}\rangle^{2}\lesssim p\kappa^{4} \operatorname{Tr}(\Sigma)^{2}.\]
The latter step follows from moment equivalence and Hölder's inequality, as was done several times in this manuscript. Since we know \(p\), it is enough to estimate the mean of the random variable on the right-hand side. We can invoke any standard mean estimator, for example [18, Theorem 1], to obtain an estimator \(\widehat{\operatorname{Tr}}(\Sigma)\) such that, with probability \(1-\delta\),
\[|\widehat{\operatorname{Tr}}(\Sigma)-p\operatorname{Tr}(\Sigma)|\leqslant C \kappa^{2}p\operatorname{Tr}(\Sigma)\sqrt{\frac{\log(1/\delta)}{N}}.\]
If the sample size satisfies \(N\geqslant C\kappa^{4}\log(1/\delta)\) for a sufficiently large absolute constant \(C\), then
\[|\widehat{\operatorname{Tr}}(\Sigma)-p\operatorname{Tr}(\Sigma)|\leqslant\frac {p\operatorname{Tr}(\Sigma)}{2},\]
and consequently
\[\frac{1}{2}\operatorname{Tr}(\Sigma)\leqslant p^{-1}\widehat{\operatorname{ Tr}}(\Sigma)\leqslant\frac{3}{2}\operatorname{Tr}(\Sigma). \tag{1}\]
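The trace step admits an equally short sketch. As above, a median-of-means routine stands in for the mean estimator of [18] (an assumption), and `p_hat` denotes the output of the estimator from Lemma 4.

```python
import numpy as np

def estimate_trace(Y, p_hat, delta=0.05, rng=np.random.default_rng()):
    # Robustly estimate E sum_i <Y, e_i>^2 = p * Tr(Sigma), then divide by the estimated p.
    S = (Y ** 2).sum(axis=1)                          # one realization of sum_i <Y, e_i>^2 per sample
    n_blocks = min(len(S), max(1, int(np.ceil(8 * np.log(1.0 / delta)))))
    blocks = np.array_split(rng.permutation(S), n_blocks)
    mom = np.median([block.mean() for block in blocks])
    return float(mom) / p_hat
```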
### Estimation of the Operator Norm
The most delicate part of this section is the estimation of the operator norm. The main lemma is the following
**Lemma 5**.: _Let \(Y_{1},\ldots,Y_{N}\) be i.i.d. copies of \(X\odot\mathbf{p}\). Then there exists an estimator \(\widehat{\|\Sigma\|}\) depending only on the samples and \(\kappa\) such that, with probability at least \(1-\delta\),_
\[c_{2}(\kappa)\|\Sigma\|\leqslant\widehat{\|\Sigma\|}\leqslant c_{1}(\kappa) \|\Sigma\|,\]
_as long as \(N\geqslant Cp^{-2}(\log(1/\delta)+r(\Sigma))\) with a sufficiently large absolute constant \(C>0\). Here \(c_{1},c_{2}>0\) are two constants depending only on \(\kappa\)._
The key idea is to repeat the same analysis as before for each part with an additional parameter \(\alpha\), and to show that if certain inequalities are satisfied then \(\widehat{\alpha}^{-2}\) is of the same order as the operator norm. Along the proof \(C_{1}>0\) is an explicit constant that can be computed by just keeping track of the constants in the proofs of Section 2.
Proof.: **Diagonal Part:** We slightly change the choice of the measures, as before we define
\[\Theta=\mathbb{R}^{d}\times\mathbb{R}^{d}.\]
Now we choose the measure \(\mu\) to be a product of two multivariate Gaussians with mean zero and covariance \(\beta^{-1}I_{d}\). For \(v\in\mathcal{S}^{d-1}\), let \(\rho_{v}\) be a product of two multivariate Gaussian distributions with mean \(\alpha v\) and covariance \(\beta^{-1}I_{d}\). The \(\mathcal{KL}\)-divergence becomes
\[\mathcal{KL}(\rho_{v},\mu)=\alpha^{2}\beta.\]
To simplify the notation we write \(\rho_{v,\alpha}=\rho_{v}\). Following the same lines as in the proof for the diagonal part, we have, with probability at least \(1-3\delta\),
\[\frac{1}{N}\sum_{i=1}^{N}\psi(\alpha^{2}v^{T}\operatorname{Diag}(Y_{i}\otimes Y_{i})v)\leqslant\alpha^{2}pv^{T}\operatorname{Diag}(\Sigma)v+C_{1}p\|\operatorname{Diag}(\Sigma)\|^{2}\kappa^{4}(\alpha^{4}+\beta^{-1}\alpha^{2})+C_{1}\beta^{-2}p\kappa^{4}\operatorname{Tr}(\Sigma)^{2}+\frac{2\log(1/\delta)}{N}+\frac{\alpha^{2}\beta}{N}.\]
Now we simplify the expression above. We choose \(\beta=c_{\beta}\operatorname{Tr}(\Sigma)\) where \(c_{\beta}>0\) is an absolute constant to be chosen later. We may assume \(N\geqslant c_{N}p^{-2}\max\{r(\Sigma),\log(1/\delta)\}\) and then
\[\frac{1}{pN}\sum_{i=1}^{N}\psi(\alpha^{2}v^{T}\operatorname{Diag }(Y_{i}\otimes Y_{i})v) \leqslant\alpha^{2}v^{T}\operatorname{Diag}(\Sigma)v+C_{1}\| \Sigma\|^{2}\kappa^{4}\alpha^{4}+C_{1}\kappa^{4}c_{\beta}^{-1}\alpha^{2}\|\Sigma\|\] \[+C_{1}c_{\beta}^{-2}\kappa^{4}+2c_{N}^{-1}+\alpha^{2}\|\Sigma\|c_ {\beta}c_{N}^{-1}.\]
**Off-Diagonal Part:** We use the same choice of the measures and proceed analogously. We obtain, with probability at least \(1-5\delta\),
\[\frac{1}{p^{2}n}\sum_{i=1}^{n}\psi(\alpha^{2}v^{T}\operatorname{Off} (Y_{i}\otimes Y_{i})v)\leqslant\alpha^{2}v^{T}\operatorname{Off}(\Sigma)v\] \[+C_{1}\alpha^{4}\kappa^{4}\|\Sigma\|^{2}+C_{1}c_{\beta}^{-2} \kappa^{4}\] \[+C_{1}c_{\beta}^{-1}\kappa^{4}\alpha^{2}\|\Sigma\|+2c_{N}^{-1}+c _{N}^{-1}c_{\beta}\alpha^{2}\|\Sigma\|.\]
**Everything Together:** We define the function \(g:\mathbb{R}\to\mathbb{R}\) to be
\[g(\alpha):=\frac{1}{Np}\sup_{v\in S^{d-1}}\sum_{i=1}^{N}\psi(\alpha^{2}v^{T} \operatorname{Diag}(Y_{i}\otimes Y_{i})v)+\frac{1}{Np^{2}}\sup_{v\in S^{d-1} \cup 0}\sum_{i=1}^{N}\psi(\alpha^{2}v^{T}\operatorname{Off}(Y_{i}\otimes Y _{i})v).\]
From above, we obtain that, with probability at least \(1-8\delta\),
\[g(\alpha)\leqslant C_{1}\|\Sigma\|^{2}\alpha^{4}\kappa^{4}+\|\Sigma\|\alpha^{2 }(\kappa^{2}+\kappa^{4}C_{1}c_{\beta}^{-1}+c_{\beta}c_{N}^{-1})+C_{1}\kappa^{4 }c_{\beta}^{-2}+4c_{N}^{-1}.\]
Notice that the constants \(C_{1}c_{\beta}^{-1}+c_{\beta}c_{N}^{-1}\) and \(C_{1}\kappa^{4}c_{\beta}^{-2}+4c_{N}^{-1}\) can be made arbitrarily small by choosing \(c_{\beta}\) and \(c_{N}\) sufficiently large. In particular, we choose \(c_{\beta}\) and \(c_{N}\) so that
\[g(\alpha)\leqslant C_{1}\|\Sigma\|^{2}\alpha^{4}\kappa^{4}+\|\Sigma\|\alpha^{2 }(1+L_{1})+L_{2},\]
where \(L_{1},L_{2}>0\) are two absolute constants satisfying the following conditions: \(1.1L_{2}<1\), \(L_{1}<1\) and \((1-L_{1})^{2}-8.4\kappa^{4}C_{1}L_{2}>0\). The reason for this choice will become clear along the proof. Now, without loss of generality, we assume that \(\mathbb{P}(Y_{i}=0)=0\). This is always possible by adding a small amount of Gaussian noise without changing the covariance too much. We construct a vector \(w\in S^{d-1}\) such that \(\min_{i\in[n]}\langle Y_{i},w\rangle\neq 0\) by sampling an isotropic Gaussian vector \(w_{0}\sim N(0,I_{d})\) and normalizing, \(w=w_{0}/\|w_{0}\|_{2}\). Notice that \(g(0)=0\) and \(g\) is continuous. The plan is to show that \(g\) can be greater than or equal to one, so we can apply the intermediate value theorem. Since \(w\) is a unit vector such that \(\min_{i\in[n]}\langle w,Y_{i}\rangle\neq 0\), and since \(\langle Y_{i},w\rangle^{2}=w^{T}\operatorname{Diag}(Y_{i}\otimes Y_{i})w+w^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})w\), at least one of the two terms is non-zero for each \(i\). If both are non-zero, we evaluate \(g\) at the point
\[\min\left\{\min_{i\in[n]}|w^{T}\operatorname{Diag}(Y_{i}\otimes Y_{i})w|,\; \min_{i\in[n]}|w^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})w|\right\}.\]
It is clear that \(g\) is at least one at such a point; observe that the terms in the summand are non-negative because we are allowed to take \(v=0\) in the supremum of the off-diagonal part. If one of them is zero, then we remove it from the min term above. Finally, we choose \(\widehat{\alpha}\) such that \(g(\widehat{\alpha})=1.1L_{2}\); such a choice is possible because \(1.1L_{2}<1\), and the existence of such \(\widehat{\alpha}\) is guaranteed by the intermediate value theorem. We obtain that
\[C_{1}\|\Sigma\|^{2}\widehat{\alpha}^{4}\kappa^{4}+\|\Sigma\|\widehat{\alpha}^{2}(1+L_{1})-0.1L_{2}\geqslant 0.\]
The expression above can be interpreted as a parabola in the variable \(x:=\widehat{\alpha}^{2}\|\Sigma\|\). It has two real roots: one is negative and plays no role; the other is positive and depends only on \(\kappa\). Therefore, there exists a constant \(c_{min}(\kappa)>0\) such that
\[\widehat{\alpha}^{2}\|\Sigma\|\geqslant c_{min}(\kappa).\]
This translates into a lower bound for \(\|\Sigma\|\) in terms of \(\widehat{\alpha}\). We now need an upper bound for \(\|\Sigma\|\) in terms of \(\widehat{\alpha}\). We repeat the same argument above for the product measure \(\rho_{2,v}\) between \(\theta\) and \(\nu\), where
\(\theta\sim N(\alpha v,\beta^{-1}I_{d})\) and \(\nu\sim N(-\alpha v,\beta^{-1}I_{d})\). Therefore, if \(v_{1}\in S^{d-1}\) is the normalized eigenvector corresponding to the maximum eigenvalue of \(\Sigma\), then
\[-g(\alpha)\leqslant-\frac{1}{np}\sum_{i=1}^{n}\psi(\alpha^{2}v_{1}^{T}\operatorname {Diag}(Y_{i}\otimes Y_{i})v_{1})-\frac{1}{np^{2}}\sum_{i=1}^{n}\psi(\alpha^{2 }v_{1}^{T}\operatorname{Off}(Y_{i}\otimes Y_{i})v_{1}).\]
Moreover, since \(-g(\alpha)\) is non-increasing in the interval \([0,\widehat{\alpha}]\), we have
\[-1.1L_{2}=-g(\widehat{\alpha})\leqslant-g(\alpha)\leqslant C_{1}\|\Sigma\|^{2}\alpha^{4}\kappa^{4}-\|\Sigma\|\alpha^{2}(1-L_{1})+L_{2}.\]
We obtain that, writing \(x=\|\Sigma\|\alpha^{2}\), the following inequality holds for all \(\alpha\in[0,\widehat{\alpha}]\):
\[C_{1}\kappa^{4}x^{2}-(1-L_{1})x+2.1L_{2}\geqslant 0.\]
The discriminant of the quadratic equation is \(\Delta=(1-L_{1})^{2}-8.4C_{1}\kappa^{4}L_{2}\), which is positive by construction. The inequality above is true if \(x\leqslant x_{1}\) or \(x\geqslant x_{2}\), where \(0<x_{1}<x_{2}\) are the positive roots of the corresponding quadratic equation. We claim that \(x\geqslant x_{2}\) cannot happen. Otherwise, since the inequality above holds for all \(\alpha\in[0,\widehat{\alpha}]\), it must hold for \(\alpha^{*}\) such that \(\|\Sigma\|(\alpha^{*})^{2}\in(x_{1},x_{2})\), but this contradicts the fact that the parabola assumes negative values in \((x_{1},x_{2})\). To conclude, we obtain that there exists another constant \(c_{max}(\kappa)>0\) such that \(\widehat{\alpha}^{2}\|\Sigma\|\leqslant c_{max}(\kappa)\) and then
\[c_{min}(\kappa)\leqslant\widehat{\alpha}^{2}\|\Sigma\|\leqslant c_{max}( \kappa).\]
We reach the conclusion by setting \(\widehat{\|\Sigma\|}:=\widehat{\alpha}^{-2}\).
### Completion of the proof
The final construction of our estimator is the following:
1. Split the sample \(Y_{1},\ldots,Y_{N}\) into four parts of size at least \(\lfloor N/4\rfloor\).
2. Estimate the parameter \(p\) with the first quarter of the sample using Lemma 4.
3. Estimate the trace \(\operatorname{Tr}(\Sigma)\) with the second quarter using (1) and the operator norm \(\|\Sigma\|\) with the third quarter using Lemma 5.
4. For the last quarter of the sample, use the estimator of Proposition 1 to estimate the covariance matrix.
We are now in position to prove the main result.
Proof.: As discussed in Section 2, the proof follows easily once we estimate the parameters of the truncation level. Indeed, the truncation levels in Proposition 1 only require the knowledge of \(\operatorname{Tr}(\Sigma),\|\Sigma\|\) and \(p\) up to an absolute constant. The error that we need to take into account is that we use the estimated value of \(p\) instead of the true value when we divide the estimated quantity by \(p\). This is the only reason why we have to estimate the precise value of the parameter \(p\). However, by the triangle inequality
\[\left\|\frac{1}{\widehat{p}}\widehat{\Sigma}_{1}-\operatorname{Diag}(\Sigma) \right\|\leqslant\left\|\frac{1}{\widehat{p}}\left(\widehat{\Sigma}_{1}-p \operatorname{Diag}(\Sigma)\right)\right\|+\left\|\frac{1}{\widehat{p}}\left(p \operatorname{Diag}(\Sigma)-\widehat{p}\operatorname{Diag}(\Sigma)\right) \right\|.\]
We apply Lemma 4 to estimate both terms. The first term on the right-hand side is, with probability at least \(1-\delta\), at most
\[\frac{\sqrt{p}}{\widehat{p}}C\|\Sigma\|\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}} \lesssim\frac{1}{\sqrt{p}}\|\Sigma\|\sqrt{\frac{r(\Sigma)+\log(1/\delta)}{N}}.\]
The second term satisfies, with same probability guarantee,
\[\left\|\frac{1}{\widehat{p}}\left(p\operatorname{Diag}(\Sigma)-\widehat{p} \operatorname{Diag}(\Sigma)\right)\right\|\leqslant\|\operatorname{Diag}( \Sigma)\|\frac{1}{\widehat{p}}\,|p-\widehat{p}|\lesssim\|\Sigma\|\sqrt{\frac{ \log(1/\delta)}{N}}.\]
The same procedure holds for the off-diagonal part since \(\|\operatorname{Off}(\Sigma)\|\leqslant 2\|\Sigma\|\), so we omit it for the sake of simplicity. Finally, the probability guarantee follows by applying a union bound a constant number of times.
Acknowledgments.The author would like to thank Tanja Finger, Felix Kuchelmeister and Nikita Zhivotovskiy for helpful discussions.
|
2310.07684 | Hypergraph Neural Networks through the Lens of Message Passing: A Common
Perspective to Homophily and Architecture Design | Most of the current hypergraph learning methodologies and benchmarking
datasets in the hypergraph realm are obtained by lifting procedures from their
graph analogs, leading to overshadowing specific characteristics of
hypergraphs. This paper attempts to confront some pending questions in that
regard: Q1 Can the concept of homophily play a crucial role in Hypergraph
Neural Networks (HNNs)? Q2 Is there room for improving current HNN
architectures by carefully addressing specific characteristics of higher-order
networks? Q3 Do existing datasets provide a meaningful benchmark for HNNs? To
address them, we first introduce a novel conceptualization of homophily in
higher-order networks based on a Message Passing (MP) scheme, unifying both the
analytical examination and the modeling of higher-order networks. Further, we
investigate some natural, yet mostly unexplored, strategies for processing
higher-order structures within HNNs such as keeping hyperedge-dependent node
representations, or performing node/hyperedge stochastic samplings, leading us
to the most general MP formulation up to date -MultiSet-, as well as to an
original architecture design, MultiSetMixer. Finally, we conduct an extensive
set of experiments that contextualize our proposals and successfully provide
insights about our inquiries. | Lev Telyatnikov, Maria Sofia Bucarelli, Guillermo Bernardez, Olga Zaghen, Simone Scardapane, Pietro Lio | 2023-10-11T17:35:20Z | http://arxiv.org/abs/2310.07684v2 | Hypergraph Neural Networks through the Lens of message passing: A Common Perspective to Homophily and Architecture Design
###### Abstract
Most of the current hypergraph learning methodologies and benchmarking datasets in the hypergraph realm are obtained by _lifting_ procedures from their graph analogs, simultaneously leading to overshadowing hypergraph network foundations. This paper attempts to confront some pending questions in that regard: Can the concept of homophily play a crucial role in Hypergraph Neural Networks (HGNNs), similar to its significance in graph-based research? Is there room for improving current hypergraph architectures and methodologies? (e.g. by carefully addressing the specific characteristics of higher-order networks) Do existing datasets provide a meaningful benchmark for HGNNs? Diving into the details, this paper proposes a novel conceptualization of homophily in higher-order networks based on a message passing scheme; this approach harmonizes the analytical frameworks of datasets and architectures, offering a unified perspective for exploring and interpreting complex, higher-order network structures and dynamics. Further, we propose MultiSet, a novel message passing framework that redefines HGNNs by allowing hyperedge-dependent node representations, as well as introduce a novel architecture -MultiSetMixer- that leverages a new hyperedge sampling strategy. Finally, we provide an extensive set of experiments that contextualize our proposals and lead to valuable insights in hypergraph representation learning.
## 1 Introduction
Hypergraph learning techniques have rapidly grown in recent years, demonstrating their effectiveness in processing higher-order interactions in numerous fields, spanning from recommender systems (Yu et al., 2021; Zheng et al., 2018; La Gatta et al., 2022), to bioinformatics (Zhang et al., 2018; Yadati et al., 2020; Klamt et al., 2009) and computer vision (Li et al., 2022; Xu et al., 2022; Gao et al., 2012; Yin et al., 2017; Kim et al., 2011). However, so far, the development of HyperGraph Neural Networks (HGNNs) has been largely influenced by the well-established Graph Neural Network (GNN) field. In fact, most of the current methodologies and benchmarking datasets in the hypergraph realm are obtained by _lifting_ procedures from their graph counterparts.
Drawing inspiration from graph-based models has significantly propelled the advancement of hypergraph research (Feng et al., 2019; Yadati et al., 2019; Chien et al., 2022), and it has simultaneously led to overshadowing hypergraph network foundations. We argue that it is now the time to address fundamental questions in order to pave the way for further innovative ideas in the field. In that regard, this study explores some of these open questions to understand better current HGNN architectures and benchmarking datasets. Can the concept of homophily play a crucial role in HGNNs, similar to its significance in graph-based research? Given that current HGNNs are predominantly extensions of GNN architectures adapted to the hypergraph domain, are these extended methodologies suitable, or should we explore new strategies tailored specifically for handling hypergraph-based data? Are the existing hypergraph benchmarking datasets truly _meaningful_ and representative enough to draw robust and valid conclusions?
To begin with, we explore how the concept of homophily can be characterized in complex, higher-order networks. Notably, there are many ways of characterizing homophily in hypergraphs -such as the distribution of node features, the analogous distribution of the labels, or the group connectivity similarity (as already discussed in (Veldt et al., 2023)). In particular, this work places the _node class distribution_ at the core of the analysis, and introduces a novel definition of homophily that relies on a message passing scheme. Interestingly, this enables us to analyze both hypergraph datasets and architecture designs from the same perspective. In fact, we reckon that this unified message passing framework has the potential to inspire the development of meaningful contributions for processing higher-order relationships more effectively.
Next, we study state-of-the-art HGNN architectures and introduce a new framework called MultiSet. We demonstrate that MultiSet generalizes most existing frameworks for HGNNs, including AllSet (Chien et al., 2022) and UniGCNII (Huang and Yang, 2021). Our framework presents an innovative approach to message passing, where multiple hyperedge-dependent representations of nodes are enabled. Then, we introduce novel methodologies to process hypergraphs -including MultiSetMixer, a new HGNN architecture based on a particular implementation of a MultiSet layer. In these implementations, we introduce a novel connectivity-based mini-batching strategy capable of processing large hyperedges and discuss the intriguing property of natural connectivity-based distribution shifts.
Last, but not least, we provide an extensive set of experiments that, driven by the general questions stated above, aim to gain a better understanding on fundamental aspects of hypergraph representation learning. In fact, the obtained results not only help us contextualize the proposals introduced in this work, but indeed offer valuable insights that might help improve future hypergraph approaches.
## 2 Related Works
Homophily in hypergraphs.Homophily measures are typically defined for graph models and consider only pairwise relationships. In the context of Graph Neural Networks (GNNs), many of the current models implicitly use the homophily assumption, which is shown to be crucial for achieving a robust performance with relational data (Zhou et al., 2020; Chien et al., 2020; Halcrow et al., 2020). Nevertheless, despite the pivotal role that homophily plays in graph representation learning, its hypergraph counterpart mainly remains unexplored. In fact, to the best of our knowledge, Veldt et al. (2023) is the only work that faces the challenge of defining homophily in higher-order networks. Veldt et al. (2023) introduces a framework in which hypergraphs are used to quantify homophily from group interactions; however, the definition of homophily is restricted to uniform hypergraphs -i.e. where all hyperedges have exactly the same size (more details in Section 3). This represents a hard assumption that complicates its applicability to most of the current hypergraph datasets.
Hypergraph Neural Networks.The work of Chien et al. (2022) introduced AllSet, a general framework to describe HGNNs through a two-step message passing based mechanism, and demonstrated that most of the current hypergraph models are special instances of their formulation, based on the composition of two learnable permutation invariant functions that transmit information from nodes to hyperedges, and back from hyperedges to nodes. In particular, AllSet can be seen as a generalization of the most commonly used HGNNs, including all clique expansion based (CE) methods, HGNN (Feng et al., 2019), HNHN (Dong et al., 2020), HCHA (Bai et al., 2021), HyperSAGE (Arya et al., 2020) and HyperGCN(Yadati et al., 2019). Chien et al. (2022) also proposes two novel AllSet-like learnable layers: the first one -AllDeepSet- exploits Deep Set (Zaheer et al., 2017), and the second one -AllSetTransformer- Set Transformer (Lee et al., 2019), both of them achieving state-of-the-art results in the most common hypergraph benchmarking datasets. Concurrent to AllSet, the work of Huang and Yang (2021) also aimed at designing a common framework for graph and hypergraph NNs, and its more advanced UniGCNII method leverages initial residual connections and identity mappings in the hyperedge-to-node propagation to address over-smoothing issues; notably, UniGCNII do not fall under AllSet notation due to these residual connections. With Chien et al. (2022) and Huang and Yang (2021) being the most relevant ones to our work, we extend this review in Appendix A.
**Notation.** A hypergraph is an ordered pair of sets \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) is the set of nodes and \(\mathcal{E}\) is the set of hyperedges. Each hyperedge \(e\in\mathcal{E}\) is a subset of \(\mathcal{V}\), i.e., \(e\subseteq\mathcal{V}\). A hypergraph is a generalization of the concept of a graph where (hyper)edges can connect more than two nodes. A vertex \(v\) and a hyperedge \(e\) are said to be incident if \(v\in e\). For each node \(v\), we denote its class by \(y_{v}\), and by \(\mathcal{E}_{v}=\{e\in\mathcal{E}:v\in e\}\) the subset of hyperedges in which it is contained, with \(d_{v}=|\mathcal{E}_{v}|\) depicting the node degree. The set of classes of the hypergraph is represented by \(\mathcal{C}=\{c_{i}\}_{i=1}^{|\mathcal{C}|}\).
## 3 Defining and measuring homophily in hypergraphs
Homophily is a graph property that describes the tendency for edges to connect nodes that are similar (Moody, 2001; Shrum et al., 1988; Verbrugge, 1983). Currently, the most common measure of graph homophily is the proportion of edges that connect nodes of the same class. In pairwise relationships, a high degree of network homophily tends to create a network with communities, based on nodes' class, that are highly connected within each other and poorly connected to the outside. Extending the concept of homophily to higher-order interactions is not straightforward, but it becomes crucial in order to avoid discarding valuable information about the composition of groups in which individuals participate. In this Section, we recap the general notion of higher-order homophily for \(k\)-uniform hypergraphs introduced in (Veldt et al., 2023) and present a novel propagation-based homophily measure which is applicable for general, non-uniform hypergraphs. In essence, the score proposed in Veldt et al. (2023) tends to primarily assess the composition of hyperedges within the graph by quantifying the distribution of classes among hyperedges. In contrast, our definition places a greater emphasis on capturing the interconnections between different hyperedges by the exchange of information between nodes following the message passing scheme.
**\(k\)-uniform Homophily.** Veldt et al. (2023) defines general higher-order homophily for \(k\)-uniform hypergraphs \(G_{k}=(\mathcal{V},\mathcal{E}_{k})\), which we refer to as _\(k\)-uniform homophily_. The type-\(t\) affinity score, for each \(t\in\{1,\ldots,k\}\), indicates the likelihood of a node belonging to class \(c\) participating in hyperedges in which exactly \(t\) group members belong to class \(c\). The authors introduce a _baseline score_ that measures the probability that a class-\(c\) node is in a hyperedge where \(t\) members are from class \(c\), given that the other \(k-1\) nodes were chosen uniformly at random. The \(k\)-uniform hypergraph homophily measure can be expressed as a ratio of affinity and baseline scores, with a ratio value of 1 indicating that the group is formed uniformly at random, while any other number indicates that group interactions are either overexpressed or underexpressed for class \(c\). Note that non-uniform hypergraphs cannot directly be evaluated with \(k\)-uniform homophily scores; instead, the corresponding initial hyperedge set \(\mathcal{E}\) has to be restricted to particular \(k\)-uniform hyperedges, and each value of \(k\) is processed separately. The detailed formulation of the homophily measures and the corresponding plots for each dataset can be found in Appendix J.
**Message Passing Homophily.** We present a novel two-step message passing homophily measure that, unlike the one proposed by Veldt et al. (2023), does not assume a \(k\)-uniform hypergraph structure. Furthermore, the proposed measure enables the definition of a score for each node and hyperedge at any neighborhood resolution, i.e., the connectivity of the hypergraph can be explicitly investigated. Our homophily definition follows the two-step message passing mechanism, starting from the hyperedges of the hypergraph. Thus, given a hyperedge \(e\), we define the 0-level hyperedge homophily \(h_{e}^{0}(c)\) as the fraction of nodes within the hyperedge that belong to class \(c\), i.e.
\[h_{e}^{0}(c)=\frac{1}{|e|}\sum_{v\in e}\mathds{1}_{y_{v}=c}. \tag{1}\]
This score describes how homophilic the initial connectivity is with respect to class \(c\). By computing the score for every class \(c_{i}\in\mathcal{C}\) we obtain a categorical distribution for each hyperedge \(e\in\mathcal{E}\), i.e. \(h_{e}^{0}=(h_{e}^{0}(c_{0}),\ldots,h_{e}^{0}(c_{|\mathcal{C}|}))\). We can then use this 0-level homophily information as a starting point to calculate higher-level homophily measurements for both nodes and hyperedges through the two-step message passing approach. Formally, we define the \(t\)-level homophily score as
\[h_{v}^{t}(y_{v})=\texttt{AGG}_{\mathcal{E}}\left(\{h_{e}^{t-1}(y_{v})\}_{e\in \mathcal{E}_{v}}\right), \tag{2}\]
where \(\texttt{AGG}_{\mathcal{E}}\) and \(\texttt{AGG}_{\mathcal{V}}\) are functions that aggregate edge and node homophily scores, respectively. In our implementation, we considered the mean operation for both aggregations.
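To make the iteration concrete, a small sketch of the computation is given below. The node update follows Equation 2 with mean aggregation; the hyperedge update at step \(t\) (averaging the scores of its member nodes) is our reading of the unstated \(\texttt{AGG}_{\mathcal{V}}\) step and should be treated as an assumption.

```python
import numpy as np

def mp_homophily(hyperedges, labels, n_steps):
    # hyperedges: list of lists of node ids in {0, ..., n-1}; labels: length-n sequence of class labels.
    classes = sorted(set(labels))
    nodes = sorted({v for e in hyperedges for v in e})
    incident = {v: [i for i, e in enumerate(hyperedges) if v in e] for v in nodes}
    # 0-level hyperedge homophily: class distribution inside each hyperedge (Eq. 1)
    h_e = [np.array([np.mean([labels[v] == c for v in e]) for c in classes]) for e in hyperedges]
    h_v = {}
    for _ in range(n_steps):
        # node scores: mean over incident hyperedge scores (Eq. 2, with mean aggregation)
        h_v = {v: np.mean([h_e[i] for i in incident[v]], axis=0) for v in nodes if incident[v]}
        # hyperedge scores: mean over member node scores (assumed AGG_V)
        h_e = [np.mean([h_v[v] for v in e], axis=0) for e in hyperedges]
    # return h_v^t(y_v), the score of each (non-isolated) node evaluated at its own class
    return {v: float(h_v[v][classes.index(labels[v])]) for v in h_v}
```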
Qualitative AnalysisIn this paragraph, we are taking a closer look at the qualitative analysis of the node homophily measure we introduced. One of the most straightforward ways to make use of the message passing homophily measure is to visualize how the node homophily score, as described in Eq. 2, changes dynamically. We've depicted this process in Figure 1, focusing on the CORA-CA and 20NewsGroup datasets. Please note that in the figure, we are only showing non-isolated nodes. Looking at Figure 1 (a), we can observe several notable trends. First, in the initial node distribution (\(t=0\)), every class, except class 6, has a significant number of fully homophilic nodes. As we move to the 1-hop neighborhood (\(t=1\)), the corresponding classes either exhibit a moderate decrease in homophily or show no decrease at all. It's worth noting that at \(t=0,1,\) and \(10\), class 2 maintains a stable homophily distribution, hinting at an isolated subnetwork within. Furthermore, at \(t=10\), some points still maintain a node homophily score of 1, indicating the presence of multiple small subnetworks. Class 6 consistently displays the lowest average homophily measure at every step, with an average score of approximately 38% at \(t=10\). The node homophily distribution for the 20Newsgroups dataset is visualized in Figure 1 (b). At time step \(t=0\), we observe a wide range of homophily scores from 0 to 1 for each class. This suggests that the network is highly irregular with respect to connectivity. Moving to time step \(t=1\), there is a significant decrease in the homophily scores for every class, indicating a high degree of heterophily within the 1-hop neighborhood, which is not surprising considering step zero node homophily distribution. Finally, at time step \(t=10\), we can observe that all the classes converge to approximately the same homophily values within each class. This convergence suggests that the network is highly interconnected. More insights regarding node homophily measure and related HGNNs performances are described in Section 5 while the rest of the plots for the datasets can be found in Appendix I.
## 4 Methods
Current HGNNs aim to generalize GNN concepts to the hypergraph domain, and are specially focused on redefining graph-based propagation rules to accommodate higher-order structures. In this regard, the work of Chien et al. (2022) introduced a general notation framework, called AllSet, that encompasses most of the currently available HGNN layers, including CEGCN/CEGAT, HGNN (Feng et al., 2019), HNHN (Dong et al., 2020), HCHA (Bai et al., 2021), HyperGCN (Yadati et al., 2019), and the AllDeepSet and AllSetTransformer presented in the same work (Chien et al., 2022).
The first part of this Section revisits the original AllSet formulation. Then, we introduce a new framework -termed MultiSet- which extends AllSet by allowing multiple hyperedge-dependent
Figure 1: Node Homophily Distribution Scores for CORA-CA (a) and 20Newsgroups (b) using Equation 2 at \(t=0,1,\) and \(10\) (left, middle, and right plots correspondingly). Horizontal lines depict class mean homophily, with numbers above indicating the number of visualized points per class.
representations of nodes. Finally, we present some novel methodologies to process hypergraphs -including MultiSetMixer, a new HGNN architecture within the MultiSet framework.
### AllSet Propagation Setting
For a given node \(v\in\mathcal{V}\) and hyperedge \(e\in\mathcal{E}\) in a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), let \(\mathbf{x}_{v}^{(t)}\in\mathbb{R}^{f}\) and \(\mathbf{z}_{e}^{(t)}\in\mathbb{R}^{d}\) denote their vector representations at propagation step \(t\). We say that a function \(f\) is a multiset function if it is permutation invariant w.r.t. each of its arguments in turn. Typically, the initial representations \(\mathbf{x}_{v}^{(0)}\) and \(\mathbf{z}_{e}^{(0)}\) are initialized from the original node and hyperedge features, if available. In this context, the AllSet framework (Chien et al., 2022) consists of the following two-step update rule:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\to\mathcal{E}}(\{\mathbf{x}_{u}^{(t)}\}_{u:u\in e};\mathbf{z}_{e}^{(t)}), \tag{4}\]
\[\mathbf{x}_{v}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e\in \mathcal{E}_{v}};\mathbf{x}_{v}^{(t)}), \tag{5}\]
where \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are two permutation invariant functions with respect to their first input. Equations 4 and 5 describe the propagation from nodes to hyperedges and from hyperedges to nodes, respectively. We extend the original AllSet formulation to accommodate UniGCNII (Huang and Yang, 2021), a concurrent work to AllSet, by modifying the node update rule (Eq. 5) in order to allow residual connections, i.e.:
\[\mathbf{x}_{v}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e\in \mathcal{E}_{v}};\{\mathbf{x}_{v}^{(k)}\}_{k=0}^{t}). \tag{6}\]
Note that \(f_{\mathcal{E}\to\mathcal{V}}\) is not required to be permutation invariant with respect to this second input, \(\{\mathbf{x}_{v}^{(k)}\}_{k=0}^{t}\).
**Proposition 1**.: _UniGCNII (Huang and Yang, 2021) is a special case of AllSet when considering Eqs. 4 and 6._
In the practical implementation of a model, \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are parametrized and learnt for each dataset and task, and particular choices of these functions give rise to the different HGNN layer architectures considered in this paper; more details in Appendix B.
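As a bare-bones illustration of this two-step scheme, the sketch below instantiates Equations 4 and 5 with plain averages for both \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\); the learnable parametrizations used by the actual architectures are omitted, and the incidence-list data layout is an assumption made for illustration.

```python
import numpy as np

def allset_step(X, hyperedges):
    # X: (num_nodes, f) node states; hyperedges: list of lists of node ids.
    # Nodes -> hyperedges (Eq. 4): each hyperedge averages the states of its members.
    Z = np.stack([X[list(e)].mean(axis=0) for e in hyperedges])
    # Hyperedges -> nodes (Eq. 5): each node averages the states of its incident hyperedges.
    X_new = X.copy()
    for v in range(X.shape[0]):
        incident = [i for i, e in enumerate(hyperedges) if v in e]
        if incident:
            X_new[v] = Z[incident].mean(axis=0)
    return X_new, Z
```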
### MultiSet Framework
In this Section, we introduce our proposed MultiSet framework, which can be seen as an extension of AllSet where nodes can have multiple co-existing hyperedge-based representations. For a given hyperedge \(e\in\mathcal{E}\) in a hypergraph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), we denote by \(\mathbf{z}_{e}^{(t)}\in\mathbb{R}^{d}\) its vector representation at step \(t\). However, for a node \(v\in\mathcal{V}\), MultiSet allows for as many representations of the node as the number of hyperedges it belongs to. We denote by \(\mathbf{x}_{v,e}^{(t)}\in\mathbb{R}^{f}\) the vector representation of node \(v\) in a hyperedge \(e\in\mathcal{E}_{v}\) at propagation time \(t\), and by \(\mathbb{X}_{v}^{(t)}=\{\mathbf{x}_{v,e}^{(t)}\}_{e\in\mathcal{E}_{v}}\), the set of all \(d_{v}\) hidden states of that node in the specified time-step. Accordingly, the hyperedge and node update rules of Multiset are formulated to accommodate hyperedge-dependent node representations:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\to\mathcal{E}}(\{\mathbb{X}_{u}^{(t)}\}_{u:u \in e};\mathbf{z}_{e}^{(t)}), \tag{7}\]
\[\mathbf{x}_{v,e}^{(t+1)}=f_{\mathcal{E}\to\mathcal{V}}(\{\mathbf{z}_{e}^{(t+1)}\}_{e \in\mathcal{E}_{v}};\{\mathbb{X}_{v}^{(k)}\}_{k=0}^{t}), \tag{8}\]
where \(f_{\mathcal{V}\to\mathcal{E}}\) and \(f_{\mathcal{E}\to\mathcal{V}}\) are two multiset functions with respect to their first input. After \(T\) iterations of message passing, MultiSet also considers a last readout-based step with the idea of obtaining a unique final representation \(x_{v}^{T}\in\mathbb{R}^{f^{\prime}}\) for each node from the set of its hyperedge-based representations:
\[\mathbf{x}_{v}^{(T)}=f_{\mathcal{V}\to\mathcal{V}}(\{\mathbb{X}_{v}^{(k)}\}_{k=0} ^{T}) \tag{9}\]
where \(f_{\mathcal{V}\to\mathcal{V}}\) is also a multiset function.
**Proposition 2**.: _AllSet (Eqs. 4-5), as well as its extension (Eqs. 4 and 6), are special cases of MultiSet (Eqs. 7-9)._
Figure 3: MultiSet layout
Figure 2: AllSet layout
### Training MultiSet networks
This Section describes the main characteristics of our MultiSet layer implementation, termed MultiSetMixer, and presents a novel sampling procedure that our model incorporates.
**Learning MultiSet Layers.** Following the mixer-style block designs (Tolstikhin et al., 2021) and standard practice, we propose the following MultiSet layer implementation for HGNNs:
\[\mathbf{z}_{e}^{(t+1)}=f_{\mathcal{V}\rightarrow\mathcal{E}}(\{\mathbf{x}_{u,e}^{(t)} \}_{u:u\in e};\mathbf{z}_{e}^{(t)}):=\frac{1}{|e|}\sum_{u\in e}\mathbf{x}_{u,e}^{(t)}+ \text{MLP}\left(\text{LN}\left(\frac{1}{|e|}\sum_{u\in e}\mathbf{x}_{u,e}^{(t)} \right)\right), \tag{10}\]
\[\mathbf{x}_{v,e}^{(t+1)}=f_{\mathcal{E}\rightarrow\mathcal{V}}(\mathbf{z}_{e}^{(t+1)} ;\mathbf{x}_{v,e}^{(t)}):=\mathbf{x}_{v,e}^{(t)}+\text{MLP}\left(\text{LN}(\mathbf{x}_{v,e }^{(t)})\right)+\mathbf{z}_{e}^{(t+1)}, \tag{11}\]
\[\mathbf{x}_{v}^{(T)}=f_{\mathcal{V}\rightarrow\mathcal{V}}(\mathbb{X}_{v}^{(T)}): =\frac{1}{d_{v}}\sum_{e\in\mathcal{E}_{v}}\mathbf{x}_{v,e}^{(T)} \tag{12}\]
where the MLPs are composed of two fully-connected layers, and LN stands for layer normalisation. This novel architecture, which we call MultiSetMixer, is based on a mixer-based pooling operation for _(i)_ updating hyperedges from their nodes' representations, and _(ii)_ generating and updating hyperedge-dependent representations of the nodes.
**Proposition 3**.: _The functions \(f_{\mathcal{V}\rightarrow\mathcal{E}}\), \(f_{\mathcal{E}\rightarrow\mathcal{V}}\) and \(f_{\mathcal{V}\rightarrow\mathcal{V}}\) defined in MultiSetMixer are permutation invariant. Furthermore, these functions are universal approximators of multiset functions when the size of the input multiset is finite._
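A compact PyTorch rendering of Equations 10-12 might look as follows; the padded-batch layout, the GELU nonlinearity inside the two-layer MLPs, and all tensor names are illustrative assumptions rather than the authors' exact implementation. In an actual model, several such layers would be stacked before the readout.

```python
import torch
import torch.nn as nn

class MultiSetMixerLayer(nn.Module):
    # Sketch of Eqs. 10-11: update hyperedge states and hyperedge-dependent node states.
    def __init__(self, dim, hidden):
        super().__init__()
        self.ln_e, self.ln_v = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp_e = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
        self.mlp_v = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x, mask):
        # x: (B, L, dim) node states x_{u,e} for B hyperedges padded to length L
        # mask: (B, L) boolean, True on real (non-padding) positions
        w = mask.unsqueeze(-1).float()
        pooled = (x * w).sum(dim=1) / w.sum(dim=1).clamp(min=1.0)   # (1/|e|) sum_u x_{u,e}
        z = pooled + self.mlp_e(self.ln_e(pooled))                   # Eq. 10
        x = x + self.mlp_v(self.ln_v(x)) + z.unsqueeze(1)            # Eq. 11
        return x, z

def readout(x, mask, node_index, num_nodes):
    # Eq. 12: average each node's hyperedge-dependent states; node_index: (B, L) long tensor of node ids.
    flat_x, flat_idx = x[mask], node_index[mask]
    acc = torch.zeros(num_nodes, x.size(-1)).index_add_(0, flat_idx, flat_x)
    cnt = torch.zeros(num_nodes).index_add_(0, flat_idx, torch.ones_like(flat_idx, dtype=torch.float))
    return acc / cnt.clamp(min=1.0).unsqueeze(-1)
```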
**Mini-batching.** The motivation for introducing a new strategy to iterate over hypergraph datasets is twofold. On the one hand, current HGNN pipelines suffer from scalability issues when processing large datasets and very large hyperedges. On the other, pooling operations over relatively large sets can also lead to over-squashing the signal. To help in these directions, we propose sampling mini-batches \(X\) of a certain size \(B\) at each iteration. In _step 1_, we sample \(B\) hyperedges from \(\mathcal{E}\); the hyperedge sampling over \(\mathcal{E}\) can be either uniform or weighted (e.g., by taking into account hyperedge cardinalities). Then, in _step 2_, \(L\) nodes are in turn sampled from each sampled hyperedge \(e\), padding the hyperedge with \(L-|e|\) special padding tokens if \(|e|<L\). Overall, the obtained mini-batch \(X\) has fixed size \(B\times L\). See Appendix H for additional analysis of the sampling procedure.
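The two-step sampling itself can be sketched as follows; the cardinality-proportional weighting shown for step 1 and the choice of padding token are illustrative assumptions.

```python
import numpy as np

PAD = -1  # hypothetical id reserved for the padding token

def sample_minibatch(hyperedges, B, L, weighted=True, rng=np.random.default_rng()):
    # Step 1: sample B hyperedges, uniformly or proportionally to their cardinality.
    sizes = np.array([len(e) for e in hyperedges], dtype=float)
    probs = sizes / sizes.sum() if weighted else None
    chosen = rng.choice(len(hyperedges), size=B, p=probs)
    # Step 2: sample up to L nodes from each chosen hyperedge, padding short ones.
    batch = np.full((B, L), PAD, dtype=int)
    for row, e_id in enumerate(chosen):
        e = hyperedges[e_id]
        take = rng.choice(e, size=min(L, len(e)), replace=False)
        batch[row, : len(take)] = take
    return batch, chosen  # (B, L) node ids (with padding) and the sampled hyperedge ids
```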
## 5 Experimental Results
The questions that we introduced in the Introduction have shaped our research, leading to a new definition of higher-order homophily and novel architectural designs and sampling strategies that can potentially fit better the properties of hypergraph networks. In subsequent subsections, we set again three main questions that follow up from these fundamental inquiries and can help contextualize the technical contributions introduced in this paper.
**Dataset and Models.** We use the same datasets as Chien et al. (2022), which include Cora, Citeseer, Pubmed, ModelNet40, NTU2012, 20Newsgroups, Mushroom, ZOO, CORA-CA, and DBLP-CA. More information about the datasets and corresponding statistics can be found in Appendix F.2. We also utilize the benchmark implementation provided by Chien et al. (2022) to conduct the experiments with several models, including AllDeepSets, AllSetTransformer, UniGCNII, CEGAT, CEGCN, HCHA, HGNN, HNHN, HyperGCN, HAN, and HAN (mini-batching). Additionally, we consider a vanilla MLP applied to node features and a transformer architecture, and introduce three new models: MultiSetMixer, MLP Connectivity Batching (MLP CB), and Multiple MLP CB (MMLP CB). The MLP CB and MMLP CB models use connectivity information to form and process batches. Specifically, the MMLP CB model processes the top three most frequent connectivities using separate MLP encoders, while the fourth encoder is used to process the remaining connectivities. We refer to Section 4.3 for further details about all these architectures. All models are optimized using 15 splits with 2 model initializations, resulting in a total of 30 runs; see Appendix F.1 for further details.
### How does MultiSetMixer perform?
Our first experiment aims to assess the performance of our proposed model, MultiSetMixer, as well as the two introduced baselines, MLP CB and MMLP CB. Figure 4 shows the average rankings -across all models and datasets- of the top-3 best performing models for the considered training splits, exhibiting that those splits can impact the relative performance among models.
However, due to space limitations, we restrict our analysis to the \(50\%\) split results shown in Table 1, and relegate to Appendix G.1 the corresponding tables for the other scenarios. Table 1 emphasizes the MultiSetMixer model's relatively solid performance, being the best-performing model on the NTU2012, ModelNet40, and 20Newsgroups datasets. Its performance on the 20Newsgroups dataset is especially noteworthy, significantly outperforming the other models. Moreover, it is notable that MLP CB and MMLP CB exhibit similar behaviour on this dataset. In contrast, all other models achieve roughly the same performance as the MLP. This observation suggests that these models cannot account for dataset connectivity; in particular, as we demonstrated in Section 3, the dispersion of the node homophily measure, with a subsequent convergence to a similar value within each class, indicates that the dataset's connectivity is notably non-homophilic and presents a challenge. In contrast, CORA-CA exhibits a high degree of homophily within its hyperedges and shows the most significant performance gap between the best-performing model, AllSetTransformer, and the basic MLP. A similar trend is observed for DBLP-CA (see node homophily plot in Appendix I). Please refer to Section 5.3 for additional experiments analyzing the impact of connectivity on the models.
Footnote 1: Unless otherwise specified, all tables in the main body of the paper use a \(50\%/25\%/25\%\) split between training and testing. The results are shown as Mean Accuracy \(\pm\) Standard Deviation, with the best result highlighted in bold and shaded in grey; results within one standard deviation of the best result are displayed in blue-shaded boxes.
On the other hand, we can notice that CEGAT, CEGCN and our proposed model do not perform well on the Mushroom dataset. This is noteworthy because the Mushroom dataset's features are highly representative, as demonstrated by the near-perfect performance of the MLP classifier. This suggests that, in this particular case, connectivity may not play a crucial role in achieving high performance.
### What is the impact of the introduced mini-batch sampling strategy?
Next, we examine the role that our proposed mini-batching sampling can play _(i)_ in explaining previous results and _(ii)_ in influencing other models' performance.
**Class distribution analysis.** To evaluate and motivate the potential of the proposed mini-batching sampling, we investigate the reason behind both the superior performance of MultiSetMixer, MLP CB and MMLP CB on 20NewsGroup and their poor performance on Mushroom. Framing mini-batching from the connectivity perspective presents a nuanced challenge that conceals significant potential for improvement (Teney et al., 2023). It is important to note that connectivity, by definition,
\begin{table}
Table 1: Test accuracy (%) under the \(50\%/25\%/25\%\) split, averaged over 15 splits. (Tabular values not recoverable from the extraction.)
\end{table}
describes relationships among the nodes, implying that some parts of the dataset might interconnect much more densely, creating some sort of hubs within the network. Thus, mini-batching might introduce an unexpected skew in the training distribution. In particular, in Figure 5, we depict the class distribution of the original dataset, referred to as _Node_, while _'Step 1 and 2'_ and _'Step 1'_ show the distribution after each step in our mini-batching procedure. The sampling procedure tends to rebalance class distributions in certain cases, such as the 20NewsGroup dataset, while in contrast, it introduces an imbalance that was not present in the original labels in the Mushroom dataset, where our model demonstrated suboptimal performance. This observation leads to the hypothesis that, in some cases, the sampling procedure produces a distribution shift that rebalances the class distributions and leads our model to outperform the comparison models.
**Application to Other Models.** Furthermore, we explore the proposed mini-batch sampling procedure with the AllSetTransformer and UniGCNII models by implementing Step 1 of the mini-batch procedure without additional hyperparameter optimization. From Table 2, we can observe a drop in performance for most of the datasets both for AllSetTransformer and for UniGCNII; both models, on average, outperform the HAN (mini-batching) model. This suggests the substantial potential of the proposed sampling procedure. More in detail, AllSetTransformer has a substantial decrease in accuracy for the CORA-CA dataset, in contrast to UniGCNII, which registers only marginal decreases. A parallel pattern emerges with the DBLP-CA dataset.
### How do connectivity changes affect performance?
To shed light on this, we design two different experimental approaches aimed at modifying the original connectivity of the datasets in a systematic manner. The first experiment tests the performance when some hyperedges are removed following different _drop connectivity_ strategies. Then, a second experiment examines the models' performance with the introduction of two preprocessing strategies applied to the given hypergraph connectivity.
**Reducing Connectivity.** This experiment aims to investigate the significance of connectivity in datasets and the extent to which it influences the performance of the models. We divide this experiment into two parts: (i) drop connectivity and (ii) connectivity rewiring. In the first part of the experiment, we employ three strategies to introduce variations in the initial dataset's connectivity. The first two strategies involve ordering hyperedges based on their lengths in **ascending order**. In the first approach, referred to as _trimming_, we remove the initial \(x\%\) of ordered hyperedges. The second approach, referred to as _retention_, involves keeping the first \(x\%\) of hyperedges and discarding the remaining \(100-x\%\). Finally, the last strategy involves randomly dropping \(x\%\) of hyperedges from the dataset, referred to as _random drop_. Results shown in Table 3 also indicate that connectivity minimally impacts CEGCN and AllSetTransformer for the Citeseer and Pubmed datasets. On the other hand, MultiSetMixer performs better at the _trimming 25%_ setting, although the achieved performance is on par with the MLP reported in Table 1. This suggests that the proposed model was negatively affected by the distribution shift. Conversely, we observe a similar but opposite trend for the Mushroom dataset, where MultiSetMixer's performance improves due to the reduced impact of the distribution shift. Another interesting observation is that the CEGCN model gains improvement in 6 out of 9 datasets, with a doubled increase for the ZOO dataset. In the case of the Cora, CORA-CA, and DBLP-CA datasets, another interesting pattern emerges: retaining only 25% of the highest relationships (_retention 25%_) consistently results in better performance compared to retaining 50% or 75%. This is intriguing because, at the 25% level, we are preserving only a small fraction of the higher-order relationships. The opposite pattern holds for the _trimming_ strategy. For the
\begin{table}
(Tabular values not recoverable from the extraction.)
\end{table}
Table 2: Mini-batching experiment. Test accuracy in % averaged over 15 splits
datasets mentioned above, this phenomenon does not appear when we remove hyperedges randomly; in this case, as expected, the more hyperedges we remove, the more the performance decreases.
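For reference, the three removal strategies can be written down directly; the sketch below assumes hyperedges are given as Python lists and that \(x\) is a fraction in \([0,1]\).

```python
import numpy as np

def drop_hyperedges(hyperedges, x, strategy, rng=np.random.default_rng()):
    # Order hyperedges by cardinality (ascending), then keep/remove a fraction x of them.
    ordered = sorted(hyperedges, key=len)
    k = int(round(x * len(ordered)))
    if strategy == "trimming":    # remove the first x% (smallest hyperedges)
        return ordered[k:]
    if strategy == "retention":   # keep only the first x% and discard the rest
        return ordered[:k]
    if strategy == "random":      # drop x% of hyperedges uniformly at random
        keep = rng.permutation(len(hyperedges))[k:]
        return [hyperedges[i] for i in keep]
    raise ValueError(f"unknown strategy: {strategy}")
```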
**Rewiring Connectivity.** In this experiment, we preserve the original connectivity and investigate the influence of homophilic hyperedges on performance. To do so, we adjust the given connectivity in two different ways. The first strategy aims to unveil the full potential of homophily for each dataset by dividing the given hyperedges into fully homophilic ones based on the _node labels_. In contrast, the second strategy explores the possibility of splitting hyperedges based on their _initial node features_. More in detail, the hyperedge division results from applying \(k\)-means multiple times to each hyperedge \(e\), varying at each iteration the number of centroids \(m\) from \(2\) to \(\min(C,|e|)\); the elbow method is then used to determine the optimal hyperedge partitioning. It is not surprising that the "Label Based" strategy improves the performance for all datasets and models, as evident from Table 4. However, it is worth highlighting that the graph-based method CEGCN achieves results similar to HGNNs in this strategy. Additionally, only CEGCN, on average, performs better with the "k-means" strategy. These observations collectively suggest that connectivity preprocessing plays a crucial role, particularly for graph-based models. Applying "k-means" diminishes the distribution shift for MultiSetMixer.
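A simplified version of the feature-based splitting, using scikit-learn's KMeans and a crude elbow rule on the inertia curve, is sketched below; the specific elbow criterion (a 10% relative-drop threshold) and the sweep starting at a single centroid are assumptions, since the exact criterion is not spelled out here.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_hyperedge(node_feats, num_classes):
    # node_feats: (|e|, f) features of the nodes in one hyperedge; returns groups of node indices.
    n = node_feats.shape[0]
    max_m = min(num_classes, n)
    if max_m < 2:
        return [np.arange(n)]
    inertias, labelings = [], []
    for m in range(1, max_m + 1):
        km = KMeans(n_clusters=m, n_init=10).fit(node_feats)
        inertias.append(km.inertia_)
        labelings.append(km.labels_)
    best = max_m
    for m in range(1, max_m):  # elbow: stop when adding a centroid reduces inertia by < 10%
        if inertias[m - 1] > 0 and (inertias[m - 1] - inertias[m]) / inertias[m - 1] < 0.1:
            best = m
            break
    labels = labelings[best - 1]
    return [np.where(labels == c)[0] for c in range(best)]
```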
## 6 Discussion
This section summarizes some key findings from our extensive evaluation and proposed homophily measure. Firstly, we showed that the proposed message passing formalization of the homophily measure enables the discovery of patterns and provides valuable insights into the dynamics of hypernetworks. Importantly, this approach can be extended to other definitions of homophily beyond labels. Furthermore, we showed that our MultiSetMixer model outperforms existing architectures in several scenarios. We also identified some common failure modes, which we attribute to the distribution shift introduced by the proposed mini-batching sampling scheme and the way message-passing propagates information. The experimental results demonstrate that certain benchmark datasets (Citeseer, Pubmed, 20Newsgroups) for hypergraph learning contain connectivity patterns that are not
effectively captured by HGNNs and are often overlooked. We show that the substantial performance gap between HGNNs and MLP for Cora, CORA-CA, and DBLP-CA is primarily attributed to patterns within a particular subset of connectivity, i.e., the largest hyperedge cardinalities have a stronger influence on performance. Finally, there is compelling evidence that connectivity can serve purposes beyond message propagation, such as acting as a tool for intentional distribution shift with mini-batching. We believe that the provided set of experiments and dynamic homophily figures are valuable tools to shape novel ideas for the hypergraph modeling research field.
## 7 Reproducibility
We include all the details about our experimental setting, including the choice of hyperparameters, the specifications of our machine and environment, the training/validation/test split, in Appendix F.1 and in Section 5. To ensure the reproducibility of our results, we will provide the source code along with the camera-ready version.
|
2305.03654 | Uniqueness of traveling fronts in premixed flames with stepwise
ignition-temperature kinetics and fractional reaction order | In this paper, we consider a reaction-diffusion system describing the
propagation of flames under the assumption of ignition-temperature kinetics and
fractional reaction order. It was shown in [3] that this system admits a
traveling front solution. In the present work, we show that this traveling
front is unique up to translations. We also study some qualitative properties
of this solution using the combination of formal asymptotics and numerics. Our
findings allow conjecture that the velocity of the propagation of the flame
front is a decreasing function of all of the parameters of the problem:
ignition temperature, reaction order and an inverse of the Lewis number. | Amanda Matson, Claude-Michel Brauner, Peter V. Gordon | 2023-05-05T16:20:12Z | http://arxiv.org/abs/2305.03654v1 | Uniqueness of traveling fronts in premixed flames with stepwise ignition-temperature kinetics and fractional reaction order.
###### Abstract
In this paper, we consider a reaction-diffusion system describing the propagation of flames under the assumption of ignition-temperature kinetics and fractional reaction order. It was shown in [3] that this system admits a traveling front solution. In the present work, we show that this traveling front is unique up to translations. We also study some qualitative properties of this solution using a combination of formal asymptotics and numerics. Our findings lead us to conjecture that the velocity of propagation of the flame front is a decreasing function of all of the parameters of the problem: the ignition temperature, the reaction order and the inverse of the Lewis number.
**Keywords:** Reaction-diffusion systems, traveling front solution, uniqueness of solution, qualitative dependency of solution on parameters.
**AMS subject classifications:** 35K57, 35C07, 34B08, 34E05, 80A25.
## 1 Introduction
The canonical constant density approximation model of flame propagation in one dimensional formulation reads [1, 7, 13, 15]:
\[\left\{\begin{array}{ll}T_{\tilde{t}}=T_{\bar{x}\bar{x}}+\Omega(T,C),\\ C_{\tilde{t}}=\Lambda C_{\bar{x}\bar{x}}-\Omega(T,C),\end{array}\right. \tag{1.1}\]
where \(T\) and \(C\) are appropriately normalized temperature and concentration of the deficient reactant, \(\bar{x}\in\mathbb{R},\ \ \bar{t}>0\) are normalized spatiotemporal coordinates, \(\Lambda>0\) is an inverse of the Lewis number and \(\Omega\) is the reaction rate. The reaction rate is typically specified as:
\[\Omega(T,C):=\left\{\begin{array}{ll}C^{\alpha}F(T)&T\geq\theta\ \ \ \mbox{and}\ \ \ \ \ \ C>0,\\ 0&T<\theta\ \ \ \mbox{and/or}\ \ \ \ C=0,\end{array}\right. \tag{1.2}\]
where \(\alpha\geq 0\) is the reaction order, \(\theta\in(0,1)\) is the ignition temperature and \(F(T)\) is a positive non-decreasing function that characterizes the enhancement of chemical reaction with the increase of the temperature.
The studies of system (1.1) trace back to the pioneering works of Frank-Kamenetskii, Semenov and Zeldovich in the 1930s and 1940s [4, 12, 15]. This system was then analyzed by mathematicians and physicists alike. A substantial body of literature concerning system (1.1) is dedicated to the analysis of traveling front solutions, that is, special solutions of the form:
\[T(\bar{x},\bar{t}):=u(\xi),\ \ \ C(\bar{x},\bar{t}):=v(\xi),\ \ \ \xi=\bar{x}+c\bar{t}, \tag{1.3}\]
where \(c>0\) is an a-priori unknown speed of propagation. In a context of combustion, such solutions represent flame fronts propagating with a constant speed from burned state far to the right to unburned state far to the left.
When \(\alpha\geq 1\) system (1.1), after substitution of an ansatz (1.3), reduces to the following system of ODE's on a real line:
\[u_{\xi\xi}-cu_{\xi}+\Omega(u,v)=0,\quad\Lambda v_{\xi\xi}-cv_{\xi}-\Omega(u,v)= 0,\quad\xi\in\mathbb{R}, \tag{1.4}\]
complemented with boundary like conditions:
\[u\to 1,\quad v\to 0\quad\text{as}\quad\xi\to\infty,\quad\text{and}\quad u\to 0, \quad v\to 1\quad\text{as}\quad\xi\to-\infty. \tag{1.5}\]
Conditions (1.5) prescribe the steady temperature and reactant concentration far ahead (\(\xi\to-\infty\)) and far behind (\(\xi\to\infty\)) the flame front.
Since solutions of (1.4), (1.5) are translationally invariant, we fix translations by imposing a constraint:
\[u(\xi_{ign})=\theta, \tag{1.6}\]
where \(\xi_{ign}\) is an arbitrary fixed number.
The constraint (1.6) fixes the position of an _ignition interface_, the unique position where the temperature is equal to the ignition temperature \(\theta\). Hence, when crossing the ignition interface, the reaction rate jumps from some positive value to zero while preserving continuity of the temperature and concentration of reactant as well as their fluxes. Consequently, the mixture ahead of the ignition interface (\(\xi<\xi_{ign}\)) is in a non-reactive state, whereas at and behind the ignition interface (\(\xi\geq\xi_{ign}\)), the chemical reaction takes place. We note that uniqueness of the ignition interface follows from the monotonicity of any solution of problem (1.4),(1.5) that results from the fact that \(\Omega(u,v)\geq 0\) and can be directly verified.
In view of the discussion above, system (1.4) is equivalent to the following one:
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=0,&\Lambda v_{\xi\xi}-cv_{\xi}= 0,&\xi\in(-\infty,\xi_{ign}),\\ u_{\xi\xi}-cu_{\xi}+v^{\alpha}F(u)=0,&\Lambda v_{\xi\xi}-cv_{\xi}-v^{\alpha}F(u )=0,&\xi\in(\xi_{ign},\infty),\end{array}\right. \tag{1.7}\]
complemented with conditions of continuity of a solution and its first derivatives when crossing the interface:
\[[u]=[v]=[u_{\xi}]=[v_{\xi}]=0\quad\text{at}\quad\xi=\xi_{ign}. \tag{1.8}\]
Here and below \([\cdot]\) stands for a jump of the quantity. A sketch of a traveling front solution for system (1.1) with the reaction order \(\alpha\geq 1\) is depicted on Figure 1.
System (1.4), (1.5), (1.6) (equivalently system (1.5), (1.6), (1.7), (1.8)) was extensively studied in the literature. In the special case when \(\Lambda=1\), this system reduces to a single equation. This equation is well understood by now; details can be found in many books and review articles on the subject, for example [14, 15, 6, 10]. In particular, it is well known that in this case (1.4), (1.5), (1.6) admits a unique traveling front solution for any \(\Lambda\in(0,\infty)\) fixed. The general case when \(\Lambda\neq 1\) is substantially more complex, and a complete understanding of this case is still lacking. A review of many relevant results concerning this general case can be found in [11, 10]. In particular, it was shown in [1] that when the reaction order \(\alpha\) is a positive integer, system (1.4), (1.5), (1.6) admits a traveling front solution. Moreover, when \(\alpha=1\) and \(\Lambda\in(0,1]\) this solution is unique [5]. The question of whether uniqueness of a solution holds for \(\alpha=1\) and \(\Lambda>1\) is still open.
When \(0<\alpha<1\), non-linear term (1.2) becomes non-Lipschitz at \(C=0.\) Consequently, the system that describes traveling fronts for (1.1) changes substantially. After substitution of an ansatz (1.3) into (1.1) this system reads:
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=0,&\Lambda v_{\xi\xi}-cv_{\xi} =0,&\xi\in(-\infty,\xi_{ign}),\\ u_{\xi\xi}-cu_{\xi}+v^{\alpha}F(u)=0,&\Lambda v_{\xi\xi}-cv_{\xi}-v^{\alpha}F (u)=0,&\xi\in(\xi_{ign},\xi_{tr}),\\ u_{\xi\xi}-cu_{\xi}=0,&v=0,&\xi\in(\xi_{tr},\infty),\end{array}\right. \tag{1.9}\]
where \(\xi_{tr}>\xi_{ign}\) is an a-priori unknown constant which we refer to as the position of the _trailing interface_. The trailing interface, in the context of combustion, indicates the leftmost point where the entire reactant available for the reaction is fully consumed. As a result, as can easily be checked, the temperature at the trailing interface, as well as at all points to the right of it, is equal to the temperature of the fully reacted mixture
(\(u=1,\xi\geq\xi_{tr}\)). Hence, the distance between the ignition and trailing interfaces \(R=\xi_{tr}-\xi_{ign}\) is the length of the reaction zone.
In view of the discussion above, the last equation in (1.9) can be solved explicitly and the solution reads:
\[u=1,\ \ \ v=0,\ \ \ \xi\in(\xi_{tr},\infty). \tag{1.10}\]
Hence, system (1.9) reduces to
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=0,&\Lambda v_{\xi\xi}-cv_{\xi}=0,&\xi\in(-\infty,\xi_{ign}),\\ u_{\xi\xi}-cu_{\xi}+v^{\alpha}F(u)=0,&\Lambda v_{\xi\xi}-cv_{\xi}-v^{\alpha}F( u)=0,&\xi\in(\xi_{ign},\xi_{tr}),\end{array}\right. \tag{1.11}\]
complemented with boundary conditions at the trailing interface:
\[u=1,\ \ \ u_{\xi}=0,\ \ \ v=v_{\xi}=0\ \ \ \mbox{at}\ \ \ \xi=\xi_{tr}, \tag{1.12}\]
that manifest continuity of the temperature and reactant and their fluxes when crossing the trailing interface. Problem (1.11), (1.12) should be considered with constraint (1.6), continuity conditions at the ignition interface (1.8) and boundary like conditions far ahead of the ignition interface:
\[u\to 0,\ \ \ v\to 1\ \ \ \mbox{as}\ \ \ \xi\to-\infty. \tag{1.13}\]
We note that the principal difference between the systems describing traveling front solutions for system (1.1) with \(\alpha\geq 1\) and with \(0<\alpha<1\) is that the latter involves a trailing interface whose position is a-priori unknown. We also note that any solution of (1.6), (1.8), (1.11), (1.12), (1.13) is monotone, so for any such solution the positions of both the ignition and trailing interfaces are uniquely defined. A sketch of the traveling front solution for this system is depicted in Figure 2.
Despite the fact that reactions of fractional order (\(0<\alpha<1\)) are common in various combustion applications, for example in high pressure combustion systems (see e.g [7]), problem (1.6), (1.8), (1.11),(1.12), (1.13) has not received appropriate attention in mathematical literature partially due to the fact that the non-Lipschitz reaction rate creates multiple technical difficulties for the analysis. To the best of our knowledge, a recent paper [3] is the only mathematical paper where this system was analyzed. In [3] the authors considered (1.6), (1.8), (1.11), (1.12), (1.13) with a stepwise ignition-temperature kinetics, that is with \(F(T)\) being a Heaviside step-function
\[F(T):=\left\{\begin{array}{ll}1&T\geq\theta,\\ 0&T<\theta.\end{array}\right. \tag{1.14}\]
Figure 1: Sketch of a traveling front solution for system (1.1) with \(\alpha\geq 1\). Flame (red) and deficient reactant (blue) fronts propagating from burned (\((u,v)=(1,0)\)) to unburned (\((u,v)=(0,1)\)) states at \(\xi\to\infty\) and \(\xi\to-\infty\) respectively with the constant speed \(c>0\). The arrow indicates the direction of propagation. The position of the ignition interface, where the temperature is equal to the ignition one (\(u=\theta\)), is indicated by \(\xi_{ign}\).
The particular choice of stepwise ignition-temperature kinetics, from the perspective of applications, is justified by the fact that such kinetics is apparently the most appropriate approximation of the overall reaction kinetics for certain hydrogen-oxygen/air and ethylene-oxygen mixtures, as is evident from multiple theoretical and numerical studies of detailed reaction mechanisms in such mixtures; see [9] and references therein.
Under the assumption of stepwise ignition-temperature kinetics, system (1.11) simplifies to :
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=0,&\Lambda v_{\xi\xi}-cv_{\xi}=0,&\xi\in(-\infty,\xi_{ign}),\\ u_{\xi\xi}-cu_{\xi}+v^{\alpha}=0,&\Lambda v_{\xi\xi}-cv_{\xi}-v^{\alpha}=0,& \xi\in(\xi_{ign},\xi_{tr}).\end{array}\right. \tag{1.15}\]
In [3] it was shown that problem (1.6), (1.8), (1.15), (1.12), (1.13) admits a solution, however, the question of multiplicity of solution was not addressed there. In this paper, we prove that a solution constructed in [3] is unique and discuss some qualitative properties of the solution.
The paper is organized as follows. In section 2, we set up the stage by giving some preliminaries and stating the main result. In section 3, we give a proof of necessary lemmas and propositions used in the proof of the main result. In the last section, we discuss certain properties of traveling front solutions and present results of numerical simulations. In particular, we give some formal arguments based on asymptotic and numerical studies of the problem that the velocity of propagation of the flame front is a decreasing function of the ignition temperature \(\theta\), inverse of the Lewis number \(\Lambda\) and the reaction order \(\alpha\).
## 2 Preliminaries and the statement of the main result.
In this section, we give an alternative formulation of problem (1.6), (1.8), (1.15), (1.12), (1.13), and state the main result of this paper. Let us first note that, in view of the translational invariance of traveling fronts, we can fix the position of the trailing interface \(\xi_{tr}\) and treat \(\xi_{ign}\) as a-priori unknown. Hence, we set \(\xi_{tr}=0\) and \(\xi_{ign}=-R\), where \(R>0\) is a-priori unknown. With this alteration, the problem describing traveling front solutions for (1.1) with \(\alpha\in(0,1)\) and the step-wise reaction kinetics discussed in the introduction reads:
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=0,&\Lambda v_{\xi\xi}-cv_{\xi}=0,&v>0,&\xi\in(-\infty,-R),\\ u_{\xi\xi}-cu_{\xi}=-v^{\alpha},&\Lambda v_{\xi\xi}-cv_{\xi}=v^{\alpha},&v>0,& \xi\in(-R,0),\end{array}\right. \tag{2.1}\]
Figure 2: Sketch of a traveling front solution for system (1.1) with \(0<\alpha<1\). Flame (red) and deficient reactant (blue) fronts propagating from burned \(((u,v)=(1,0))\) to an unburned \(((u,v)=(0,1))\) states at \(\xi\to\infty\) and \(\xi\to-\infty\) respectively with the constant speed \(c>0\). The arrow indicates the direction of propagation. The position of the ignition interface, where the temperature is equal to the ignition one \((u=\theta)\), is indicated by \(\xi_{ign}\). The position of the trailing interface, the point to the right of which the temperature and the concentration of the deficient reactant are identically one and zero \((u=1,v=0)\), respectively, is indicated by \(\xi_{tr}\).
This system is complemented with boundary like conditions far ahead of the combustion front:
\[u\to 0\quad v\to 1\qquad\mbox{as}\quad\xi\to-\infty, \tag{2.2}\]
continuity conditions for a solution and its first derivatives at the ignition interface:
\[[u]=[v]=[u_{\xi}]=[v_{\xi}]=0\quad\mbox{at}\quad\xi=-R, \tag{2.3}\]
boundary condition at the ignition interface:
\[u=\theta,\quad\mbox{at}\quad\xi=-R, \tag{2.4}\]
and boundary conditions at the trailing interface
\[u=1,\quad u_{\xi}=0,\quad v=v_{\xi}=0\quad\mbox{at}\quad\xi=0 \tag{2.5}\]
Straightforward computations show that any solution of problem (2.1) satisfying (2.3) and (2.4) has:
\[u(\xi)=\theta\exp(c(\xi+R)),\quad\xi\leq-R, \tag{2.6}\]
and thus
\[u=\theta,\quad u_{\xi}=c\theta\quad\mbox{at}\quad\xi=-R. \tag{2.7}\]
Combining elementary computations above, we conclude that any solution of (2.1)-(2.5) on an interval \([0,R]\) must verify the following boundary value problem:
\[\left\{\begin{array}{ll}u_{\xi\xi}-cu_{\xi}=-v^{\alpha},&\Lambda v_{\xi\xi} -cv_{\xi}=v^{\alpha}\quad v>0,&\xi\in(-R,0),\\ u=\theta,&u_{\xi}=c\theta&\xi=-R,\\ u=1&u_{\xi}=0,&v=v_{\xi}=0,&\xi=0.\end{array}\right. \tag{2.8}\]
Therefore, a necessary condition for problem (2.1)-(2.5) to have a solution is the existence of a solution of problem (2.8). We note that the existence of a solution of (2.8) does not guarantee the existence of a solution of (2.1)-(2.5). Indeed, one can easily verify that for any solution of (2.1)-(2.5) we must have
\[v(\xi)=1-a\exp\Big{(}\frac{c}{\Lambda}(\xi+R)\Big{)}\quad\xi\in(-\infty,-R), \tag{2.9}\]
for some constant \(a\in(0,1)\). This constant, as follows from continuity conditions (2.3), has to be chosen from the overdetermined system:
\[1-a=\lim_{\xi\to-R^{+}}v(\xi),\quad-ca=\Lambda\lim_{\xi\to-R^{+}}v_{\xi}(\xi), \tag{2.10}\]
which is not necessarily consistent. However, [3, Theorem 1.1] guarantees that for any set of relevant parameters ( \(\theta\in(0,1)\), \(\alpha\in(0,1)\), \(\Lambda\in(0,\infty)\)) system (2.1)-(2.5) admits a solution. In view of this result, problem (2.8) admits a solution and uniqueness of the solution for problem (2.8) implies uniqueness of the solution for (2.1)-(2.5). The main result of this section is as follows:
**Theorem 2.1**.: _Let \(\theta\in(0,1)\), \(\alpha\in(0,1)\), \(\Lambda\in(0,\infty)\). Then, problem (2.8) admits a unique solution._
Let us now show that the proof of Theorem 2.1 reduces to the analysis of the solution of a single second order ODE. First, let us observe that integration of the equation for temperature \(u\) in (2.8) and integration of the same equation multiplied by \(\exp(-c\xi)\) taking into account boundary conditions give:
\[c=\int_{-R}^{0}v^{\alpha}(\xi)d\xi, \tag{2.11}\]
\[c\theta=\int_{-R}^{0}v^{\alpha}(\xi)\exp(-c(\xi+R))d\xi. \tag{2.12}\]
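In detail, the first identity follows by integrating the equation for \(u\) in (2.8) over \((-R,0)\) and using the boundary values \(u_{\xi}(0)=0\), \(u(0)=1\), \(u_{\xi}(-R)=c\theta\), \(u(-R)=\theta\):

\[\left[u_{\xi}-cu\right]_{-R}^{0}=-\int_{-R}^{0}v^{\alpha}(\xi)d\xi,\qquad\text{i.e.,}\qquad-c=-\int_{-R}^{0}v^{\alpha}(\xi)d\xi,\]

while the second follows by writing the same equation as \(\left(u_{\xi}e^{-c\xi}\right)_{\xi}=-v^{\alpha}e^{-c\xi}\) and integrating over the same interval.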
Next we introduce the following rescaling:
\[v(\xi)=A\tilde{w}(x), \tag{2.13}\]
where
\[x:=c\xi,\quad A:=c^{-\frac{2}{1-\alpha}}. \tag{2.14}\]
Substituting (2.13), (2.14) into the equation for concentration of reactant \(v\) in (2.8) and taking into account boundary conditions, we have:
\[\left\{\begin{array}{ll}\Lambda\tilde{w}^{\prime\prime}-\tilde{w}^{\prime}- \tilde{w}^{\alpha}=0,&\tilde{w}>0,\quad x\in(-cR,0),\\ \tilde{w}=0,&\tilde{w}^{\prime}=0,&x=0,\end{array}\right. \tag{2.15}\]
where the prime denotes the derivative with respect to the variable \(x\).
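Indeed, substituting (2.13) into the equation \(\Lambda v_{\xi\xi}-cv_{\xi}=v^{\alpha}\) gives \(\Lambda Ac^{2}\tilde{w}^{\prime\prime}-Ac^{2}\tilde{w}^{\prime}=A^{\alpha}\tilde{w}^{\alpha}\); dividing by \(Ac^{2}\) shows that the coefficient \(A^{\alpha-1}c^{-2}\) in front of the nonlinear term equals one precisely for the choice of \(A\) in (2.14), which yields (2.15).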
Consider now the first equation of (2.15) extended on the half line, that is, the following problem:
\[\left\{\begin{array}{ll}\Lambda w^{\prime\prime}-w^{\prime}-w^{\alpha}=0,&x <0,\\ w(0)=0,&w^{\prime}(0)=0.\end{array}\right. \tag{2.16}\]
Properties of (2.16) were studied in great detail in [3, Section 4] via a topological approach, see also appendix below. In particular, the following result comes directly from Lemmas 4.7 and 4.8 of [3], see appendix for more details.
**Lemma 2.1**.: _There exists a unique function \(w\) positive on \((-\infty,0)\) that verifies (2.16)._
It is clear that \(w\) is smooth as long as it does not vanish. The function \(w\) and its derivative can be extended by \(0\) up to \(0^{-}\). More precisely, it follows from [3, Lemmas 6.5 & 6.6] that the Hölder regularity of the function \(w\) near \(x=0\) reads: \(w\in C^{\infty}((-\infty,0))\cap C^{2+[\beta],\beta-[\beta]}((-\infty,0])\), \(\beta=\frac{2\alpha}{1-\alpha}\), \(0<\alpha<1\). Moreover, \(w\) is decreasing on \((-\infty,0]\).
As a consequence, for an arbitrary \(c>0\), \(R>0\) there is a unique decreasing positive solution to (2.15) and \(\tilde{w}(x)=w(x)\) on \((-cR,0]\).
In view of observations above we note that in terms of rescaled variables (2.11), (2.12) read:
\[c^{\frac{2}{1-\alpha}}=\int_{-cR}^{0}w^{\alpha}(s)ds, \tag{2.17}\]
\[\theta c^{\frac{2}{1-\alpha}}=\int_{-cR}^{0}w^{\alpha}(s)\exp(-(s+cR))ds, \tag{2.18}\]
which in particular imply
\[\theta=\frac{\int_{-cR}^{0}w^{\alpha}(s)\exp(-(s+cR))ds}{\int_{-cR}^{0}w^{ \alpha}(s)ds}. \tag{2.19}\]
Introduce the following function:
\[\phi(x):=\frac{\int_{-x}^{0}w^{\alpha}(s)\exp(-(s+x))ds}{\int_{-x}^{0}w^{ \alpha}(s)ds},\quad x\in(0,\infty). \tag{2.20}\]
The key observation that allows us to prove the main Theorem 2.1 is the following.
**Proposition 2.1**.: _Let \(w\) be the solution of (2.16) and set_
\[\phi(x):=\frac{\int_{-x}^{0}w^{\alpha}(s)\exp(-(s+x))ds}{\int_{-x}^{0}w^{ \alpha}(s)ds},\quad x\in(0,\infty). \tag{2.21}\]
_The function \(\phi\) defined by (2.21) is strictly decreasing on \((0,\infty)\) and_
\[\lim_{x\to 0}\phi(x)=1,\quad\lim_{x\to\infty}\phi(x)=0. \tag{2.22}\]
Proof of Proposition 2.1 is given in the next section. This proposition makes the proof of Theorem 2.1 given below elementary.
Proof of Theorem 2.1.: From (2.17) and (2.18) we have
\[c=\left(\int_{-cR}^{0}w^{\alpha}(s)ds\right)^{\frac{1-\alpha}{2}}, \tag{2.23}\]
\[\theta=\frac{\int_{-cR}^{0}w^{\alpha}(s)\exp(-(s+cR))ds}{\int_{-cR}^{0}w^{ \alpha}(s)ds}. \tag{2.24}\]
By strict monotonicity of \(\phi\) (Proposition 2.1), there is a unique value \(\sigma^{*}\in(0,\infty)\) such that \(\phi(\sigma^{*})=\theta.\) Thus, in (2.23) we must have \(cR=\sigma^{*}.\) This observation and uniqueness of the solution of (2.16) imply uniqueness of the speed \(c=c^{*}\) with
\[c^{*}=\left(\int_{-\sigma^{*}}^{0}w^{\alpha}(s)ds\right)^{\frac{1-\alpha}{2}}. \tag{2.25}\]
The latter immediately implies uniqueness of position of the ignition interface \(R=R^{*}=\sigma^{*}/c^{*}.\)
## 3 Proof of Proposition 2.1
In this section we present a proof of Proposition 2.1 which is based on the following three lemmas.
**Lemma 3.1**.: _Function \(\phi(x)\) defined by (2.21) has the following properties:_
\[\lim_{x\to 0}\phi(x)=1,\quad\lim_{x\to\infty}\phi(x)=0. \tag{3.1}\]
Proof.: Let us first prove the first equality in (3.1). Observe that
\[e^{-x}\int_{-x}^{0}w^{\alpha}(s)ds<\int_{-x}^{0}w^{\alpha}(s)e^{-(s+x)}ds< \int_{-x}^{0}w^{\alpha}(s)ds,\quad x>0. \tag{3.2}\]
Hence,
\[e^{-x}<\phi(x)<1,\quad x>0. \tag{3.3}\]
Sending \(x\) to zero in the inequality above we have the first equality in (3.1).
Now let us prove the second equality in (3.1). Integrating the equation in (2.16) and taking into account boundary conditions we have:
\[-\Lambda w^{\prime}(-x)+w(-x)=\int_{-x}^{0}w^{\alpha}(s)ds. \tag{3.4}\]
Since \(w^{\prime}(-x)<0\) for \(x>0\) we have:
\[w(-x)<\int_{-x}^{0}w^{\alpha}(s)ds. \tag{3.5}\]
Let
\[\rho(x)=\frac{\int_{-x}^{0}w^{\alpha}(s)ds}{w^{\alpha}(-x)}. \tag{3.6}\]
In view of the inequality above we have:
\[\rho(x)>w^{1-\alpha}(-x), \tag{3.7}\]
and thus
\[\rho(x)\rightarrow\infty\quad\text{as}\quad x\rightarrow\infty, \tag{3.8}\]
since \(w(-x)\rightarrow\infty\) as \(x\rightarrow\infty\).
Next observe that
\[\int_{-x}^{0}w^{\alpha}(s)e^{-(s+x)}ds=\int_{-x+\sqrt{\rho(x)}}^{ 0}w^{\alpha}(s)e^{-(s+x)}ds+\int_{-x}^{-x+\sqrt{\rho(x)}}w^{\alpha}(s)e^{-(s+x )}ds\leq\] \[e^{-\sqrt{\rho(x)}}\int_{-x}^{0}w^{\alpha}(s)ds+w^{\alpha}(-x) \sqrt{\rho(x)} \tag{3.9}\]
Dividing the expression above by \(\int_{-x}^{0}w^{\alpha}(s)ds\) and using the definitions of \(\phi(x)\) and \(\rho(x)\) we have:
\[\phi(x)\leq e^{-\sqrt{\rho(x)}}+\frac{1}{\sqrt{\rho(x)}}. \tag{3.10}\]
Sending \(x\rightarrow\infty\) in this inequality and taking into account (3.8), we obtain the second equality in (3.1).
**Lemma 3.2**.: _Let \(Y\) be a non-negative solution of_
\[\Lambda Y^{\prime\prime}-Y^{\prime}-Y^{\alpha}=0. \tag{3.11}\]
_Assume there is \(x_{0}\in\mathbb{R}\) such that_
\[Y(x_{0})>0,\quad Y^{\prime}(x_{0})<0,\quad Y^{\prime\prime}(x_{0})Y(x_{0})> \left(Y^{\prime}(x_{0})\right)^{2}. \tag{3.12}\]
_Then,_
\[\inf_{x\in[x_{0},\infty)}Y(x)>0. \tag{3.13}\]
Proof.: Let
\[Y(x_{0})=Y_{0},\quad Y^{\prime}(x_{0})=Y_{0}^{\prime}, \tag{3.14}\]
and assume that (3.12) hold. Then, as follows from (3.11) and the last inequality in (3.12)
\[\Lambda\frac{\left(Y_{0}^{\prime}\right)^{2}}{Y_{0}}-Y_{0}^{\prime}-Y_{0}^{ \alpha}<\Lambda Y^{\prime\prime}(x_{0})-Y_{0}^{\prime}-Y_{0}^{\alpha}=0, \tag{3.15}\]
and hence
\[\Lambda\left(\frac{Y_{0}^{\prime}}{Y_{0}}\right)^{2}-\left(\frac{Y_{0}^{ \prime}}{Y_{0}}\right)-Y_{0}^{\alpha-1}<0. \tag{3.16}\]
Therefore,
\[\mu_{-}<\frac{Y_{0}^{\prime}}{Y_{0}}<0<\mu_{+}, \tag{3.17}\]
where
\[\mu_{\pm}=\frac{1\pm\sqrt{1+4\Lambda Y_{0}^{\alpha-1}}}{2\Lambda}, \tag{3.18}\]
are roots of the quadratic equation
\[\Lambda\mu^{2}-\mu-Y_{0}^{\alpha-1}=0. \tag{3.19}\]
Next observe that a linear equation
\[\Lambda Z^{\prime\prime}-Z^{\prime}-Y_{0}^{\alpha-1}Z=0, \tag{3.20}\]
satisfying
\[Z(x_{0})=Y_{0},\quad Z^{\prime}(x_{0})=-\gamma Y_{0}, \tag{3.21}\]
has the following solution
\[Z(x)=\left(\frac{2\Lambda Y_{0}}{\sqrt{1+4\Lambda Y_{0}^{\alpha- 1}}}\right)\exp\left(\frac{x-x_{0}}{2\Lambda}\right)\cosh\left(\frac{\sqrt{1+ 4\Lambda Y_{0}^{\alpha-1}}}{2\Lambda}(x-x_{0})\right)\times\] \[\left\{|\mu_{-}|+\frac{1}{2\Lambda}-\left(\frac{1}{2\Lambda}+ \gamma\right)\tanh\left(\frac{\sqrt{1+4\Lambda Y_{0}^{\alpha-1}}}{2\Lambda}(x -x_{0})\right)\right\}. \tag{3.22}\]
In view that for \(x\geq 0\)\(\cosh(x)\geq 1,\ 0\leq\tanh(x)<1\) we have from (3.22) that
\[Z(x)>Z_{0}=\frac{2\Lambda Y_{0}}{\sqrt{1+4\Lambda Y_{0}^{\alpha- 1}}}\left(|\mu_{-}|-\gamma\right),\quad x\in[x_{0},\infty). \tag{3.23}\]
Hence, \(Z(x)\) is uniformly bounded away from zero on \([x_{0},\infty)\) provided
\[\gamma<|\mu_{-}|. \tag{3.24}\]
Let \(x_{M}\) be the largest \(x>x_{0}\) such that
\[0<Y(x)<Y_{0}\quad x\in I=(x_{0},x_{M}). \tag{3.25}\]
Note that the existence of such \(x_{M}\) is guaranteed since \(Y(x_{0})=Y_{0}\) and \(Y^{\prime}(x_{0})<0\).
We claim that for any solution of (3.11) satisfying (3.12) the following inequalities hold
\[Y(x)>Z(x),\quad Y^{\prime}(x)>Z^{\prime}(x), \tag{3.26}\]
for \(x\in I.\) Here \(Z\) is the solution of (3.20), (3.21) with arbitrary
\[\gamma\in\left(\frac{|Y_{0}^{\prime}|}{Y_{0}},|\mu_{-}|\right), \tag{3.27}\]
fixed. Indeed, in view of the facts that \(Y(x_{0})=Z(x_{0})\) and \(Z^{\prime}(x_{0})<Y^{\prime}(x_{0})<0\) (which is guaranteed by our choice of parameter \(\gamma\)) we first observe that (3.26) holds for \(x\in(x_{0},x_{0}+\delta)\) with \(\delta>0\) sufficiently small. Assume now \(x^{*}\) is a smallest value of \(x\in I\) at which (3.26) is violated. Consider two possibilities of how it can happen. In the first one we have
\[Y(x^{*})=Z(x^{*})\quad\text{and}\quad Y^{\prime}(x^{*})>Z^{\prime}(x^{*}), \tag{3.28}\]
but then
\[Y(x^{*}-\varepsilon)<Z(x^{*}-\varepsilon), \tag{3.29}\]
for \(\varepsilon>0\) sufficiently small. This contradicts the definition of \(x^{*}\) and hence this situation is impossible. The second possibility is
\[Y(x^{*})\geq Z(x^{*})\quad\text{and}\quad Y^{\prime}(x^{*})=Z^{\prime}(x^{*}), \tag{3.30}\]
In this case we have
\[\Lambda Y^{\prime\prime}(x^{*})=Y^{\prime}(x^{*})+Y^{\alpha}(x^{*})>Y^{\prime }(x^{*})+Y_{0}^{\alpha-1}Y(x^{*})\geq Z^{\prime}(x^{*})+Y_{0}^{\alpha-1}Z(x^{* })=\Lambda Z^{\prime\prime}(x^{*}), \tag{3.31}\]
and thus
\[Y^{\prime\prime}(x^{*})>Z^{\prime\prime}(x^{*}). \tag{3.32}\]
But then
\[Y^{\prime}(x^{*}-\varepsilon)<Z^{\prime}(x^{*}-\varepsilon). \tag{3.33}\]
for \(\varepsilon>0\) sufficiently small, which again contradicts the definition of \(x^{*}.\)
Hence for \(x\in I\) we have \(Y_{0}>Y(x)\geq Z_{0}>0\) and thus by continuity
\[Z_{0}\leq Y(x)\leq Y_{0}\quad x\in[x_{0},x_{M}]. \tag{3.34}\]
Clearly at \(x=x_{M}\) we necessarily have
\[Y(x_{M})=Y_{0},\quad Y^{\prime}(x_{M})\geq 0. \tag{3.35}\]
By (3.11) for \(x>x_{M}\) we have
\[Y^{\prime}(x)=Y^{\prime}(x_{M})e^{\frac{x-x_{M}}{\Lambda}}+\Lambda^{-1}\int_{ x_{M}}^{x}Y^{\alpha}(s)e^{\frac{x-x}{\Lambda}}ds. \tag{3.36}\]
We now claim that \(Y(x)>Y_{0}\) for \(x>x_{M}.\) Indeed, by continuity \(Y(x)>Y_{0}/2\) for \(x\in[x_{M},x_{M}+\delta]\) for some possibly small \(\delta>0.\) Then by (3.36) we have
\[Y^{\prime}(x)\geq\left(\frac{Y_{0}}{2}\right)^{\alpha}\left(e^{\frac{x-x_{M}} {\Lambda}}-1\right)\quad x\in(x_{M},x_{M}+\delta]. \tag{3.37}\]
This implies that
\[Y^{\prime}(x)>0\quad\text{on}\quad(x_{M},x_{M}+\delta], \tag{3.38}\]
and hence
\[Y(x)>Y_{0}\quad\text{on}\quad(x_{M},x_{M}+\delta]. \tag{3.39}\]
Thus,
\[Y(x_{M}+\delta)>Y_{0},\quad Y^{\prime}(x_{M}+\delta)>0. \tag{3.40}\]
Now assume that \(x_{M}^{*}\) is the first point greater than \(x_{M}+\delta\) such that \(Y^{\prime}(x_{M}^{*})=0,\) but by (3.36) and (3.40)
\[Y^{\prime}(x_{M}^{*})\geq Y^{\prime}(x_{M}+\delta)e^{\frac{x_{M}-(x_{M}+ \delta)}{\Lambda}}+Y_{0}^{\alpha}\left(e^{\frac{x_{M}-(x_{M}+\delta)}{\Lambda }}-1\right)>0, \tag{3.41}\]
and hence \(Y(x)\) is strictly increasing on \((x_{M},\infty).\) Thus
\[Y(x)>Y_{0}\quad x\in(x_{M},\infty). \tag{3.42}\]
Combining (3.34) and (3.42) we have
\[Y(x)\geq Z_{0}>0\quad x\in[x_{0},\infty), \tag{3.43}\]
that imply (3.13) and thus completes the proof.
**Corollary 3.1**.: _It follows immediately from Lemma 3.2 that the solution of (2.16) must satisfy_
\[w^{\prime\prime}(x)w(x)\leq\left(w^{\prime}(x)\right)^{2},\quad\forall x\in(- \infty,0). \tag{3.44}\]
_Indeed, if there was a point \(x_{0}<0\) where (3.44) didn't hold, then it would imply that \(w(x)>0\) for \(x\in[x_{0},0].\) Thus such \(w\) would not satisfy boundary condition \(w(0)=0\) and hence would not solve (2.16) which gives a contradiction._
We note that formula (3.44) has a clear geometric interpretation. Consider problem (2.16) as a dynamical system on the plane. Introducing change of variables
\[t=x,\quad q(t)=w(x),\quad p(t)=w^{\prime}(x), \tag{3.45}\]
we obtain the following system of two first order ODE's:
\[\left\{\begin{array}{l}\dot{q}(t)=p(t),\\ \Lambda\dot{p}(t)=p(t)+q^{\alpha}(t).\end{array}\right. \tag{3.46}\]
where dot stands for the derivative with respect to \(t\). This problem is considered for \(t<0\) in the quadrant \(Q=\{q\geqslant 0,p\leqslant 0\}.\) Rewriting \(p\) and \(q\) in polar coordinates we have
\[q(t):=r(t)\cos(\vartheta(t)),\quad p(t):=r(t)\sin(\vartheta(t)), \tag{3.47}\]
where \(t\in(-\infty,0),\;r>0,\;\vartheta\in(-\pi/2,0)\). In terms of the new variables, condition (3.44) is equivalent to
\[\dot{\vartheta}(t)\leq 0. \tag{3.48}\]
We present derivation of this formula in the appendix.
**Lemma 3.3**.: _Let \(w\) be the solution of (2.16) and set_
\[f(x):=\left\{\begin{array}{ll}w^{\alpha}(x),&x\leq 0,\\ 0,&x>0.\end{array}\right. \tag{3.49}\]
_For any fixed \(y>0,\) the function_
\[\psi(x,y)=\frac{\int_{-x+y}^{\infty}f(s)ds}{\int_{-x}^{\infty}f(s)ds}, \tag{3.50}\]
_is a non-decreasing function of \(x\) on \((0,\infty)\) and strictly increasing on \((y,\infty).\)_
Proof.: First we observe that
\[\psi(x,y)\equiv 0\quad\mbox{for}\quad x\leq y. \tag{3.51}\]
Hence, \(\psi(x,y)\) is non-decreasing in \(x\) for \(x\in(0,y].\)
For \(x>y\) we have,
\[\frac{\partial}{\partial x}\psi(x,y)=\frac{f(y-x)\int_{-x}^{\infty}f(s)ds-f(- x)\int_{y-x}^{\infty}f(s)ds}{\left(\int_{-x}^{\infty}f(s)ds\right)^{2}}. \tag{3.52}\]
Thus to show that \(\psi(x,y)\) is strictly increasing in \(x\) for \(x\in(y,\infty),\) we need to prove the following inequality
\[\frac{\int_{y-x}^{\infty}f(s)ds}{f(y-x)}<\frac{\int_{-x}^{\infty}f(s)ds}{f(-x )}\qquad x\in(y,\infty). \tag{3.53}\]
Define
\[g(x):=\frac{f^{\prime}(x)}{f(x)}=\alpha\frac{w^{\prime}(x)}{w(x)}\quad x\in( -\infty,0). \tag{3.54}\]
Observe that
\[g^{\prime}(x)=\frac{\alpha}{w^{2}(x)}\left(w^{\prime\prime}(x)w(x)-\left(w^{ \prime}(x)\right)^{2}\right), \tag{3.55}\]
and thus by Corollary 3.1
\[g^{\prime}(x)\leq 0\quad\mbox{on}\quad(-\infty,0). \tag{3.56}\]
This in particular implies that for \(x>y>0\) we have
\[g(s+y-x)\leq g(s-x)\quad s\in(0,x-y). \tag{3.57}\]
Consequently,
\[\int_{0}^{s}g(\tau+y-x)d\tau\leq\int_{0}^{s}g(\tau-x)d\tau. \tag{3.58}\]
By the definition of the function \(g\) (see Eq. (3.54)) we have
\[\int_{0}^{s}g(\tau+y-x)d\tau=\int_{0}^{s}\frac{\frac{d}{d\tau}f( \tau+y-x)}{f(\tau+y-x)}d\tau=\int_{0}^{s}\left(\frac{d}{d\tau}\log f(\tau+y-x) \right)d\tau=\log\left(\frac{f(s+y-x)}{f(y-x)}\right),\] \[\int_{0}^{s}g(\tau-x)d\tau=\int_{0}^{s}\frac{\frac{d}{d\tau}f( \tau-x)}{f(\tau-x)}d\tau=\int_{0}^{s}\left(\frac{d}{d\tau}\log f(\tau-x) \right)d\tau=\log\left(\frac{f(s-x)}{f(-x)}\right). \tag{3.59}\]
Therefore, by (3.58) and (3.59) we have that for \(x>y>0\) and \(s\in(0,x-y)\)
\[\log\left(\frac{f(s+y-x)}{f(y-x)}\right)\leq\log\left(\frac{f(s-x)}{f(-x)} \right), \tag{3.60}\]
that is
\[\frac{f(s+y-x)}{f(y-x)}\leq\frac{f(s-x)}{f(-x)}. \tag{3.61}\]
Using the facts that \(f(x)\equiv 0\) for \(x>0\) and \(f(x)>0\) for \(s\in(-y,0)\) for \(y>0\) we observe that
\[\int_{0}^{x-y}f(s+y-x)ds=\int_{y-x}^{0}f(s)ds=\int_{y-x}^{\infty}f (s)ds,\] \[\int_{0}^{x-y}f(s-x)ds=\int_{-x}^{-y}f(s)ds<\int_{-x}^{0}f(s)ds= \int_{-x}^{\infty}f(s)ds. \tag{3.62}\]
Hence integrating (3.61) in \(s\) from \(0\) to \(x-y\) and taking into account (3.62) we conclude that (3.53) holds for arbitrary \(x>y>0\) and hence \(\psi(x,y)\) is indeed strictly increasing in \(x\) for \(x>y>0\).
Now we are ready to give a proof of the proposition.
Proof of Proposition 2.1.: Let
\[\chi(x,y)=1-\psi(x,y)=\frac{\int_{-x}^{y-x}f(s)ds}{\int_{-x}^{ \infty}f(s)ds}. \tag{3.63}\]
Observe that
\[\int_{0}^{\infty}\left\{\int_{-x}^{y-x}f(s)ds\right\}e^{-y}dy= \int_{0}^{\infty}\left\{\int_{0}^{y}f(s-x)e^{-y}ds\right\}dy=\int_{0}^{\infty }\left\{\int_{s}^{\infty}f(s-x)e^{-y}dy\right\}ds\] \[=\int_{0}^{\infty}\left\{\int_{s}^{\infty}e^{-y}dy\right\}f(s-x) ds=\int_{0}^{\infty}f(s-x)e^{-s}ds=\int_{-x}^{\infty}f(s)e^{-(s+x)}ds. \tag{3.64}\]
Hence,
\[\int_{0}^{\infty}\chi(x,y)e^{-y}dy=\frac{\int_{-x}^{\infty}f(s)e^{-(s+x)}ds}{ \int_{-x}^{\infty}f(s)ds}=\frac{\int_{-x}^{0}w^{\alpha}(s)e^{-(s+x)}ds}{\int_ {-x}^{0}w^{\alpha}(s)ds}=\phi(x). \tag{3.65}\]
In computation above we used the definition of \(f\) (see Eq. (3.49) ) and \(\psi\) (see Eq.(3.50)).
In view of the statement of Lemma 3.3, \(\chi(x,y)\) is a non-increasing function of \(x\) on \((0,\infty)\) and a strictly decreasing function of \(x\) on \((y,\infty)\). Consequently, \(\int_{0}^{\infty}\chi(x,y)e^{-y}dy\) is a strictly decreasing function of \(x\), and therefore \(\phi(x)\) is strictly decreasing, as follows from (3.65).
## 4 Dependency of the speed of propagation on parameters: asymptotics and numerics.
In the previous sections, we established the uniqueness of a solution for problem (2.8). Our main result states that for an arbitrary fixed \(\theta\in(0,1)\), \(\Lambda\in(0,\infty)\) and \(\alpha\in(0,1)\) there exists a unique pair \((c^{*},R^{*})\) for which this problem admits a solution. This pair represents the velocity of propagation and width of the reaction zone for this set of parameters. The natural question is then to investigate the dependency of these quantities on the parameters of the problem, that is, the dependencies \(c^{*}(\theta,\Lambda,\alpha)\) and \(R^{*}(\theta,\Lambda,\alpha)\). Of particular interest is how the velocity of propagation changes with the parameters of the problem, as this quantity is the main characteristic of the flame front.
As we showed in the previous sections, \((c^{*},R^{*})\) are uniquely determined from the solution of problem (2.16), which represents the scaled distribution of the deficient reactant over the reaction zone. Hence, we start with a discussion of qualitative properties of the solution of (2.16). While the solution of this problem cannot be obtained in closed form, it can be computed numerically. Figure 3 depicts the function \(w\) for different values of \(\alpha\) and \(\Lambda=1.\)
We note that solutions of (2.16) for \(\Lambda\neq 1\) can be obtained from solutions of this equation with \(\Lambda=1\) by rescaling. Indeed, one can verify by the direct substitution that
\[w(x|\Lambda,\alpha)=\Lambda^{\frac{1}{1-\alpha}}w\left(\frac{x}{\Lambda}\Big{|} \Lambda=1,\alpha\right). \tag{4.1}\]
Observe that solutions of (2.16) for different values of \(\alpha\) are not ordered. Indeed, for fixed \(|x|>0\) sufficiently small, \(w(x|\Lambda,\alpha)\) is a decreasing function of \(\alpha\), whereas for \(|x|\) sufficiently large, \(w(x|\Lambda,\alpha)\) is an increasing function of \(\alpha\). Formal considerations (which can be made rigorous) show that the asymptotic behavior of the solution of (2.16) is as follows:
\[w(x)=\left[\frac{(1-\alpha)^{2}}{2\Lambda(1+\alpha)}\right]^{ \frac{1}{1-\alpha}}(-x)^{\frac{2}{1-\alpha}}(1+o(1))\quad|x|\ll 1,\] \[w(x)=(1-\alpha)^{\frac{1}{1-\alpha}}(-x)^{\frac{1}{1-\alpha}}(1+o (1))\quad|x|\gg 1. \tag{4.2}\]
That is, as \(\alpha\) increases, \(w\) gets flatter and flatter near the origin and grows faster and faster for large values of \(|x|\). Another important observation is that the solution of (2.16) near the origin is heavily influenced by the specific value of the Lewis number, as in this region the solution is obtained (in the first approximation) by balancing diffusion and reaction, whereas the transport term is negligible. In contrast, far from the origin, the key ingredients are transport and reaction, while diffusion is negligible in this region. Consequently, the solution of (2.16) is essentially independent of the Lewis number when \(|x|\gg 1.\)
Figure 3: Numerical solutions of problem (2.16) for \(\alpha=1/4\) (blue), \(\alpha=1/2\) (orange) and \(\alpha=3/4\) (green).
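For reference, the curves in Figure 3 can be reproduced, for instance, by seeding the integration at \(x=-\varepsilon\) with the small-\(|x|\) asymptotics (4.2) and integrating (2.16) backwards in \(x\). The following Python sketch is only illustrative (it is not the code used for the figures; the tolerances, \(\varepsilon\) and the cutoff `x_min` are ad hoc choices, and for \(\alpha\) close to \(1\) they may need adjusting).

```python
from scipy.integrate import solve_ivp

def solve_w(alpha, Lam, eps=1e-2, x_min=-50.0):
    # Seed w and w' at x = -eps from the near-origin asymptotics (4.2),
    # w(x) ~ K (-x)^{2/(1-alpha)}, which avoids the degenerate data w(0) = w'(0) = 0.
    K = ((1 - alpha) ** 2 / (2 * Lam * (1 + alpha))) ** (1 / (1 - alpha))
    p = 2 / (1 - alpha)
    w0, dw0 = K * eps ** p, -K * p * eps ** (p - 1)

    def rhs(x, y):
        w, dw = y
        # max(w, 0) guards w**alpha against tiny negative round-off.
        return [dw, (dw + max(w, 0.0) ** alpha) / Lam]  # Lam*w'' = w' + w^alpha

    # Integrate backwards in x, from -eps down to x_min.
    return solve_ivp(rhs, [-eps, x_min], [w0, dw0], dense_output=True,
                     rtol=1e-10, atol=1e-30)
```

Here `solve_w(alpha, Lam)` returns a SciPy solution object whose `.sol(x)` member evaluates \((w(x),w^{\prime}(x))\) for \(x\in[x_{\min},-\varepsilon]\).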
Let us also discuss the limiting behavior of the function \(w\) as \(\alpha\to 0\) and \(\alpha\to 1\). It is straightforward to verify that as \(\alpha\to 0\) the solution of (2.16) approaches, on compact sets, the limiting profile
\[w_{0}(x)=-x+\Lambda\left(\exp\left(\frac{x}{\Lambda}\right)-1\right), \tag{4.3}\]
that verifies the limiting problem
\[\left\{\begin{array}{ll}\Lambda w_{0}^{\prime\prime}-w_{0}^{\prime}-1=0,&x<0,\\ w_{0}(0)=0,&w_{0}^{\prime}(0)=0.\end{array}\right. \tag{4.4}\]
We now claim that as \(\alpha\to 1\) the function \(w\) approaches zero on compact sets. Indeed, let \(\eta=w-\Lambda w^{\prime}\). In view of the facts that \(w(x)>0\) and \(w^{\prime}(x)<0\) on \((-\infty,0)\) and \(w(0)=w^{\prime}(0)=0\), we have \(\eta(x)>0\) on \((-\infty,0)\) and \(\eta(0)=0\). By (2.16) we then have
\[\left\{\begin{array}{ll}-\eta^{\prime}=w^{\alpha},&x<0,\\ \eta(0)=0.\end{array}\right. \tag{4.5}\]
Taking into account that \(\eta,w\) are positive on \((-\infty,0)\) and that \(w\leq\eta\), we then have
\[-\eta^{\prime}=w^{\alpha}\leq\eta^{\alpha} \tag{4.6}\]
Integrating this inequality and taking into account the initial condition we have
\[\eta(x)\leq\left[(1-\alpha)(-x)\right]^{\frac{1}{1-\alpha}},\quad x\leq 0 \tag{4.7}\]
Since \(w<\eta\) we conclude
\[w(x)\leq\left[(1-\alpha)(-x)\right]^{\frac{1}{1-\alpha}},\quad x\leq 0. \tag{4.8}\]
Fixing \(x<0\) and taking the limit \(\alpha\to 1\) in the inequality above, we conclude that \(w\to 0\) as \(\alpha\to 1\) on compact sets.
Now we turn to the evaluation of the pair \(c^{*}(\theta,\Lambda,\alpha)\), \(R^{*}(\theta,\Lambda,\alpha)\). An algorithm for finding this pair is rather straightforward. First, we fix \(\Lambda\) and \(\alpha\) and solve problem (2.16). Then, we follow the procedure outlined in the proof of the main Theorem 2.1. Namely, using the solution of (2.16), we evaluate \(\phi(x)\) given by (2.21) and a function \(\zeta(x)\) given as:
\[\zeta(x)=\left(\int_{-x}^{0}w^{\alpha}(s)ds\right)^{\frac{1-\alpha}{2}}. \tag{4.9}\]
We next fix \(\theta\) and find a unique number \(\sigma^{*}>0\) such that
\[\phi(\sigma^{*})=\theta. \tag{4.10}\]
The existence and uniqueness of such \(\sigma^{*}\) follows from Proposition 2.1. Hence as follows from (2.24)
\[c^{*}=\zeta(\sigma^{*}), \tag{4.11}\]
and then by (2.25)
\[R^{*}=\sigma^{*}/c^{*}. \tag{4.12}\]
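These steps translate directly into a short numerical routine. The sketch below is illustrative rather than the code used for the figures: it takes a callable \(w\) (for instance \(w(s)=\)`sol.sol(s)[0]` with `sol = solve_w(alpha, Lam)` from the sketch above), evaluates \(\phi\) and \(\zeta\) by quadrature, and solves \(\phi(\sigma^{*})=\theta\) by bracketing, which is legitimate since \(\phi\) is strictly decreasing (Proposition 2.1). The quadrature tolerances and the bracket `[x_lo, x_max]` are assumptions; the bracket must satisfy \(\phi(\)`x_lo`\()>\theta>\phi(\)`x_max`\()\) and stay within the range on which \(w\) has been computed.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def front_speed_and_width(w, alpha, theta, x_lo=0.05, x_max=40.0):
    # w: callable returning the solution of (2.16) for x <= 0 (e.g. from solve_w above).
    wa = lambda s: max(w(s), 0.0) ** alpha   # clip tiny negative values near s = 0

    def phi(x):   # Eq. (2.21)
        num, _ = quad(lambda s: wa(s) * np.exp(-(s + x)), -x, 0.0)
        den, _ = quad(wa, -x, 0.0)
        return num / den

    def zeta(x):  # Eq. (4.9)
        den, _ = quad(wa, -x, 0.0)
        return den ** ((1 - alpha) / 2)

    # phi is strictly decreasing (Proposition 2.1), so phi(sigma) = theta has a
    # unique root inside a valid bracket.
    sigma = brentq(lambda x: phi(x) - theta, x_lo, x_max)
    c = zeta(sigma)        # Eq. (4.11)
    return c, sigma / c    # (c*, R*), Eq. (4.12)
```

For example, with `sol = solve_w(0.5, 1.0)`, the call `front_speed_and_width(lambda s: sol.sol(s)[0], 0.5, 0.5)` approximates \((c^{*},R^{*})\) for \(\alpha=1/2\), \(\Lambda=1\), \(\theta=0.5\).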
The function \(\phi(x)\) is decreasing, and the function \(\zeta(x)\) is increasing on \(x\in(0,\infty).\) Moreover, by (4.1) we have
\[\zeta(x|\Lambda,\alpha)=\sqrt{\Lambda}\zeta\left(\frac{x}{\Lambda}\big{|} \Lambda=1,\alpha\right). \tag{4.13}\]
Let us now discuss the dependency of \(\phi,\zeta\) on the parameter \(\alpha\). Profiles of the functions \(\phi\) and \(\zeta\) for several values of \(\alpha\) and \(\Lambda=1\) are depicted in Figure 4. For a fixed \(x\in(0,\infty)\), the function \(\phi\) appears to be increasing in \(\alpha\) and the function \(\zeta\) decreasing in \(\alpha.\) It is also easy to check that on compact sets \(\phi(x)\rightarrow\frac{1-e^{-x}}{x}\) and
\(\zeta(x)\rightarrow\sqrt{x}\) as \(\alpha\to 0\), which follows directly from the fact that \(w\) approaches \(w_{0}\) in this limit. We now claim that \(\phi(x)\) approaches unity and \(\zeta(x)\) approaches zero on any fixed compact subset of \((0,\infty)\) as \(\alpha\to 1.\) To see this, let us first observe that by (2.16) and (2.21), after integration by parts, we have:
\[\phi(x)=1-\tilde{\phi}(x), \tag{4.14}\]
with
\[\tilde{\phi}(x)=\frac{\Lambda+(1-\Lambda)\int_{-x}^{0}w(s)\exp(-(s+x))ds/w(-x )}{1+\Lambda|w^{\prime}(-x)/w(-x)|}\leq\frac{\Lambda+|1-\Lambda|}{1+\Lambda|w^{ \prime}(-x)/w(-x)|},\quad x>0, \tag{4.15}\]
where the last inequality follows from the monotonicity of \(w\). Next by (2.16) and (3.44) we obtain
\[w^{\prime\prime}(x)w(x)=\frac{1}{\Lambda}\left(w^{\prime}(x)+w^{\alpha}(x) \right)w(x)\leq(w^{\prime}(x))^{2},\quad x<0. \tag{4.16}\]
Dividing the expression above by \(w^{2}\) and taking into account that \(w^{\prime}<0\) we have
\[\Lambda\left(\frac{w^{\prime}(x)}{w(x)}\right)^{2}+\left|\frac{w^{\prime}(x)} {w(x)}\right|\geq\frac{1}{w^{1-\alpha}(x)},\quad x<0. \tag{4.17}\]
Combining this observation with (4.8) we have
\[\Lambda\left(\frac{w^{\prime}(x)}{w(x)}\right)^{2}+\left|\frac{w^{\prime}(x)} {w(x)}\right|\geq\frac{1}{(1-\alpha)(-x)},\quad x<0. \tag{4.18}\]
Fixing \(x<0\) and taking a limit as \(\alpha\to 1\) in the expression above, we conclude that \(|w^{\prime}/w|\rightarrow\infty\) as \(\alpha\to 1\) on compact sets. This observation implies that \(\tilde{\phi}(x)\to 0\) as \(\alpha\to 1\) and hence \(\phi(x)\to 1\) in this limit. Finally by (4.8) and (4.9) we have
\[\zeta(x)\leq\sqrt{(1-\alpha)}\sqrt{x}. \tag{4.19}\]
Taking a limit as \(\alpha\to 1\) for \(x>0\) fixed we obtain \(\zeta\to 0\) as \(\alpha\to 1\) on compact sets.
We will now discuss the dependency of \(\phi,\zeta\) on \(\Lambda\). Figure 5 depicts the functions \(\phi\) and \(\zeta\) for several values of \(\Lambda\) and \(\alpha=1/2.\) For a fixed \(x\in(0,\infty),\) the function \(\phi\) is increasing in \(\Lambda,\) and the function \(\zeta\) is decreasing in \(\Lambda.\)
One obvious observation that follows from the monotonicity of \(\phi\) is that \(\sigma^{*}\) is a decreasing function of \(\theta\) which together with the monotonicity of \(\zeta\) immediately implies that \(c^{*}\) is a decreasing function of \(\theta\). This result
Figure 4: Functions \(\phi\) and \(\zeta\) for \(\alpha=0.75\) (blue), \(\alpha=0.5\) (orange) and \(\alpha=0.25\) (green) and \(\Lambda=1\). The arrow indicates direction of increase of \(\alpha.\)
is very natural in a physical context of the problem as an increase in ignition temperature decreases the speed of propagation.
Let us now discuss the behavior of \(\phi(x),\zeta(x)\) near the origin which will allow us to obtain asymptotic expressions for \((c^{*},R^{*})\) for ignition temperatures near unity. This regime is of particular interest as flame fronts are known to become unstable when ignition temperature approaches one. Discussion of flame front instabilities in this regime for the cases of zero and first order kinetics can be found in [2].
The behavior of functions \(\phi(x)\) and \(\zeta(x)\) for small values of \(x\) can be reconstructed from asymptotic formula (4.2). After rather tedious but straightforward computations, we have:
\[\phi(x|\Lambda,\alpha)\approx 1-\left(\frac{1-\alpha}{2}\right)x,\quad\zeta(x| \Lambda,\alpha)=(2\Lambda)^{-\frac{\alpha}{2}}\left(\frac{(1-\alpha)^{\frac{1+ \alpha}{2}}}{\sqrt{1+\alpha}}\right)x^{\frac{1+\alpha}{2}}\quad\text{for}\quad x \ll 1. \tag{4.20}\]
This observation immediately implies that for ignition temperatures close to unity we have:
\[c^{*}(\theta,\Lambda,\alpha)\simeq\sqrt{\frac{2}{1+\alpha}}\Lambda^{-\frac{ \alpha}{2}}(1-\theta)^{\frac{1+\alpha}{2}},\quad R^{*}(\theta,\Lambda,\alpha) \simeq\frac{\sqrt{2(1+\alpha)}}{1-\alpha}\Lambda^{\frac{\alpha}{2}}(1-\theta )^{\frac{1-\alpha}{2}},\quad|1-\theta|\ll 1. \tag{4.21}\]
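In more detail, solving \(\phi(\sigma^{*})=\theta\) with the linear approximation of \(\phi\) in (4.20) gives

\[\sigma^{*}\approx\frac{2(1-\theta)}{1-\alpha},\]

and substituting this value into the expression for \(\zeta\) in (4.20), together with \(R^{*}=\sigma^{*}/c^{*}\), yields (4.21).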
Direct verification shows that, in this regime, \(c^{*}\) is a decreasing function of both \(\alpha\) and \(\Lambda\), whereas \(R^{*}\) is an increasing function of both of these parameters. Moreover, \(R^{*}\) is a decreasing function of \(\theta\). Figure 6 depicts the dependency of \(c^{*}\) and \(R^{*}\) on \(\alpha\) for several values of \(\Lambda\) with \(\theta=0.98\), and Figure 7 shows the dependency of the velocity \(c^{*}\) on \(\alpha\) and \(\Lambda\). These figures were generated using the asymptotic formulas (4.21), which are extremely close to their numerical counterparts.
Moreover, one can verify that \(c^{*}\) and \(R^{*}\) given by (4.21) fully reproduce the limiting behavior of these functions in the limits \(\alpha\to 0\) and \(\alpha\to 1\). Indeed, when \(\alpha=1\), the velocity of propagation and width of the reaction zone are given by [2]
\[c^{*}(\theta,\Lambda,\alpha=1)=\left(\left(\frac{\theta}{1-\theta}\right)+ \Lambda\left(\frac{\theta}{1-\theta}\right)^{2}\right)^{-\frac{1}{2}},\quad R ^{*}(\theta,\Lambda,\alpha=1)=\infty. \tag{4.22}\]
whereas when \(\alpha=0\), we have [2]
\[c^{*}(\theta,\Lambda,\alpha=0)=R^{*}(\theta,\Lambda,\alpha=0)=\sqrt{\kappa}. \tag{4.23}\]
where \(\kappa\) is defined implicitly as the positive solution of
\[e^{\kappa}=\frac{1}{1-\theta\kappa}. \tag{4.24}\]
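Numerically, \(\kappa(\theta)\) is easy to evaluate; a minimal sketch is given below (writing (4.24) in logarithmic form avoids overflow, and the bracket relies on \(0<\kappa<1/\theta\); for very small \(\theta\) the upper end of the bracket has to be taken closer to \(1/\theta\)).

```python
import numpy as np
from scipy.optimize import brentq

def kappa(theta):
    # Positive root of exp(k) = 1/(1 - theta*k), Eq. (4.24),
    # rewritten as k + log(1 - theta*k) = 0.
    g = lambda k: k + np.log1p(-theta * k)
    return brentq(g, 1e-9, (1.0 - 1e-9) / theta)
```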
For \(\theta\) near unity these formulas give:
\[c^{*}(\theta,\Lambda,\alpha=1)\simeq\frac{1-\theta}{\sqrt{\Lambda}},\quad c^{*}( \theta,\Lambda,\alpha=0)\simeq\sqrt{2(1-\theta)}\quad\text{for}\quad|\theta-1| \ll 1. \tag{4.25}\]
Hence
\[c^{*}(\theta,\Lambda,\alpha\to 1)\to c^{*}(\theta,\Lambda,\alpha=1),\quad R^{*}(\theta, \Lambda,\alpha\to 1) \to\infty,\] \[c^{*}(\theta,\Lambda,\alpha\to 0)\to c^{*}(\theta,\Lambda,\alpha=0),\quad R^{*}( \theta,\Lambda,\alpha\to 0) \to R^{*}(\theta,\Lambda,\alpha=0). \tag{4.26}\]
Now consider the regime of intermediate values of \(\theta\), which we study numerically. Figures 8, 9, 10 depict the dependency of the velocity of propagation and the width of the reaction zone on the reaction order \(\alpha\) for \(\Lambda=0.2,1,5\) and \(\theta=0.75,0.5,0.25\), and Figure 11 shows the dependency of the velocity of propagation on \(\alpha\) and \(\Lambda\) for \(\theta=0.5\). According to the numerics, for intermediate values of \(\theta\) the character of the dependency of \(c^{*}\) and \(R^{*}\) on \(\alpha\) and \(\Lambda\) remains similar to the one for \(\theta\) near unity. However, the dependency of \(c^{*}\) on both \(\alpha\) and \(\Lambda\) becomes weaker as \(\theta\) decreases. The dependency of \(R^{*}\) on \(\alpha\) in this regime is still very strong, but the dependency on \(\Lambda\) becomes weaker as \(\theta\) decreases.
Figure 6: Dependency of the velocity of propagation \(c^{*}\) and width of the reaction zone \(R^{*}\) on \(\alpha\) for \(\Lambda=0.2\) (blue), \(\Lambda=1\) (orange) and \(\Lambda=5\) (green) for the ignition temperature \(\theta=0.98\). The arrow indicates direction of increase of \(\Lambda\).
When \(\theta\) becomes sufficiently small, the asymptotic behavior of \(\phi(x)\) and \(\zeta(x)\) can again be recovered from the asymptotic formula (4.2) and reads:
\[\phi(x|\Lambda,\alpha)\approx\left(\frac{1}{1-\alpha}\right)\frac{1}{x},\quad \zeta(x|\Lambda,\alpha)=\left(\sqrt{1-\alpha}\right)\sqrt{x}\quad\text{for} \quad x\gg 1. \tag{4.27}\]
Therefore, in this regime we have:
\[c^{*}(\theta)\approx\frac{1}{\sqrt{\theta}},\quad R^{*}(\theta)\approx\frac{1 }{(1-\alpha)}\frac{1}{\sqrt{\theta}},\quad\theta\ll 1. \tag{4.28}\]
Consequently, in this regime, the velocity of propagation (in the first approximation) depends exclusively on the ignition temperature \(\theta\), and the reaction width is independent of \(\Lambda\) and is an increasing function of \(\alpha\). As in the regime of \(\theta\) near unity, the velocity of propagation and the width of the reaction zone approach their formal limits as \(\alpha\to 0\) and \(\alpha\to 1\).
Figure 8: Dependency of the velocity of propagation \(c^{*}\) and width of the reaction zone \(R^{*}\) on \(\alpha\) for \(\Lambda=0.2\) (blue), \(\Lambda=1\) (orange) and \(\Lambda=5\) (green) for the ignition temperature \(\theta=0.75\). The arrow indicates direction of increase of \(\Lambda\).
Figure 9: Dependency of the velocity of propagation \(c^{*}\) and width of the reaction zone \(R^{*}\) on \(\alpha\) for \(\Lambda=0.2\) (blue), \(\Lambda=1\) (orange) and \(\Lambda=5\) (green) for the ignition temperature \(\theta=0.5\). The arrow indicates direction of increase of \(\Lambda\).
The discussion above strongly suggests that the velocity of propagation decreases with increasing reaction order. Consequently, the velocity of propagation for reaction order \(\alpha\in(0,1)\) is bounded from below by the velocity of propagation with reaction order unity and from above by the velocity of propagation with zero reaction order. This far from obvious observation is quite in line with physical intuition, as an increase of the reaction order decreases the reaction rate, which, in turn, slows the flame front. Another, less surprising, observation is that an increase of the molecular diffusivity relative to the thermal diffusivity decreases the speed of the flame front. This is clearly the case for reactions of first order, but it remains true for reaction order \(\alpha\in(0,1)\).
We hence formulate the following:
**Conjecture 4.1**.: _The velocity of propagation \(c^{*}(\theta,\Lambda,\alpha)\) is a decreasing function of all of its arguments whereas the reaction width \(R^{*}(\theta,\Lambda,\alpha)\) is a decreasing function of \(\theta\) and an increasing function of \(\Lambda\) and \(\alpha\)._
**Acknowledgments.** The work of AM and PVG was supported in a part by US-Israel BSF grant 2020005. PVG would like to thank Fedor Nazarov for multiple valuable discussions and substantial help with proving the main result of this paper.
Figure 11: Dependency of the velocity of propagation \(c^{*}\) on \(\alpha\) and \(\Lambda\) for \(\theta=0.5\). Left panel represents a three dimensional plot of \(c^{*}(\theta=0.5,\Lambda,\alpha)\) and the right plane depicts level sets of this function.
Figure 10: Dependency of the velocity of propagation \(c^{*}\) and width of the reaction zone \(R^{*}\) on \(\alpha\) for \(\Lambda=0.2\) (blue), \(\Lambda=1\) (orange) and \(\Lambda=5\) (green) for the ignition temperature \(\theta=0.25\). The arrow indicates direction of increase of \(\Lambda\).
## Appendix
In this appendix we show that inequality (3.44) is equivalent to (3.48) and briefly discuss the solution of problem (2.16) from a dynamical systems point of view.
In what follows we denote the vector field in the right hand side of (3.46) by \(\mathbf{X}_{c}\). The unique critical point of this vector field in \(Q\) is the origin \(O=(0,0)\).
System (3.46) was thoroughly investigated in [3, Section 4] using a weak version of Poincare-Bendixson theorem (see [8]). The following proposition summarizes the results of Lemmas 4.7 and 4.8 of [3].
**Proposition 5.1**.: _There exists a unique global stable manifold at the origin given by a trajectory of \(\mathbf{X}_{c}\) in \(Q\) converging toward the origin when \(t\to 0^{-}\) such that the orbit defined by this trajectory is the graph of a function \(p^{\star}=p^{\star}(q)\) with \(q\in(0,+\infty)\). Moreover, \(p^{\star}\) is analytic for \(q>0\) and extends continuously at \(0\) with the value \(p^{\star}(0)=0\)._
The trajectory for system (3.46) can be obtained numerically. Figures 12 and 13 depict such a trajectory on the phase plane for \(\Lambda=1\), \(\alpha=1/2\) in cartesian and polar coordinates respectively. Qualitative behavior of the trajectory is similar for other values of parameters \(\Lambda,\alpha\).
Next observe that by (3.47) we have
\[\tan(\vartheta(t))=\frac{p(t)}{q(t)}. \tag{5.1}\]
In view of Proposition 5.1, the trajectory \((q,p)(t)\) is smooth and hence (differentiating the expression above) we have
\[\dot{\vartheta}(t) = \frac{q(t)\dot{p}(t)-p(t)\dot{q}(t)}{r^{2}(t)}. \tag{5.2}\]
Since
\[q(t)\dot{p}(t)-p(t)\dot{q}(t)=w(x)w^{\prime\prime}(x)-(w^{\prime} (x))^{2}, \tag{5.3}\]
by (3.44) we obtain (3.48). The plot of \(\vartheta(t)\) for \(\Lambda=1\) and \(\alpha=1/2\) is depicted in Figure 14. The function \(\vartheta(t)\) is decreasing on \((-\infty,0)\) and approaches \(-\pi/2\) as \(t\to 0\) and zero as \(t\to-\infty\), regardless of the specific values of the parameters \(\Lambda,\alpha\), as follows from the asymptotic expressions for \(w\) (see equation (4.2)).
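For completeness, \(\vartheta(t)\) can be computed directly from a numerical solution of (2.16), e.g. reusing `solve_w` and `numpy` from the sketches in Section 4 (an illustrative fragment; the grid is arbitrary):

```python
sol = solve_w(alpha=0.5, Lam=1.0)
x = np.linspace(-10.0, -0.01, 2000)
w, dw = sol.sol(x)
vartheta = np.arctan2(dw, w)   # lies in (-pi/2, 0); by (3.48) it decreases towards -pi/2 as t -> 0
```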
**Declaration of Competing Interest:** The authors declare that they have no competing interests.
**Data availability:** No data was used for the research described in the article.
|
2305.18512 | A Rainbow in Deep Network Black Boxes | We introduce rainbow networks as a probabilistic model of trained deep neural
networks. The model cascades random feature maps whose weight distributions are
learned. It assumes that dependencies between weights at different layers are
reduced to rotations which align the input activations. Neuron weights within a
layer are independent after this alignment. Their activations define kernels
which become deterministic in the infinite-width limit. This is verified
numerically for ResNets trained on the ImageNet dataset. We also show that the
learned weight distributions have low-rank covariances. Rainbow networks thus
alternate between linear dimension reductions and non-linear high-dimensional
embeddings with white random features. Gaussian rainbow networks are defined
with Gaussian weight distributions. These models are validated numerically on
image classification on the CIFAR-10 dataset, with wavelet scattering networks.
We further show that during training, SGD updates the weight covariances while
mostly preserving the Gaussian initialization. | Florentin Guth, Brice Ménard, Gaspar Rochette, Stéphane Mallat | 2023-05-29T17:09:26Z | http://arxiv.org/abs/2305.18512v1 | # A Rainbow in Deep Network Black Boxes
###### Abstract
We introduce rainbow networks as a probabilistic model of trained deep neural networks. The model cascades random feature maps whose weight distributions are learned. It assumes that dependencies between weights at different layers are reduced to rotations which align the input activations. Neuron weights within a layer are independent after this alignment. Their activations define kernels which become deterministic in the infinite-width limit. This is verified numerically for ResNets trained on the ImageNet dataset. We also show that the learned weight distributions have low-rank covariances. Rainbow networks thus alternate between linear dimension reductions and non-linear high-dimensional embeddings with white random features. Gaussian rainbow networks are defined with Gaussian weight distributions. These models are validated numerically on image classification on the CIFAR-10 dataset, with wavelet scattering networks. We further show that during training, SGD updates the weight covariances while mostly preserving the Gaussian initialization.
deep neural networks, infinite-width limit, weight probability distribution, random features, network alignment.
Footnote †: ©2023 Florentin Guth, Brice Ménard, Gaspar Rochette, and Stéphane Mallat.
License: CC-BY 4.0, see [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/).
## 1 Introduction
Deep neural networks have been described as black boxes because many of their fundamental properties are not understood. Their weight matrices are learned by performing stochastic gradient descent from a random initialization. Each training run thus results in a different set of weight matrices, which can be considered as a random realization of some probability distribution. What is this probability distribution? What is the corresponding functional space? Do all networks learn the same function, and even the same weights, up to some symmetries? This paper addresses these questions.
Theoretical studies have mostly focused on shallow learning. A first line of work has studied learning of the last layer while freezing the other ones. The previous layers thus implement random features (Jarrett et al., 2009; Pinto et al., 2009) which specify a kernel that becomes deterministic in the infinite-width limit (Rahimi and Recht, 2007; Daniely
et al., 2016). Learning has then been incorporated in these models. Neal (1996); Williams (1996); Lee et al. (2018); Matthews et al. (2018) show that some networks behave as Gaussian processes. Training is then modeled as sampling from the Bayesian posterior given the training data. On the other hand, Jacot et al. (2018) and Lee et al. (2019) assume that trained weights have small deviations from their initialization. In these cases, learning is in a "lazy" regime (Chizat et al., 2019) specified by a fixed kernel. It has been opposed to a "rich" or feature-learning regime (Chizat and Bach, 2020; Woodworth et al., 2020), which achieves higher performance on complex tasks (Lee et al., 2020; Geiger et al., 2020). Empirical observations of weight statistics have indeed shown that they significantly evolve during training (Martin and Mahoney, 2021; Thamm et al., 2022). This has been precisely analyzed for one-hidden-layer networks in the infinite-width "mean-field" limit (Chizat and Bach, 2018; Mei et al., 2018; Rotskoff and Vanden-Eijnden, 2018; Sirignano and Spiliopoulos, 2020), which allows tracking the neuron weight distribution as it evolves away from the Gaussian initialization during training. The generalization to deeper networks is greatly complicated by the fact that intermediate activations depend on the random weight realizations (Sirignano and Spiliopoulos, 2022; E and Wojtowytsch, 2020; Nguyen and Pham, 2020; Chen et al., 2022; Yang and Hu, 2021). However, numerical experiments (Raghu et al., 2017; Kornblith et al., 2019) show that intermediate activations correlate significantly across independent realizations, which calls for an explanation of this phenomenon.
Building upon these ideas, we introduce the rainbow model of the joint probability distribution of trained network weights across layers. It assumes that dependencies between the weight matrices \(W_{j}\) at all layers \(j\) are reduced to rotations. This means that \(W_{j}=W_{j}^{\prime}\hat{A}_{j-1}\), where \(W_{1}^{\prime},\ldots,W_{J}^{\prime}\) are independent random matrices, and \(\hat{A}_{j-1}\) is a rotation that depends on the previous layer weights \(W_{1},\ldots,W_{j-1}\). The \(W_{j}^{\prime}\) are further assumed to be random feature matrices, that is, their rows are independent and identically distributed.
The functional properties of rainbow networks depend on the random feature distribution at each layer. We show numerically that weights of trained networks typically have low-rank covariances. The corresponding rainbow networks thus implement dimensionality reductions in-between the high-dimensional random feature embeddings, similar to previous works (Cho and Saul, 2009; Mairal, 2016; Bietti, 2019). We further demonstrate that input activation covariances provide efficient approximations of the eigenspaces of the weight covariances. The number of model parameters and hence the supervised learning complexity can thus be considerably reduced by unsupervised information.
The weight covariances completely specify the rainbow network output and properties when the weight distributions are Gaussian. The eigenvectors of these weight covariances can be interpreted as learned features, rather than individual neuron weights which are random. This Gaussian assumption is too restrictive to model arbitrary trained networks. However, it can approximately hold for architectures which incorporate prior information and restrict their learned weights. In some of our numerical experiments, we will thus consider learned scattering networks (Zarka et al., 2021; Guth et al., 2022), which have fixed wavelet spatial filters and learn weights along channels only.
This paper makes the following main contributions:
* We prove that the rainbow network activations converge to a random rotation of a deterministic kernel feature vector in the infinite-width limit, which explains the
empirical results of representation similarity of Raghu et al. (2017) and Kornblith et al. (2019). We verify numerically this convergence on scattering networks and ResNets trained on the CIFAR-10 and ImageNet image classification datasets. We conjecture but do not prove that this convergence conversely implies the first rainbow assumption that layer dependencies are reduced to rotations.
* We validate the Gaussian rainbow model for scattering networks trained on CIFAR-10. We verify that the weight covariances converge up to rotation when the width increases, and that the weights are approximately Gaussian. The weight covariances are sufficient to sample rainbow weights and define new networks that achieve comparable classification accuracy as the original trained network when the width is large enough. Further, we show that SGD training only updates the weight covariances while nearly preserving the white random feature initializations, suggesting a possible explanation for the Gaussian rainbow assumption in this setting.
* We prove that equivariance to general groups can be achieved in rainbow networks with weight distributions that are invariant to the group action. This constraint on distributions rather than on individual neurons (Cohen and Welling, 2016; Kondor and Trivedi, 2018) avoids any weight sharing or synchronizations, which are difficult to implement in biological systems.
The rainbow model is illustrated in Figure 1. In Section 2, we introduce rainbow networks and the associated kernels that describe their infinite-width limit. We validate numerically the above properties and results in Section 3.
Figure 1: A deep rainbow network cascades random feature maps whose weight distributions are learned. They typically have a low-rank covariance. Each layer can be factorized into a linear dimensionality reduction determined by the colored covariance, followed by a non-linear high-dimensional embedding with white random features. At each layer, the hidden activations define a kernel which converges to a deterministic rainbow kernel in the infinite width limit. It induces a random rotation of the next layer weights. For Gaussian rainbow networks, the random feature embedding is a dot-product kernel feature map which does not need to be rotated.
## 2 Rainbow networks
Weight matrices of learned deep networks are strongly dependent across layers. Deep rainbow networks define a mathematical model of these dependencies through rotation matrices that align input activations at each layer. We review in Section 2.1 the properties of random features, which are the building blocks of the model. We then introduce in Section 2.2 deep fully-connected rainbow networks, which cascade aligned random feature maps. We show in Section 2.3 how to incorporate inductive biases in the form of symmetries or local neuron receptive fields. We also extend rainbow models to convolutional networks.
### Rotations in random feature maps
We begin by reviewing the properties of one-hidden-layer random feature networks. We then prove that random weight fluctuations produce a random rotation of the hidden activation layer in the limit of infinite layer width. The rainbow network model will be obtained by applying this result at all layers of a deep network.
Random feature network. A one-hidden-layer network computes a hidden activation layer with a matrix \(W\) of size \(d_{1}\times d_{0}\) and a pointwise non-linearity \(\rho\):
\[\hat{\varphi}(x)=\rho(Wx)\ \ \text{for}\ \ x\in\mathbb{R}^{d_{0}}.\]
We consider a random feature network (Rahimi and Recht, 2007). The rows of \(W\), which contain the weights of different neurons, are independent and have the same probability distribution \(\pi\):
\[W=(w_{i})_{i\leq d_{1}}\ \ \text{with i.i.d.}\ \ w_{i}\sim\pi.\]
In many random feature models, each row vector has a known distribution with uncorrelated coefficients (Jarrett et al., 2009; Pinto et al., 2009). Learning is then reduced to calculating the output weights \(\hat{\theta}\), which define
\[\hat{f}(x)=\langle\hat{\theta},\hat{\varphi}(x)\rangle.\]
In contrast, we consider general distributions \(\pi\) which will be estimated from the weights of trained networks in Section 3.
Our network does not include any bias for simplicity. Bias-free networks have been shown to achieve comparable performance as networks with biases for denoising (Mohan et al., 2019) and image classification (Zarka et al., 2021; Guth et al., 2022). However, biases can easily be incorporated in random feature models and thus rainbow networks.
We consider a normalized network, where \(\rho\) includes a division by \(\sqrt{d_{1}}\) so that \(\|\hat{\varphi}(x)\|\) remains of the order of unity when the width \(d_{1}\) increases. We shall leave this normalization implicit to simplify notations, except when illustrating mathematical convergence results. Note that this choice differs from the so-called standard parameterization (Yang and Hu, 2021). In numerical experiments, we perform SGD training with this standard parameterization which avoids getting trapped in the lazy training regime (Chizat et al., 2019). Our normalization convention is only applied at the end of training, where the additional factor of \(\sqrt{d_{1}}\) is absorbed in the next-layer weights \(\hat{\theta}\).
We require that the input data has finite energy: \(\mathbb{E}_{x}[\left\lVert x\right\rVert^{2}]<+\infty\). We further assume that the non-linearity \(\rho\) is Lipschitz, which is verified by many non-linearities used in practice, including ReLU. Finally, we require that the random feature distribution \(\pi\) has finite fourth-order moments.
Kernel convergence. We now review the convergence properties of one-hidden-layer random feature networks. This convergence is captured by the convergence of their kernel (Rahimi and Recht, 2007, 2008),
\[\hat{k}(x,x^{\prime})=\langle\hat{\varphi}(x),\hat{\varphi}(x^{\prime})\rangle =\frac{1}{d_{1}}\sum_{i=1}^{d_{1}}\rho(\langle x,w_{i}\rangle)\,\rho(\langle x ^{\prime},w_{i}\rangle),\]
where we have made explicit the factor \(d_{1}^{-1}\) coming from our choice of normalization. Since the rows \(w_{i}\) are independent and identically distributed, the law of large numbers implies that when the width \(d_{1}\) goes to infinity, this empirical kernel has a mean-square convergence to the asymptotic kernel
\[k(x,x^{\prime})=\mathbb{E}_{w\sim\pi}\Big{[}\rho(\langle x,w\rangle)\,\rho( \langle x^{\prime},w\rangle)\Big{]}. \tag{1}\]
This convergence means that even though \(\hat{\varphi}\) is random, its geometry (as described by the resulting kernel) is asymptotically deterministic. As we will see, this imposes that random fluctuations of \(\hat{\varphi}(x)\) are reduced to rotations.
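This convergence can be checked numerically. The following is a minimal sketch in which \(\pi\) is a standard Gaussian and \(\rho\) is a ReLU, a case where the asymptotic kernel (1) has a closed form (the arc-cosine kernel of Cho and Saul, 2009); the dimensions, widths and number of repetitions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d0 = 32
x, xp = rng.standard_normal(d0), rng.standard_normal(d0)

def relu(u):
    return np.maximum(u, 0.0)

def empirical_kernel(x, xp, d1):
    # k_hat(x, x') = (1/d1) sum_i relu(<x, w_i>) relu(<x', w_i>), with w_i ~ N(0, Id).
    W = rng.standard_normal((d1, d0))
    return np.mean(relu(W @ x) * relu(W @ xp))

def asymptotic_kernel(x, xp):
    # Closed form of E_w[relu(<x, w>) relu(<x', w>)] for w ~ N(0, Id).
    nx, nxp = np.linalg.norm(x), np.linalg.norm(xp)
    t = np.clip(x @ xp / (nx * nxp), -1.0, 1.0)
    theta = np.arccos(t)
    return nx * nxp * (np.sin(theta) + (np.pi - theta) * np.cos(theta)) / (2 * np.pi)

k = asymptotic_kernel(x, xp)
for d1 in [64, 256, 1024, 4096]:
    mse = np.mean([(empirical_kernel(x, xp, d1) - k) ** 2 for _ in range(50)])
    print(f"d1 = {d1:5d},  E|k_hat - k|^2 ~ {mse:.2e}")  # decays roughly like 1/d1
```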
Let \(\varphi(x)\) be an infinite-dimensional deterministic colored feature vector in a separable Hilbert space \(H\), which satisfies
\[\langle\varphi(x),\varphi(x^{\prime})\rangle_{H}=k(x,x^{\prime}). \tag{2}\]
Such feature vectors always exist (Aronszajn, 1950, see also Scholkopf and Smola, 2002). For instance, one can choose \(\varphi(x)=\left(\rho(\langle x,w\rangle)\right)_{w}\), the infinite-width limit of random features \(\rho W\). In that case, \(H=L^{2}(\pi)\), that is, the space of square-integrable functions with respect to \(\pi\), with dot-product \(\left\langle g,h\right\rangle_{H}=\mathbb{E}_{w\sim\pi}[g(w)\,h(w)]\). This choice is however not unique: one can obtain other feature vectors defined in other Hilbert spaces by applying a unitary transformation to \(\varphi\), which does not modify the dot product in eq. (2). In the following, we choose the kernel PCA (KPCA) feature vector, whose covariance matrix \(\mathbb{E}_{x}[\varphi(x)\,\varphi(x)^{\mathrm{T}}]\) is diagonal with decreasing values along the diagonal, introduced by Scholkopf et al. (1997). It is obtained by expressing any feature vector \(\varphi\) in its PCA basis relative to the distribution of \(x\). In this case \(H=\ell^{2}(\mathbb{N})\).
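As an illustration, the KPCA coordinates of a finite-width random feature vector \(\hat{\varphi}\) can be computed by diagonalizing its empirical covariance over a sample of inputs. This is only a finite-dimensional sketch of the construction above; the Gaussian inputs and weights and all sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d0, d1 = 500, 8, 1024

# Empirical random feature vectors phi_hat(x) = rho(W x) over a sample of n inputs.
X = rng.standard_normal((n, d0))
W = rng.standard_normal((d1, d0))
Phi = np.maximum(X @ W.T, 0.0) / np.sqrt(d1)              # shape (n, d1)

# Express phi_hat in its PCA basis relative to the (empirical) distribution of x:
# diagonalize the uncentered covariance and sort by decreasing eigenvalue.
cov = Phi.T @ Phi / n
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
Phi_kpca = Phi @ eigvec[:, order]

# The rotated features have a diagonal covariance with decreasing diagonal values.
print(np.allclose(Phi_kpca.T @ Phi_kpca / n, np.diag(eigval[order]), atol=1e-10))
```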
Finally, we denote by \(\mathcal{H}\) the reproducing kernel Hilbert space (RKHS) associated to the kernel \(k\) in eq. (1). It is the space of functions \(f\) which can be written \(f(x)=\left\langle\theta,\varphi(x)\right\rangle_{H}\), with norm \(\left\lVert f\right\rVert_{\mathcal{H}}=\left\lVert\theta\right\rVert_{H}\).1 A random feature network defines approximations of functions in this RKHS. With \(H=L^{2}(\pi)\), these functions can be written
\[f(x)=\mathbb{E}_{w\sim\pi}[\theta(w)\,\rho(\langle x,w\rangle)]=\int\theta(w)\,\rho(\langle x,w\rangle)\,\mathrm{d}\pi(w).\]
Footnote 1: We shall always assume that \(\theta\) is the minimum-norm vector such that \(f(x)=\left\langle\theta,\varphi(x)\right\rangle_{H}\).
This expression is equivalent to the mean-field limit of one-hidden-layer networks (Chizat and Bach, 2018; Mei et al., 2018; Rotskoff and Vanden-Eijnden, 2018; Sirignano and Spiliopoulos, 2020), which we will generalize to deep networks in Section 2.2.
Rotation alignment. We now introduce rotations which align approximate kernel feature vectors. By abuse of language, we use rotations as a synonym for orthogonal transformations, and also include improper rotations which are the composition of a rotation with a reflection.
We have seen that the kernel \(\hat{k}(x,x^{\prime})=\langle\hat{\varphi}(x),\hat{\varphi}(x^{\prime})\rangle\) converges to the kernel \(k(x,x^{\prime})=\langle\varphi(x),\varphi(x^{\prime})\rangle\). We thus expect (and will later prove) that there exists a rotation \(\hat{A}\) such that \(\hat{A}\,\hat{\varphi}\approx\varphi\) because all feature vectors of the kernel \(k\) are rotations of one another. The rotation \(\hat{A}\) is dependent on the random feature realization \(W\) and is thus random. The network activations \(\hat{\varphi}(x)\approx\hat{A}^{\mathrm{T}}\varphi(x)\) are therefore a random rotation of the deterministic feature vector \(\varphi(x)\). For the KPCA feature vector \(\varphi\), \(\hat{A}\) approximately computes an orthonormal change of coordinate of \(\hat{\varphi}(x)\) to its PCA basis.
For any function \(f(x)=\langle\theta,\varphi(x)\rangle_{H}\) in \(\mathcal{H}\), if the output layer weights are \(\hat{\theta}=\hat{A}^{\mathrm{T}}\theta\), then the network output is
\[\hat{f}(x)=\langle\hat{A}^{\mathrm{T}}\theta,\hat{\varphi}(x)\rangle=\langle \theta,\hat{A}\,\hat{\varphi}(x)\rangle_{H}\approx f(x).\]
This means that the final layer coefficients \(\hat{\theta}\) can cancel the random rotation \(\hat{A}\) introduced by \(W\), so that the random network output \(\hat{f}(x)\) converges when the width \(d_{1}\) increases to a fixed function in \(\mathcal{H}\). This propagation of rotations across layers is key to understanding the weight dependencies in deep networks. We now make the above arguments more rigorous and prove that \(\hat{\varphi}\) and \(\hat{f}\) respectively converge to \(\varphi\) and \(f\), for an appropriate choice of \(\hat{A}\).
We write \(\mathcal{O}(d_{1})\) the set of linear operators \(A\) from \(\mathbb{R}^{d_{1}}\) to \(H=\ell^{2}(\mathbb{N})\) which satisfy \(A^{\mathrm{T}}A=\mathrm{Id}_{d_{1}}\). Each \(A\in\mathcal{O}(d_{1})\) computes an isometric embedding of \(\mathbb{R}^{d_{1}}\) into \(H\), while \(A^{\mathrm{T}}\) is an orthogonal projection onto a \(d_{1}\)-dimensional subspace of \(H\) which can be identified with \(\mathbb{R}^{d_{1}}\). The alignment \(\hat{A}\) of \(\hat{\varphi}\) to \(\varphi\) is defined as the minimizer of the mean squared error:
\[\hat{A}=\operatorname*{arg\,min}_{\hat{A}\in\mathcal{O}(d_{1})}\ \mathbb{E}_{x}\Big{[}\big{\|}\hat{A}\,\hat{\varphi}(x)-\varphi(x)\big{\|}_{H}^ {2}\Big{]}. \tag{3}\]
This optimization problem, known as the (orthogonal) Procrustes problem (Hurley and Cattell, 1962; Schonemann, 1966), admits a closed-form solution, computed from a singular value decomposition of the (uncentered) cross-covariance matrix between \(\varphi\) and \(\hat{\varphi}\):
\[\hat{A}=UV^{\mathrm{T}}\ \ \text{with}\ \ \mathbb{E}_{x}\Big{[}\varphi(x)\, \hat{\varphi}(x)^{\mathrm{T}}\Big{]}=USV^{\mathrm{T}}. \tag{4}\]
The mean squared error (3) of the optimal \(\hat{A}\) (4) is then
\[\mathbb{E}_{x}\Big{[}\big{\|}\hat{A}\,\hat{\varphi}(x)-\varphi(x)\big{\|}_{H}^ {2}\Big{]}=\operatorname{tr}\mathbb{E}_{x}\Big{[}\hat{\varphi}(x)\,\hat{ \varphi}(x)^{\mathrm{T}}\Big{]}+\operatorname{tr}\mathbb{E}_{x}\Big{[}\varphi (x)\,\varphi(x)^{\mathrm{T}}\Big{]}-2\Big{\|}\mathbb{E}_{x}\Big{[}\varphi(x)\, \hat{\varphi}(x)^{\mathrm{T}}\Big{]}\Big{\|}_{1}, \tag{5}\]
where \(\big{\|}\cdot\big{\|}_{1}\) is the nuclear (or trace) norm, that is, the sum of the singular values. Equation (5) defines a distance between the representations \(\hat{\varphi}\) and \(\varphi\) which is related to various similarity measures used in the literature.2
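The alignment (3)-(4) and the error (5) are straightforward to compute when \(\varphi\) is replaced by a finite-dimensional feature vector estimated over a sample. The sketch below uses synthetic activations in which \(\varphi\) is, by construction, a noisy rotation of \(\hat{\varphi}\); all sizes and the noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d1, dH = 2000, 64, 64        # sample size, width of phi_hat, dimension of phi

# Synthetic activations: phi is a rotation of phi_hat plus a small perturbation.
Phi_hat = rng.standard_normal((n, d1))
A_true, _ = np.linalg.qr(rng.standard_normal((dH, d1)))
Phi = Phi_hat @ A_true.T + 0.01 * rng.standard_normal((n, dH))

# Optimal alignment (eq. 4): SVD of the cross-covariance E_x[phi(x) phi_hat(x)^T].
cross = Phi.T @ Phi_hat / n
U, S, Vt = np.linalg.svd(cross, full_matrices=False)
A_hat = U @ Vt                   # satisfies A_hat^T A_hat = Id

# Aligned error (eq. 5): tr cov(phi_hat) + tr cov(phi) - 2 * nuclear norm of cross.
err_closed_form = np.trace(Phi_hat.T @ Phi_hat / n) + np.trace(Phi.T @ Phi / n) - 2 * S.sum()
err_direct = np.mean(np.sum((Phi_hat @ A_hat.T - Phi) ** 2, axis=1))
print(err_closed_form, err_direct)   # the two values coincide
```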
The alignment rotation (3,4) was used by Haxby et al. (2011) to align fMRI response patterns of human visual cortex from different individuals, and by Smith et al. (2017) to align word embeddings from different languages. Alignment between network weights has also been considered in previous works, but it was restricted to permutation matrices (Entezari et al., 2022; Benzing et al., 2022; Ainsworth et al., 2022). Permutations have the advantage of commuting with pointwise non-linearities, and can therefore be introduced while exactly preserving the network output function. However, they are not sufficiently rich to capture the variability of random features. It is shown in Entezari et al. (2022) that the error after permutation alignment converges to zero with the number of random features \(d_{1}\) at a polynomial rate which is cursed by the dimension \(d_{0}\) of \(x\). On the contrary, the following theorem proves that the error after rotation alignment has a convergence rate which is independent of the dimension \(d_{0}\).
**Theorem 1**: _Assume that \(\mathbb{E}_{x}[\left\|x\right\|^{2}]<+\infty\), \(\rho\) is Lipschitz, and \(\pi\) has finite fourth order moments. Then there exists a constant \(c>0\) which does not depend on \(d_{0}\) nor \(d_{1}\) such that_
\[\mathbb{E}_{W,x^{\prime}}\Big{[}\big{|}\hat{k}(x,x^{\prime})-k(x,x^{\prime}) \big{|}^{2}\Big{]}\leq c\,d_{1}^{-1},\]
_where \(x^{\prime}\) is an i.i.d. copy of \(x\). Suppose that the sorted eigenvalues \(\lambda_{1}\geq\cdots\geq\lambda_{m}\geq\cdots\) of \(\mathbb{E}_{x}[\varphi(x)\,\varphi(x)^{T}]\) satisfy \(\lambda_{m}=O(m^{-\alpha})\) with \(\alpha>1\). Then the alignment \(\hat{A}\) defined in (3) satisfies_
\[\mathbb{E}_{W,x}\Big{[}\big{\|}\hat{A}\,\hat{\varphi}(x)-\varphi(x)\big{\|}_{ H}^{2}\Big{]}\leq c\,d_{1}^{-\eta}\ \ \text{with}\ \ \eta=\frac{\alpha-1}{2(2\alpha-1)}>0.\]
_Finally, for any \(f(x)=\langle\theta,\varphi(x)\rangle_{H}\) in \(\mathcal{H}\), if \(\hat{\theta}=\hat{A}^{T}\theta\) then_
\[\mathbb{E}_{W,x}\Big{[}\big{|}\hat{f}(x)-f(x)\big{|}^{2}\Big{]}\leq c\,\|f\|_{ \mathcal{H}}^{2}\,d_{1}^{-\eta}.\]
The proof is given in Appendix A. The convergence of the empirical kernel \(\hat{k}\) to the asymptotic kernel \(k\) is a direct application of the law of large numbers. The mean-square distance (5) between \(\hat{A}\,\hat{\varphi}\) and \(\varphi\) is then rewritten as the Bures-Wasserstein distance (Bhatia et al., 2019) between the kernel integral operators associated to \(\hat{k}\) and \(k\). It is controlled by their mean-square distance via an entropic regularization of the underlying optimal transport problem (see, e.g., Peyre and Cuturi, 2019). The convergence rate is then obtained by exploiting the eigenvalue decay of the kernel integral operator.
Theorem 1 proves that there exists a rotation \(\hat{A}\) which nearly aligns the hidden layer of a random feature network with any feature vector of the asymptotic kernel, with an error which converges to zero. The network output converges if that same rotation is applied on
the last layer weights. We will use this result in the next section to define deep rainbow networks, but we note that it can be of independent interest in the analysis of random feature representations. The theorem assumes a power-law decay of the covariance spectrum of the feature vector \(\varphi\) (which is independent of the choice of \(\varphi\) satisfying eq. (2)). Because \(\sum_{m=1}^{\infty}\lambda_{m}=\mathbb{E}_{x}[\left\|\varphi(x)\right\|^{2}]<+\infty\) (as shown in the proof), a standard result implies that \(\lambda_{m}=o(m^{-1})\), so the assumption \(\alpha>1\) is not too restrictive. The constant \(c\) is explicit and depends polynomially on the constants involved in the hypotheses (except for the exponent \(\alpha\)). The convergence rate \(\eta=\frac{\alpha-1}{2(2\alpha-1)}\) is an increasing function of the power-law exponent \(\alpha\). It vanishes in the critical regime when \(\alpha\to 1\), and increases to \(\frac{1}{4}\) when \(\alpha\to\infty\). This bound might be pessimistic in practice, as a heuristic argument suggests a rate of \(\frac{1}{2}\) when \(\alpha\to\infty\) based on the rate \(1\) on the kernels. A comparison with convergence rates of random features KPCA (Sriperumbudur and Sterge, 2022) indeed suggests it might be possible to improve the convergence rate to \(\frac{\alpha-1}{2\alpha-1}\). Although we give results in expectation for the sake of simplicity, bounds in probability can be obtained using Bernstein concentration bounds for operators (Tropp, 2012; Minsker, 2017) in the spirit of Rudi et al. (2013); Bach (2017).
### Deep rainbow networks
The previous section showed that the hidden layer of a random feature network converges to an infinite-dimensional feature vector, up to a rotation defined by the alignment \(\hat{A}\). This section defines deep fully-connected rainbow networks by cascading conditional random features, whose kernels also converge in the infinite-width limit. It provides a model of the joint probability distribution of weights of trained networks, whose layer dependencies are captured by alignment rotation matrices.
We consider a deep fully-connected neural network with \(J\) hidden layers, which iteratively transforms the input data \(x\in\mathbb{R}^{d_{0}}\) with weight matrices \(W_{j}\) of size \(d_{j}\times d_{j-1}\) and a pointwise non-linearity \(\rho\), to compute each activation layer of depth \(j\):
\[\hat{\phi}_{j}(x)=\rho W_{j}\,\cdots\,\rho W_{1}\,x.\]
\(\rho\) includes a division by \(\sqrt{d_{j}}\), which we do not write explicitly to simplify notations. After \(J\) non-linearities, the last layer outputs
\[\hat{f}(x)=\langle\hat{\theta},\hat{\phi}_{J}(x)\rangle.\]
Infinite-width rainbow networks. A rainbow model defines each \(W_{j}\) conditionally on the previous \((W_{\ell})_{\ell<j}\) as a random feature matrix. The distribution of random features at layer \(j\) is rotated to account for the random rotation introduced by \(\hat{\phi}_{j-1}\). We first introduce infinite-width rainbow networks which define the asymptotic feature vectors used to compute these rotations.
**Definition 1**: _An infinite-width rainbow network has activation layers defined in a separable Hilbert space \(H_{j}\) for any \(j\leq J\) by_
\[\phi_{j}(x)=\varphi_{j}(\varphi_{j-1}(\ldots\varphi_{1}(x)\ldots))\in H_{j}\ \text{ for }\ x\in H_{0}=\mathbb{R}^{d_{0}},\]
_where each \(\varphi_{j}\colon H_{j-1}\to H_{j}\) is defined from a probability distribution \(\pi_{j}\) on \(H_{j-1}\) by_
\[\left\langle\varphi_{j}(z),\varphi_{j}(z^{\prime})\right\rangle_{H_{j}}=\mathbb{ E}_{w\sim\pi_{j}}\Big{[}\rho(\left\langle z,w\right\rangle_{H_{j-1}})\,\rho( \left\langle z^{\prime},w\right\rangle_{H_{j-1}})\Big{]}\ \text{ for }\ z,z^{\prime}\in H_{j-1}. \tag{6}\]
_It defines a rainbow kernel_
\[k_{j}(x,x^{\prime})=\left\langle\phi_{j}(x),\phi_{j}(x^{\prime})\right\rangle _{H_{j}}.\]
_For \(\theta\in H_{J}\), the infinite-width rainbow network outputs_
\[f(x)=\left\langle\theta,\phi_{J}(x)\right\rangle_{H_{J}}\in\mathcal{H}_{J},\]
_where \(\mathcal{H}_{J}\) is the RKHS of the rainbow kernel \(k_{J}\) of the last layer. If all probability distributions \(\pi_{j}\) are Gaussian, then the rainbow network is said to be Gaussian._
Each activation layer \(\phi_{j}(x)\in H_{j}\) of an infinite-width rainbow network has an infinite dimension and is deterministic. We shall see that the cascaded feature maps \(\varphi_{j}\) are infinite-width limits of \(\rho W_{j}\) up to rotations. One can arbitrarily rotate a feature vector \(\varphi_{j}(z)\) which satisfies (6), which also rotates the Hilbert space \(H_{j}\) and \(\phi_{j}(x)\). If the distribution \(\pi_{j+1}\) at the next layer (or the weight vector \(\theta\) if \(j=J\)) is similarly rotated, this operation preserves the dot products \(\left\langle\phi_{j}(x),w\right\rangle_{H_{j}}\) for \(w\sim\pi_{j+1}\). It therefore does not affect the asymptotic rainbow kernels at each depth \(j\):
\[k_{j}(x,x^{\prime})=\mathbb{E}_{w\sim\pi_{j}}\Big{[}\rho(\left\langle\phi_{j- 1}(x),w\right\rangle_{H_{j-1}})\,\rho(\left\langle\phi_{j-1}(x^{\prime}),w \right\rangle_{H_{j-1}})\Big{]}, \tag{7}\]
as well as the rainbow network output \(f(x)\). We shall fix these rotations by choosing KPCA feature vectors. This imposes that \(H_{j}=\ell^{2}(\mathbb{N})\) and \(\mathbb{E}_{x}[\phi_{j}(x)\,\phi_{j}(x)^{\mathrm{T}}]\) is diagonal with decreasing values along the diagonal. The random feature distributions \(\pi_{j}\) are thus defined with respect to the PCA basis of \(\phi_{j}(x)\). Infinite-width rainbow networks are then uniquely determined by the distributions \(\pi_{j}\) and the last-layer weights \(\theta\).
The weight distributions \(\pi_{j}\) for \(j\geq 2\) are defined in the infinite-dimensional space \(H_{j-1}\) and some care must be taken. We say that a distribution \(\pi\) on a Hilbert space \(H\) has bounded second-order moments if its (uncentered) covariance operator \(\mathbb{E}_{w\sim\pi}[ww^{\mathrm{T}}]\) is bounded (for the operator norm). The expectation is to be understood in a weak sense: we assume that there exists a bounded operator \(C\) on \(H\) such that \(z^{\mathrm{T}}Cz^{\prime}=\mathbb{E}_{w\sim\pi}[\left\langle z,w\right\rangle_ {H}\langle z^{\prime},w\rangle_{H}]\) for \(z,z^{\prime}\in H\). We further say that \(\pi\) has bounded fourth-order moments if for every trace-class operator \(T\) (that is, such that \(\mathrm{tr}(T^{\mathrm{T}}T)^{1/2}<+\infty\)), \(\mathbb{E}_{w\sim\pi}[(w^{\mathrm{T}}Tw)^{2}]<+\infty\). We will assume that the weight distributions \(\pi_{j}\) have bounded second- and fourth-order moments. Together with our assumptions that \(\mathbb{E}_{x}[\left\|x\right\|^{2}]<+\infty\) and that \(\rho\) is Lipschitz, this verifies the existence of all the infinite-dimensional objects we will use in the sequel. For the sake of brevity, we shall not mention these verifications in the main text and defer them to Appendix B. Finally, we note that we can generalize rainbow networks to cylindrical measures \(\pi_{j}\), which define cylindrical random variables \(w\)(Vakhania et al., 1987, see also Riedle, 2011 or Gawarecki and Mandrekar, 2011, Section 2.1.1). Such cylindrical random variables \(w\) are linear maps such that \(w(z)\) is a real random variable for every \(z\in H_{j-1}\). \(w(z)\) cannot necessarily be written \(\left\langle z,w\right\rangle\) with a random \(w\in H_{j-1}\). We still write \(\left\langle z,w\right\rangle\) by abuse of notation, with the understanding that it refers to \(w(z)\). For example, we will see that finite-width networks at initialization converge to infinite-width rainbow networks with \(\pi_{j}=\mathcal{N}(0,\mathrm{Id})\), which is a cylindrical measure but not a measure when \(H_{j-1}\) is infinite-dimensional.
Dimensionality reduction. Empirical observations of trained deep networks show that they have approximately low-rank weight matrices (Martin and Mahoney, 2021; Thamm et al., 2022). They compute a dimensionality reduction of their input, which is characterized by the singular values of the layer weight \(W_{j}\), or equivalently the eigenvalues of the empirical weight covariance \(d_{j}^{-1}\,W_{j}^{\mathrm{T}}W_{j}\). For rainbow networks, the uncentered covariances \(C_{j}=\mathbb{E}_{w\sim\pi_{j}}[ww^{\mathrm{T}}]\) of the weight distributions \(\pi_{j}\) therefore capture the linear dimensionality reductions of the network. If \(C_{j}^{1/2}\) is the symmetric square root of \(C_{j}\), we can rewrite (6) with a change of variable as
\[\varphi_{j}(z)=\tilde{\varphi}_{j}\Big{(}C_{j}^{1/2}z\Big{)}\ \ \text{with}\ \ \langle\tilde{\varphi}_{j}(z),\tilde{\varphi}_{j}(z^{\prime})\rangle_{H_{j} }=\mathbb{E}_{w\sim\tilde{\pi}_{j}}\Big{[}\rho(\langle z,w\rangle)\,\rho( \langle z^{\prime},w\rangle)\Big{]},\]
where \(\tilde{\pi}_{j}\) has an identity covariance. Rainbow network activations can thus be written:
\[\phi_{j}(x)=\tilde{\varphi}_{j}\,C_{j}^{1/2}\,\cdots\tilde{\varphi}_{1}\,C_{ 1}^{1/2}\,x. \tag{8}\]
Each square root \(C_{j}^{1/2}\) performs a linear dimensionality reduction of its input, while the white random feature maps \(\tilde{\varphi}_{j}\) compute high-dimensional non-linear embeddings. Such linear dimensionality reductions in-between kernel feature maps have been considered in previous works (Cho and Saul, 2009; Mairal, 2016; Bietti, 2019).
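A single rainbow layer of the factorized form (8) can be sketched as follows; the low-rank covariance, the Gaussian choice of white random features and the layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_out, rank = 128, 512, 16

# A low-rank weight covariance C_j = B B^T and its symmetric square root.
B = rng.standard_normal((d_in, rank)) / np.sqrt(rank)
C = B @ B.T
eigval, eigvec = np.linalg.eigh(C)
C_sqrt = (eigvec * np.sqrt(np.clip(eigval, 0, None))) @ eigvec.T

def rainbow_layer(z, C_sqrt, d_out, rng):
    """varphi_j(z) = rho(W_tilde C_j^{1/2} z): a linear dimension reduction by C_j^{1/2}
    followed by a high-dimensional embedding with white random features (Gaussian here)."""
    W_tilde = rng.standard_normal((d_out, C_sqrt.shape[0]))
    return np.maximum(W_tilde @ (C_sqrt @ z), 0.0) / np.sqrt(d_out)

z = rng.standard_normal(d_in)
print(rainbow_layer(z, C_sqrt, d_out, rng).shape)   # (512,)
```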
Gaussian rainbow networks. The distributions \(\pi_{j}\) are entirely specified by their covariance \(C_{j}\) for Gaussian rainbow networks, where we then have
\[\pi_{j}=\mathcal{N}(0,C_{j}).\]
When the covariance \(C_{j}\) is not trace-class, \(\pi_{j}\) is a cylindrical measure as explained above. If \(\rho\) is a homogeneous non-linearity such as ReLU, one can derive from (7) (Cho and Saul, 2009) that Gaussian rainbow kernels can be written as a homogeneous dot-product kernel:
\[k_{j}(x,x^{\prime})=\|z_{j}(x)\|\,\|z_{j}(x^{\prime})\|\,\kappa\Bigg{(}\frac{ \langle z_{j}(x),z_{j}(x^{\prime})\rangle}{\|z_{j}(x)\|\,\|z_{j}(x^{\prime}) \|}\Bigg{)}\ \ \text{with}\ \ z_{j}(x)=C_{j}^{1/2}\phi_{j-1}(x), \tag{9}\]
where \(\kappa\) is a scalar function which depends on the non-linearity \(\rho\). The Gaussian rainbow kernels \(k_{j}\) and the rainbow RKHS \(\mathcal{H}_{J}\) only depend on the covariances \((C_{j})_{j\leq J}\). If \(C_{j}=\mathrm{Id}\) for each \(j\), then \(k_{j}\) remains a dot-product kernel because \(\langle z_{j}(x),z_{j}(x^{\prime})\rangle=\langle\phi_{j-1}(x),\phi_{j-1}(x^{ \prime})\rangle=k_{j-1}(x,x^{\prime})\). If the norms \(\|z_{j}(x)\|\) concentrate, we then obtain \(k_{j}(x,x^{\prime})=\kappa(\ldots\kappa(\langle x,x^{\prime}\rangle)\ldots)\)(Daniely et al., 2016). Depth is then useless, as \(k_{j}\) has the same expressivity as \(k_{1}\)(Bietti and Bach, 2021). When \(C_{j}\neq\mathrm{Id}\), Gaussian rainbow kernels \(k_{j}\) cannot be written as a cascade of elementary kernels, but their square roots \(\phi_{j}\) are a cascade of kernel feature maps \(\varphi_{\ell}=\tilde{\varphi}_{\ell}\,C_{\ell}^{1/2}\) for \(\ell\leq j\). The white random feature maps \(\tilde{\varphi}_{j}\) have simple expressions as they arise from the homogeneous dot-product kernel:
\[\langle\tilde{\varphi}_{j}(z),\tilde{\varphi}_{j}(z^{\prime})\rangle_{H_{j}}= \|z\|\,\|z^{\prime}\|\,\kappa\Bigg{(}\frac{\langle z,z^{\prime}\rangle}{\|z \|\,\|z^{\prime}\|}\Bigg{)}.\]
This dot-product kernel implies that \(\tilde{\varphi}_{j}\) is equivariant to rotations, which in turn yields symmetry properties of the network \(\phi_{j}\), as we will see in Section 2.3.
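When all covariances are identity, the Gaussian rainbow kernel can be evaluated by a recursion on the \(2\times 2\) Gram matrix of \((\phi_{j-1}(x),\phi_{j-1}(x^{\prime}))\), since eq. (9) then only depends on \(k_{j-1}(x,x)\), \(k_{j-1}(x,x^{\prime})\) and \(k_{j-1}(x^{\prime},x^{\prime})\). The sketch below assumes a ReLU non-linearity and standard Gaussian weights, for which \(\kappa\) has a closed form; it does not apply when \(C_{j}\neq\mathrm{Id}\).

```python
import numpy as np

def kappa_relu(t):
    # kappa for rho = ReLU and standard Gaussian weights:
    # E_{w ~ N(0, Id)}[relu(<z, w>) relu(<z', w>)] = ||z|| ||z'|| kappa(cos angle).
    t = np.clip(t, -1.0, 1.0)
    a = np.arccos(t)
    return (np.sin(a) + (np.pi - a) * np.cos(a)) / (2 * np.pi)

def deep_dot_product_kernel(x, xp, depth):
    """Gaussian rainbow kernel k_J(x, x') of eq. (9) when C_j = Id at every layer."""
    kxx, kxy, kyy = x @ x, x @ xp, xp @ xp
    for _ in range(depth):
        nx, ny = np.sqrt(kxx), np.sqrt(kyy)
        kxy = nx * ny * kappa_relu(kxy / (nx * ny))
        kxx, kyy = kxx * kappa_relu(1.0), kyy * kappa_relu(1.0)   # kappa(1) = 1/2
    return kxy

rng = np.random.default_rng(4)
x, xp = rng.standard_normal(16), rng.standard_normal(16)
print(deep_dot_product_kernel(x, xp, depth=3))
```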
Finite-width rainbow networks. We now go back to the general case of arbitrary weight distributions \(\pi_{j}\) and introduce finite-width rainbow networks, which are random approximations of infinite-width rainbow networks. Each weight matrix \(W_{j}\) is iteratively defined conditionally on the previous weight matrices \((W_{\ell})_{\ell<j}\). Its conditional probability distribution is defined in order to preserve the key induction property of the rainbow convergence of the activations \(\hat{\phi}_{j}\). Informally, it states that \(\hat{A}_{j}\,\hat{\phi}_{j}\approx\phi_{j}\) where \(\hat{A}_{j}\colon\mathbb{R}^{d_{j}}\to H_{j}\) is an alignment rotation. Finite-width rainbow networks impose sufficient conditions to obtain this convergence at all layers, as we will show below.
The first layer \(W_{1}\) is defined as in Section 2.1. Suppose that \(W_{1},\,\ldots,W_{j-1}\) have been defined. By induction, there exists an alignment rotation \(\hat{A}_{j-1}\colon\mathbb{R}^{d_{j-1}}\to H_{j-1}\), defined by
\[\hat{A}_{j-1}=\operatorname*{arg\,min}_{\hat{A}\in\mathcal{O}(d_{j-1})}\ \mathbb{E}_{x}\bigg{[}\|\hat{A}\,\hat{\phi}_{j-1}(x)-\phi_{j-1}(x)\|_{H_{j-1}}^{ 2}\bigg{]}, \tag{10}\]
such that \(\hat{A}_{j-1}\,\hat{\phi}_{j-1}\approx\phi_{j-1}\). We wish to define \(W_{j}\) so that \(\hat{A}_{j}\,\hat{\phi}_{j}\approx\phi_{j}\). This can be achieved with a random feature approximation of \(\varphi_{j}\) composed with the alignment \(\hat{A}_{j-1}\). Consider a (semi-infinite) random matrix \(W_{j}^{\prime}\) of \(d_{j}\) i.i.d. rows in \(H_{j-1}\) distributed according to \(\pi_{j}\):
\[W_{j}^{\prime}=(w_{ji}^{\prime})_{i\leq d_{j}}\ \ \text{with i.i.d.}\ \ w_{ji}^{ \prime}\sim\pi_{j}.\]
We then have \(\hat{A}_{j}\,\rho W_{j}^{\prime}\approx\varphi_{j}\) for a suitably defined \(\hat{A}_{j}\), as in Section 2.1. Combining the two approximations, we obtain
\[\hat{A}_{j}\,\rho W_{j}^{\prime}\,\hat{A}_{j-1}\,\hat{\phi}_{j-1}\approx \varphi_{j}\,\phi_{j-1}=\phi_{j}.\]
We thus define the weight at layer \(j\) with the aligned random features
\[W_{j}=W_{j}^{\prime}\,\hat{A}_{j-1}.\]
It is a random weight matrix of size \(d_{j}\times d_{j-1}\), with rotated rows \(\hat{A}_{j-1}^{\mathrm{T}}w_{ji}^{\prime}\) that are independent and identically distributed when conditioned on the previous layers \((W_{\ell})_{\ell<j}\). This inverse rotation of random weights cancels the rotation introduced by the random features at the previous layer, and implies a convergence of the random features cascade as we will prove below. This qualitative derivation motivates the following definition of finite-width rainbow networks.
**Definition 2**: _A finite-width rainbow network approximation of an infinite-width rainbow network with weight distributions \((\pi_{j})_{j\leq J}\) is defined for each \(j\leq J\) by a random weight matrix \(W_{j}\) of size \(d_{j}\times d_{j-1}\) which satisfies_
\[W_{j}=(\hat{A}_{j-1}^{T}w_{ji}^{\prime})_{i\leq d_{j}}\ \ \text{with i.i.d.}\ \ w_{ji}^{\prime}\sim\pi_{j}, \tag{11}\]
_where \(\hat{A}_{j-1}\) is the rotation defined in (10). The last layer weight vector is \(\hat{\theta}=\hat{A}_{J}^{\mathrm{T}}\theta\) where \(\theta\) is the last layer weight of the infinite-width rainbow network._
The random weights \(W_{j}\) of a finite-width rainbow network are defined as rotations and finite-dimensional projections of the \(d_{j}\) infinite-dimensional random vectors \(w^{\prime}_{ji}\), which are independent. The dependence on the previous layers \((W_{\ell})_{\ell<j}\) is captured by the rotation \(\hat{A}_{j-1}\). The rows of \(W_{j}\) are thus not independent, but they are independent when conditioned on \((W_{\ell})_{\ell<j}\).
The rotation and projection of the random weights (11) implies a similar rotation and projection on the moments of \(W_{j}\) conditionally on \((W_{\ell})_{\ell<j}\). In particular, the conditional covariance of \(W_{j}\) is thus
\[\hat{C}_{j}=\hat{A}_{j-1}^{\mathrm{T}}C_{j}\hat{A}_{j-1}. \tag{12}\]
\(W_{j}\) can then be factorized as the product of a white random feature matrix \(\tilde{W}_{j}\) with the covariance square root:
\[W_{j}=\tilde{W}_{j}\,\hat{C}_{j}^{1/2}\ \ \text{with i.i.d.}\ \ \tilde{w}_{ji}\ \text{ conditionally on }(W_{\ell})_{\ell<j}.\]
Note that the distribution of the white random features \(\tilde{w}_{ji}\) depends in general on \(\hat{A}_{j-1}\). However, for Gaussian rainbow networks with \(\pi_{j}=\mathcal{N}(0,C_{j})\), this dependence is limited to the covariance \(\hat{C}_{j}\) and \(\tilde{W}_{j}=G_{j}\) is a Gaussian white matrix with i.i.d. normal entries that are independent of the previous layer weights \((W_{\ell})_{\ell<j}\):
\[W_{j}=G_{j}\,\hat{C}_{j}^{1/2}\ \ \text{with i.i.d.}\ \ G_{jik}\sim\mathcal{N}(0,1). \tag{13}\]
Finite-width Gaussian rainbow networks are approximation models of deep networks that have been trained end-to-end by SGD on a supervised task. We will explain in Section 3 how each covariance \(C_{j}\) of the rainbow model can be estimated from the weights of one or several trained networks. The precision of a Gaussian rainbow model is evaluated by sampling new weights according to (13) and verifying that the resulting rainbow network has a similar performance as the original trained networks.
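The sampling procedure (13) can be sketched as follows: the covariance is estimated from a trained weight matrix as \(\hat{C}_{j}=d_{j}^{-1}W_{j}^{\mathrm{T}}W_{j}\) and new weights are drawn as \(G_{j}\hat{C}_{j}^{1/2}\). The toy "trained" weights and all sizes below are placeholders; in Section 3 the covariances are estimated from actually trained networks.

```python
import numpy as np

rng = np.random.default_rng(5)

def resample_gaussian_rainbow(W_trained, rng):
    """Estimate hat{C}_j = W_j^T W_j / d_j from a trained layer of shape (d_j, d_{j-1})
    and draw new weights W_j = G_j hat{C}_j^{1/2} with i.i.d. N(0, 1) entries in G_j."""
    d_j = W_trained.shape[0]
    C_hat = W_trained.T @ W_trained / d_j
    eigval, eigvec = np.linalg.eigh(C_hat)
    C_sqrt = (eigvec * np.sqrt(np.clip(eigval, 0, None))) @ eigvec.T
    G = rng.standard_normal(W_trained.shape)
    return G @ C_sqrt

# Toy "trained" weights with a colored covariance, then a rainbow resampling.
d_j, d_prev = 256, 64
color = np.diag(1.0 / np.arange(1, d_prev + 1))
W_trained = rng.standard_normal((d_j, d_prev)) @ color
W_new = resample_gaussian_rainbow(W_trained, rng)
# The resampled weights reproduce the empirical covariance up to sampling fluctuations.
print(np.linalg.norm(W_new.T @ W_new / d_j - W_trained.T @ W_trained / d_j))
```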
Convergence to infinite-width networks. The heuristic derivation used to motivate Definition 2 suggests that the weight rotation (11) guarantees the convergence of finite-width rainbow networks towards their infinite-width counterpart. This is proved by the next theorem, which builds on Theorem 1.
**Theorem 2**: _Assume that \(\mathbb{E}_{x}[\left\lVert x\right\rVert^{2}]<+\infty\) and \(\rho\) is Lipschitz. Let \((\phi_{j})_{j\leq J}\) be the activation layer of an infinite-width rainbow network with distributions \((\pi_{j})_{j\leq J}\) with bounded second- and fourth-order moments, and an output \(f(x)\). Let \((\hat{\phi}_{j})_{j\leq J}\) be the activation layers of sizes \((d_{j})_{j\leq J}\) of a finite-width rainbow network approximation, with an output \(\hat{f}(x)\). Let \(k_{j}(x,x^{\prime})=\langle\phi_{j}(x),\phi_{j}(x^{\prime})\rangle\) and \(\hat{k}_{j}(x,x^{\prime})=\langle\hat{\phi}_{j}(x),\hat{\phi}_{j}(x^{\prime})\rangle\). Suppose that the sorted eigenvalues of \(\mathbb{E}_{x}[\phi_{j}(x)\,\phi_{j}(x)^{\mathrm{T}}]\) satisfy \(\lambda_{j,m}=O(m^{-\alpha_{j}})\) with \(\alpha_{j}>1\). Then there exists \(c>0\) which does not depend upon \((d_{j})_{j\leq J}\) such that_
\[\mathbb{E}_{W_{1},\ldots,W_{j},x,x^{\prime}}\Big{[}\left|\hat{k}_ {j}(x,x^{\prime})-k_{j}(x,x^{\prime})\right|^{2}\Big{]} \leq c\,\Big{(}\varepsilon_{j-1}+d_{j}^{-1/2}\Big{)}^{2}\] \[\mathbb{E}_{W_{1},\ldots,W_{j},x}\Big{[}\left\lVert\hat{A}_{j}\, \hat{\phi}_{j}(x)-\phi_{j}(x)\right\rVert_{H_{j}}^{2}\Big{]} \leq c\,\varepsilon_{j}^{2}\] \[\mathbb{E}_{W_{1},\ldots,W_{J},x}\Big{[}\left|\hat{f}(x)-f(x) \right|^{2}\Big{]} \leq c\,\|f\|_{\mathcal{H}_{J}}^{2}\,\varepsilon_{J}^{2},\]
_where_
\[\varepsilon_{j}=\sum_{\ell=1}^{j}d_{\ell}^{-\eta_{\ell}/2}\ \ \text{with}\ \ \eta_{\ell}=\frac{\alpha_{\ell}-1}{2(2\alpha_{\ell}-1)}>0.\]
The proof is given in Appendix B. It applies iteratively Theorem 1 at each layer. As in Theorem 1, the constant \(c\) is explicit and depends polynomially on the constants involved in the hypotheses. For Gaussian weight distributions \(\pi_{j}=\mathcal{N}(0,C_{j})\), the theorem only requires that \(\|C_{j}\|_{\infty}\) is finite for each \(j\leq J\), where \(\left\|\cdot\right\|_{\infty}\) is the operator norm (i.e., the largest singular value).
This theorem proves that at each layer, a finite-width rainbow network has an empirical kernel \(\hat{k}_{j}\) which converges in mean-square to the deterministic kernel \(k_{j}\) of the infinite-width network, when all widths \(d_{\ell}\) grow to infinity. Similarly, after alignment, each activation layer \(\hat{\phi}_{j}\) also converges to the activation layer \(\phi_{j}\) of the infinite-width network. Finally, the finite-width rainbow output \(\hat{f}\) converges to a function \(f\) in the RKHS \(\mathcal{H}_{J}\) of the infinite-width network. This demonstrates that all finite-width rainbow networks implement the same deterministic function when they are wide enough. Note that any relative scaling between the layer widths is allowed, as the error decomposes as a sum over layer contributions: each layer converges independently. In particular, this includes the proportional case when the widths are defined as \(d_{j}=s\,d_{j}^{0}\) and the scaling factor \(s\) grows to infinity.
The asymptotic existence of rotations between any two trained networks has implications for the geometry of the loss landscape: if the weight distributions \(\pi_{j}\) are unimodal, which is the case for Gaussian distributions, alignment rotations can be used to build continuous paths in parameter space between the two rainbow network weights without encountering loss barriers (Freeman and Bruna, 2017; Draxler et al., 2018; Garipov et al., 2018). This could not be done with permutations (Entezari et al., 2022; Benzing et al., 2022; Ainsworth et al., 2022), which are discrete symmetries. It proves that under the rainbow assumptions, the loss landscape of wide-enough networks has a single connected basin, as opposed to many isolated ones.
Theorem 2 is a law-of-large-numbers result, which is different but complementary to the central-limit neural network Gaussian process convergence of Neal (1996); Williams (1996); Lee et al. (2018); Matthews et al. (2018). These works state that at initialization, random finite-dimensional projections of the activations \(\hat{\phi}_{j}\) converge to a random Gaussian process described by a kernel. In contrast, we show in a wider setting that the activations \(\hat{\phi}_{j}\) converge to a deterministic feature vector \(\phi_{j}\) described by a more general kernel, up to a random rotation. Note that this requires no assumptions of Gaussianity on the weights or the activations. The convergence of the kernels is similar to the results of Daniely et al. (2016), but here generalized to non-compositional kernels obtained with arbitrary weight distributions \(\pi_{j}\).
Theorem 2 can be considered as a multi-layer but static extension of the mean-field limit of Chizat and Bach (2018); Mei et al. (2018); Rotskoff and Vanden-Eijnden (2018); Sirignano and Spiliopoulos (2020). The limit is the infinite-width rainbow networks of Definition 1. It differs from other multi-layer extensions (Sirignano and Spiliopoulos, 2022; E and Wojtowytsch, 2020; Nguyen and Pham, 2020; Chen et al., 2022; Yang and Hu, 2021) because Definition 2 includes the alignment rotations \(\hat{A}_{j}\). We shall not model the
optimization dynamics of rainbow networks when trained with SGD, but we will make several empirical observations in Section 3.
Finally, Theorem 2 shows that the two assumptions of Definition 2, namely that layer dependencies are reduced to alignment rotations and that neuron weights are conditionally i.i.d. at each layer, imply the convergence up to rotations of network activations at each layer. We will verify numerically this convergence in Section 3 for several network architectures on image classification tasks, corroborating the results of Raghu et al. (2017) and Kornblith et al. (2019). It does not mean that the assumptions of Definition 2 are valid, and verifying them is challenging in high-dimensions beyond the Gaussian case where the weight distributions \(\pi_{j}\) are not known. We however note that the rainbow assumptions are satisfied at initialization with \(\pi_{j}=\mathcal{N}(0,\mathrm{Id})\), as eq. (12) implies that \(\hat{C}_{j}=\mathrm{Id}\) and thus that the weight matrices \(W_{j}=G_{j}\) are independent. Theorem 2 therefore applies at initialization. It is an open problem to show whether the existence of alignment rotations \(\hat{A}_{j}\) is preserved during training by SGD, or whether dependencies between layer weights are indeed reduced to these rotations. Regarding (conditional) independence between neuron weights, Sirignano and Spiliopoulos (2020) show that in one-hidden-layer networks, neuron weights remain independent at non-zero but finite training times in the infinite-width limit. In contrast, a result of Rotskoff and Vanden-Eijnden (2018) suggests that this is no longer true at diverging training times, as SGD leads to an approximation of the target function \(f\) with a better rate than Monte-Carlo. Neuron weights at a given layer remain however (conditionally) exchangeable due to the permutation equivariance of the initialization and SGD, and therefore have the same marginal distribution. Theorem 2 can be extended to dependent neuron weights \(w^{\prime}_{ji}\), e.g., with the more general assumption that their empirical distribution \(d_{j}^{-1}\sum_{i=1}^{d_{j}}\delta_{w^{\prime}_{ji}}\) converges weakly to \(\pi_{j}\) when the width \(d_{j}\) increases.
### Symmetries and convolutional rainbow networks
The previous sections have defined fully-connected rainbow networks. In applications, prior information on the learning problem is often available. Practitioners then design more constrained architectures which implement inductive biases. Convolutional networks are important examples, which enforce two fundamental properties: equivariance to translations, achieved with weight sharing, and local receptive fields, achieved with small filter supports (LeCun et al., 1989; LeCun and Bengio, 1995). We first explain how equivariance to general groups may be achieved in rainbow networks. We then generalize rainbow networks to convolutional architectures.
Equivariant rainbow networks.Prior information may be available in the form of a symmetry group under which the desired output is invariant. For instance, translating an image may not change its class. We now explain how to enforce symmetry properties in rainbow networks by imposing these symmetries on the weight distributions \(\pi_{j}\) rather than on the values of individual neuron weights \(w_{ji}\). For Gaussian rainbow networks, we shall see that it is sufficient to impose that the desired symmetries commute with the weight covariances \(C_{j}\).
Formally, let us consider \(G\) a subgroup of the orthogonal group \(O(d_{0})\), under whose action the target function \(f^{\star}\) is invariant: \(f^{\star}(gx)=f^{\star}(x)\) for all \(g\in G\). Such invariance
is generally achieved progressively through the network layers. In a convolutional network, translation invariance is built up by successive pooling operations. The output \(f(x)\) is invariant but intermediate activations \(\phi_{j}(x)\) are equivariant to the group action. Equivariance is more general than invariance. The activation map \(\phi\) is equivariant if there is a representation \(\sigma\) of \(G\) such that \(\phi(gx)=\sigma(g)\phi(x)\), where \(\sigma(g)\) is an invertible linear operator such that \(\sigma(gg^{\prime})=\sigma(g)\sigma(g^{\prime})\) for all \(g,g^{\prime}\in G\). An invariant function \(f(x)=\langle\theta,\phi(x)\rangle\) is obtained from an equivariant activation map \(\phi\) with a fixed point \(\theta\) of the representation \(\sigma\). Indeed, if \(\sigma(g)\theta=\theta\) for all \(g\in G\), then \(f(gx)=f(x)\).
We say that \(\sigma\) is an orthogonal representation of \(G\) if \(\sigma(g)\) is an orthogonal operator for all \(g\). When \(\sigma\) is orthogonal, we say that \(\phi\) is orthogonally equivariant. We also say that a distribution \(\pi\) is invariant under the action of \(\sigma\) if \(\sigma(g)^{\mathsf{T}}w\sim\pi\) for all \(g\in G\), where \(w\sim\pi\). We say that a linear operator \(C\) commutes with \(\sigma\) if it commutes with \(\sigma(g)\) for all \(g\in G\). Finally, a kernel \(k\) is invariant to the action of \(G\) if \(k(gx,gx^{\prime})=k(x,x^{\prime})\). The following theorem proves that rainbow kernels are invariant to a group action if each weight distribution \(\pi_{j}\) is invariant to the group representation on the activation layer \(\phi_{j-1}\), which inductively defines orthogonal representations \(\sigma_{j}\) at each layer.
**Theorem 3**: _Let \(G\) be a subgroup of the orthogonal group \(O(d_{0})\). If all weight distribution \((\pi_{j})_{j\leq J}\) are invariant to the inductively defined orthogonal representation of \(G\) on their input activations, then activations \((\phi_{j})_{j\leq J}\) are orthogonally equivariant to the action of \(G\), and the rainbow kernels \((k_{j})_{j\leq J}\) are invariant to the action of \(G\). For Gaussian rainbow networks, this is equivalent to imposing that all weight covariances \((C_{j})_{j\leq J}\) commute with the orthogonal representation of \(G\) on their input activations._
The proof is in Appendix C. The result is proved by induction. If \(\phi_{j}\) is orthogonally equivariant and \(\pi_{j+1}\) is invariant to its representation \(\sigma_{j}\), then the next-layer activations are equivariant. Indeed, choosing the feature vector \(\varphi_{j+1}(z)=\big{(}\rho(\langle z,w\rangle)\big{)}_{w}\) in \(H_{j+1}=L^{2}(\pi_{j+1})\), we have for \(w\sim\pi_{j+1}\)
\[\phi_{j+1}(gx)=\Big{(}\rho(\langle\sigma_{j}(g)\phi_{j}(x),w\rangle)\Big{)}_{w}=\Big{(}\rho(\langle\phi_{j}(x),\sigma_{j}(g)^{\mathrm{T}}w\rangle)\Big{)}_{w}=\sigma_{j+1}(g)\,\phi_{j+1}(x),\]
where \(\sigma_{j+1}(g)\) is the change of variables \(w\mapsto\sigma_{j}(g)^{\mathrm{T}}w\). It is orthogonal on \(L^{2}(\pi_{j+1})\) because \(\pi_{j+1}\) is invariant under the action of \(\sigma_{j}\), which defines an orthogonal representation \(\sigma_{j+1}\) on \(\phi_{j+1}\). Note that any distribution \(\pi_{j}\) which is invariant to an orthogonal representation \(\sigma_{j}\) necessarily has a covariance \(C_{j}\) which commutes with \(\sigma_{j}\). The converse is true when \(\pi_{j}\) is Gaussian, which shows that Gaussian rainbow networks have a maximal number of symmetries among rainbow networks with weight covariances \(C_{j}\).
Together with Theorem 2, Theorem 3 implies that finite-width rainbow networks can implement functions \(\hat{f}\) which are approximately invariant, in the sense that the mean-square error \(\mathbb{E}_{W_{1},\ldots,W_{J},x}[|\hat{f}(gx)-\hat{f}(x)|^{2}]\) vanishes when the layer widths grow to infinity, with the same convergence rate as in Theorem 2. The activations \(\hat{\phi}_{j}\) are approximately equivariant in a similar sense. This gives a relatively easy procedure to define neural networks having predefined symmetries. The usual approach is to impose that each weight matrix \(W_{j}\) is permutation-equivariant to the representation of the group action on each activation layer (Cohen and Welling, 2016; Kondor and Trivedi, 2018). This means that \(W_{j}\) is a group convolution operator and hence that the rows of \(W_{j}\) are invariant by this group action. This property requires weight-sharing or synchronization between weights of different neurons, which has been criticized as biologically implausible (Bartunov et al., 2018; Ott et al.,
2020; Pogodin et al., 2021). On the contrary, rainbow networks implement symmetries by imposing that the neuron weights are independent samples of a distribution which is invariant under the group action. The synchronization is thus only at a global, statistical level. It also provides representations with the orthogonal group, which is much richer than the permutation group, and hence increases expressivity. It comes however at the cost of an approximate equivariance for finite layer widths.
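A one-layer illustration of Theorem 3: if the weight covariance commutes with cyclic shifts (a circulant covariance), then the Gaussian random feature kernel (1) is invariant under simultaneous shifts of its two arguments. The circulant covariance below is a hypothetical choice, and the kernel is estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(6)
d0, n_w = 32, 200_000

# A circulant covariance (commutes with cyclic shifts), built from a periodic
# Gaussian covariance function; the small ridge keeps it positive definite.
idx = np.arange(d0)
dist = np.minimum((idx[:, None] - idx) % d0, (idx - idx[:, None]) % d0).astype(float)
C = np.exp(-dist ** 2 / (2 * 2.0 ** 2)) + 1e-6 * np.eye(d0)
L = np.linalg.cholesky(C)

def kernel_mc(x, xp):
    # Monte-Carlo estimate of E_{w ~ N(0, C)}[relu(<x, w>) relu(<x', w>)].
    W = rng.standard_normal((n_w, d0)) @ L.T
    return np.mean(np.maximum(W @ x, 0.0) * np.maximum(W @ xp, 0.0))

x, xp = rng.standard_normal(d0), rng.standard_normal(d0)
print(kernel_mc(x, xp), kernel_mc(np.roll(x, 5), np.roll(xp, 5)))   # approximately equal
```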
Convolutional rainbow networks. Translation-equivariance could be achieved in a fully-connected architecture by imposing stationary weight distributions \(\pi_{j}\). For Gaussian rainbow networks, this means that weight covariances \(C_{j}\) commute with translations, and are thus convolution operators. However, the weights then have a stationary Gaussian distribution and therefore cannot have a localized support. This localization has to be enforced with the architecture, by constraining the connectivity of the network. We generalize the rainbow construction to convolutional architectures, without necessarily imposing that the weights are Gaussian. It is achieved by a factorization of the weight layers, so that identical random feature embeddings are computed for each patch of the input. As a result, all previous theoretical results carry over to the convolutional setting.
In convolutional networks, each \(W_{j}\) is a convolution operator which enforces both translation equivariance and locality. Typical architectures impose that convolutional filters have a predefined support with an output which may be subsampled. This architecture prior can be written as a factorization of the weight matrix:
\[W_{j}=L_{j}\,P_{j},\]
where \(P_{j}\) is a prior convolutional operator which only acts along space and is replicated over channels (also known as depthwise convolution), while \(L_{j}\) is a learned pointwise (or \(1\times 1\)) convolution which only acts along channels and is replicated over space. This factorization is always possible, and should not be confused with depthwise-separable convolutions (Sifre and Mallat, 2013; Chollet, 2017).
Let us consider a convolutional operator \(W_{j}\) having a spatial support of size \(s_{j}^{2}\), with \(d_{j-1}\) input channels and \(d_{j}\) output channels. The prior operator \(P_{j}\) then extracts \(d_{j-1}\) patches of size \(s_{j}\times s_{j}\) at each spatial location and reshapes them as a channel vector of size \(d_{j-1}^{\prime}=d_{j-1}s_{j}^{2}\). \(P_{j}\) is fixed during training and represents the architectural constraints imposed by the convolutional layer. The learned operator \(L_{j}\) is then a \(1\times 1\) convolutional operator, applied at each spatial location across \(d_{j-1}^{\prime}\) input channels to compute \(d_{j}\) output channels. This factorization reshapes the convolution kernel of \(W_{j}\) of size \(d_{j}\times d_{j-1}\times s_{j}\times s_{j}\) into a \(1\times 1\) convolution \(L_{j}\) with a kernel of size \(d_{j}\times d_{j-1}^{\prime}\times 1\times 1\). \(L_{j}\) can then be thought of as a fully-connected operator over channels that is applied at every spatial location.
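The factorization \(W_{j}=L_{j}P_{j}\) can be made explicit with a patch extraction followed by a \(1\times 1\) convolution. The numpy sketch below assumes an odd filter size and "same" zero padding; in a deep learning framework, \(P_{j}\) corresponds to an unfold (patch-extraction) operation and \(L_{j}\) to a pointwise convolution.

```python
import numpy as np

rng = np.random.default_rng(7)
d_prev, d_j, s, H, W = 3, 8, 3, 16, 16             # channels in/out, filter size, image size

x = rng.standard_normal((d_prev, H, W))
kernel = rng.standard_normal((d_j, d_prev, s, s))  # a standard convolution kernel

def extract_patches(x, s):
    """P_j: extract the s x s patch around every location ('same' zero padding, odd s)
    and stack it into d_prev * s^2 channels."""
    d, H, W = x.shape
    p = s // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    patches = np.empty((d * s * s, H, W))
    for i in range(s):
        for j in range(s):
            patches[(i * s + j) * d:(i * s + j + 1) * d] = xp[:, i:i + H, j:j + W]
    return patches

# L_j: the same kernel reshaped into a 1 x 1 convolution over d_prev * s^2 channels.
L = kernel.transpose(0, 2, 3, 1).reshape(d_j, s * s * d_prev)
y = np.einsum('ck,khw->chw', L, extract_patches(x, s))
print(y.shape)   # (8, 16, 16): matches a standard 'same' cross-correlation of x with `kernel`
```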
The choice of the prior operator \(P_{j}\) directly influences the learned operator \(L_{j}\) and therefore the weight distributions \(\pi_{j}\). \(P_{j}\) may thus be designed to achieve certain desired properties on \(\pi_{j}\). For instance, the operator \(P_{j}\) may also specify predefined filters, such as wavelets in learned scattering networks (Zarka et al., 2021; Guth et al., 2022). In a learned scattering network, \(P_{j}\) computes spatial convolutions and subsamplings, with \(q\) wavelet filters having different orientations and frequency selectivity. The learned convolution \(L_{j}\) then has \(d_{j-1}^{\prime}=d_{j-1}q\) input channels. This is further detailed in Appendix D, which explains that one can reduce the size of \(L_{j}\) by imposing that it commutes with \(P_{j}\), which amounts to factorizing \(W_{j}=P_{j}\,L_{j}\) instead.
The rainbow construction of Section 2.2 has a straightforward extension to the convolutional case, with a few adaptations. The activations layers \(\hat{\phi}_{j-1}\) should be replaced with \(P_{j}\hat{\phi}_{j-1}\) and \(W_{j}\) with \(L_{j}\), where it is understood that it represents a fully-connected matrix acting along channels and replicated pointwise across space. Similarly, the weight covariances \(C_{j}\) and its square roots \(C_{j}^{1/2}\) are \(1\times 1\) convolutional operators which act along the channels of \(P_{j}\hat{\phi}_{j-1}\), or equivalently are applied over patches of \(\hat{\phi}_{j-1}\). Finally, the alignments \(\hat{A}_{j-1}\) are \(1\times 1\) convolutions which therefore commute with \(P_{j}\) as they act along different axes. One can thus still define \(\hat{C}_{j}=\hat{A}_{j-1}^{\mathrm{T}}C_{j}\hat{A}_{j-1}\). Convolutional rainbow networks also satisfy Theorems 1 to 3 with appropriate modifications.
We note that the expression of the rainbow kernel is different for convolutional architectures. Equation (7) becomes
\[k_{j}(x,x^{\prime})=\sum_{u}\mathbb{E}_{w\sim\pi_{j}}\Big{[}\rho(\langle P_{j} \phi_{j-1}(x)[u],w\rangle)\rho(\langle P_{j}\phi_{j-1}(x^{\prime})[u],w\rangle )\Big{]},\]
where \(P_{j}\phi_{j-1}(x)[u]\) is a patch of \(\phi_{j-1}(x)\) centered at \(u\) and whose spatial size is determined by \(P_{j}\). In the particular case where \(\pi_{j}\) is Gaussian with a covariance \(C_{j}\), the dot-product kernel in eq. (9) becomes
\[k_{j}(x,x^{\prime})=\sum_{u}\|z_{u}(x)\|\,\|z_{u}(x^{\prime})\|\,\kappa\Bigg{(} \frac{\langle z_{u}(x),z_{u}(x^{\prime})\rangle}{\|z_{u}(x)\|\,\|z_{u}(x^{ \prime})\|}\Bigg{)}\ \ \text{with}\ \ z_{u}(x)=C_{j}^{1/2}P_{j}\phi_{j-1}(x)[u].\]
The sum on the spatial location \(u\) averages the local dot-product kernel values and defines a translation-invariant kernel. Observe that it differs from the fully-connected rainbow kernel (9) with weight covariances \(C_{j}^{\prime}=P_{j}^{\mathrm{T}}C_{j}P_{j}\), which is a global dot-product kernel with a stationary covariance. Indeed, the corresponding fully-connected rainbow networks have filters with global spatial support, while convolutional rainbow networks have localized filters. The covariance structure of depthwise convolutional filters has been investigated by Trockman et al. (2023).
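As an illustration, the convolutional rainbow kernel can be estimated by Monte Carlo: sample random features \(w\sim\mathcal{N}(0,C_{j})\), apply the non-linearity to the projections of every patch, average over features and sum over locations. The NumPy sketch below assumes a ReLU non-linearity and represents the patches \(P_{j}\phi_{j-1}(x)[u]\) as plain arrays; it is not meant as an exact reproduction of our pipeline.

```python
import numpy as np

def rainbow_kernel_mc(patches_x, patches_xp, B, n_features=20000, rng=None):
    """Monte Carlo estimate of the convolutional rainbow kernel
        k_j(x, x') = sum_u E_w[ rho(<P_j phi(x)[u], w>) rho(<P_j phi(x')[u], w>) ],
    with w ~ N(0, C_j), C_j = B B^T, and rho a ReLU (an assumption of this sketch).
    patches_x, patches_xp: (n_locations, dim) arrays of patches P_j phi_{j-1}(.)[u]."""
    rng = np.random.default_rng(rng)
    g = rng.standard_normal((n_features, B.shape[1]))
    w = g @ B.T                                   # rows are samples w_i ~ N(0, C_j)
    ax = np.maximum(patches_x @ w.T, 0.0)         # rho(<z_u(x), w_i>)
    axp = np.maximum(patches_xp @ w.T, 0.0)
    return np.sum(np.mean(ax * axp, axis=1))      # average over w, sum over locations u

# Toy usage with random patches and a random covariance factor
rng = np.random.default_rng(0)
dim, n_loc = 12, 36
B = rng.standard_normal((dim, dim)) / np.sqrt(dim)
px, pxp = rng.standard_normal((n_loc, dim)), rng.standard_normal((n_loc, dim))
print(rainbow_kernel_mc(px, pxp, B, rng=1))
```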
The architecture plays an important role by modifying the kernel and hence the RKHS \(\mathcal{H}_{J}\) of the output (Daniely et al., 2016). Hierarchical convolutional kernels have been studied by Mairal et al. (2014); Anselmi et al. (2015); Bietti (2019). Bietti and Mairal (2019) have proved that functions in \(\mathcal{H}_{J}\) are stable to the action of diffeomorphisms (Mallat, 2012) when \(P_{j}\) also includes a local averaging before the patch extraction. However, the generalization properties of such kernels are not well understood, even when \(C_{j}=\mathrm{Id}\). In that case, deep kernels with \(J>1\) hidden layers are not equivalent to shallow kernels with \(J=1\) (Bietti and Bach, 2021).
## 3 Numerical results
In this section, we validate the rainbow model on several network architectures trained on image classification tasks and make several observations on the properties of the learned weight covariances \(C_{j}\). As our first main result, we partially validate the rainbow model by showing that network activations converge up to rotations when the layer widths increase (Section 3.1). We then show in Section 3.2 that the empirical weight covariances \(\hat{C}_{j}\) converge up to rotations when the layer widths increase. Furthermore, the weight covariances are
typically low-rank and can be partially specified from the input activation covariances. Our second main result, in Section 3.3, is that the Gaussian rainbow model applies to scattering networks trained on the CIFAR-10 dataset. Generating new weights from the estimated covariances \(C_{j}\) leads to similar performance to SGD training when the network width is large enough. We further show that SGD only updates the weight covariance during training while preserving the white Gaussian initialization. This suggests a possible explanation for the Gaussian rainbow model, though the Gaussian assumption seems too strong to hold for more complex learning tasks at the network widths used in practice.
### Convergence of activations in the infinite-width limit
We show that trained networks with different initializations converge to the same function when their width increases. More precisely, we show the stronger property that at each layer, their activations converge after alignment to a fixed deterministic limit when the width increases. Trained networks thus share the convergence properties of rainbow networks (Theorem 2). Section 3.3 will further show that scattering networks trained on CIFAR-10 indeed approximate Gaussian rainbow networks. In this case, the limit function is thus in the Gaussian rainbow RKHS (Definition 1).
Architectures and tasks.In this paper, we consider two architectures, learned scattering networks (Zarka et al., 2021; Guth et al., 2022) and ResNets (He et al., 2016), trained on two image classification datasets, CIFAR-10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015).
Scattering networks have fixed spatial filters, so that their learned weights only operate across channels. This structure reduces the learning problem to channel matrices and plays a major role in the (conditional) Gaussianity of the learned weights, as we will see. The networks have \(J\) hidden layers, with \(J=7\) on CIFAR-10 and \(J=10\) on ImageNet. Each layer can be written \(W_{j}=L_{j}\,P_{j}\) where \(L_{j}\) is a learned \(1\times 1\) convolution, and \(P_{j}\) is a convolution with predefined complex wavelets. \(P_{j}\) convolves each of its \(d_{j-1}\) input channels with 5 different wavelet filters (1 low-frequency filter and 4 oriented high-frequency wavelets), thus generating \(d^{\prime}_{j-1}=5d_{j-1}\) channels. We shall still denote \(L_{j}\) with \(W_{j}\) to keep the notations of Section 2.2. The non-linearity \(\rho\) is a complex modulus with skip-connection, followed by a standardization (as computed by a batch-normalization). This architecture is borrowed from Guth et al. (2022) and is further detailed in Appendix D.
Our scattering network reaches an accuracy of 92% on the CIFAR-10 test set. As a comparison, ResNet-20 (He et al., 2016) achieves 91% accuracy, while most linear classification methods based on hierarchical convolutional kernels such as the scattering transform or the neural tangent kernel reach less than 83% accuracy (Mairal et al., 2014; Oyallon and Mallat, 2015; Li et al., 2019). On the ImageNet dataset (Russakovsky et al., 2015), learned scattering networks achieve 89% top-5 accuracy (Zarka et al., 2021; Guth et al., 2022), which is also the performance of ResNet-18 with single-crop testing.
We have made minor adjustments to the ResNet architecture for ease of analysis such as removing bias parameters (at no cost in performance), as explained in Appendix D. It can still be written \(W_{j}=L_{j}\,P_{j}\) where \(P_{j}\) is a patch extraction operator as explained in Section 2.3, and the non-linearity \(\rho\) is a ReLU.
Convergence of activations.We train several networks with a range of widths by simultaneously scaling the widths of all layers with a multiplicative factor \(s\) varying over a range of \(2^{6}=64\). We show that their activations \(\hat{\phi}_{j}\) converge after alignment to a fixed deterministic limit \(\phi_{j}\) when the width increases. The feature map \(\phi_{j}\) is approximated with the activations of a large network with \(s=2^{3}\).
We begin by illustrating the behavior of activation spectra as a function of our width-scaling parameter \(s\), for seven-hidden-layer trained scattering networks on CIFAR-10. In the left panel of Figure 2, we show how activation spectra vary as a function of \(s\) for the layer \(j=4\), which has a behavior representative of all other layers. The spectra are obtained by doing a PCA of the activations \(\hat{\phi}_{j}(x)\), which corresponds to a KPCA of the input \(x\) with respect to the empirical kernel \(\hat{k}_{j}\). The \(\hat{\phi}_{j}\) covariance spectra for networks of various widths overlap at lower KPCA ranks, suggesting well-estimated components, while the variance then decays rapidly at higher ranks. Wider networks thus estimate a larger number of principal components of the feature vector \(\phi_{j}\). For the first layer \(j=1\), this recovers the random feature KPCA results of Sriperumbudur and Sterge (2022), but this convergence is observed at all layers. The overall trend as a function of \(s\) illustrates the infinite-width convergence. We also note that, as the width increases, the activation spectrum becomes closer to a power-law distribution with a slope of \(-1\). The right panel of the figure shows that this type of decay with KPCA rank \(m\) is observed at all layers of the infinite-width network \((\phi_{j})_{j\leq J}\). The power-law spectral properties of random feature activations have been studied theoretically by Scetbon and Harchaoui (2021), and in connection with the scaling laws observed in large language models (Kaplan et al., 2020) by Maloney et al. (2022). Note that here we do not scale the dataset size nor training hyperparameters such as the learning rate or batch size with the network width, and a different experimental setup would likely influence the infinite-width limit (Yang et al., 2022; Hoffmann et al., 2022).
Figure 2: Convergence of spectra of activations \(\hat{\phi}_{j}\) of finite-width trained scattering networks towards the feature vector \(\phi_{j}\). The figure shows the covariance spectra of activations \(\hat{\phi}_{j}\) for a given layer \(j=4\) and various width scaling \(s\) (_left_) and of the feature vector \(\phi_{j}\) for the seven hidden layers \(j\in\{1,\dots,7\}\) (_right_). The covariance spectrum is a power law of index close to \(-1\).
We now directly measure the convergence of activations by evaluating the mean-square distance after alignment \(\mathbb{E}_{x}[\left\|\hat{A}_{j}\,\hat{\phi}_{j}(x)-\phi_{j}(x)\right\|^{2}]\). The left panel of Figure 3 shows that it does indeed decrease when the network width increases, for all layers \(j\). Despite the theoretical convergence rate of Theorem 2 vanishing when the activation spectrum exponent \(\alpha_{j}\) approaches 1, in practice we still observe convergence. Alignment rotations \(\hat{A}_{j}\) are computed on the train set while the mean-square distance is computed on the test set, so this decrease is not a result of overfitting. It demonstrates that scattering networks \(\hat{\phi}_{j}\) approximate the same deterministic network \(\phi_{j}\) no matter their initialization or width when it is large enough. The right panel of the figure evaluates this same convergence on a ResNet-18 trained on ImageNet. The mean-square distance after alignment decreases for most layers when the width increases. We note that the rate of decrease slows down for the last few layers. For these layers, the relative error after alignment is of the order of unity, indicating that the convergence is not observed at the largest width considered here. The overall trend however suggests that further increasing the width would reduce the error after alignment. The observations that networks trained from different initializations have similar activations had already been made by Raghu et al. (2017). Kornblith et al. (2019) showed that similarity increases with width, but with a weaker similarity measure. Rainbow networks, which we will show can approximate scattering networks, explain the source of these observations as a consequence of the law of large numbers applied to the random weight matrices with conditionally i.i.d. rows.
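The alignment used throughout this section is an orthogonal Procrustes problem: the rotation \(\hat{A}_{j}\) is obtained from an SVD of the cross-covariance between the two sets of activations, as in eq. (4). The following NumPy sketch computes this rotation and the relative mean squared error reported in Figure 3, assuming activations are stored as (samples \(\times\) channels) matrices; it is an illustrative sketch rather than our exact implementation.

```python
import numpy as np

def alignment_rotation(phi_hat, phi):
    """Orthogonal alignment A minimizing E_x || A phi_hat(x) - phi(x) ||^2.
    phi_hat: (n, d_hat) and phi: (n, d) activations of two networks on the same inputs."""
    cross = phi.T @ phi_hat / phi.shape[0]       # E_x[phi(x) phi_hat(x)^T], shape (d, d_hat)
    U, _, Vt = np.linalg.svd(cross, full_matrices=False)
    return U @ Vt                                # (d, d_hat), semi-orthogonal

def relative_alignment_error(phi_hat, phi):
    A = alignment_rotation(phi_hat, phi)
    err = np.mean(np.sum((phi_hat @ A.T - phi) ** 2, axis=1))
    return err / np.mean(np.sum(phi ** 2, axis=1))

# Sanity checks: a rotated copy of phi is perfectly aligned, a noisy one almost perfectly
rng = np.random.default_rng(0)
phi = rng.standard_normal((1000, 64))
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(relative_alignment_error(phi @ Q.T, phi))                                          # ~0
print(relative_alignment_error(phi @ Q.T + 0.1 * rng.standard_normal(phi.shape), phi))   # ~0.01
```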
### Properties of learned weight covariances
We have established the convergence (up to rotations) of the activations \(\hat{\phi}_{j}\) in the infinite-width limit. Under the rainbow model, the weight matrices \(W_{j}\) are random and thus cannot converge. However, they define estimates \(\tilde{C}_{j}\) of the infinite-dimensional weight covariances \(C_{j}\).
Figure 3: Convergence of activations \(\hat{\phi}_{j}\) of finite-width networks towards the corresponding feature vector \(\phi_{j}\), for scattering networks trained on CIFAR-10 (_left_) and ResNet trained on ImageNet (_right_). Both panels show the relative mean squared error \(\mathbb{E}_{x}[\left\|\hat{A}_{j}\,\hat{\phi}_{j}(x)-\phi_{j}(x)\right\|^{2}]/ \mathbb{E}_{x}[\left\|\phi_{j}(x)\right\|^{2}]\) between aligned activations \(\hat{A}_{j}\,\hat{\phi}_{j}\) and the feature vector \(\phi_{j}\). The error decreases as a function of the width scaling \(s\) for all layers for the scattering network, and all but the last few layers for ResNet.
We show that these estimates \(\tilde{C}_{j}\) converge to the true covariances \(C_{j}\) when the width increases. We then demonstrate that the covariances \(C_{j}\) are effectively low-rank, and that their eigenspaces can be efficiently approximated by taking into account unsupervised information. The weight covariances are thus of low complexity, in the sense that they can be described with a number of parameters significantly smaller than their original size.
Estimation of the weight covariances.We estimate the weight covariances \(C_{j}\) from the learned weights of a deep network. This network has weight matrices \(W_{j}\) of size \(d_{j}\times d_{j-1}\) that have been trained end-to-end by SGD. The natural empirical estimate of the weight covariance \(\hat{C}_{j}\) of \(W_{j}\) is
\[\hat{C}_{j}\approx d_{j}^{-1}\,W_{j}^{\mathrm{T}}W_{j}. \tag{14}\]
It computes \(\hat{C}_{j}\) from \(d_{j}\) samples, which are conditionally i.i.d. under the rainbow model hypothesis. Although the number \(d_{j}\) of samples is large, their dimension \(d_{j-1}\) is also large. For many architectures \(d_{j}/d_{j-1}\) remains nearly constant and we shall consider in this section that \(d_{j}=s\,d_{j}^{0}\), so that when the scaling factor \(s\) grows to infinity \(d_{j}/d_{j-1}\) converges to a non-zero finite limit. This creates challenges in the estimation of \(\hat{C}_{j}\), as we now explain. We will see that the weight variance is amplified during training. The learned covariance can thus be modeled \(\hat{C}_{j}=\mathrm{Id}+\hat{C}_{j}^{\prime}\), where the magnitude of \(\hat{C}_{j}^{\prime}\) keeps increasing during training. When the training time goes to infinity, the initialization \(\mathrm{Id}\) becomes negligible with respect to \(\hat{C}_{j}^{\prime}\). However, at finite training time, only the eigenvectors of \(C_{j}^{\prime}\) with sufficiently high eigenvalues have been learned consistently, and \(\hat{C}_{j}^{\prime}\) is thus effectively low-rank. \(\hat{C}_{j}\) is then a spiked covariance matrix (Johnstone, 2001). A large statistical literature has addressed the estimation of spiked covariances when the number of parameters \(d_{j-1}\) and the number of observations \(d_{j}\) increases, with a constant ratio \(d_{j}/d_{j-1}\)(Baik et al., 2005; El Karoui, 2008a). Consistent estimators of the eigenvalues of \(\hat{C}_{j}\) can be computed, but not of its eigenvectors, unless we have other prior information such as sparsity of the covariance entries (El Karoui, 2008b) or its eigenvectors (Ma, 2013). In our setting, we shall see that prior information on eigenspaces of \(\hat{C}_{j}\) is available from the eigenspaces of the input activation covariances. We use the empirical estimator (14) for simplicity, but it is not optimal. Minimax-optimal estimators are obtained by shrinking empirical eigenvalues (Donoho et al., 2018).
We would like to estimate the infinite-dimensional covariances \(C_{j}\) rather than finite-dimensional projections \(\hat{C}_{j}\). Since \(\hat{C}_{j}=\hat{A}_{j-1}^{\mathrm{T}}C_{j}\hat{A}_{j-1}\), an empirical estimate of \(C_{j}\) is given by
\[\tilde{C}_{j}=\hat{A}_{j-1}\hat{C}_{j}\hat{A}_{j-1}^{\mathrm{T}}. \tag{15}\]
To compute the alignment rotation \(\hat{A}_{j-1}\) with eq. (4), we must estimate the infinite-width rainbow activations \(\phi_{j-1}\). As above, we approximate \(\phi_{j-1}\) with the activations \(\hat{\phi}_{j-1}\) of a finite but sufficiently large network, relying on the activation convergence demonstrated in the previous section. We then estimate \(C_{j}\) with eq. (15) and \(\hat{C}_{j}\approx d_{j}^{-1}\,W_{j}^{\mathrm{T}}W_{j}\). We further reduce the estimation error of \(C_{j}\) by training several networks of size \((d_{j})_{j\leq J}\), and by averaging the empirical estimators (15). Note that averaging directly the estimates (14) of \(\hat{C}_{j}\) with different networks would not lead to an estimate of \(C_{j}\), because the covariances \(\hat{C}_{j}\) are represented in different bases which must be aligned. The final layer weights \(\theta\) are also similarly computed with an empirical estimator from the trained weights \(\hat{\theta}\).
Convergence of weight covariances.We now show numerically that the weight covariance estimates \(\tilde{C}_{j}\) (15) converge to the true covariances \(C_{j}\). This performs a partial validation of the rainbow assumptions of Definition 2, as it verifies the rotation of the second-order moments of \(\pi_{j}\) (12) but not higher-moments nor independence between neurons. Due to computational limitations, we perform this verification on three-hidden-layer scattering networks trained on CIFAR-10, for which we can scale both the number of networks \(N\) we can average over, and their width \(s\). The main computational bottleneck here is the singular value decomposition of the cross-covariance matrix \(\mathbb{E}_{x}[\phi_{j}(x)\,\hat{\phi}_{j}(x)^{\mathrm{T}}]\) to com
Figure 4: The weight covariance estimate \(\tilde{C}_{j}\) converges towards the infinite-dimensional covariance \(C_{j}\) for a three-hidden-layer scattering network trained on CIFAR-10. The first three panels show the behavior of the layer \(j=2\). _Upper left_: spectra of empirical weight covariances \(\tilde{C}_{j}\) as a function of the network sample size \(N\) showing the transition from an exponential decay (fitted by the dashed line for \(N=1\)) to the Marchenko-Pastur spectrum (fitted by the dotted lines). _Lower left_: test classification performance on CIFAR-10 of the trained networks as a function of the maximum rank of its weight covariance \(\tilde{C}_{j}\). Most of the performance is captured with the first eigenvectors of \(\tilde{C}_{j}\). The curves for different network sample sizes \(N\) when estimating \(\tilde{C}_{j}\) overlap and are offset for visual purposes. _Upper right_: spectrum of empirical weight covariances \(\tilde{C}_{j}\) as a function of the network width scaling \(s\). The dashed line is a fit to an exponential decay at low rank. _Lower right_: relative distance between empirical and true covariances \(\|\hat{C}_{j}-C_{j}\|_{\infty}/\|C_{j}\|_{\infty}\), as a function of the width scaling \(s\).
These shallower networks reach a test accuracy of \(84\%\) at large width.
We begin by showing that empirical covariance matrices \(\tilde{C}_{j}\) estimated from the weights of different networks share the same eigenspaces of large eigenvalues. To this end, we train \(N\) networks of the same finite width (\(s=1\)) and compare the covariances \(\tilde{C}_{j}\) estimated from these \(N\) networks as a function of \(N\). As introduced above, the estimated covariances \(\tilde{C}_{j}\) are well modeled with a spiked-covariance model. The upper-left panel of Figure 4 indeed shows that the covariance spectrum interpolates between an exponential decay at low ranks (indicated by the dashed line, corresponding to the "spikes" resulting from training, as will be shown in Section 3.3), and a Marchenko-Pastur tail at higher ranks (indicated by dotted lines, corresponding to the initialization with identity covariance). Note that we show the eigenvalues as a function of their rank rather than a spectral density in order to reveal the exponential decay of the spike positions with rank, which was missed in previous works (Martin and Mahoney, 2021; Thamm et al., 2022). The exponential regime is present even in the covariance estimated from a single network, indicating its stability across training runs, while the Marchenko-Pastur tail becomes flatter as more samples are used to estimate the empirical covariance. Here, the feature vector \(\phi_{j}\) has been estimated with a scattering network of same width \(s=1\) for simplicity of illustration.
As shown in the lower-left panel, only the exponential regime contributes to the classification accuracy of the network: the neuron weights can be projected on the first principal components of \(\tilde{C}_{j}\), which correspond to the learned spikes, without harming performance. The informative component of the weights is thus much lower-dimensional (\(\approx 30\)) than the network width (128), and this dimension appears to match the characteristic scale of the exponential decay of the covariance eigenvalues. The number \(N\) of trained networks used to compute \(\tilde{C}_{j}\) has no appreciable effect on the approximation accuracy, which again shows that the empirical covariance matrices of all \(N\) networks share this common informative component. This presence of a low-dimensional informative weight component is in agreement with the observation that the Hessian of the loss at the end of training is dominated by a subset of its eigenvectors (LeCun et al., 1989; Hassibi and Stork, 1992). These Hessian eigenvectors could indeed be related to the weight covariance eigenvectors. Similarly, the dichotomy in weight properties highlighted by our analysis could indicate why the eigenvalue distribution of the loss Hessian separates into two distinct regimes (Sagun et al., 2016, 2017; Papyan, 2019): the "bulk" (with small eigenvalues corresponding to uninformative flat directions of the loss landscape) is related to the Marchenko-Pastur tail of our weight covariance spectrum and the "top" (or spiked) components correspond to the exponential regime found at the lowest ranks of the covariance spectrum.
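The projection used in the lower-left panel of Figure 4 simply restricts each neuron weight to the leading eigenspace of \(\tilde{C}_{j}\) before re-evaluating the network. A minimal NumPy sketch of this rank-\(r\) projection (hypothetical variable names, not our experimental code):

```python
import numpy as np

def project_weights(W, C_est, r):
    """Project the rows of W (neuron weights) on the top-r eigenvectors of C_est."""
    eigvals, eigvecs = np.linalg.eigh(C_est)      # eigenvalues in ascending order
    top = eigvecs[:, -r:]                         # (d_{j-1}, r) leading principal axes
    return W @ top @ top.T                        # rank-r approximation of each neuron

# The projected matrix can then replace W_j in the network before re-evaluating
# test accuracy, as in the lower-left panel of Figure 4.
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 64))
C_est = W.T @ W / W.shape[0]
W_lowrank = project_weights(W, C_est, r=30)
print(np.linalg.matrix_rank(W_lowrank))           # 30
```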
We now demonstrate that the weight covariances \(\tilde{C}_{j}\) converge to an infinite-dimensional covariance operator \(C_{j}\) when the widths of the scattering networks increase. Here, the weight covariances \(\tilde{C}_{j}\) are estimated from the weights of \(N=10\) networks with the same width scaling \(s\), and we estimate \(C_{j}\) from the weights of \(N=10\) wide scattering networks with \(s=2^{5}\). We first illustrate this convergence on the spectrum of \(\tilde{C}_{j}\) in the upper-right panel of Figure 4. The entire spectrum of \(\tilde{C}_{j}\) converges to a limiting spectrum which contains both the informative exponential part resulting from training and the uninformative Marchenko-Pastur tail coming from the initialization. The characteristic scale of the exponential regime grows with network width but converges to a finite value as the width increases to infinity.
We then confirm that the estimated covariances \(\tilde{C}_{j}\) indeed converge to the covariance \(C_{j}\) when the width increases in the lower-right panel. The distance converges to zero as a power law of the width scaling. The first layer \(j=1\) has a different convergence behavior (not shown) as its input dimension does not increase with \(s\).
In summary, in the context considered here, networks trained from different initializations share the same informative weight subspaces (after alignment) described by the weight covariances at each layer, and they converge to a deterministic limit when the width increases. The following paragraphs then demonstrate several properties of the weight covariances.
Dimensionality reduction in deep networks.We now consider deeper networks and show that they also learn low-rank covariances. Comparing the spectra of weights and activations reveals the alternation between dimensionality reduction with the colored weight covariances \(C_{j}\) and high-dimensional embeddings with the white random features which are captured in the rainbow model. We do so with two architectures: a ten-hidden-layer scattering network and a slightly modified ResNet-18 trained on ImageNet (specified in Appendix D), which both reach 89% top-5 test accuracy.
We show the spectra of covariances of activations \(\phi_{j}\) in the left panels of Figure 5 and of the weight covariances \(C_{j}\) in the right panels. For both networks, we recover the trend that activation spectra are close to power laws of slope \(-1\) and the weight spectra show a transition from a learned exponential regime to a decay consistent with the Marchenko-Pastur expectation, which is almost absent for ResNet-18. Considering them in sequence, as a function of depth, the input activations are thus high-dimensional (due to the power-law of index close to \(-1\)) while the subsequent weights perform a dimensionality reduction using an exponential bottleneck with a characteristic scale much smaller than the width. Next, the dimensionality is re-expanded with the non-linearity, as the activations at the next layer again have a power-law covariance spectrum. Considering the weight spectra, we observe that the effective exponential scale increases with depth, from about 10 to 60 for both the scattering network and the ResNet. This increase of dimensionality with depth is expected: in convolutional architectures, the weight covariances \(C_{j}\) are only defined on small patches of activations \(\phi_{j-1}\) because of the prior operator \(P_{j}\). However, these patches correspond to a larger receptive field in the input image \(x\) as the depth \(j\) increases. The rank of the covariances is thus to be compared with the size of this receptive field. Deep convolutional networks thus implement a sequence of dimensionality contractions (with the learned weight covariances) and expansions (with the white random features and non-linearity). Without the expansion, the network would reduce the dimensionality of the data exponentially fast with depth, thus severely limiting its ability to process information on larger spatial scales (deeper layers), while without the contraction, its parameter count and learning sample complexity would increase exponentially fast with depth. This contraction/expansion strategy allows the network to maintain a balanced representation at each scale.
The successive increases and decreases in dimensionality due to the weights and non-linearity across deep network layers have been observed by Recanatesi et al. (2019) with a different dimensionality measure. The observation that weight matrices of trained networks are low-rank has been made in several works which exploited it for model compression (Denil et al., 2013; Denton et al., 2014; Yu et al., 2017), while the high-dimensional embedding
property of random feature maps is well-known via the connection to their kernel (Rahimi and Recht, 2007; Scetbon and Harchaoui, 2021). The rainbow model integrates these two properties. In neuroscience, high-dimensional representations with power-law spectra have been measured in the mouse visual cortex by Stringer et al. (2019). Such representations in deep networks have been demonstrated to lead to increased predictive power of human fMRI cortical responses (Elmoznino and Bonner, 2022) and generalization in self-supervised learning (Agrawal et al., 2022).
Unsupervised approximations of weight covariances.The learning complexity of a rainbow network depends upon the number of parameters needed to specify the weight covariances \((C_{j})_{j\leq J}\) to reach a given performance. After having shown that their informative subspace is of dimension significantly lower than the network width, we now show that this subspace can be efficiently approximated by taking into account unsupervised information.
We would like to define a representation of the weight covariances \(C_{j}\) which can be accurately approximated with a limited number of parameters.
Figure 5: Covariance spectra of activations and weights of a ten-hidden-layer scattering network (_top_) and ResNet-18 (_bottom_) trained on ImageNet. In both cases, activation spectra (_left_) mainly follow a power-law distribution with index roughly \(-1\). Weight spectra (_right_) show a transition from an exponential decay with a characteristic scale increasing with depth to the Marchenko-Pastur spectral distribution. These behaviors are captured by the rainbow model. For visual purposes, activation and weight spectra are offset by a factor depending on \(j\). In addition, we do not show the first layer nor the \(1\times 1\) convolutional residual branches in ResNet as they have different layer properties.
We chose to represent the infinite-width activations \(\phi_{j}\) as KPCA feature vectors, whose uncentered covariances \(\mathbb{E}_{x}[\phi_{j}(x)\,\phi_{j}(x)^{\text{T}}]\) are diagonal. In that case, the weight covariances \(C_{j}\) for \(j>1\) are operators defined on \(H_{j-1}=\ell^{2}(\mathbb{N})\). It amounts to representing \(C_{j}\) relative to the principal components of \(\phi_{j-1}\), or equivalently, the kernel principal components of \(x\) with respect to \(k_{j-1}\). This defines unsupervised approximations of the weight covariance \(C_{j}\) by considering its projection on these first principal components. We now evaluate the quality of this approximation.
Here, we consider a seven-hidden-layer scattering network trained on CIFAR-10, and weight covariances estimated from \(N=50\) same-width networks. The upper panels of Figure 6 show the amount of variance in \(C_{j}\) captured by the first \(m\) basis directions as a function of \(m\), for three different orthogonal bases.
Figure 6: Unsupervised information defines low-dimensional approximations of the learned weight covariances. Each column shows a different layer \(j=2,\,4,\,6\) of a seven-hidden-layer scattering network trained on CIFAR-10. For each \(r\), we consider projections of the network weights on the first \(r\) principal components of the weight covariances (red), the kernel principal components of the input activations (orange), or random orthogonal vectors (green). _Top_: weight variance explained by the first \(r\) basis vectors as a function of \(r\). _Bottom_: classification accuracy after projection of the \(j\)-th layer weights on the first \(r\) basis vectors, as function of \(r\).
The speed of growth of this variance as a function of \(m\) defines the quality of the approximation: a faster growth indicates that the basis provides an efficient low-dimensional approximation of the covariance. The PCA basis of \(C_{j}\) provides optimal such approximations, but it is not known before supervised training. In contrast, the KPCA basis is computed from the previous layer activations \(\phi_{j-1}\) without the supervision of class label information. Figure 6 demonstrates that the \(\phi_{j-1}\) KPCA basis provides close to optimal approximations of \(C_{j}\). This approximation is more effective for earlier layers, indicating that the supervised information becomes more important for the deeper layers. The lower panels of Figure 6 show a similar phenomenon when measuring classification accuracy instead of weight variance.
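The variance-captured curves in the top row of Figure 6 can be obtained by projecting the weight covariance on the first \(m\) vectors of a candidate basis (weight PCA, activation KPCA, or random) and accumulating the captured variance. A hedged NumPy sketch, with a toy diagonal covariance standing in for the estimated \(C_{j}\):

```python
import numpy as np

def explained_variance_curve(C_w, V):
    """Fraction of the weight variance tr(C_w) captured by the first m columns of V,
    for m = 1..d, where V is an orthonormal basis stored column-wise."""
    proj_var = np.einsum('im,ij,jm->m', V, C_w, V)        # v_m^T C_w v_m for each m
    return np.cumsum(proj_var) / np.trace(C_w)

rng = np.random.default_rng(0)
d = 64
C_w = np.diag(np.exp(-np.arange(d) / 8.0))                # toy covariance with fast decay
pca_basis = np.eye(d)                                     # its own eigenbasis: optimal
random_basis, _ = np.linalg.qr(rng.standard_normal((d, d)))
print(explained_variance_curve(C_w, pca_basis)[:5])       # grows quickly
print(explained_variance_curve(C_w, random_basis)[:5])    # grows roughly like m/d
```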
In summary, the learned weight matrices are low-rank, and a low-dimensional bottleneck can be introduced without harming performance. Further, unsupervised information (in the form of a KPCA) gives substantial prior information on this bottleneck: high-variance components of the weights are correlated with high-variance components of the activations. This observation was indirectly made by Raghu et al. (2017), who showed that network activations can be projected on stable subspaces, which are in fact aligned with the high-variance kernel principal components. It demonstrates the importance of self-supervised learning within supervised learning tasks (Bengio, 2012), and corroborates the empirical success of self-supervised pre-training for many supervised tasks. The effective number of parameters that need to be learned in a supervised manner is thus much smaller than the total number of trainable parameters.
### Gaussian rainbow approximations
We now show that the Gaussian rainbow model applies to scattering networks trained on the CIFAR-10 dataset, by exploiting the fixed wavelet spatial filters incorporated in the architecture. The Gaussian assumption thus only applies to weights along channels. We make use of the factorization \(W_{j}=G_{j}\,\hat{C}_{j}^{1/2}\) (13) of trained weights, where \(\hat{C}_{j}\) results from an estimation of \(C_{j}\) from several trained networks. We first show that the distribution of \(G_{j}\) can be approximated with random matrices of i.i.d. normal coefficients. We then show that Gaussian rainbow networks, which replace \(G_{j}\) with such a white Gaussian matrix, achieve similar classification accuracy as trained networks when the width is large. Finally, we show that in the same context, the SGD training dynamics of the weight matrices \(W_{j}\) are characterized by the evolution of the weight covariances \(\hat{C}_{j}\) only, while \(G_{j}\) remains close to its initial value. The Gaussian approximation deteriorates at small widths or on more complex datasets, suggesting that its validity regime is when the network width is large compared to the task complexity.
Comparison between trained weights and Gaussian matrices.We show that statistics of trained weights are reasonably well approximated by the Gaussian rainbow model. To do so, we train \(N=50\) seven-hidden-layer scattering networks and estimate weight covariances \((C_{j})_{j\leq J}\) by averaging eq. (15) over the trained networks as explained in Section 3.2. We then retrieve \(G_{j}=W_{j}\,\hat{C}_{j}^{-1/2}\) with \(\hat{C}_{j}=\hat{A}_{j}^{\mathrm{T}}C_{j}\hat{A}_{j}\) as in eq. (12). Note that we use a single covariance \(C_{j}\) to whiten the weights of all \(N\) networks: this will confirm that the covariances of weights of different networks are indeed related through rotations, as was shown in Section 3.2 through the convergence of weight covariance estimates. The rainbow
feature vectors \((\phi_{j})_{j\leq J}\) at each layer are approximated with the activations of one of the \(N\) networks.
As a first (partial) Gaussianity test, we compare marginal distributions of whitened weights \(G_{j}\) with the expected normal distribution in Figure 7. We present results for a series of layers \((j=2,4,6)\) across the network. Other layers present similar results, except for \(j=1\), which has more significant deviations from Gaussianity (not shown), as its input dimension is constrained by the data dimension. We shall however not focus on this first layer as we will see that it can still be replaced by Gaussian realizations when generating new weights. The weights at the \(j\)-th layer \((w_{ji})_{i\leq d_{j}}\) of the \(N\) networks are projected along the \(r\)-th eigenvector of \(C_{j}\) and normalized by the square root of the corresponding eigenvalue. This global view shows that specific one-dimensional marginals are reasonably well approximated by a normal distribution. We purposefully keep this comparison qualitative, as the goal is not to demonstrate that trained weights are statistically indistinguishable from Gaussian realizations (which is false), but to argue that the latter is an acceptable model for the former.
To go beyond one-dimensional marginals, we now compare in the bottom panels of Figure 8 the spectral density of the whitened weights \(G_{j}\) to the theoretical Marchenko-Pastur distribution (Marcenko and Pastur, 1967), which describes the limiting spectral density of matrices with i.i.d. normal entries. We note a good agreement for the earlier layers, which deteriorates for deeper layers (as well as the first layer, which has a different behavior).
Figure 7: Marginal distributions of the weights of \(N=50\) seven-hidden-layer scattering networks trained on CIFAR-10. The weights at the \(j\)-th layer \((w_{ji})_{i\leq d_{j}}\) of the \(N\) networks are projected along the \(r\)-th eigenvector of \(C_{j}\) and normalized by the square root of the corresponding eigenvalue. The distribution of the \(Nd_{j}\) projections (blue histograms) is approximately normal (red curves). Each column shows a different layer \(j\), and each row shows a different rank \(r\).
Importantly, the proportion of eigenvalues outside the Marchenko-Pastur support is arguably negligible (\(<10\%\) at all layers), which is not the case for the non-whitened weights \(W_{j}\) (upper panels) where it can be \(>25\%\) for \(j=6\). As observed by Martin and Mahoney (2021) and Thamm et al. (2022), trained weights have non-Marchenko-Pastur spectral statistics. Our results show that these deviations are primarily attributable to correlations introduced by the non-identity covariance matrices \(C_{j}\), as opposed to power-law distributions as hypothesized by Martin and Mahoney (2021). We however note that due to the universality of the Marchenko-Pastur distribution, even a perfect agreement is not sufficient to claim that trained networks have conditionally Gaussian weights. It merely implies that the Gaussian rainbow model provides a satisfactory description of a number of weight statistical properties. Despite the observed deviations from Gaussianity at later layers, we now show that generating new Gaussian weights at all layers simultaneously preserves most of the classification accuracy of the network.
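The whitening and Marchenko-Pastur comparison used here can be sketched as follows in NumPy: the inverse square root of the covariance is computed by eigendecomposition, and eigenvalues of the whitened weights falling outside the Marchenko-Pastur support are counted as outliers. The toy example whitens synthetic spiked weights with a known covariance; it is illustrative only and not the exact pipeline behind Figure 8.

```python
import numpy as np

def whiten_weights(W, C_hat, eps=1e-8):
    """G_j = W_j C_hat^{-1/2}, with C_hat the (estimated) weight covariance."""
    eigvals, eigvecs = np.linalg.eigh(C_hat)
    inv_sqrt = eigvecs @ np.diag(1.0 / np.sqrt(np.maximum(eigvals, eps))) @ eigvecs.T
    return W @ inv_sqrt

def marchenko_pastur_outlier_fraction(G):
    """Fraction of eigenvalues of G^T G / d_j outside the Marchenko-Pastur support."""
    d_j, d_prev = G.shape
    gamma = d_prev / d_j
    lam_minus, lam_plus = (1 - np.sqrt(gamma)) ** 2, (1 + np.sqrt(gamma)) ** 2
    eigvals = np.linalg.eigvalsh(G.T @ G / d_j)
    return np.mean((eigvals < lam_minus) | (eigvals > lam_plus))

# Toy usage: correlated Gaussian weights recover Marchenko-Pastur statistics once whitened
rng = np.random.default_rng(0)
d_j, d_prev = 512, 128
C = np.diag(1.0 + 20.0 * np.exp(-np.arange(d_prev) / 10.0))    # spiked covariance
W = rng.standard_normal((d_j, d_prev)) @ np.sqrt(C)            # C diagonal, so element-wise sqrt is its square root
print(marchenko_pastur_outlier_fraction(W))                    # sizeable fraction of outliers
print(marchenko_pastur_outlier_fraction(whiten_weights(W, C))) # close to zero
```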
Performance of Gaussian rainbow networks. While the above tests indicate some level of validation that the whitened weights \(G_{j}\) are matrices with approximately i.i.d. normal entries, it is not statistically feasible to demonstrate that this property is fully satisfied in high dimensions. We thus sample network weights from the Gaussian rainbow model and verify that most of the performance can be recovered. This is done with the procedure described in Definition 2, using the covariances \(C_{j}\), rainbow activations \(\phi_{j}\) and final layer weights \(\theta\) here estimated from a single trained network (having shown in Sections 3.1 and 3.2 that all networks define similar rainbow parameters if they are wide enough).
Figure 8: Spectral density of empirical covariances of trained (_top_) and whitened weights (_bottom_). Eigenvalues outside the support of the Marchenko-Pastur distribution (shown in red) are indicated with spikes of amplitude proportional to their bin count. After whitening, the proportions of outliers are respectively \(2\%\), \(4\%\), and \(8\%\) for the layers \(j=2\), \(4\), and \(6\).
New weights \(W_{j}\) are sampled iteratively starting from the first layer with a covariance \(\hat{C}_{j}=\hat{A}_{j-1}^{\text{T}}C_{j}\hat{A}_{j-1}\), after computing the alignment rotation \(\hat{A}_{j-1}\) between the activations \(\hat{\phi}_{j-1}(x)\) of the partially sampled network and the activations \(\phi_{j-1}(x)\) of the trained network. The alignment rotations are computed using the CIFAR-10 train set, while network accuracy is evaluated on the test set, so that the measured performance is not a result of overfitting.
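In pseudocode form, the sampling procedure can be sketched as follows (NumPy, fully-connected layers and a ReLU for readability; normalizations, biases and the wavelet operators \(P_{j}\) of the actual architecture are omitted, and all names are placeholders). The reference activations are here those of a toy "trained" network rather than a wide scattering network.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = lambda z: np.maximum(z, 0.0)          # ReLU, an assumption of this sketch

def align(phi_hat, phi):
    """Rotation A with A phi_hat(x) ~ phi(x), from an SVD of the cross-covariance (eq. (4))."""
    U, _, Vt = np.linalg.svd(phi.T @ phi_hat / phi.shape[0], full_matrices=False)
    return U @ Vt

def sample_gaussian_rainbow(X, ref_weights, covariances, widths):
    """Sample a network layer by layer: W_j has i.i.d. N(0, A_{j-1}^T C_j A_{j-1}) rows,
    where A_{j-1} aligns the partially sampled activations with the reference ones.
    A 1/sqrt(d_j) scaling replaces the actual architecture's normalization."""
    phi_ref, phi_new, new_weights = X, X, []
    for W_ref, C_j, d_j in zip(ref_weights, covariances, widths):
        A = align(phi_new, phi_ref)
        C_aligned = A.T @ C_j @ A
        L = np.linalg.cholesky(C_aligned + 1e-10 * np.eye(len(C_aligned)))
        W_new = rng.standard_normal((d_j, len(C_aligned))) @ L.T   # rows ~ N(0, C_aligned)
        new_weights.append(W_new)
        phi_ref = rho(phi_ref @ W_ref.T) / np.sqrt(W_ref.shape[0])
        phi_new = rho(phi_new @ W_new.T) / np.sqrt(d_j)
    return new_weights, phi_new

# Toy usage: a two-hidden-layer reference network with exponentially decaying covariances
n, d0, widths = 512, 16, [64, 64]
X = rng.standard_normal((n, d0))
covs = [np.diag(np.exp(-np.arange(d0) / 4.0)), np.diag(np.exp(-np.arange(64) / 8.0))]
ref_weights = [rng.standard_normal((w, c.shape[0])) @ np.linalg.cholesky(c).T
               for w, c in zip(widths, covs)]
new_weights, phi_new = sample_gaussian_rainbow(X, ref_weights, covs, widths)
print([w.shape for w in new_weights])       # [(64, 16), (64, 64)]
```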
We perform this test using a series of seven-hidden-layer scattering networks trained on CIFAR-10 with various width scalings. We present results in Figure 9 for two sets of Gaussian rainbow networks: a first set for which both the convolutional layers and the final layer are sampled from the rainbow model (which corresponds to aligning the classifier of the trained model to the sampled activations \(\hat{\phi}_{J}(x)\)), and another set for which we retrain the classifier after sampling the convolutional layers (which preserves the Gaussian rainbow RKHS). We observe that the larger the network, the better it can be approximated by a Gaussian rainbow model. At the largest width considered here, the Gaussian rainbow network achieves 85% accuracy and 89% with a retrained classifier, and recovers most of the performance of the trained network which reaches 92% accuracy. This performance is non-trivial, as it is beyond most methods based on non-learned hierarchical convolutional kernels which obtain less than 83% accuracy (Mairal et al., 2014; Oyallon and Mallat, 2015; Li et al., 2019). This demonstrates the importance of the learned weight covariances \(C_{j}\), as has been observed by Pandey et al. (2022) for modeling sensory neuron receptive fields. It also demonstrates that the covariances \(C_{j}\) are sufficiently well-estimated from a single network to preserve classification accuracy. We note however that Shankar et al. (2020) achieve a classification accuracy of 90% with a non-trained kernel corresponding to an infinite-width convolutional network.
A consequence of our results is that these trained scattering networks have rotation invariant non-linearities, in the sense that the non-linearity can be applied in random directions, provided that the next layer is properly aligned.
Figure 9: Performance of seven-hidden-layer scattering networks on CIFAR-10 as a function of network width for a trained network (blue), its rainbow network approximation with and without classifier retraining (red solid and dashed). The larger the width, the better the sampled rainbow model approximates the original network.
This comes in contrast to the idea that neuron weights individually converge to salient features of the input data. For large enough networks, the relevant information learned at the end of training is therefore not carried by individual neurons but encoded through the weight covariances \(C_{j}\).
For smaller networks, the covariance-encoding property no longer holds, as Figure 9 suggests that trained weights become non-Gaussian. Networks trained on more complex tasks might require larger widths for the Gaussian rainbow approximation to be valid. We have repeated the analysis on scattering networks trained on the ImageNet dataset (Russakovsky et al., 2015), which reveals that the Gaussian rainbow approximation considered here is inadequate at widths used in practice. This is corroborated by many empirical observations of (occasional) semantic specialization in deep networks trained on ImageNet (Olah et al., 2017; Bau et al., 2020; Dobs et al., 2022). A promising direction is to consider Gaussian mixture rainbow models, as used by Dubreuil et al. (2022) to model the weights of linear RNNs. Finally, we note that the Gaussian approximation also critically relies on the fixed wavelet spatial filters of scattering networks. Indeed, the spatial filters learned by standard CNNs display frequency and orientation selectivity (Krizhevsky et al., 2012) which cannot be achieved with a single Gaussian distribution, and thus require adapted weight distributions \(\pi_{j}\) to be captured in a rainbow model.
Training dynamics.The rainbow model is a static model, which does not characterize the evolution of weights from their initialization during training. We now describe the SGD training dynamics of the seven-hidden-layer scattering network trained on CIFAR-10 considered above. This dynamic picture provides an empirical explanation for the validity of the Gaussian rainbow approximation.
We focus on the \(j\)-th layer weight matrix \(W_{j}(t)\) as the training time \(t\) evolves. To measure its evolution, we consider its projection along the principal components of the final learned covariance \(\hat{C}_{j}\). More precisely, we project the \(d_{j}\) neuron weights \(w_{ji}(t)\), which are the rows of \(W_{j}(t)\), in the direction of the \(r\)-th principal axis \(e_{jr}\) of \(\hat{C}_{j}\). This gives a vector \(u_{r}(t)\in\mathbb{R}^{d_{j}}\) for each PCA rank \(r\) and training time \(t\), dropping the index \(j\) for simplicity:
\[u_{r}(t)=\big{(}\langle w_{ji}(t),e_{jr}\rangle\big{)}_{i\leq d_{j}}.\]
Its squared magnitude is proportional to the variance of the neuron weights along the \(r\)-th principal direction, which should be of the order of \(1\) at \(t=0\) due to the white noise initialization, and evolves during training to reach the corresponding \(\hat{C}_{j}\) eigenvalue. On the opposite, the direction of \(u_{r}(t)\) encodes the sampling of the marginal distribution of the neurons along the \(r\)-th principal direction: a large entry \(u_{r}(t)[i]\) indicates that neuron \(i\) is significantly correlated with the \(r\)-th principal component of \(\hat{C}_{j}\). This view allows considering the evolution of the weights \(W_{j}(t)\) separately for each principal component \(r\). It offers a simpler view than focusing on each individual neuron \(i\), because it gives an account of the population dynamics across neurons. It separates the weight matrix by columns \(r\) (in the weight PCA basis) rather than rows \(i\). We emphasize that we consider the PCA basis of the final covariance \(\hat{C}_{j}\), so that we analyze the training dynamics along the fixed principal axes \(e_{jr}\) which do not depend on the training time \(t\).
We now characterize the evolution of \(u_{r}(t)\) during training for each rank \(r\). We separate changes in magnitude, which correspond to changes in weight variance (overall stretch), from
changes in direction, which correspond to internal motions of the neurons which preserve their variance. We thus define two quantities to compare \(u_{r}(t)\) to its initialization \(u_{r}(0)\), namely the amplification ratio \(a_{r}(t)\) and cosine similarity \(c_{r}(t)\):
\[a_{r}(t)=\frac{\left\|u_{r}(t)\right\|}{\left\|u_{r}(0)\right\|}\ \text{and}\ \ c_{r}(t)=\frac{ \left\langle u_{r}(t),u_{r}(0)\right\rangle}{\left\|u_{r}(t)\right\|\left\|u_{ r}(0)\right\|}. \tag{16}\]
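These quantities can be computed directly from weight snapshots saved during training, as in the following NumPy sketch (the snapshot arrays and the PCA basis of \(\hat{C}_{j}\) are hypothetical placeholders):

```python
import numpy as np

def amplification_and_similarity(W_t, W_0, eigvecs):
    """Compute a_r(t) and c_r(t) of eq. (16) for all ranks r.
    W_t, W_0: (d_j, d_{j-1}) weights at time t and at initialization;
    eigvecs: (d_{j-1}, d_{j-1}) principal axes e_{jr} of the final covariance (columns)."""
    U_t = W_t @ eigvecs              # column r is u_r(t), the neuron projections on e_{jr}
    U_0 = W_0 @ eigvecs
    norm_t, norm_0 = np.linalg.norm(U_t, axis=0), np.linalg.norm(U_0, axis=0)
    a = norm_t / norm_0
    c = np.sum(U_t * U_0, axis=0) / (norm_t * norm_0)
    return a, c

# Toy check: a pure amplification along the principal axes gives c_r = 1 exactly
rng = np.random.default_rng(0)
W_0 = rng.standard_normal((512, 64))
gains = 1.0 + 10.0 * np.exp(-np.arange(64) / 10.0)
W_t = W_0 * gains                    # amplify each principal direction (here the canonical basis)
a, c = amplification_and_similarity(W_t, W_0, np.eye(64))
print(np.allclose(a, gains), np.allclose(c, 1.0))   # True True
```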
We evaluate these quantities using our seven-hidden-layer scattering network trained on CIFAR-10. In Figure 10, we present the results for the intermediate layer \(j=4\) (similar behavior is observed for the other layers). We show the two quantities \(a_{r}(t)\) and \(c_{r}(t)\) in the top row of Figure 10 as a function of the training epoch \(t\). We observe that the motion of the weight vector is mainly an amplification effect operating in a sequence starting with the first eigenvectors, as the cosine similarity remains of order unity.
Figure 10: The learning dynamic of a seven-hidden-layer scattering network trained on CIFAR-10 is mainly a low-dimensional linear amplification effect that preserves most of the positional information of the initialization. We present results for layer \(j=4\) (similar behavior is observed for the other layers). _Upper left:_ amplification (overall stretch) of the weight variance as a function of rank. _Upper right:_ cosine similarity (internal motion) as a function of rank. _Lower panels:_ projections of individual neurons along pairs of principal components. Each neuron is represented as a point in the plane, whose trajectory during training is shown as a connected line (color indicates training time).
Given the considered dimensionality (\(d_{j}=512\)), the observed departure from unity is rather small: the solid angle subtended by this angular change of direction covers a vanishingly small surface of the unit sphere in \(d_{j}\) dimensions. We thus have \(u_{r}(t)\approx a_{r}(t)\,u_{r}(0)\).
These results show that the weight evolution can be written
\[W_{j}(t)\approx G_{j}\,\hat{C}_{j}^{1/2}(t),\]
where \(G_{j}=W_{j}(0)\) is the initialization and the weight covariance \(\hat{C}_{j}(t)\) evolves by amplification in its fixed PCA basis:
\[\hat{C}_{j}(t)=\sum_{r}a_{r}(t)^{2}\,e_{jr}e_{jr}^{\rm T}.\]
In other words, the weight evolution during training is an ensemble motion of the neuron population, with negligible internal motion of individual neurons relative to the population: training amounts to learning the weight covariance. Surprisingly, the weight configuration at the end of training thus retains most of the information of its random initialization: the initial configuration can be practically recovered by whitening the trained weights. In addition, the stochasticity introduced by SGD and data augmentation appears to be negligible, as it does not affect the relative positions of individual neurons during training. This observation has two implications. First, the alignment rotations \(\hat{A}_{j}\) which describe the trained network relative to its infinite-width rainbow counterpart (as \(\hat{\phi}_{j}\approx\hat{A}_{j}^{\rm T}\phi_{j}\)) are entirely determined by the initialization. Second, it provides an empirical explanation for the validity of the Gaussian rainbow approximation. While this argument seems to imply that the learned weight distributions \(\pi_{j}\) depend significantly on the initialization scheme, note that significantly non-Gaussian initializations might not be preserved by SGD or could lead to poor performance.
The bottom row of Figure 10 illustrates more directly the evolution of individual neurons during training. Although each neuron of \(W_{j}(t)\) is described by a \(d_{j-1}\)-dimensional weight vector, it can be projected along two principal directions to obtain a two-dimensional picture. We then visualize the trajectories of each neuron projected in this plane. The trajectories are almost straight lines, as the learning dynamics only amplify variance along the principal directions while preserving the relative positions of the neurons. Projections on principal components of higher ranks give a more static picture as the amplification along these directions is smaller.
A large literature has characterized properties of SGD training dynamics. Several works have observed that dynamics are linearized after a few epochs (Jastrzebski et al., 2020; Leclerc and Madry, 2020), so that the weights remain in the same linearly connected basin thereafter (Frankle et al., 2020). It has also been shown that the empirical neural tangent kernel evolves mostly during this short initial phase (Fort et al., 2020) and aligns itself with discriminative directions (Baratin et al., 2021; Atanasov et al., 2022). Our results indicate that this change in the neural tangent kernel is due to the large amplification of the neuron weights along the principal axes of \(\hat{C}_{j}\), which happen early during training. The observation that neural network weights have a low-rank departure from initialization has been made in the lazy regime by Thamm et al. (2022), for linear RNNs by Schuessler et al. (2020), and for large language-model adaptation by Hu et al. (2022). The sequential emergence of
the weight principal components has been derived theoretically in linear networks by Saxe et al. (2014, 2019).
## 4 Conclusion
We have introduced rainbow networks as a model of the probability distribution of weights of trained deep networks. The rainbow model relies on two assumptions. First, layer dependencies are reduced to alignment rotations. Second, neurons are independent when conditioned on the previous layer weights. Under these assumptions, trained networks converge to a deterministic function in the corresponding rainbow RKHS when the layer widths increase. We have verified numerically the convergence of activations after alignment for scattering networks and ResNets trained on CIFAR-10 and ImageNet. We conjecture that this convergence conversely implies the rotation dependency assumption of the rainbow model. We have verified this rotation on the second-order moments of the weights through the convergence of their covariance after alignment (for scattering networks trained on CIFAR-10 due to computational limitations).
The data-dependent kernels which describe the infinite-width rainbow networks, and thus their functional properties, are determined by the learned distributions \(\pi_{j}\). Mathematically, we have shown how the symmetry properties of these distributions are transferred to the network. Numerically, we have shown that their covariances \(C_{j}\) compute projections in an "informative" subspace that is shared among networks, is low-dimensional, and can be approximated efficiently with an unsupervised KPCA. It reveals that networks balance low learning complexity with high expressivity by computing a sequence of reductions and increases in dimensionality.
In the Gaussian case, the distributions \(\pi_{j}\) are determined by their covariances \(C_{j}\). We have validated that factorizing the learned weights with fixed wavelet filters is sufficient to obtain Gaussian rainbow networks on CIFAR-10, using scattering networks. In this setting, we can generate new weights and have shown that the weight covariances \(C_{j}\) are sufficient to capture most of the performance of the trained networks. Further, the training dynamics are reduced to learning these covariances while preserving memory of the initialization in the individual neuron weights.
Our work has several limitations. First, we have not verified the rainbow assumptions of rotation dependence between layers beyond second-order moments, and conditional independence between neurons beyond the Gaussian case. A complete model would incorporate the training dynamics and show that such statistical properties are satisfied at all times. Second, our numerical experiments have shown that the Gaussian rainbow approximation of scattering networks gradually degrades when the network width is reduced. When this approximation becomes less accurate, it raises the question of whether incorporating more prior information in the architecture could lead to Gaussian rainbow networks. Finally, even in the Gaussian case, the rainbow model is not completely specified, as it requires estimating the weight covariances \(C_{j}\) from trained weights. A major mathematical issue is to understand which properties of the rainbow RKHS result from properties of these weight covariances.
By introducing the rainbow model, this work provides new insights towards understanding the inner workings of deep networks.
## Acknowledgments
This work was partially supported by a grant from the PRAIRIE 3IA Institute of the French ANR-19-P3IA-0001 program. BM acknowledges support from the David and Lucile Packard Foundation. We thank the Scientific Computing Core at the Flatiron Institute for the use of their computing resources. BM thanks Chris Olah and Eric Vanden-Eijnden for inspiring discussions. FG would like to thank Francis Bach and Gabriel Peyre for helpful pointers for the proof of Theorem 1. We also thank Nathanael Cuvelle-Magar and Etienne Lempereur for feedback on the manuscript.
## Appendix A Proof of Theorem 1
We prove a slightly more general version of Theorem 1 which we will need in the proof of Theorem 2. We allow the input \(x\) to be in a possibly infinite-dimensional Hilbert space \(H_{0}\) (the finite-dimensional case is recovered with \(H_{0}=\mathbb{R}^{d_{0}}\)). We shall assume that the random feature distribution \(\pi\) has bounded second- and fourth-order moments in the sense of Section 2.2: it admits a bounded uncentered covariance operator \(C=\mathbb{E}_{w\sim\pi}[ww^{\mathrm{T}}]\) and \(\mathbb{E}_{w\sim\pi}[(w^{\mathrm{T}}Tw)^{2}]<+\infty\) for every trace-class operator \(T\) on \(H_{0}\). Without loss of generality, we assume that the non-linearity \(\rho\) is \(1\)-Lipschitz and that \(\rho(0)=0\). These last assumptions simplify the constants involved in the analysis. They can be satisfied for any \(L\)-Lipschitz non-linearity \(\rho\) by replacing it with \((\rho-\rho(0))/L\), which does not change the linear expressivity of the network.
We give the proof outline in Appendix A.1. It relies on several lemmas, which are proven in Appendices A.2 to A.5. We write \(\left\lVert\cdot\right\rVert_{\infty}\) the operator norm, \(\left\lVert\cdot\right\rVert_{2}\) the Hilbert-Schmidt norm, and \(\left\lVert\cdot\right\rVert_{1}\) the nuclear (or trace) norm.
### Proof outline
The convergence of the activations \(\hat{\varphi}(x)\) to the feature vector \(\varphi(x)\) relies on the convergence of the empirical kernel \(\hat{k}\) to the asymptotic kernel \(k\). We thus begin by reformulating the mean-square error \(\mathbb{E}_{x}[\left\lVert\hat{A}\,\hat{\varphi}(x)-\varphi(x)\right\rVert_{H }^{2}]\) in terms of the kernels \(\hat{k}\) and \(k\). More precisely, we will consider the integral operators \(\hat{T}\) and \(T\) associated to the kernels. These integral operators are the infinite-dimensional equivalent of Gram matrices \(\left(k(x_{i},x_{i^{\prime}})\right)_{1\leq i,i^{\prime}\leq n}\).
Let \(\mu\) be the distribution of \(x\). We define the integral operator \(T\colon L^{2}(\mu)\to L^{2}(\mu)\) associated to the asymptotic kernel \(k\) as
\[(Tf)(x)=\mathbb{E}_{x^{\prime}}\Big{[}k(x,x^{\prime})\,f(x^{\prime})\Big{]},\]
where \(x^{\prime}\) is an i.i.d. copy of \(x\). Similarly, we denote by \(\hat{T}\) the integral operator defined by \(\hat{k}\). Their standard properties are detailed in the next lemma. Moreover, the definition of \(\hat{T}\) entails that it is the average of \(d_{1}\) i.i.d. integral operators defined by the individual random features \((w_{i})_{i\leq d_{1}}\) of \(\hat{\varphi}\). The law of large numbers then implies a mean-square convergence of \(\hat{T}\) to \(T\), as proven in the following lemma.
**Lemma 1**: \(T\) _and \(\hat{T}\) are trace-class non-negative self-adjoint operators on \(L^{2}(\mu)\), with_
\[\operatorname{tr}(T)\leq\left\lVert C\right\rVert_{\infty}\mathbb{E}_{x}[ \left\lVert x\right\rVert^{2}].\]
_The eigenvalues of \(T\) and \(\hat{T}\) coincide with those of their respective activation covariance matrices \(\mathbb{E}_{x}[\varphi(x)\,\varphi(x)^{\mathrm{T}}]\) and \(\mathbb{E}_{x}[\hat{\varphi}(x)\,\hat{\varphi}(x)^{\mathrm{T}}]\). Besides, it holds that \(\mathbb{E}_{W}[\hat{T}]=T\) and_
\[\sqrt{\mathbb{E}_{W}\Big{[}\|\hat{T}-T\|_{2}^{2}\Big{]}}=\sqrt{\mathbb{E}_{W,x,x^{\prime}}\Big{[}|\hat{k}(x,x^{\prime})-k(x,x^{\prime})|^{2}\Big{]}}=c\,d_{1 }^{-1/2},\]
_with some constant \(c<+\infty\)._
We defer the proof, which relies on standard properties and a direct calculation of the variance of \(\hat{T}\) around its mean \(T\), to Appendix A.2. In the following, we shall write \(c=\kappa\,\|C\|_{\infty}\,\mathbb{E}_{x}[\|x\|^{2}]\) to simplify calculations for the proof of Theorem 2, where \(C=\mathbb{E}_{w\sim\pi}[ww^{\mathrm{T}}]\) is the uncentered covariance of \(\pi\), and \(\kappa\) is a constant. When \(\pi\) is Gaussian, Appendix A.2 further shows that \(\kappa\leq\sqrt{3}\).
The mean-square error between \(\hat{\varphi}\) and \(\varphi\) after alignment can then be expressed as a different distance between \(\hat{T}\) and \(T\), as proven in the next lemma.
**Lemma 2**: _The alignment error between \(\hat{\varphi}\) and \(\varphi\) is equal to the Bures-Wasserstein distance \(\mathrm{BW}\) between \(\hat{T}\) and \(T\):_
\[\min_{\hat{A}\in\mathcal{O}(d_{1})}\mathbb{E}_{x}\Big{[}\|\hat{A}\,\hat{ \varphi}(x)-\varphi(x)\|_{H}^{2}\Big{]}=\mathrm{BW}(\hat{T},T)^{2}.\]
The Bures-Wasserstein distance (Bhatia et al., 2019) is defined, for any trace-class non-negative self-adjoint operators \(\hat{T}\) and \(T\), as
\[\mathrm{BW}(\hat{T},T)^{2}=\min_{\hat{A}\in\mathcal{O}\left(L^{2}(\mu)\right) }\|\hat{A}\,\hat{T}^{1/2}-T^{1/2}\|_{2}^{2}=\mathrm{tr}\bigg{(}\hat{T}+T-2 \Big{(}T^{1/2}\hat{T}T^{1/2}\Big{)}^{1/2}\bigg{)}.\]
The minimization in the first term is done over unitary operators of \(L^{2}(\mu)\), and can be solved in closed-form with a singular value decomposition of \(T^{1/2}\hat{T}^{1/2}\) as in eqs. (3) and (4). A direct calculation then shows that the minimal value is equal to the expression in the second term, as in eq. (5). The Bures-Wasserstein distance arises in optimal transport as the Wasserstein-2 distance between two zero-mean Gaussian distributions of respective covariance operators \(\hat{T}\) and \(T\), and in quantum information as the Bures distance, a non-commutative generalization of the Hellinger distance. We refer the interested reader to Bhatia et al. (2019) for more details. We defer the proof of Lemma 2 to Appendix A.3.
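The identity of Lemma 2 is easy to check numerically in finite dimensions. Below is a minimal NumPy sketch (the feature matrices are made-up illustrative data, and the helper name `bw2` is ours): it computes the optimal alignment \(\hat{A}=UV^{\mathrm{T}}\) from the SVD of the cross-covariance, and compares the resulting alignment error with the squared Bures-Wasserstein distance between the two empirical covariance matrices.

```python
import numpy as np

def bw2(That, T):
    """Squared Bures-Wasserstein distance tr(That + T - 2 (T^{1/2} That T^{1/2})^{1/2})
    between two symmetric positive semi-definite matrices."""
    w, U = np.linalg.eigh(T)
    Tsqrt = (U * np.sqrt(np.clip(w, 0, None))) @ U.T
    cross = np.clip(np.linalg.eigvalsh(Tsqrt @ That @ Tsqrt), 0, None)
    return np.trace(That) + np.trace(T) - 2 * np.sqrt(cross).sum()

rng = np.random.default_rng(0)
d, n = 6, 500
Phi = rng.standard_normal((d, n))                   # columns phi(x_i), illustrative data
Phi_hat = Phi + 0.3 * rng.standard_normal((d, n))   # columns phi_hat(x_i)

# Optimal alignment A = U V^T from the SVD of the cross-covariance E_x[phi(x) phi_hat(x)^T].
U, _, Vt = np.linalg.svd(Phi @ Phi_hat.T / n)
A = U @ Vt
align_err = np.mean(np.sum((A @ Phi_hat - Phi) ** 2, axis=0))
print(align_err, bw2(Phi_hat @ Phi_hat.T / n, Phi @ Phi.T / n))  # the two values coincide (Lemma 2)
```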
It remains to establish the convergence of \(\hat{T}\) towards \(T\) for the Bures-Wasserstein distance, which is a distance on the square roots of the operators. The main difficulty comes from the fact that the square root is Lipschitz only when bounded away from zero. This lack of regularity in the optimization problem can be seen from the fact that the optimal alignment rotation \(\hat{A}\) is obtained by setting all singular values of some operator to one, which is unstable when this operator has vanishing singular values. We thus consider an entropic regularization of the underlying optimal transport problem over \(\hat{A}\) with a parameter \(\lambda>0\) that will be adjusted with \(d_{1}\). It penalizes the entropy of the coupling so that singular values smaller than \(\lambda\) are not amplified. It leads to a bound on the Bures-Wasserstein distance, as shown in the following lemma.
**Lemma 3**: _Let \(\hat{T}\) and \(T\) be two trace-class non-negative self-adjoint operators. For any \(\lambda>0\), we have_
\[\mathrm{BW}(\hat{T},T)^{2}\leq\frac{\|T\|_{2}\|\hat{T}-T\|_{2}}{ \lambda}+\mathrm{tr}(\hat{T}-T)+2\,\mathrm{tr}\bigg{(}T+\lambda\mathrm{Id}- \Big{(}T^{2}+\lambda^{2}\mathrm{Id}\Big{)}^{1/2}\bigg{)}. \tag{17}\]
We defer the proof to Appendix A.4.
The first two terms in eq. (17) are controlled in expectation with Lemma 1. The last term, when divided by \(\lambda\), has a similar behavior to another quantity which arises in least-squares regression, namely the degrees of freedom \(\mathrm{tr}(T(T+\lambda\mathrm{Id})^{-1})\)(Hastie and Tibshirani, 1987; Caponnetto and De Vito, 2007). It can be calculated by assuming a decay rate for the eigenvalues of \(T\), as done in the next lemma.
**Lemma 4**: _Let \(T\) be a trace-class non-negative self-adjoint operator whose eigenvalues satisfy \(\lambda_{m}\leq c\,m^{-\alpha}\) for some \(\alpha>1\) and \(c>0\). Then it holds:_
\[\mathrm{tr}\bigg{(}T+\lambda\mathrm{Id}-\Big{(}T^{2}+\lambda^{2} \mathrm{Id}\Big{)}^{1/2}\bigg{)}\leq c^{\prime}\,\lambda^{1-1/\alpha},\]
_where the constant \(c^{\prime}=\frac{c^{1/\alpha}}{1-1/\alpha}\)._
The proof is in Appendix A.5.
We now put together Lemmas 1 to 4. We have for any \(\lambda>0\),
\[\mathbb{E}_{W,x}\Big{[}\|\hat{A}\,\hat{\varphi}(x)-\varphi(x)\|_ {H}^{2}\Big{]}=\mathbb{E}_{W}\Big{[}\mathrm{BW}(\hat{T},T)^{2}\Big{]}\leq \frac{\kappa\,\left\|C\right\|_{\infty}^{2}\mathbb{E}_{x}[\left\|x\right\|^{2 }]^{2}}{\lambda\sqrt{d_{1}}}+\frac{2c^{1/\alpha}}{1-1/\alpha}\lambda^{1-\frac{ 1}{\alpha}},\]
where we have used the Cauchy-Schwarz inequality to bound \(\mathbb{E}_{W}[\left\|\hat{T}-T\right\|_{2}]\leq\sqrt{\mathbb{E}_{W}[\left\| \hat{T}-T\right\|_{2}^{2}]}\) and the fact that \(\left\|T\right\|_{2}\leq\mathrm{tr}\,T\leq\left\|C\right\|_{\infty}\mathbb{E }_{x}[\left\|x\right\|^{2}]\). We then optimize the upper bound with respect to \(\lambda\) by setting
\[\lambda=\left(\frac{2c^{1/\alpha}\sqrt{d_{1}}}{\kappa\left\|C\right\|_{\infty} ^{2}\mathbb{E}_{x}[\left\|x\right\|^{2}]^{2}}\right)^{-\alpha/(2\alpha-1)},\]
which yields
\[\mathbb{E}_{W,x}\Big{[}\|\hat{A}\,\hat{\varphi}(x)-\varphi(x)\|_ {H}^{2}\Big{]}\leq c^{\prime\prime}\,d_{1}^{-(\alpha-1)/(4\alpha-2)},\]
with a constant
\[c^{\prime\prime}=\frac{2\kappa^{(\alpha-1)/(2\alpha-1)}}{(\alpha-1)/(2\alpha -1)}\bigg{(}\frac{c}{\left\|C\right\|_{\infty}\mathbb{E}_{x}[\left\|x\right\|^ {2}]}\bigg{)}^{1/(2\alpha-1)}\left\|C\right\|_{\infty}\mathbb{E}_{x}[\left\|x \right\|^{2}].\]
Finally, the function \(\hat{f}\) can be written
\[\hat{f}(x)=\langle\hat{A}^{T}\theta,\hat{\varphi}(x)\rangle= \langle\theta,\hat{A}\,\hat{\varphi}(x)\rangle_{H},\]
so that
\[\left|\hat{f}(x)-f(x)\right|^{2}=\left|\langle\theta,\hat{A}\, \hat{\varphi}(x)-\varphi(x)\rangle_{H}\right|^{2}\leq\|\theta\|_{H}^{2}\|\hat{A }\,\hat{\varphi}(x)-\varphi(x)\|_{H}^{2}.\]
Rewriting \(\left\|\theta\right\|_{H}=\left\|f\right\|_{\mathcal{H}}\), assuming that \(\theta\) is the minimum-norm vector such that \(f(x)=\langle\theta,\varphi(x)\rangle_{H}\), and using the convergence of \(\hat{A}\,\hat{\varphi}\) towards \(\varphi\) then yields
\[\mathbb{E}_{W,x}\Big{[}|\hat{f}(x)-f(x)|^{2}\Big{]}\leq c^{ \prime\prime}\|f\|_{\mathcal{H}}^{2}\,d_{1}^{-(\alpha-1)/(4\alpha-2)}.\]
### Proof of Lemma 1
We define the linear operator \(\Phi\colon L^{2}(\mu)\to H\) by
\[\Phi f=\mathbb{E}_{x}[f(x)\,\varphi(x)].\]
Its adjoint \(\Phi^{\mathrm{T}}\colon H\to L^{2}(\mu)\) is then given by
\[(\Phi^{\mathrm{T}}u)(x)=\langle u,\varphi(x)\rangle,\]
so that \(T=\Phi^{\mathrm{T}}\Phi\). This proves that \(T\) is self-adjoint and non-negative. On the other hand, we have \(\Phi\Phi^{\mathrm{T}}=\mathbb{E}_{x}[\varphi(x)\,\varphi(x)^{\mathrm{T}}]\) the uncentered covariance matrix of the feature map \(\varphi\) associated to the kernel \(k\). This shows that \(T\) and this uncentered covariance matrix have the same eigenvalues.
Moreover, we have
\[\mathrm{tr}(T)=\mathbb{E}_{x}[k(x,x)]=\mathrm{tr}\Big{(}\Phi^{\mathrm{T}}\Phi \Big{)}=\left\|\Phi\right\|_{2}^{2}=\mathbb{E}_{x}\Big{[}\big{\|}\varphi(x) \big{\|}^{2}\Big{]},\]
and using the definition of \(k\),
\[\operatorname{tr}(T)=\mathbb{E}_{x,w}\Big[\rho(\langle x,w\rangle)^{2}\Big]\leq\mathbb{E}_{x,w}\Big[\langle x,w\rangle^{2}\Big]=\operatorname{tr}\big(C\,\mathbb{E}_{x}\big[xx^{\mathrm{T}}\big]\big)\leq\left\lVert C\right\rVert_{\infty}\mathbb{E}_{x}\big[\left\lVert x\right\rVert^{2}\big],\]
where \(w\sim\pi\) independently from \(x\), \(\left|\rho(t)\right|\leq\left|t\right|\) by assumption on \(\rho\), and the last step follows from Hölder's inequality. This proves that \(T\) is trace-class and \(\Phi\) is Hilbert-Schmidt, with an explicit upper bound on the trace.
The above remarks are also valid for \(\hat{T}\) with an appropriate definition of \(\hat{\Phi}\colon L^{2}(\mu)\to\mathbb{R}^{d_{1}}\). We have \(\mathbb{E}_{W}[\hat{T}]=T\) because \(\mathbb{E}_{W}[\hat{k}(x,x^{\prime})]=k(x,x^{\prime})\). Therefore, \(\mathrm{tr}(\hat{T})=\left\|\hat{\Phi}\right\|_{2}^{2}\) is almost surely finite because
\[\mathbb{E}_{W}[\mathrm{tr}(\hat{T})]=\mathrm{tr}(T)<+\infty.\]
Let \(\hat{k}_{i}(x,x^{\prime})=\rho(\langle x,w_{i}\rangle)\,\rho(\langle x^{\prime},w_{i}\rangle)\) where \((w_{i})_{i\leq d_{1}}\) are the rows of \(W\), and \(\hat{T}_{i}\) the associated integral operators. The \(\hat{T}_{i}\) are i.i.d. with \(\mathbb{E}_{W}[\hat{T}_{i}]=T\) as for \(\hat{T}\), and we have \(\hat{T}=d_{1}^{-1}\sum_{i=1}^{d_{1}}\hat{T}_{i}\). It then follows by standard variance calculations that
\[\mathbb{E}_{W}\Big{[}\|\hat{T}-T\|_{2}^{2}\Big{]}=\frac{1}{d_{1}}\Big{(} \mathbb{E}_{W}\Big{[}\|\hat{T}_{1}\|_{2}^{2}\Big{]}-\|T\|_{2}^{2}\Big{)}=\frac {c}{d_{1}},\]
with a constant \(c\) such that
\[c=\mathbb{E}_{W}\Big[\|\hat{T}_{1}\|_{2}^{2}\Big]-\|T\|_{2}^{2}\leq\mathbb{E}_{W}\Big[\mathbb{E}_{x,x^{\prime}}\big[\hat{k}_{1}(x,x^{\prime})^{2}\big]\Big]\leq\mathbb{E}_{W}\bigg[\mathbb{E}_{x}\Big[\big|\langle x,w_{1}\rangle\big|^{2}\Big]^{2}\bigg].\]
We then have, using the assumption on the fourth moments of \(\pi\),
\[\mathbb{E}_{W}\bigg{[}\mathbb{E}_{x}\Big{[}\big{|}\langle x,w_{1}\rangle\big{|} ^{2}\Big{]}^{2}\bigg{]}=\mathbb{E}_{W}\bigg{[}\Big{(}w_{1}^{\mathrm{T}} \mathbb{E}_{x}\Big{[}xx^{\mathrm{T}}\Big{]}w_{1}\Big{)}^{2}\bigg{]}<+\infty,\]
because \(\operatorname{tr}\mathbb{E}_{x}[xx^{\mathrm{T}}]=\mathbb{E}_{x}[\|x\|^{2}]<+\infty\). When \(\pi\) is Gaussian, we further have
\[\mathbb{E}_{W}\Big{[}\mathbb{E}_{x}\Big{[}|\langle x,w_{1}\rangle| ^{2}\Big{]}^{2}\Big{]}\] \[\leq 3\big{(}\operatorname{tr}\!\left(C\,\mathbb{E}_{x}\Big{[} xx^{\mathrm{T}}\Big{]}\right)\big{)}^{2}\] \[\leq 3\,\|C\|_{\infty}^{2}\,\mathbb{E}_{x}\Big{[}\|x\|^{2}\Big{]} ^{2},\]
by classical fourth-moment computations of Gaussian random variables.
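As a sanity check on the \(d_{1}^{-1/2}\) rate in Lemma 1, one can estimate \(\mathbb{E}_{W,x,x^{\prime}}[|\hat{k}(x,x^{\prime})-k(x,x^{\prime})|^{2}]\) by Monte Carlo, using a very wide layer as a stand-in for the limit kernel. The NumPy sketch below (ReLU non-linearity, \(\pi=\mathcal{N}(0,\mathrm{Id})\), and hypothetical sizes chosen by us) is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
d0, n = 10, 100
X = rng.standard_normal((n, d0))                    # samples x_i
relu = lambda t: np.maximum(t, 0.0)

def empirical_gram(d1):
    W = rng.standard_normal((d1, d0))                # rows w_i ~ pi = N(0, Id)
    A = relu(X @ W.T) / np.sqrt(d1)                  # phi_hat(x_i) with the 1/sqrt(d1) convention
    return A @ A.T                                   # k_hat(x_i, x_j)

K_ref = empirical_gram(100_000)                      # wide-layer proxy for the limit kernel k
for d1 in [64, 256, 1024, 4096]:
    mse = np.mean([np.mean((empirical_gram(d1) - K_ref) ** 2) for _ in range(10)])
    print(d1, np.sqrt(mse))                          # decays roughly like d1 ** -0.5
```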
### Proof of Lemma 2
The alignment error can be rewritten in terms of the linear operators \(\Phi\) and \(\hat{\Phi}\) defined in Appendix A.2:
\[\mathbb{E}_{x}\Big{[}\|\hat{A}\,\hat{\varphi}(x)-\varphi(x)\|_{H}^{2}\Big{]}= \|\hat{A}\,\hat{\Phi}-\Phi\|_{2}^{2}.\]
We then expand
\[\|\hat{A}\,\hat{\Phi}-\Phi\|_{2}^{2}=\|\hat{\Phi}\|_{2}^{2}+\|\Phi\|_{2}^{2}-2 \operatorname{tr}\!\left(\Phi^{\mathrm{T}}\hat{A}\hat{\Phi}\right).\]
The first two terms are respectively equal to \(\operatorname{tr}\hat{T}\) and \(\operatorname{tr}T\) per Appendix A.2. The alignment error is minimized with \(\hat{A}=UV^{\mathrm{T}}\) from the SVD decomposition (Bhatia et al., 2019):
\[\Phi\hat{\Phi}^{\mathrm{T}}=\mathbb{E}_{x}\Big{[}\varphi(x)\,\hat{\varphi}(x)^ {\mathrm{T}}\Big{]}=USV^{\mathrm{T}},\]
for which we then have
\[\operatorname{tr}\!\left(\Phi^{\mathrm{T}}\hat{A}\hat{\Phi}\right)= \operatorname{tr}\!\left(\hat{\Phi}\Phi^{\mathrm{T}}\hat{A}\right)= \operatorname{tr}\!\left(VSU^{\mathrm{T}}UV^{\mathrm{T}}\right)= \operatorname{tr}\!\left(S\right).\]
This can further be written
\[\operatorname{tr}\!\left(S\right)=\operatorname{tr}\!\left(\left(US^{2}U^{ \mathrm{T}}\right)^{1/2}\right)=\operatorname{tr}\!\left(\left(\Phi\hat{ \Phi}^{\mathrm{T}}\hat{\Phi}\Phi^{\mathrm{T}}\right)^{1/2}\right)= \operatorname{tr}\!\left(\left(\Phi\hat{T}\Phi^{\mathrm{T}}\right)^{1/2} \right).\]
To rewrite this in terms of \(T\), we perform a polar decomposition of \(\Phi\): there exists a unitary operator \(P\colon L^{2}(\mu)\to H\) such that \(\Phi=PT^{1/2}\). We then have
\[\operatorname{tr}\!\left(\left(\Phi\hat{T}\Phi^{\mathrm{T}}\right)^ {1/2}\right) =\operatorname{tr}\!\left(\left(PT^{1/2}\hat{T}T^{1/2}P^{\mathrm{ T}}\right)^{1/2}\right)\] \[=\operatorname{tr}\!\left(P\!\left(T^{1/2}\hat{T}T^{1/2}\right)^{1 /2}\!P^{\mathrm{T}}\right)\] \[=\operatorname{tr}\!\left(\left(T^{1/2}\hat{T}T^{1/2}\right)^{1/2 }\right).\]
Putting everything together, we have
\[\mathbb{E}_{x}\Big{[}\|\hat{A}\,\hat{\varphi}(x)-\varphi(x)\|_{H}^{2}\Big{]}= \operatorname{tr}\!\left(\hat{T}+T-2\!\left(T^{1/2}\hat{T}T^{1/2}\right)^{1/2} \right)\!.\]
### Proof of Lemma 3
The Bures-Wasserstein distance can be rewritten as a minimum over contractions rather than unitary operators:
\[\text{BW}(\hat{T},T)^{2}=\min_{\left\|\hat{A}\right\|_{\infty}\leq 1} \text{tr}\Big{(}\hat{T}+T-2T^{1/2}\hat{A}\hat{T}^{1/2}\Big{)},\]
which holds because of Hölder's inequality:
\[\text{tr}\Big{(}T^{1/2}\hat{A}\hat{T}^{1/2}\Big{)}=\text{tr}\Big{(}\hat{T}^{1/ 2}T^{1/2}\hat{A}\Big{)}\leq\left\|\hat{T}^{1/2}T^{1/2}\right\|_{1}\left\|\hat{A }\right\|_{\infty}=\text{tr}\bigg{(}\Big{(}T^{1/2}\hat{T}T^{1/2}\Big{)}^{1/2} \bigg{)}\left\|\hat{A}\right\|_{\infty}.\]
Rather than optimizing over contractions \(\hat{A}\), which leads to a unitary \(\hat{A}\), we shall use a non-unitary \(\hat{A}\) with \(\left\|\hat{A}\right\|_{\infty}<1\).
We introduce an "entropic" regularization: let \(\lambda>0\), and define
\[\text{BW}_{\lambda}(\hat{T},T)^{2}=\min_{\left\|\hat{A}\right\|_{\infty}\leq 1} \text{tr}\Big{(}\hat{T}+T-2T^{1/2}\hat{A}\hat{T}^{1/2}\Big{)}+\lambda\log\det \bigg{(}\Big{(}\text{Id}-\hat{A}^{\text{T}}\hat{A}\Big{)}^{-1}\bigg{)}.\]
The second term corresponds to the negentropy of the coupling in the underlying optimal transport formulation of the Bures-Wasserstein distance. It can be minimized in closed-form by calculating the fixed-point of Sinkhorn iterations (Janati et al., 2020), or with a direct SVD calculation as in Appendix A.3. It is indeed clear that the minimum is attained at some \(\hat{A}_{\lambda}=US_{\lambda}V^{\text{T}}\) with \(T^{1/2}\hat{T}^{1/2}=USV^{\text{T}}\), and this becomes a separable quadratic problem over the singular values \(S_{\lambda}\). We thus find
\[S_{\lambda} =\bigg{(}\Big{(}S^{2}+\lambda^{2}\text{Id}\Big{)}^{1/2}-\lambda \text{Id}\bigg{)}S^{-1},\] \[\hat{A}_{\lambda} =\bigg{(}\Big{(}T^{1/2}\hat{T}T^{1/2}+\lambda^{2}\text{Id}\Big{)} ^{1/2}-\lambda\text{Id}\bigg{)}T^{-\frac{1}{2}}\hat{T}^{-\frac{1}{2}},\]
and one can verify that we indeed have \(\|\hat{A}_{\lambda}\|_{\infty}<1\). When plugged in the original distance, it gives the following upper bound:
\[\text{BW}(\hat{T},T)^{2}\leq\text{tr}\bigg{(}\hat{T}+T-2\bigg{(}\Big{(}T^{1/ 2}\hat{T}T^{1/2}+\lambda^{2}\text{Id}\Big{)}^{1/2}-\lambda\text{Id}\bigg{)} \bigg{)}.\]
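The closed-form regularized alignment \(\hat{A}_{\lambda}\) and the resulting upper bound are straightforward to verify numerically. The NumPy sketch below uses random positive-definite matrices of our own choosing (illustrative only) and checks that \(\|\hat{A}_{\lambda}\|_{\infty}<1\) and that the right-hand side of the displayed inequality dominates \(\mathrm{BW}(\hat{T},T)^{2}\).

```python
import numpy as np

rng = np.random.default_rng(2)
d, lam = 5, 0.5
B = rng.standard_normal((d, d)); T = B @ B.T                 # stand-in for T
Bh = B + 0.2 * rng.standard_normal((d, d)); That = Bh @ Bh.T  # stand-in for T_hat

def psd_sqrt(M):
    w, U = np.linalg.eigh(M)
    return (U * np.sqrt(np.clip(w, 0, None))) @ U.T

Ts, Ths, Id = psd_sqrt(T), psd_sqrt(That), np.eye(d)
mid = Ts @ That @ Ts
A_lam = (psd_sqrt(mid + lam**2 * Id) - lam * Id) @ np.linalg.inv(Ts) @ np.linalg.inv(Ths)
bw_sq = np.trace(That + T) - 2 * np.sqrt(np.clip(np.linalg.eigvalsh(mid), 0, None)).sum()
bound = np.trace(That + T - 2 * (psd_sqrt(mid + lam**2 * Id) - lam * Id))
print(np.linalg.norm(A_lam, 2) < 1, bw_sq <= bound + 1e-9)   # both True
```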
The term \(\lambda^{2}\text{Id}\) in the square root makes this a Lipschitz function of \(\hat{T}\). Indeed, define the function \(g\) by
\[g(\hat{T})=\text{tr}\bigg{(}\Big{(}T^{1/2}\hat{T}T^{1/2}+\lambda^{2}\text{Id} \Big{)}^{1/2}-\lambda\text{Id}\bigg{)}.\]
Standard calculations (Bhatia et al., 2019; Janati et al., 2020) then show that
\[\nabla g(\hat{T})=\frac{1}{2}T^{1/2}\Big{(}T^{1/2}\hat{T}T^{1/2}+\lambda^{2} \text{Id}\Big{)}^{-1/2}T^{1/2}.\]
It implies that
\[0\preccurlyeq\nabla g(\hat{T})\preccurlyeq\frac{1}{2\lambda}T,\]
where we have used that \(T^{1/2}\hat{T}T^{1/2}\succcurlyeq 0\) in the second inequality, and finally,
\[\left\|\nabla g(\hat{T})\right\|_{2}\leq\frac{\left\|T\right\|_{2}}{2\lambda}.\]
This last inequality follows from
\[\|\nabla g(\hat{T})\|_{2}^{2}=\operatorname{tr}\Bigl{(}\nabla g(\hat{T})^{ \mathrm{T}}\nabla g(\hat{T})\Bigr{)}\leq\operatorname{tr}\Bigl{(}\nabla g( \hat{T})^{\mathrm{T}}\frac{1}{2\lambda}T\Bigr{)}\leq\|\nabla g(\hat{T})\|_{2} \frac{\|T\|_{2}}{2\lambda},\]
where we have used the operator-monotonicity of the map \(M\mapsto\operatorname{tr}\Bigl{(}\nabla g(\hat{T})^{\mathrm{T}}M\Bigr{)}\), which holds because \(\nabla g(\hat{T})\succcurlyeq 0\).
Using the bound on the Lipschitz constant of \(g\), we can then write
\[|g(\hat{T})-g(T)|\leq\frac{\|T\|_{2}}{2\lambda}\|\hat{T}-T\|_{2}.\]
This leads to an inequality on the Bures-Wasserstein distance:
\[\operatorname{BW}(\hat{T},T)^{2} \leq\operatorname{tr}(\hat{T}+T)-2g(\hat{T})\] \[=2(\operatorname{tr}(T)-g(T))+\operatorname{tr}(\hat{T}-T)-2(g (\hat{T})-g(T))\] \[\leq 2(\operatorname{tr}(T)-g(T))+\operatorname{tr}(\hat{T}-T)+ \frac{\|T\|_{2}}{\lambda}\|\hat{T}-T\|_{2},\]
which concludes the proof.
### Proof of Lemma 4
We have
\[\operatorname{tr}\Bigl{(}T+\lambda\mathrm{Id}-\Bigl{(}T^{2}+\lambda^{2} \mathrm{Id}\Bigr{)}^{1/2}\Bigr{)}=\sum_{m=1}^{\infty}\Bigl{(}\lambda_{m}+ \lambda-\sqrt{\lambda_{m}^{2}+\lambda^{2}}\Bigr{)}\]
We have the following inequality
\[\lambda_{m}+\lambda-\sqrt{\lambda_{m}^{2}+\lambda^{2}}\leq\min(\lambda_{m}, \lambda),\]
by using \(\sqrt{\lambda_{m}^{2}+\lambda^{2}}\geq\max(\lambda_{m},\lambda)\).
We have \(\lambda_{m}\leq c\,m^{-\alpha}\) for all \(m\). We split the sum at \(M=\lfloor(\lambda/c)^{-1/\alpha}\rfloor\) (so that \(c\,M^{-\alpha}\approx\lambda\)), and we have
\[\sum_{m=1}^{M}\Bigl{(}\lambda_{m}+\lambda-\sqrt{\lambda_{m}^{2}+ \lambda^{2}}\Bigr{)} \leq\sum_{m=1}^{M}\lambda=M\lambda,\] \[\sum_{m=M+1}^{\infty}\Bigl{(}\lambda_{m}+\lambda-\sqrt{\lambda_{ m}^{2}+\lambda^{2}}\Bigr{)} \leq\sum_{m=M+1}^{\infty}\lambda_{m}\leq c\sum_{m=M+1}^{\infty}m ^{-\alpha}\leq c\frac{M^{1-\alpha}}{\alpha-1},\]
Finally,
\[\sum_{m=1}^{\infty}\Bigl(\lambda_{m}+\lambda-\sqrt{\lambda_{m}^{2}+\lambda^{2}}\Bigr)\leq\Bigl(\frac{\lambda}{c}\Bigr)^{-1/\alpha}\lambda+\frac{c}{\alpha-1}\Bigl(\frac{\lambda}{c}\Bigr)^{1-1/\alpha}=\frac{c^{1/\alpha}}{1-1/\alpha}\,\lambda^{1-1/\alpha}.\]
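For concreteness, the bound of Lemma 4 can be checked numerically for a specific eigenvalue decay. The snippet below uses arbitrary illustrative values of \(c\), \(\alpha\) and \(\lambda\) chosen by us, and compares the truncated left-hand side with the right-hand side.

```python
import numpy as np

c, alpha, lam = 1.0, 2.0, 1e-3
m = np.arange(1, 10**6 + 1)
lam_m = c * m ** (-alpha)                          # eigenvalues lambda_m = c m^{-alpha}
lhs = np.sum(lam_m + lam - np.sqrt(lam_m**2 + lam**2))
rhs = c ** (1 / alpha) / (1 - 1 / alpha) * lam ** (1 - 1 / alpha)
print(lhs, rhs)                                    # lhs <= rhs, both scale like lam^{1 - 1/alpha}
```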
## Appendix B Proof of Theorem 2
In this section, expectations are taken with respect to both the weights \(W_{1},\ldots,W_{j}\) and the input \(x\). We remind that \(W_{j}=W_{j}^{\prime}\,\hat{A}_{j-1}\) with \(W_{j}^{\prime}\) having i.i.d. rows \(w_{ji}^{\prime}\sim\pi_{j}\). Let \(C_{j}=\mathbb{E}_{w_{j}\sim\pi_{j}}[w_{j}w_{j}^{\text{T}}]\) be the uncentered covariance of \(\pi_{j}\). Similarly to Appendix A, we assume without loss of generality that \(\rho\) is 1-Lipschitz and that \(\rho(0)=0\).
Let \(\tilde{\phi}_{j}=\rho W_{j}^{\prime}\,\phi_{j-1}\). Let \(A_{j}\in\mathcal{O}(d_{j})\) be an orthogonal matrix, to be adjusted later. We have by definition of \(\hat{A}_{j}\):
\[\sqrt{\mathbb{E}\big{[}\norm{\hat{A}_{j}\,\hat{\phi}_{j}(x)-\phi_ {j}(x)}^{2}\big{]}} \leq\sqrt{\mathbb{E}\big{[}\norm{A_{j}\,\hat{\phi}_{j}(x)-\phi_ {j}(x)}^{2}\big{]}}\] \[\leq\sqrt{\mathbb{E}\big{[}\norm{A_{j}\hat{\phi}_{j}(x)-A_{j} \tilde{\phi}_{j}(x)}^{2}\big{]}}+\sqrt{\mathbb{E}\big{[}\norm{A_{j}\tilde{ \phi}_{j}(x)-\phi_{j}(x)}^{2}\big{]}}, \tag{18}\]
where the last step follows by the triangle inequality. We now bound separately each term.
To bound the first term, we compute the Lipschitz constant of \(\rho W_{j}^{\prime}\) (in expectation). For any \(z\), \(z^{\prime}\in H_{j-1}\), we have:
\[\mathbb{E}\big{[}\norm{\rho W_{j}^{\prime}z-\rho W_{j}^{\prime}z^ {\prime}}^{2}\big{]} \leq\frac{1}{d_{j}}\mathbb{E}\Big{[}\norm{W_{j}^{\prime}(z-z^{ \prime})}^{2}\Big{]}\] \[=(z-z^{\prime})^{\text{T}}C_{j}(z-z^{\prime})\] \[\leq\norm{C_{j}}_{\infty}\norm{z-z^{\prime}}^{2},\]
where we have used the fact that \(\rho\) is 1-Lipschitz, and have made explicit the normalization factor of \(d_{j}^{-1}\). We can therefore bound the first term in eq. (18):
\[\sqrt{\mathbb{E}\big{[}\norm{A_{j}\hat{\phi}_{j}(x)-A_{j}\tilde{ \phi}_{j}(x)}^{2}\big{]}} =\sqrt{\mathbb{E}\big{[}\norm{(\rho W_{j}^{\prime})\hat{A}_{j-1} \hat{\phi}_{j-1}(x)-(\rho W_{j}^{\prime})\phi_{j-1}(x)}^{2}\big{]}}\] \[\leq\norm{C_{j}}_{\infty}^{1/2}\sqrt{\mathbb{E}\big{[}\norm{\hat{A} _{j-1}\hat{\phi}_{j-1}(x)-\phi_{j-1}(x)}^{2}\big{]}}.\]
We define \(A_{j}\), which was arbitrary, as the minimizer of the second term in eq. (18) over \(\mathcal{O}(d_{j})\). We can then apply Theorem 1 to \(z=\phi_{j-1}(x)\). Indeed, \(\mathbb{E}_{z}[\varphi_{j}(z)\,\varphi_{j}(z)^{\text{T}}]=\mathbb{E}_{x}[\phi_ {j}(x)\,\phi_{j}(x)^{\text{T}}]\) is trace-class with eigenvalues \(\lambda_{j,m}=O(m^{-\alpha_{j}})\), and \(\pi_{j}\) has bounded second- and fourth-order moments. Therefore, there exists a constant \(c_{j}\) such that
\[\sqrt{\mathbb{E}\big{[}\norm{A_{j}\tilde{\phi}_{j}(x)-\phi_{j}(x )}^{2}\big{]}} =\sqrt{\mathbb{E}\big{[}\norm{A_{j}\,\rho W_{j}^{\prime}\,\phi_{j-1 }(x)-\varphi_{j}\,\phi_{j-1}(x)}^{2}\big{]}}\] \[\leq\norm{C_{j}}_{\infty}^{1/2}\sqrt{\mathbb{E}[\norm{\phi_{j-1}( x)}^{2}]}\,c_{j}\,d_{j}^{-\eta_{j}/2},\]
with \(\eta_{j}=\frac{\alpha_{j}-1}{2(2\alpha_{j}-1)}\). We have made explicit the factors \(\norm{C_{j}}_{\infty}^{1/2}\sqrt{\mathbb{E}[\norm{\phi_{j-1}(x)}^{2}]}\) in the constant coming from Theorem 1 to simplify the expressions in the sequel. We can further
bound \(\sqrt{\mathbb{E}[\left\|\phi_{j-1}(x)\right\|^{2}]}\) by iteratively applying Lemma 1 from Appendix A:
\[\sqrt{\mathbb{E}[\left\|\phi_{j-1}(x)\right\|^{2}]}\leq\|C_{j-1}\|_{\infty}^{1/2 }\cdots\|C_{1}\|_{\infty}^{1/2}\sqrt{\mathbb{E}[\left\|x\right\|^{2}]}.\]
We thus have shown:
\[\sqrt{\mathbb{E}\big{[}\left\|\hat{A}_{j}\,\hat{\phi}_{j}(x)-\phi _{j}(x)\right\|^{2}\big{]}} \leq\|C_{j}\|_{\infty}^{1/2}\sqrt{\mathbb{E}\big{[}\left\|\hat{A} _{j-1}\hat{\phi}_{j-1}(x)-\phi_{j-1}(x)\right\|^{2}\big{]}}\] \[\quad+\|C_{j}\|_{\infty}^{1/2}\cdots\|C_{1}\|_{\infty}^{1/2} \sqrt{\mathbb{E}[\left\|x\right\|^{2}]}\,c_{j}\,d_{j}^{-\eta_{j}/2}.\]
It then follows by induction:
\[\sqrt{\mathbb{E}\big{[}\left\|\hat{A}_{j}\,\hat{\phi}_{j}(x)-\phi _{j}(x)\right\|^{2}\big{]}}\leq\|C_{j}\|_{\infty}^{1/2}\cdots\|C_{1}\|_{ \infty}^{1/2}\sqrt{\mathbb{E}[\left\|x\right\|^{2}]}\,\sum_{\ell=1}^{j}c_{\ell }\,d_{\ell}^{-\eta_{\ell}/2}.\]
We conclude like in the proof of Theorem 1:
\[\sqrt{\mathbb{E}\big[\left|\hat{f}(x)-f(x)\right|^{2}\big]}\leq\|f\|_{\mathcal{H}_{J}}\|C_{J}\|_{\infty}^{1/2}\cdots\|C_{1}\|_{\infty}^{1/2}\sqrt{\mathbb{E}[\left\|x\right\|^{2}]}\,\sum_{j=1}^{J}c_{j}\,d_{j}^{-\eta_{j}/2}.\]
We finally show the convergence of the kernels. Let \(\tilde{k}_{j}\) be the kernel defined by the feature map \(\tilde{\phi}_{j}\). Expectations are now also taken with respect to \(x^{\prime}\), an i.i.d. copy of \(x\). We have by the triangle inequality:
\[|\hat{k}_{j}(x,x^{\prime})-k_{j}(x,x^{\prime})|\leq|\hat{k}_{j}(x,x^{\prime}) -\tilde{k}_{j}(x,x^{\prime})|+|\tilde{k}_{j}(x,x^{\prime})-k_{j}(x,x^{\prime} )|. \tag{19}\]
For the first term on the right-hand side:
\[|\hat{k}_{j}(x,x^{\prime})-\tilde{k}_{j}(x,x^{\prime})| =|\langle\hat{\phi}_{j}(x),\hat{\phi}_{j}(x^{\prime})\rangle- \langle\tilde{\phi}_{j}(x),\tilde{\phi}_{j}(x^{\prime})\rangle|\] \[\leq|\langle\hat{\phi}_{j}(x),\hat{\phi}_{j}(x^{\prime})-\tilde{ \phi}_{j}(x^{\prime})\rangle+\langle\hat{\phi}_{j}(x)-\tilde{\phi}_{j}(x), \tilde{\phi}_{j}(x^{\prime})\rangle|\] \[\leq\|\hat{\phi}_{j}(x)\|\|\hat{\phi}_{j}(x^{\prime})-\tilde{ \phi}_{j}(x^{\prime})\|+\|\tilde{\phi}_{j}(x^{\prime})\|\|\hat{\phi}_{j}(x)- \tilde{\phi}_{j}(x)\|.\]
We thus have, because \(x,x^{\prime}\) are i.i.d.,
\[\sqrt{\mathbb{E}\big{[}\big{[}\hat{k}_{j}(x,x^{\prime})-\tilde{ k}_{j}(x,x^{\prime})\big{]}^{2}\big{]}}\] \[\leq\sqrt{\mathbb{E}\big{[}\big{[}\hat{\phi}_{j}(x)\big{]}^{2} \big{]}\mathbb{E}\big{[}\big{[}\hat{\phi}_{j}(x^{\prime})-\tilde{\phi}_{j}(x^ {\prime})\big{\|}^{2}\big{]}}+\sqrt{\mathbb{E}\big{[}\big{\|}\tilde{\phi}_{j}(x ^{\prime})\big{\|}^{2}\big{]}\mathbb{E}\big{[}\big{\|}\hat{\phi}_{j}(x)-\tilde {\phi}_{j}(x)\big{\|}^{2}\big{]}}.\]
Using the Lipschitz constant of \(\rho W_{j}^{\prime}\) in expectation as above:
\[\sqrt{\mathbb{E}\big{[}\big{[}\hat{k}_{j}(x,x^{\prime})-\tilde{ k}_{j}(x,x^{\prime})\big{]}^{2}\big{]}}\leq 2\|C_{j}\|_{\infty}\sqrt{ \mathbb{E}\big{[}\big{[}\|\phi_{j-1}(x)\big{\|}^{2}\big{]}\mathbb{E}\big{[} \big{[}\|\hat{\phi}_{j-1}(x)-\phi_{j-1}(x)\big{\|}^{2}\big{]}\big{]}}.\]
The factors on the right-hand side can be bounded using the above, to yield
\[\sqrt{\mathbb{E}\big{[}\big{[}\hat{k}_{j}(x,x^{\prime})-\tilde{ k}_{j}(x,x^{\prime})\big{]}^{2}\big{]}}\leq 2\|C_{j}\|_{\infty}\cdots\|C_{1}\|_{ \infty}\mathbb{E}[\|x\|^{2}]\sum_{\ell=1}^{j-1}c_{\ell}\,d_{\ell}^{-\eta_{\ell}/2}.\]
The second term on the right-hand side of eq. (19) can be bounded with Theorem 1 applied to \(z=\phi_{j-1}(x)\) as before:
\[\sqrt{\mathbb{E}\big[\big|\tilde{k}_{j}(x,x^{\prime})-k_{j}(x,x^{\prime})\big|^{2}\big]}\leq\kappa_{j}\|C_{j}\|_{\infty}\cdots\|C_{1}\|_{\infty}\mathbb{E}[\|x\|^{2}]\,d_{j}^{-1/2},\]
where we have again used the upper bound on \(\sqrt{\mathbb{E}[\|\phi_{j-1}(x)\|^{2}]}\).
We thus have shown that
\[\sqrt{\mathbb{E}\big[\big|\hat{k}_{j}(x,x^{\prime})-k_{j}(x,x^{\prime})\big|^{2}\big]}\leq\|C_{j}\|_{\infty}\cdots\|C_{1}\|_{\infty}\mathbb{E}[\|x\|^{2}]\,\Bigg(2\sum_{\ell=1}^{j-1}c_{\ell}\,d_{\ell}^{-\eta_{\ell}/2}+\kappa_{j}\,d_{j}^{-1/2}\Bigg).\]
## Appendix C Proof of Theorem 3
We prove the result by induction on the layer index \(j\). We initialize with \(\phi_{0}(x)=x\), which admits an orthogonal representation \(\sigma_{0}(g)=g\). Now suppose that \(\phi_{j-1}\) admits an orthogonal representation \(\sigma_{j-1}\). Let \(w\sim\pi_{j}\), we have that \(\sigma_{j-1}(g)^{\mathrm{T}}w\sim\pi_{j}\) for all \(g\in G\) by hypothesis. When \(\pi_{j}=\mathcal{N}(0,C_{j})\), this is equivalent to \(\sigma_{j-1}(g)^{\mathrm{T}}C_{j}\sigma_{j-1}(g)=C_{j}\), i.e. \(\sigma_{j-1}(g)C_{j}=C_{j}\sigma_{j-1}(g)\). We begin by showing that \(\phi_{j}\) then admits an orthogonal representation \(\sigma_{j}\).
We have
\[\phi_{j}(gx)=\varphi_{j}(\phi_{j-1}(gx))=\varphi_{j}(\sigma_{j-1} (g)\phi_{j-1}(x)).\]
For simplicity, here we define the feature map \(\varphi_{j}\) with \(\varphi_{j}(z)(w)=\rho(\langle z,w\rangle)\) with \(H_{j}=L^{2}(\pi_{j})\) (the result of the theorem does however not depend on this choice, as all feature maps are related by a rotation). Then,
\[\phi_{j}(gx)(w)=\rho\big{(}\langle\sigma_{j-1}(g)\phi_{j-1}(x),w \rangle\big{)}=\rho(\langle\phi_{j-1}(x),\sigma_{j-1}(g)^{\mathrm{T}}w\rangle).\]
For each \(g\in G\), we thus define the operator \(\sigma_{j}(g)\) by its action on \(\psi\in H_{j}\):
\[(\sigma_{j}(g)\psi)(w)=\psi(\sigma_{j-1}(g)^{\mathrm{T}}w).\]
It is obviously linear, and bounded as \(\big{\|}\sigma_{j}(g)\big{\|}_{\infty}=1\):
\[\big{\|}\sigma_{j}(g)\psi\big{\|}_{H_{j}}^{2}=\mathbb{E}_{w}\big{[} \psi(\sigma_{j-1}(g)^{\mathrm{T}}w)^{2}\Big{]}=\mathbb{E}_{w}\Big{[}\psi(w)^{ 2}\Big{]}=\|\psi\|_{H_{j}}^{2},\]
where we have used that \(\sigma_{j-1}(g)^{\mathrm{T}}w\sim w\). We further verify that \(\sigma_{j}(gg^{\prime})=\sigma_{j}(g)\sigma_{j}(g^{\prime})\):
\[(\sigma_{j}(gg^{\prime})\psi)(w) =\psi(\sigma_{j-1}(gg^{\prime})^{\mathrm{T}}w)=\psi(\sigma_{j-1} (g^{\prime})^{\mathrm{T}}\sigma_{j-1}(g)^{\mathrm{T}}w)\] \[=(\sigma_{j}(g^{\prime})\psi)(\sigma_{j-1}(g)^{\mathrm{T}}w)=( \sigma_{j}(g)\sigma_{j}(g^{\prime})\psi)(w).\]
We can thus write \(\phi_{j}(gx)=\sigma_{j}(g)\phi_{j}(x)\), which shows that \(\phi_{j}\) admits a representation.
It remains to show that \(\sigma_{j}(g)\) is orthogonal. The adjoint \(\sigma_{j}(g)^{\mathrm{T}}\) is equal to \(\sigma_{j}(g^{\mathrm{T}})\):
\[\Big{\langle}\sigma_{j}(g)\psi,\psi^{\prime}\Big{\rangle}_{H_{j}} =\mathbb{E}_{w}\Big{[}\psi(\sigma_{j-1}(g)^{\mathrm{T}}w)\psi^{ \prime}(w)\Big{]}=\mathbb{E}_{w}\Big{[}\psi(w)\psi^{\prime}(\sigma_{j-1}(g)w) \Big{]}=\Big{\langle}\psi,\sigma_{j}(g^{\mathrm{T}})\psi^{\prime}\Big{\rangle} _{H_{j}},\]
where we have used \(\sigma_{j-1}(g)^{\mathrm{T}}=\sigma_{j-1}(g^{\mathrm{T}})\) since \(\sigma_{j-1}\) is a group homomorphism. It is then straightforward that \(\sigma_{j}(g)\sigma_{j}(g)^{\mathrm{T}}=\sigma_{j}(g)^{\mathrm{T}}\sigma_{j}(g) =\mathrm{Id}\) by using again the fact that \(\sigma_{j}\) is a group homomorphism. This proves that \(\sigma_{j}(g)\in O(H_{j})\).
We finally show that the rainbow kernel \(k_{j}\) is invariant. We have
\[k_{j}(gx,gx^{\prime}) =\left\langle\phi_{j}(gx),\phi_{j}(gx^{\prime})\right\rangle_{H_ {j}}=\left\langle\sigma_{j}(g)\phi_{j}(x),\sigma_{j}(g)\phi_{j}(x^{\prime}) \right\rangle_{H_{j}}\] \[=\left\langle\phi_{j}(x),\phi_{j}(x^{\prime})\right\rangle_{H_{j }}=k_{j}(x,x^{\prime}),\]
which concludes the proof.
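The invariance of the kernel can also be illustrated numerically for a single random-feature layer with Gaussian weights and \(C_{1}=\mathrm{Id}\) (which commutes with every orthogonal \(g\)). In the NumPy sketch below, a very wide layer serves as a proxy for the limit kernel \(k_{1}\); the sizes and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
d0, d1 = 8, 200_000
x, xp = rng.standard_normal(d0), rng.standard_normal(d0)
Q, _ = np.linalg.qr(rng.standard_normal((d0, d0)))           # g: a random orthogonal transformation
W = rng.standard_normal((d1, d0))                            # pi = N(0, Id) is invariant under g
relu = lambda t: np.maximum(t, 0.0)
k = lambda a, b: np.mean(relu(W @ a) * relu(W @ b))          # wide-layer proxy for k_1(a, b)
print(k(x, xp), k(Q @ x, Q @ xp))                            # approximately equal: k_1(gx, gx') = k_1(x, x')
```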
## Appendix D Experimental details
**Normalization.** In all the networks considered in this paper, after each non-linearity \(\rho\), a 2D batch-normalization layer (Ioffe and Szegedy, 2015) without learned affine parameters sets the per-channel mean and variance across space and data samples to 0 and 1 respectively. After training, we multiply the learned standard deviations by \(1/\sqrt{d_{j}}\) and the learned weight matrices \(L_{j+1}\) by \(\sqrt{d_{j}}\) as per our normalization conventions. This ensures that \(\mathbb{E}_{x}[\hat{\phi}_{j}(x)]=0\) and \(\mathbb{E}_{x}[\|\hat{\phi}_{j}(x)\|^{2}]=1\), which enables more direct comparisons between networks of different sizes. When evaluating activation convergence for ResNet-18, we explicitly compute these expectations on the training set and standardize the activations \(\hat{\phi}_{j}(x)\) after training for additional numerical stability. When sampling weights from the Gaussian rainbow model, the mean and variance parameters of the normalization layers are computed on the training set before alignment and sampling of the next layer.
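As an illustration of the last point, here is a hedged PyTorch sketch of how the training-set statistics \(\mathbb{E}_{x}[\hat{\phi}_{j}(x)]\) and \(\mathbb{E}_{x}[\|\hat{\phi}_{j}(x)\|^{2}]\) could be estimated and used to standardize activations after training. The helper names and the data-loader interface (batches of `(x, label)` pairs) are our assumptions, not code from the paper.

```python
import torch

@torch.no_grad()
def activation_statistics(feature_extractor, loader, device="cpu"):
    """Estimate E_x[phi_hat_j(x)] and E_x[||phi_hat_j(x)||^2] over the training set."""
    s1, s2, n = 0.0, 0.0, 0
    for x, _ in loader:
        a = feature_extractor(x.to(device)).flatten(1)   # (batch, dim of phi_hat_j)
        s1 = s1 + a.sum(dim=0)
        s2 = s2 + (a ** 2).sum()
        n += a.shape[0]
    return s1 / n, s2 / n                                # mean vector, mean squared norm

def standardize(a, mean, sqnorm):
    # Center and rescale so that E_x[phi_hat_j] = 0 and E_x[||phi_hat_j||^2] = 1.
    a = a.flatten(1) - mean
    return a / (sqnorm - mean.pow(2).sum()).clamp_min(1e-12).sqrt()
```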
**Scattering networks.** We use the learned scattering architecture of Guth et al. (2022), with several simplifications based on the setting.
The prior operator \(P_{j}\) performs a convolution of every channel of its input with predefined filters: one real low-pass Gabor filter \(\phi\) (a Gaussian window) and 4 oriented Morlet wavelets \(\psi_{\theta}\) (complex exponentials localized with a Gaussian window). \(P_{j}\) also implements a subsampling by a factor 2 on even layer indices \(j\), with a slight modification of the filters to compute wavelet coefficients at intermediate scales. See Guth et al. (2022, Appendix G) for a precise definition of the filters. The learned weight matrices \(L_{j}\) are real for CIFAR-10 experiments, and complex for ImageNet experiments.
We impose a commutation property between \(P_{j}\) and \(L_{j}\), so that we implement \(W_{j}=P_{j}\,L_{j}\). It is equivalent to having \(W_{j}=L_{j}\,P_{j}\), with the constraint that \(L_{j}\) is applied pointwise with respect to the channels created by \(P_{j}\). The non-linearity \(\rho\) is a complex modulus, which is only applied on the high-frequency channels. A scattering layer writes:
\[\rho W_{j}z=\left(L_{j}z*\phi,\left|L_{j}z*\psi_{\theta}\right|\right)_{\theta}.\]
The input (and therefore output) of \(L_{j}\) are then both real when \(L_{j}\) is real.
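The scattering layer just described can be sketched in PyTorch as follows. The fixed filters below are random placeholders: the actual low-pass Gabor filter \(\phi\), the Morlet wavelets \(\psi_{\theta}\), and the subsampling on even layer indices are defined in Guth et al. (2022, Appendix G), so this is only a structural illustration of \(\rho W_{j}z=(L_{j}z*\phi,|L_{j}z*\psi_{\theta}|)_{\theta}\), with the complex wavelets represented by their real and imaginary parts.

```python
import torch
import torch.nn.functional as F

def scattering_layer(z, L_weight, phi, psi_real, psi_imag):
    """z: (B, c_in, H, W) real; L_weight: (c_out, c_in, 1, 1) learned mixing L_j;
    phi: (1, 1, s, s) low-pass; psi_real/psi_imag: (n_theta, 1, s, s) wavelet parts (s odd)."""
    u = F.conv2d(z, L_weight)                                   # channel mixing L_j z
    c, pad = u.shape[1], phi.shape[-1] // 2
    low = F.conv2d(u, phi.repeat(c, 1, 1, 1), padding=pad, groups=c)       # L_j z * phi
    hr = F.conv2d(u, psi_real.repeat(c, 1, 1, 1), padding=pad, groups=c)
    hi = F.conv2d(u, psi_imag.repeat(c, 1, 1, 1), padding=pad, groups=c)
    high = torch.sqrt(hr**2 + hi**2 + 1e-12)                    # |L_j z * psi_theta|: the modulus rho
    return torch.cat([low, high], dim=1)                        # 1 low-pass + n_theta high-pass per channel

# Illustrative shapes only: 4 oriented wavelets of size 7x7, random placeholder filters.
z = torch.randn(2, 16, 32, 32)
L_w = torch.randn(32, 16, 1, 1)
phi = torch.randn(1, 1, 7, 7); psi_r = torch.randn(4, 1, 7, 7); psi_i = torch.randn(4, 1, 7, 7)
out = scattering_layer(z, L_w, phi, psi_r, psi_i)               # shape (2, 32 * 5, 32, 32)
```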
We apply a pre-processing \(\rho P_{0}\) to the input \(x\) before feeding it to the network. The fully-connected classifier \(\theta\) is preceded with a learned \(1\times 1\) convolution \(L_{J+1}\) which reduces the channel dimension. The learned scattering architecture thus writes:
\[\hat{f}(x)=\theta^{\mathrm{T}}L_{J+1}\,\rho P_{J}L_{J}\,\cdots\,\rho P_{1}L_{1} \,\rho P_{0}x.\]
The number of output channels of \(L_{j}\) is given in Table 1.
As explained above, we include a 2D batch-normalization layer without learned affine parameters after each non-linearity \(\rho\), as well as before the classifier \(\theta\). Furthermore, after each operator \(L_{j}\), a divisive normalization sets the norm along channels at each spatial location to 1 (except in Figures 4, 5 and 10). There are no learned biases in the architecture beyond the unsupervised channel means.
The non-linearity \(\rho\) includes a skip-connection in Figures 5 and 9, in which case a scattering layer computes
\[\rho W_{j}z=\big{(}L_{j}z*\phi,L_{j}z*\psi_{\theta},\big{|}L_{j}z*\phi\big{|}, \big{|}L_{j}z*\psi_{\theta}\big{|}\big{)}_{\theta}.\]
In this case, the activations \(\phi_{j}(x)\) are complex. The rainbow model extends to this case by adding complex conjugates at appropriate places. For instance, the alignment matrices become complex unitary operators when both activations and weights are complex.
**ResNet.** \(P_{j}\) is the patch-extraction operator defined in Section 2.3. The non-linearity \(\rho\) is a ReLU. We have trained a slightly different ResNet with no bias parameters. In addition, the batch-normalization layers have no learned affine parameters, and are placed after the non-linearity to be consistent with our normalization conventions. The top-5 test accuracy on ImageNet remains at 89% like the original model.
**Training.** Network weights are initialized with i.i.d. samples from a uniform distribution (Glorot and Bengio, 2010) with so-called Kaiming variance scaling (He et al., 2015), which is the default in the PyTorch library (Paszke et al., 2019). Despite the uniform initialization, weight marginals become Gaussian after a single training epoch. Scattering networks are trained for 150 epochs with an initial learning rate of 0.01 which is divided by 10 every 50 epochs, with a batch size of 128. ResNets are trained for 90 epochs with an initial learning rate of 0.1 which is divided by 10 every 30 epochs, with a batch size of 256. We use the optimizer SGD with a momentum of 0.9 and a weight decay of \(10^{-4}\) (except for Figures 4 and 10 where weight decay has been disabled). We use classical data augmentations: horizontal flips and random crops for CIFAR, random resized crops of size 224 and horizontal flips for ImageNet. The classification error on the ImageNet validation set is computed on a single center crop of size 224.
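A minimal PyTorch sketch of the optimizer and learning-rate schedule described above (the function name and the architecture flag are our own, illustrative choices):

```python
import torch

def make_optimizer(model, arch="resnet"):
    # ResNets: 90 epochs, lr 0.1 divided by 10 every 30 epochs; scattering networks:
    # 150 epochs, lr 0.01 divided by 10 every 50 epochs; SGD, momentum 0.9, weight decay 1e-4.
    lr, milestones = (0.1, [30, 60]) if arch == "resnet" else (0.01, [50, 100])
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=1e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=milestones, gamma=0.1)
    return opt, sched
```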
| | \(j\) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **CIFAR-10 (\(J=3\))** | \(d_{j}\) | 64 | 128 | 256 | 512 | - | - | - | - | - | - | - |
| **CIFAR-10 (\(J=7\))** | \(d_{j}\) | 64 | 128 | 256 | 512 | 512 | 512 | 512 | 512 | - | - | - |
| **ImageNet (\(J=10\))** | \(d_{j}\) | 32 | 64 | 64 | 128 | 256 | 512 | 512 | 512 | 512 | | 256 |

Table 1: Number \(d_{j}\) of output channels of \(L_{j}\), \(1\leq j\leq J+1\). The total number of projectors is \(J+1=4\) or \(J+1=8\) for CIFAR-10 and \(J+1=11\) for ImageNet.
**Activation covariances.** The covariance of the activations \(\hat{\phi}_{j}(x)\) is computed over channels and averaged across space. Precisely, we compute
\[\mathbb{E}_{x}\biggl{[}\sum_{u}\hat{\phi}_{j}(x)[u]\,\hat{\phi}_{j}(x)[u]^{\rm T }\biggr{]},\]
where \(\hat{\phi}_{j}(x)[u]\) is a channel vector of dimension \(d^{\prime}_{j}\) at spatial location \(u\). It yields a matrix of dimension \(d^{\prime}_{j}\times d^{\prime}_{j}\). For scattering networks, the \(d^{\prime}_{j}\) channels correspond to the \(d_{j}\) output channels of \(L_{j}\) times the 5 scattering channels computed by \(P_{j}\) (times 2 when \(\rho\) includes a skip-connection). For ResNet, \(\hat{\phi}_{j}(x)[u]\) is a patch of size \(s_{j}\times s_{j}\) centered at \(u\) due to the operator \(P_{j}\). \(d_{j}\) is thus equal to the number \(d^{\prime}_{j}\) of channels of \(\hat{\phi}_{j}\) multiplied by \(s_{j}^{2}\).
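A direct way to compute this \(d^{\prime}_{j}\times d^{\prime}_{j}\) matrix is with an `einsum` over batch and space. The PyTorch sketch below assumes a hypothetical feature extractor returning \(\hat{\phi}_{j}(x)\) of shape `(batch, d_j', H, W)` and a loader of `(x, label)` batches; both interfaces are our assumptions.

```python
import torch

@torch.no_grad()
def activation_covariance(extract_phi_j, loader, device="cpu"):
    """Uncentered channel covariance  E_x[ sum_u phi_j(x)[u] phi_j(x)[u]^T ]  of shape (d_j', d_j')."""
    cov, n = None, 0
    for x, _ in loader:
        a = extract_phi_j(x.to(device)).flatten(2)          # (B, d_j', H*W)
        c = torch.einsum("biu,bju->ij", a, a)               # sum over batch and spatial positions
        cov = c if cov is None else cov + c
        n += a.shape[0]
    return cov / n                                          # empirical expectation over x
```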
|
2307.16346 | $p$-torsion for unramified Artin--Schreier covers of curves | Let $Y\to X$ be an unramified Galois cover of curves over a perfect field $k$
of characteristic $p>0$ with $\mathrm{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}$,
and let $J_X$ and $J_Y$ be the Jacobians of $X$ and $Y$ respectively. We
consider the $p$-torsion subgroup schemes $J_X[p]$ and $J_Y[p]$, analyze the
Galois-module structure of $J_Y[p]$, and find restrictions this structure
imposes on $J_Y[p]$ (for example, as manifested in its Ekedahl--Oort type)
taking $J_X[p]$ as given. | Bryden Cais, Douglas Ulmer | 2023-07-30T23:52:25Z | http://arxiv.org/abs/2307.16346v2 | # \(p\)-torsion for unramified Artin-Schreier covers of curves
###### Abstract.
Let \(Y\to X\) be an unramified Galois cover of curves over a perfect field \(k\) of characteristic \(p>0\) with \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\), and let \(J_{X}\) and \(J_{Y}\) be the Jacobians of \(X\) and \(Y\) respectively. We consider the \(p\)-torsion subgroup schemes \(J_{X}[p]\) and \(J_{Y}[p]\). The three main themes are: the Galois-module structure of \(J_{Y}[p]\); restrictions this structure imposes on \(J_{Y}[p]\) (for example, as manifested in its Ekedahl-Oort type) taking \(J_{X}[p]\) as given; and methods for explicitly computing the group schemes \(J_{X}[p]\) and \(J_{Y}[p]\).
Key words and phrases: Curve, finite field, unramified cover, Jacobian, \(p\)-torsion, group scheme, de Rham cohomology, Dieudonne module, Frobenius, Verschiebung, Ekedahl-Oort type
_Push forward by_ \(\pi\) _induces a canonical isomorphism_ \(J_{Y}[p]_{m}/\delta\cong J_{X}[p]_{m}\) _which identifies the exact sequences_ (1.7) _and_ (1.4)_._
3. _We have equalities_ \[\delta^{i}J_{Y}[p]_{ll}=J_{Y}[p]_{ll}[\delta^{p-i}]\] _for_ \(i=0,\ldots,p\)_, as well as canonical isomorphisms_ \[\frac{\delta^{i}J_{Y}[p]_{ll}}{\delta^{i+1}J_{Y}[p]_{ll}}\cong\frac{J_{Y}[p]_{ ll}[\delta^{p-i}]}{J_{Y}[p]_{ll}[\delta^{p-i-1}]}\cong J_{X}[p]_{ll}\quad\text{ for }i=0,\ldots,p-1.\]
As we will see in Section 4, the asymmetry between the kernels and cokernels of \(\delta\) (i.e., (1.5) vs (1.6) and (1.7) vs (1.8)) is significant. Indeed, although \(J_{Y}[p]_{\ell t}[\delta]\) and \(J_{Y}[p]_{\ell t}/\delta\) have the same order and are closely related, they are not in general isomorphic. Similarly for \(J_{Y}[p]_{m}[\delta]\) and \(J_{Y}[p]_{m}/\delta\). Readers are referred to Figures 1 and 2 in Section 4 for pictorial versions of Theorems 1.3 and 1.6 in terms of Dieudonne modules. Among other things, the figures show how the two filtrations (by images and kernels of \(\delta\)) interact.
When \(k=\overline{k}\), we may recover the isomorphisms (1.1) and (1.2) from parts (1) and (2) of the theorem using the fact that the category of \(p\)-torsion etale (resp. multiplicative) group schemes over \(k\) is semi-simple with unique simple object \(\mathbb{Z}/p\mathbb{Z}\) (resp. \(\mu_{p}\)). The situation for the local-local part is much more complicated even when \(k\) is algebraically closed and will be discussed in more detail below.
We now consider a certain freeness property of \(p\)-torsion group schemes with \(G\) action.
**Definition-Lemma 1.4**.: _Let \(\mathcal{G}\) be a finite commutative group scheme over \(k\) killed by \(p\) and equipped with an action of \(G=\mathbb{Z}/p\mathbb{Z}\), i.e., a group scheme equipped with the structure of a module over \(\mathbb{F}_{p}[G]\). We say \(\mathcal{G}\) is \(G\)-free if the following equivalent conditions are satisfied:_
1. _the Dieudonne module_ \(M(\mathcal{G})\) _is free over the group ring_ \(k[G]\)__
2. \(\mathcal{G}[\delta]/\delta^{p-1}\mathcal{G}=0\)__
3. \(\mathcal{G}[\delta^{p-1}]/\delta\mathcal{G}=0\)__
4. \(\delta^{p-1}\) _induces an isomorphism_ \(\mathcal{G}/\delta\stackrel{{\sim}}{{\rightarrow}}\mathcal{G}[\delta]\)__
See Lemma 3.1 for the equivalence of the various conditions in this definition.
**Corollary 1.5**.: \(J_{Y}[p]_{ll}\) _is \(G\)-free._
Proof.: Conditions (2), (3), and (4) in Definition 1.4 follow immediately from part (3) of Theorem 1.3. The corollary also follows from Theorem 1.6 just below.
There is an elegant, uniform variant of Theorem 1.3 provided that \(X\) has a \(k\)-rational point.
**Theorem 1.6**.: _Suppose that \(X\) has a \(k\)-rational point \(S\), and let \(T=\pi^{-1}(S)\) viewed as a closed subscheme of \(Y\). Then there is a self-dual \(BT_{1}\) group scheme \(\mathcal{H}\) equipped with the structure of a module over \(\mathbb{F}_{p}[G]\) with the following properties:_
1. \(\mathcal{H}\) _is_ \(G\)_-free in the sense of Definition-Lemma_ 1.4_._
2. _There are equalities_ \(\delta^{i}\mathcal{H}=\mathcal{H}[\delta^{p-i}]\) _for_ \(i=1,\ldots,p\)_, as well as canonical isomorphisms_ \[\frac{\delta^{i}\mathcal{H}}{\delta^{i+1}\mathcal{H}}\cong\frac{\mathcal{H}[ \delta^{p-i}]}{\mathcal{H}[\delta^{p-i-1}]}\cong J_{X}[p]\quad\text{for }i=0,\ldots,p-1.\]
3. _There are canonical exact sequences_ \[0\to J_{Y}[p]_{\ell t}\to\mathcal{H}_{\ell t}\to\operatorname{Res}_{T/S}\mathbb{Z}/p\mathbb{Z}\to\mathbb{Z}/p\mathbb{Z}\to 0,\] _and_ \[0\to\mu_{p}\to\operatorname{Res}_{T/S}\mu_{p}\to\mathcal{H}_{m}\to J_{Y}[p]_{m}\to 0,\] _and a canonical isomorphism_ \[\mathcal{H}_{ll}\cong J_{Y}[p]_{ll}.\]
_Remarks 1.7_.:
1. The restriction of scalars \(\operatorname{Res}_{T/S}\) and the adjunction morphisms to and from it will be defined in Section 5.
2. The theorem says that a certain extension \(\mathcal{H}\) of \(J_{Y}[p]\) is \(G\)-free with minimal subquotients isomorphic to \(J_{X}[p]\), and we recover again that \(J_{Y}[p]_{ll}\) is \(G\)-free.
3. The group scheme \(\mathcal{H}\) depends on the choice of \(S\) in an interesting way, see Section 7.7.
4. See Remark 4.5 for another version where \(S\) is allowed to be any effective divisor.
5. See Remark 4.4 for an interpretation of the exact sequences in part (3) as a 3-step filtration on \(\mathcal{H}\) which for \(i=1,\ldots,p-2\) induces (via the isomorphisms in part (2)) the 3-step filtration on \(J_{X}[p]\) implicit in Definition 1.1.
Theorems 1.3 and 1.6 identify the minimal subquotients of \(J_{Y}[p]\) and \(\mathcal{H}\) as \(\mathbb{F}_{p}[G]\)-modules, and one might hope to "reassemble" the group schemes from this information. However, the category of \(BT_{1}\) group schemes is not well behaved with respect to extensions (even when \(k\) is algebraically closed), so even taking \(J_{X}[p]\) as known, the structure of the repeated extensions \(J_{Y}[p]\) and \(\mathcal{H}\) can be quite intricate. See Section 6 for more details.
### Analysis of the etale part of \(J_{Y}[p]\)
We now consider freeness and related splitting questions for the etale part of \(J_{Y}[p]\). Similar results hold for the multiplicative part by Cartier duality, and we leave it to the reader to make them explicit.
Note that equation (1.1) implies that when \(k\) is algebraically closed, \(J_{Y}[p]_{\epsilon t}\) is the direct sum of \(\mathbb{Z}/p\mathbb{Z}\) and a \(G\)-free group scheme. The following result gives criteria for the same structural result to hold over a general \(k\).
**Proposition 1.8**.:
1. _The exact sequence (_1.5_) splits if and only if there is an exact sequence of_ \(k\)_-group schemes_ \[0\to\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\epsilon t}\to\mathcal{Q}\to 0\] _where_ \(\mathcal{Q}\) _is_ \(G\)_-free._
2. _The exact sequence (_1.6_) splits if and only if there is an exact sequence of_ \(k\)_-group schemes_ \[0\to\mathcal{K}\to J_{Y}[p]_{\epsilon t}\to\mathbb{Z}/p\mathbb{Z}\to 0\] _where_ \(\mathcal{K}\) _is_ \(G\)_-free._
3. _The exact sequences (_1.5_) and (_1.6_) both split if and only if_ \(J_{Y}[p]_{\epsilon t}\) _is the direct sum of_ \(\mathbb{Z}/p\mathbb{Z}\) _and a_ \(G\)_-free group scheme._
The proof will be given in Section 5. We will see in Example 7.1 that (1.5) and (1.6) may or may not split, and splitting of one does not in general imply splitting of the other; similarly for (1.7) and (1.8).
For a commutative \(p\)-torsion group scheme \(\mathcal{G}\) over \(k\), define the _arithmetic \(p\)-rank of \(\mathcal{G}\)_, denoted \(\nu(\mathcal{G})\), by
\[p^{\nu(\mathcal{G})}=|\mathcal{G}(k)|=|\mathcal{G}_{\ell t}(k)|.\]
_where \(\mathcal{G}\) is \(G\)-free of rank 1 and \(\nu(\mathcal{G})=0\). There is a nontrivial extension \(k^{\prime}\) of \(k\) of degree dividing \(p-1\) such that \(|\mathcal{G}(k^{\prime})|=p^{\mu}\) where \(1\leq\mu\leq p\), and there is an extension \(k^{\prime\prime}\) of \(k\) of degree dividing \(p\) such that \(\mathcal{G}\cong(\mathcal{G}^{\prime})^{p}\) over \(k^{\prime\prime}\) where \(\mathcal{G}^{\prime}\) has rank 1 and \(\nu(\mathcal{G}^{\prime})=0\). Over \(k^{\prime}k^{\prime\prime}\), \(J_{Y}[p]_{\ell t}\) is completely split._
* \(\nu_{X}=1\) _and_ \(\nu_{Y}>1\)_. In this case,_ \(\nu_{Y}<p+1\) _(so_ \(J_{Y}[p]_{\ell t}\) _is not completely split over_ \(k\)_), and_ \(J_{Y}[p]_{\ell t}\) _is completely split over the extension of_ \(k\) _of degree_ \(p\)_._
* \(\nu_{X}=2\)_. In this case,_ \(2\leq\nu_{Y}\leq p+1\) _and_ \(J_{Y}[p]_{\ell t}\) _is completely split over an extension of degree dividing_ \(p\)_._
_If \(p=2\), then exactly one of the following holds:_
* \(\nu_{X}=1\)_. In this case,_ \(1\leq\nu_{Y}<3\) _(so_ \(J_{Y}[p]_{\ell t}\) _is not completely split over_ \(k\)_), and_ \(J_{Y}[p]_{\ell t}\) _splits completely over an extension of_ \(k\) _of degree dividing 4._
* \(\nu_{X}=2\)_. In this case,_ \(2\leq\nu_{Y}\leq 3\)_, there is an exact sequence_
\[0\to\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\ell t}\to\mathcal{Q}\to 0\]
_where_ \(\nu(\mathcal{Q})\geq 1\)_, and_ \(J_{Y}[p]_{\ell t}\) _splits completely over an extension of_ \(k\) _of degree dividing_ \(p\)_._
See the proof in Section 7 for additional information on the structure of \(J_{Y}[p]_{\ell t}\) in each case.
### Analysis of the local-local part of \(J_{Y}[p]\)
For the local-local part, it seems hopeless to give a full analysis, even when \(k\) is algebraically closed. (See Section 6 for comments on some of the difficulties.) Nevertheless, certain cases can be described rather explicitly. Recall that the \(a\)-number of a finite group scheme \(\mathcal{J}\) over \(k\) is the largest integer \(a\) such that there is an injection \(\alpha_{p}^{a}\hookrightarrow\mathcal{J}\). (When \(\mathcal{J}\) is local-local, the \(a\)-number also has an interpretation as the number of generators and relations of the Dieudonne module of \(\mathcal{J}\).) Write \(a_{X}\) and \(a_{Y}\) for the \(a\)-numbers of \(J_{X}[p]\) and \(J_{Y}[p]\) respectively. Booher and Cais [1, §6E] have observed that in our context, \(a_{X}\leq a_{Y}\leq pa_{X}\). We can improve and refine this in some cases.
**Theorem 1.12**.: _Suppose that \(p>2\), \(k\) is algebraically closed, and \(f_{X}=g_{X}-1\). (This implies that \(a_{X}=1\).) Then \(a_{Y}\in\{2,4,\ldots,p-1,p\}\). Moreover the local-local part \(\mathcal{L}\) of \(J_{Y}[p]\) has an explicit description in terms of generators and relations depending only on \(a_{Y}\)._
The precise description in terms of generators and relations is given using Dieudonne modules in Theorem 8.6. Machine computation suggests that when \(p=2\), we should have \(a_{Y}=2\), but we are currently not able to prove this.
In certain cases, we can use restrictions on \(J_{X}[p]_{ll}\) to place strong restrictions on \(J_{Y}[p]_{ll}\). This is most easily stated in terms of Ekedahl-Oort structures. (See [1] or [2] for background.)
**Theorem 1.13**.: _Suppose that \(J_{X}[p]_{ll}\) is superspecial, i.e., \(J_{X}[p]_{ll}\cong E_{ss}[p]^{h}\) where \(h=g_{X}-f_{X}\) and \(E_{ss}\) is a supersingular elliptic curve over \(k\). Then the Ekedahl-Oort structure of \(J_{Y}[p]_{ll}\) starts with \(h\) zeroes, i.e., it has the form \([0,0,\ldots,0,\psi_{h+1},\ldots,\psi_{ph}]\). The Ekedahl-Oort structure of \(J_{Y}[p]\) has the form_
\[[[1,2,\ldots,f_{Y},f_{Y},\ldots,f_{Y},f_{Y}+\psi_{h+1},\ldots,f_{Y}+\psi_{ph}].\]
(_In the notation of [2, 7.2], this is \([\bigtriangledown^{f_{Y}},\rightarrow^{h}\ldots]\)._)
The theorem reduces the number of possibilities for \(J_{X}[p]_{ll}\) from \(2^{ph}\) to \(2^{(p-1)h}\).
**Tools for computing \(H^{1}_{dR}(Y)\) as a Dieudonne module.** Another theme in the paper is to develop tools for explicit computation of \(J_{Y}[p]\) which are suitable for machine implementation. It turns out (see Section 9) that the cover \(\pi:Y\to X\) is determined by a class in \(H^{1}(X,\mathcal{O}_{X})\) plus a small amount of additional data, and this presentation can be used (see Section 10) to give efficient algorithms for computing \(H^{1}_{dR}(Y)\) as a \(\mathbb{D}[G]\)-module. The new observation is that we may compute \(H^{1}_{dR}(Y)\) (with its \(\mathbb{D}_{k}[G]\)-module structure) purely in terms of Riemann-Roch spaces (of bounded dimension) _on the base curve \(X\)_. See Proposition 10.7 for a precise statement. Throughout the paper, we report on numerous examples and counterexamples computed in Magma, using code built upon [20].
### Outline of the paper.
We now describe the main outlines of the paper. By a theorem of Oda, the Dieudonne module of \(J_{Y}[p]\) is isomorphic to the de Rham cohomology \(H^{1}_{dR}(Y)\), so we are lead to study the action of \(G\) on this and related cohomology groups. After discussing preliminaries on Dieudonne modules and \(k[G]\)-modules in Sections 2 and 3, we will prove crucial results of Chevalley-Weil type on the \(G\)-module structure of various flat and coherent cohomology groups in Section 4 and deduce the Dieudonne module version of Theorems 1.3 and 1.6. In particular, we show that \(H^{1}_{dR}(Y)\) is close to being a free module over the group ring \(k[G]\), and we control the Dieudonne structure of the "errors". See Propositions 4.1 and 4.3 for the precise statements. The translation from modules to groups is given in Section 5.
In Section 6, we review some of the difficulties in recovering \(J_{Y}[p]\) from its associated graded as given in Theorem 1.3.
Our results on the etale part of \(J_{Y}[p]\) (Proposition 1.8 and Theorems 1.9, 1.10, and 1.11) are proven in Section 7. Our results on the local-local part of \(J_{Y}[p]\) (Theorems 1.12, and 1.13) are proven in Section 8. All these results come from a careful study of the restrictions that \(k[G]\)-freeness places on a Dieudonne module.
In Sections 9 and 10, we analyze the geometry of unramified Artin-Schreier covers and develop a method for explicitly calculating \(H^{1}_{dR}(Y)\) as a Dieudonne module. There is an intentional (small) overlap in the expositions of Sections 4 and 9 whose purpose is to make it possible to read Sections 9 and 10 independently of Sections 4 through 8.
### Standing notation.
We fix the following notation and hypotheses for the rest of the paper: \(k\) is a perfect field of characteristic \(p>0\) with algebraic closure \(\overline{k}\); \(X\) is a smooth, proper, geometrically irreducible curve of genus \(g_{X}\) over \(k\); \(\pi:Y\to X\) is an unramified Galois covering with group \(\mathbb{Z}/p\mathbb{Z}\) such that \(Y\) has genus \(g_{Y}\) and \(H^{0}(Y,\mathcal{O}_{Y})=k\); and we fix an isomorphism \(G:=\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\).
## 2. Dieudonne theory
We will use Dieudonne theory to study \(J_{X}[p]\) and \(J_{Y}[p]\). We refer to [14] for the basic facts recalled below.
Let \(\mathbb{D}_{k}\) be the associative \(k\)-algebra generated by symbols \(F\) and \(V\) with relations
\[FV=VF=0,\qquad F\alpha=\alpha^{p}F,\quad\text{and}\quad\alpha V=V\alpha^{p}\]
for all \(\alpha\in k\).
A \(p\)_-group scheme over \(k\)_ is by definition a finite commutative group scheme over \(k\) which is annihilated by \(p\).
Recall that Dieudonne theory gives a contravariant equivalence between the category of \(p\)-group schemes over \(k\) and the category of left \(\mathbb{D}_{k}\)-modules of finite length. If \(G\) is a \(p\)-group scheme over \(k\), we write \(M(G)\) for the corresponding \(\mathbb{D}_{k}\)-module.
By definition, a \(\mathbb{D}_{k}\)-module \(M\) is _self-dual_ if it admits a non-degenerate pairing of \(\mathbb{D}_{k}\)-modules, i.e., a non-degenerate \(k\)-bilinear pairing \(\langle\cdot,\cdot\rangle\) with the properties that
\[\langle Fx,y\rangle=\langle x,Vy\rangle^{p}\quad\text{and}\quad\langle Vx,y \rangle=\langle x,Fy\rangle^{1/p} \tag{2.1}\]
for all \(x,y\in M\). By definition, a \(\mathbb{D}_{k}\)-module \(M\) is a \(BT_{1}\)_module_ if \(\operatorname{Ker}(F)=\operatorname{Im}(V)\) (or equivalently, if \(\operatorname{Im}(F)=\operatorname{Ker}(V)\)).
By definition, a \(p\)-group scheme \(G\) over \(k\) is _self-dual_ (resp. a \(BT_{1}\)_group scheme_) if \(M(G)\) is self-dual (resp. a \(BT_{1}\) module).
Any finite-dimensional \(\mathbb{D}_{k}\)-module \(N\) decomposes as
\[N\cong N_{\ell t}\oplus N_{m}\oplus N_{ll} \tag{2.2}\]
where \(N_{\ell t}\) is etale (\(F\) is bijective and \(V=0\)), \(N_{m}\) is multiplicative (\(F=0\) and \(V\) is bijective), and \(N_{ll}\) is "local-local" (\(F\) and \(V\) are nilpotent). (Choose a sufficiently large integer \(a\) and set \(N_{\ell t}=\operatorname{Im}F^{a}\), \(N_{m}=\operatorname{Im}V^{a}\), and \(N_{ll}=\operatorname{Ker}F^{a}\cap\operatorname{Ker}V^{a}\).) Clearly this decomposition is compatible with change of base field: if \(k^{\prime}/k\) is an extension of perfect fields,
\[\left(N\otimes_{k}k^{\prime}\right)_{\ell t}=N_{\ell t}\otimes_{k}k^{\prime},\quad\left(N\otimes_{k}k^{\prime}\right)_{m}=N_{m}\otimes_{k}k^{\prime}, \quad\text{and}\quad\left(N\otimes_{k}k^{\prime}\right)_{ll}=N_{ll}\otimes_{k} k^{\prime}.\]
The assignments \(N\rightsquigarrow N_{\ell t},N_{m},N_{ll}\) are exact functors on the category of \(\mathbb{D}_{k}\)-modules. We denote the corresponding functors on \(p\)-torsion group schemes by \(\mathcal{G}\rightsquigarrow\mathcal{G}_{\ell t},\mathcal{G}_{m}\) and \(\mathcal{G}_{ll}\).
If \(N\) is a \(\mathbb{D}_{k}[G]\)-module, the decomposition is also respected by the \(G\) action since \(G\) commutes with \(F\) and \(V\). Also, \(N\) is self-dual if and only if \(N_{\ell t}\) is dual to \(N_{m}\) and \(N_{ll}\) is self-dual; and \(N\) is a \(BT_{1}\) module if and only if \(N_{ll}\) is a \(BT_{1}\) module.
A theorem of Oda [11, Cor. 5.11] says that for a smooth, proper, irreducible curve \(Z\) over \(k\) with Jacobian \(J_{Z}\), the \(p\)-torsion subgroup \(J_{Z}[p]\) is a self-dual \(BT_{1}\) group scheme, and \(M(J_{Z}[p])\cong H^{1}_{dR}(Z)\) where \(H^{1}_{dR}(Z)\) is equipped with a natural \(\mathbb{D}_{k}\)-module structure.1 We will use this result to prove the Theorems 1.3 and 1.6 as statements about \(H^{1}_{dR}(X)\) and \(H^{1}_{dR}(Y)\) and related Dieudonne modules.
Footnote 1: Oda’s result requires that \(Z\) have a \(k\)-rational point. This is of course no restriction when \(k\) is algebraically closed. When \(k\) is only assumed to be perfect, [12, Prop. 5.4] shows that Oda’s result continues to hold even without a rational point.
A recent preprint of Moonen [11] gives an alternative approach to self-dual \(BT_{1}\) modules which is convenient for calculations and which we will use to compute examples; see Section 10.
## 3. \(G\)-modules
Consider the group rings \(k[G]\) or \(\mathbb{F}_{p}[G]\) where as before \(G=\operatorname{Gal}(Y/X)\) and we have fixed an isomorphism \(G\cong\mathbb{Z}/p\mathbb{Z}\). Let \(\gamma\in G\) be the element corresponding to \(1\in\mathbb{Z}/p\mathbb{Z}\). Then
\[k[G]\cong k[\gamma]/(\gamma^{p}-1)\cong k[\delta]/(\delta^{p})\]
where \(\delta:=\gamma-1\). Note that
\[\delta^{p-1}=\gamma^{p-1}+\cdots+1\]
is the trace element of \(k[G]\).
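The identity \(\delta^{p-1}=\gamma^{p-1}+\cdots+1\) follows from the binomial coefficients \(\binom{p-1}{i}\equiv(-1)^{i}\pmod{p}\); it can also be checked mechanically, for instance with the following small Python snippet (shown for \(p=5\); the representation of group-ring elements by coefficient lists is our illustrative convention).

```python
p = 5  # check delta^(p-1) = 1 + gamma + ... + gamma^(p-1) in F_p[gamma]/(gamma^p - 1)

def mult(a, b):
    """Multiply two elements of F_p[gamma]/(gamma^p - 1), given as coefficient lists of length p."""
    c = [0] * p
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % p] = (c[(i + j) % p] + ai * bj) % p
    return c

delta = [(-1) % p, 1] + [0] * (p - 2)      # delta = gamma - 1
power = [1] + [0] * (p - 1)                # start from 1
for _ in range(p - 1):
    power = mult(power, delta)
print(power)                               # [1, 1, 1, 1, 1]: the trace element
```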
It is easily checked that up to isomorphism the indecomposable \(k[G]\)-modules are
\[V_{i}:=k[\delta]/(\delta^{i})\quad\text{for $i=1,\ldots,p$.}\]
(See, e.g., [1, Lemma 64.2]). By the Krull-Schmidt theorem, every finitely generated \(k[G]\)-module is (non-canonically) isomorphic to a direct sum of indecomposable modules.
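For example, when \(p=3\) the group \(G\) acts on the largest indecomposable \(V_{3}=k[\delta]/(\delta^{3})\) by a single unipotent Jordan block: in the basis \(1,\delta,\delta^{2}\),

\[\gamma=1+\delta\;\longmapsto\;\begin{pmatrix}1&0&0\\ 1&1&0\\ 0&1&1\end{pmatrix},\qquad\delta\;\longmapsto\;\begin{pmatrix}0&0&0\\ 1&0&0\\ 0&1&0\end{pmatrix},\]

and the trace element \(\delta^{2}=\gamma^{2}+\gamma+1\) acts with image \(k\,\delta^{2}=V_{3}[\delta]\).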
**Lemma 3.1**.: _Let \(M\) be a finitely generated \(k[G]\)-module. Then the following conditions are equivalent:_
1. \(M\) _is free over_ \(k[G]\)_,_
2. \(M[\delta]=\delta^{p-1}M\)_,_
3. \(M[\delta^{p-1}]=\delta M\)_,_
4. \(\delta^{p-1}\) _induces an isomorphism_ \(M/\delta M\cong M[\delta]\)_._
Proof.: This follows immediately from the classification of indecomposable \(k[G]\)-modules and a straightforward calculation.
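For a non-free example in which the conditions fail, take \(M=V_{1}\oplus V_{p}\) (for \(f_{X}=2\), this is the shape of the etale part of \(H^{1}_{dR}(Y)\) in Proposition 4.1 below):

\[M[\delta]=V_{1}\oplus\delta^{p-1}V_{p}\quad\text{is two-dimensional, while}\quad\delta^{p-1}M=\delta^{p-1}V_{p}\quad\text{is one-dimensional},\]

so condition (2) of Lemma 3.1 fails, as it must.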
Equivalence of the conditions in Definition-Lemma 1.4 follows from Lemma 3.1 by applying the Dieudonne functor.
Define the dual of \(V_{i}\) as \(V_{i}^{*}=\operatorname{Hom}_{k}(V_{i},k)\) with action \((\gamma\phi)(v)=\phi(\gamma^{-1}v)\) for \(\phi\in V_{i}^{*}\) and \(v\in V_{i}\). Then \(V_{i}^{*}\cong V_{i}\) (non-canonically) as \(k[G]\)-modules.
A non-degenerate, bilinear form \(\langle\cdot,\cdot\rangle\) on \(M\) such that \(\langle\gamma m,\gamma n\rangle=\langle m,n\rangle\) for all \(m,n\in M\) induces an isomorphism \(M\cong M^{*}\) of \(k[G]\)-modules.
Defining \(\tilde{\delta}:=\gamma^{-1}-1=-\gamma^{-1}\delta\), we have
\[\langle m,\delta n\rangle=\langle m,\gamma n\rangle-\langle m,n\rangle=\langle \gamma^{-1}m,n\rangle-\langle m,n\rangle=\langle\tilde{\delta}m,n\rangle.\]
for all \(m,n\in M\). Note as well that \(\delta\) and \(\tilde{\delta}\) have the same image and kernel on any \(k[G]\)-module.
Parallel definitions and results hold for \(\mathbb{F}_{p}[G]\)-modules. We write \(W_{j}\) for the module \(\mathbb{F}_{p}[\delta]/(\delta^{j})\) over \(\mathbb{F}_{p}[G]\cong\mathbb{F}_{p}[\delta]/(\delta^{p})\).
## 4. de Rham cohomology as a \(\mathbb{D}_{k}[G]\)-module
Readers are assumed to be familiar with the flat, coherent, and de Rham cohomology of curves over perfect fields, and in particular with the semi-linear endomorphisms \(F\) and \(V\) of the de Rham cohomology of a curve. We recommend [10], [11], and [12] as basic references.
Suppose that \(Z\) is a smooth, proper, irreducible curve over \(k\). Then we have coherent cohomology groups \(H^{s}(Z,\mathcal{O}_{Z})\) and \(H^{s}(Z,\Omega^{1}_{Z})\), as well as de Rham cohomology groups \(H^{s}_{dR}(Z)\). These are finite-dimensional \(k\) vector spaces, and there is an exact sequence
\[0\to H^{0}(Z,\Omega^{1}_{Z})\to H^{1}_{dR}(Z)\to H^{1}(Z,\mathcal{O}_{Z})\to 0. \tag{4.1}\]
There is a cup product on \(H^{1}_{dR}(Z)\) which induces a perfect alternating pairing
\[H^{1}_{dR}(Z)\times H^{1}_{dR}(Z)\to H^{2}_{dR}(Z)=k\]
denoted \(\langle\cdot,\cdot\rangle_{Z}\). The subspace \(H^{0}(Z,\Omega^{1}_{Z})\) is isotropic, and the pairing restricts to the (perfect) Serre duality pairing
\[H^{0}(Z,\Omega^{1}_{Z})\times H^{1}(Z,\mathcal{O}_{Z})\to k.\]
There are also semi-linear operators \(F\) and \(V\) on \(H^{s}_{dR}(Z)\) making it into a \(\mathbb{D}_{k}\)-module. Explicitly,
\[H^{0}_{dR}(Z)\cong k,\text{ with }F\alpha=\alpha^{p}\text{ and }V\alpha=0,\]
\[H^{2}_{dR}(Z)\cong k,\text{ with }F\alpha=0\text{ and }V\alpha=\alpha^{1/p},\]
in other words
\[H^{0}_{dR}(Z)\cong M(\mathbb{Z}/p\mathbb{Z})\quad\text{and}\quad H^{2}_{dR}(Z )\cong M(\mu_{p}).\]
If \((\omega_{i},f_{ij})\) is a hypercocycle for an affine open cover \(\{U_{i}\}\) of \(Z\) representing a class \(c\in H^{1}_{dR}(Z)\), then \(Fc\) and \(Vc\) are represented by
\[(0,f^{p}_{ij})\quad\text{and}\quad(\mathcal{C}\omega_{i},0) \tag{4.2}\]
respectively, where \(\mathcal{C}\) is the Cartier operator. See [1] for more details.
We have
\[\operatorname{Im}\big{(}V:H^{1}_{dR}(Z)\to H^{1}_{dR}(Z)\big{)}=\operatorname {Ker}\big{(}F:H^{1}_{dR}(Z)\to H^{1}_{dR}(Z)\big{)}=H^{0}(Z,\Omega^{1}),\]
so \(H^{1}_{dR}(Z)\) is a \(BT_{1}\) module. The pairing is compatible with the \(\mathbb{D}_{k}\)-module structure in the sense of equation (2.1), so \(H^{1}_{dR}(Z)\) is a self-dual \(BT_{1}\) module.
If \(\pi\) is a finite surjective map of smooth, projective curves, we have maps \(\pi^{*}\) and \(\pi_{*}\) on de Rham cohomology which are compatible with the \(\mathbb{D}_{k}\)-module structures. Also, if \(\pi\) is a Galois cover, \(\pi^{*}\pi_{*}\) is the trace map. Applied to \(\pi:Y\to X\), this means
\[\pi^{*}\pi_{*}=1+\gamma+\dots+\gamma^{p-1}=\delta^{p-1} \tag{4.3}\]
as endomorphisms of de Rham cohomology.
With data \(\pi:Y\to X\), \(G=\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) as usual, we will prove two results on the cohomology of \(X\) and \(Y\) (Propositions 4.1 and 4.3) which will yield Theorems 1.3 and 1.6.
Recall that the \(p\)-rank of a curve \(Z\) is by definition the integer \(f_{Z}\) such that \(J_{Z}[p](\overline{k})\cong(\mathbb{Z}/p\mathbb{Z})^{f_{Z}}\). It is also equal to the dimension over \(k\) of \(H^{1}_{dR}(Z)_{\ell t}\).
**Proposition 4.1**.:
1. _There are canonical homomorphisms of_ \(\mathbb{D}_{k}\)_-modules_ \[M(\mathbb{Z}/p\mathbb{Z})\hookrightarrow H^{1}_{dR}(X)_{\ell t}\hookrightarrow H ^{1}_{dR}(X)\quad\text{and}\quad H^{1}_{dR}(X)\twoheadrightarrow H^{1}_{dR}(X)_ {m}\twoheadrightarrow M(\mu_{p})\] _which are exchanged by Cartier duality. The Dieudonne module of the group_ \(\mathcal{G}_{X}\) _in Definition_ 1.1 _is_ \[\mathcal{M}_{X}:=\frac{\operatorname{Ker}\left(H^{1}_{dR}(X)\twoheadrightarrow M (\mu_{p})\right)}{\operatorname{Im}\left(M(\mathbb{Z}/p\mathbb{Z}) \hookrightarrow H^{1}_{dR}(X)\right)}.\]
2. _There are_ \((\)_non-canonical_\()\) _isomorphisms of_ \(k[G]\)_-modules_ \[H^{1}_{dR}(Y)_{\ell t} \cong V_{1}\oplus V^{f_{X}-1}_{p},\] \[H^{1}_{dR}(Y)_{m} \cong V_{1}\oplus V^{f_{X}-1}_{p},\] \[H^{1}_{dR}(Y)_{ll} \cong V^{2h_{X}}_{p},\] _and_ \[H^{1}_{dR}(Y) \cong V^{2}_{1}\oplus V^{2g_{X}-2}_{p},\] _where_ \(h_{X}=g_{X}-f_{X}\)_._
3. \(\pi^{*}\) _induces isomorphisms of_ \(\mathbb{D}_{k}\)_-modules_ \[H^{1}_{dR}(X)_{m}\stackrel{{\pi^{*}}}{{\to}}H^{1}_{dR}(Y)_{m}[ \delta]\quad\text{and}\quad H^{1}_{dR}(X)_{ll}\stackrel{{\pi^{*}}}{{ \to}}H^{1}_{dR}(Y)_{ll}[\delta],\] _and an exact sequence_ \[0\to M(\mathbb{Z}/p\mathbb{Z})\to H^{1}_{dR}(X)_{\acute{e}t}\stackrel{{ \pi^{*}}}{{\to}}H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\to M(\mathbb{Z}/p\mathbb{Z}) \to 0.\] _The image_ \(\pi^{*}\left(H^{1}_{dR}(X)_{\acute{e}t}\right)\) _is equal to_ \(\delta^{p-1}H^{1}_{dR}(Y)_{\acute{e}t}\)_._
4. \(\pi_{*}\) _induces an isomorphism_ \[H^{1}_{dR}(Y)_{\acute{e}t}/\delta H^{1}_{dR}(Y)_{\acute{e}t}\stackrel{{ \pi_{*}}}{{\to}}H^{1}_{dR}(X)_{\acute{e}t}\] _which identifies the line_ \[\operatorname{Im}\left(H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\to H^{1} _{dR}(Y)_{\acute{e}t}/\delta H^{1}_{dR}(Y)_{\acute{e}t}\right)\\ =\operatorname{Ker}\left(H^{1}_{dR}(Y)_{\acute{e}t}/\delta H^{1}_ {dR}(Y)_{\acute{e}t}\stackrel{{\delta^{p-1}}}{{\longrightarrow}}H ^{1}_{dR}(Y)_{\acute{e}t}[\delta]\right)\] _with the line_ \[\operatorname{Im}\left(M(\mathbb{Z}/pZ)\hookrightarrow H^{1}_{dR}(X)_{\acute{e }t}\right)\] _defined in part (_1_)._
_Remarks 4.2_.:
1. It is straightforward to check that the cup product on de Rham cohomology induces a duality between \(H^{1}_{dR}(Y)_{\acute{e}t}\) and \(H^{1}_{dR}(Y)_{m}\) and its restriction to \(H^{1}_{dR}(Y)_{ll}\) is perfect and gives the latter the structure of a self-dual module. It is also easy to see that \(H^{1}_{dR}(Y)_{ll}\) is a \(BT_{1}\). Since the pairing satisfies the compatibility \(\left\langle\delta m,n\right\rangle_{Y}=\left\langle m,\tilde{\delta}n\right\rangle _{Y}\) (see Section 3), we find that the orthogonal complement of \(\delta^{i}H^{1}_{dR}(Y)_{ll}\) is \(\delta^{p-i}H^{1}_{dR}(Y)_{ll}\). This implies that \[\frac{H^{1}_{dR}(Y)_{ll}}{\delta^{i}H^{1}_{dR}(Y)_{ll}}\quad\text{is dual to}\quad H^{1}_{dR}(Y)_{ll}[\delta^{i}].\] On the other hand, \(\delta^{p-i}\) induces an isomorphism \[\frac{H^{1}_{dR}(Y)_{ll}}{\delta^{i}H^{1}_{dR}(Y)_{ll}}\to H^{1}_{dR}(Y)_{ll}[ \delta^{i}].\] Therefore, each of the submodules \(H^{1}_{dR}(Y)_{ll}[\delta^{i}]\) is self-dual, and they are easily seen to be \(BT_{1}\) modules as well. (One slight subtlety: These pairings are not compatible with restriction. Indeed, the restriction to \(H^{1}_{dR}(Y)_{ll}[\delta^{i-1}]\) of the pairing just constructed on \(H^{1}_{dR}(Y)_{ll}[\delta^{i}]\) is degenerate for \(i>1\).)
2. The maps in part (1) may also be interpreted as a 3-step filtration on \(H^{1}_{dR}(X)\) with \[H^{1}_{dR}(X)^{3} =H^{1}_{dR}(X),\] \[H^{1}_{dR}(X)^{2} =\operatorname{Ker}\left(H^{1}_{dR}(X)\to M(\mu_{p})\right),\] \[H^{1}_{dR}(X)^{1} =\operatorname{Im}\left(M(\mathbb{Z}/p\mathbb{Z})\to H^{1}_{dR}(X) \right),\] and \[H^{1}_{dR}(X)^{0} =0.\] The subquotients are \(M(\mu_{p})\), \(\mathcal{M}_{X}\), and \(M(\mathbb{Z}/p\mathbb{Z})\).
3. The filtration above is self-dual in the sense that \(H^{1}_{dR}(X)^{2}\) and \(H^{1}_{dR}(X)^{1}\) are orthogonal complements of one another.
Figure 1 may help to digest the statement of the Proposition. It illustrates the case \(p=5\), \(g_{X}=5\), and \(f_{X}=4\). Each box represents a one-dimensional subspace of \(H^{1}_{dR}(Y)\) (the upper group) or \(H^{1}_{dR}(X)\) (the lower group). On \(H^{1}_{dR}(Y)\), the action of \(\delta\) shifts a given one-dimensional subspace to the one represented by the box below (if there is one, otherwise to zero). The groups on the left represent the multiplicative parts, those on the right represent the etale parts, and those in the middle represent the local-local parts. The (canonical) class \(\eta_{X}\in H^{1}_{dR}(X)_{\ell t}\) is constructed in the proof and spans the image of the injection in part (1) (i.e., the subspace \(H^{1}_{dR}(X)^{1}\).). The (non-canonical) class \(\omega_{X}\in H^{1}_{dR}(X)_{m}\) maps onto a class spanning the image of the projection in part (1) (i.e., the subquotient \(H^{1}_{dR}(X)/H^{1}_{dR}(X)^{2}\)). We may choose \(\omega_{X}\) so that \(\langle\omega_{X},\eta_{X}\rangle=1\). The (non-canonical) classes \(\omega_{Y}\) and \(\eta_{Y}\) can be chosen to satisfy \(\pi^{*}\omega_{X}=\omega_{Y}\), \(\pi_{*}\eta_{Y}=\eta_{X}\), and \(\langle\omega_{Y},\eta_{Y}\rangle=1\). Caution: In general, the lines spanned by \(\omega_{X}\), \(\omega_{Y}\), and \(\eta_{Y}\) are not invariant under \(\mathbb{D}_{k}\). This is closely related to the possible non-splitting of the exact sequences (1.5) and (1.6).
Proof of Proposition 4.1.: (1) Since \(\pi:Y\to X\) is unramified, the choice of a fixed isomorphism \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) makes \(Y\) into a \(\mathbb{Z}/p\mathbb{Z}\)-torsor over \(X\). The group that classifies such torsors is \(H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\) (see, for example, [11, III.4] or [14, 6.5.5]). Therefore, there is a class in \(H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\) defined by the cover \(\pi:Y\to X\) and the fixed isomorphism \(G\cong\mathbb{Z}/p\mathbb{Z}\), which will be denoted by \(\eta_{X,\acute{e}t}\).
Consider the exact sequence
\[0\to\mathbb{Z}/p\mathbb{Z}\to\mathcal{O}_{X}\stackrel{{\wp}}{{ \to}}\mathcal{O}_{X}\to 0\]
of sheaves for the etale topology on \(X\) where \(\wp(x)=x^{p}-x\). Taking cohomology yields a homomorphism
\[H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\to H^{1}_{\acute{e}t}(X,\mathcal{ O}_{X})[\wp],\]
where \([\wp]\) indicates the kernel of \(\wp\). Since \(H^{0}(Y,\mathcal{O}_{Y})=k\), the image of \(\eta_{X,\acute{e}t}\) in \(H^{1}_{\acute{e}t}(X,\mathcal{O}_{X})[\wp]\) is non-zero, and since \(H^{1}_{\acute{e}t}(X,\mathcal{O}_{X})[\wp]\) is the subset of the usual (coherent) \(H^{1}(X,\mathcal{O}_{X})\) fixed by Frobenius, we have a class \(\eta_{X,coh}\in H^{1}(X,\mathcal{O}_{X})\) fixed by Frobenius. It has a canonical lift to \(H^{1}_{dR}(X)\) (take any lift and apply \(F\)), and we denote this lift by \(\eta_{X}\). Since \(F\eta_{X}=\eta_{X}\), we see that \(\eta_{X}\in H^{1}_{dR}(X)_{\acute{e}t}\), and the injection in (1) is \(\alpha\mapsto\alpha\eta_{X}\).
The surjection in (1) is obtained from the injection by Cartier duality, and is given more explicitly by \(c\mapsto\langle c,\eta_{X}\rangle_{X}\).
Decomposing the module \(\mathcal{M}_{X}\) into its etale, multiplicative, and local-local parts shows that
\[\mathcal{M}_{X,\acute{e}t}=\operatorname{Coker}\left(M(\mathbb{Z}/p\mathbb{Z} )\to H^{1}_{dR}(X)_{\acute{e}t}\right),\]
\[\mathcal{M}_{X,m}=\operatorname{Ker}\left(H^{1}_{dR}(X)_{m}\to M(\mu_{p}) \right),\]
and
\[\mathcal{M}_{X,ll}=H^{1}_{dR}(X)_{ll}.\]
Thus \(\mathcal{M}_{X}\) is the Dieudonne module of \(\mathcal{G}_{X}\). This establishes part (1) of the Proposition.
(2) Taking the multiplicative and etale parts of the de Rham sequence (4.1) with \(Z=Y\) yields isomorphisms
\[H^{1}_{dR}(Y)_{m}\cong H^{0}(Y,\Omega^{1}_{Y})_{m}\quad\text{and}\quad H^{1}_{dR} (Y)_{\acute{e}t}\cong H^{1}(Y,\mathcal{O}_{Y})_{\acute{e}t}.\]
Tamagawa [15] proves that there is an isomorphism of \(k[G]\)-modules
\[H^{0}(Y,\Omega^{1}_{Y})\cong V_{1}\oplus V^{g_{X}-1}_{p}, \tag{4.4}\]
and Serre duality yields
\[H^{1}(Y,\mathcal{O}_{Y})\cong V_{1}\oplus V^{g_{X}-1}_{p}.\]
Nakajima [16] proves that there is an isomorphism of \(k[G]\)-modules
\[H^{1}_{dR}(Y)_{m}\cong H^{0}(Y,\Omega^{1}_{Y})_{m}\cong V_{1}\oplus V^{f_{X}- 1}_{p}, \tag{4.5}\]
and Cartier duality yields
\[H^{1}_{dR}(Y)_{\acute{e}t}\cong H^{1}(Y,\mathcal{O}_{Y})_{\acute{e}t}\cong V_{1}\oplus V^{f_{X}-1}_{p}.\]
This establishes the first two claims in part (2).
For the third claim in part (2), let \(h_{X}=g_{X}-f_{X}\), and note that the last four displayed equations and (4.1) show that \(H^{1}_{dR}(Y)_{ll}\) is an extension of \(V^{h_{X}}_{p}\) by \(V^{h_{X}}_{p}\). Since \(V_{p}\) is free, the extension splits and there is an isomorphism of \(k[G]\)-modules
\[H^{1}_{dR}(Y)_{ll}\cong V^{2h_{X}}_{p}.\]
The fourth claim in part (2) is simply the direct sum of the three preceding claims. This completes the proof of part (2) of the Proposition.
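As a consistency check, the dimensions in part (2) match the Riemann-Hurwitz formula for the unramified degree-\(p\) cover \(\pi:Y\to X\), which gives \(g_{Y}=p(g_{X}-1)+1\):

\[\dim_{k}H^{1}_{dR}(Y)=2+2p(f_{X}-1)+2p\,h_{X}=2+2p(g_{X}-1)=2g_{Y}.\]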
(3) Consider the Hochschild-Serre spectral sequence for \(\pi\) in de Rham cohomology. The sequence of low degree terms is
\[0\to H^{1}(G,H^{0}_{dR}(Y))\to H^{1}_{dR}(X)\stackrel{{\pi^{*}}} {{\to}}H^{1}_{dR}(Y)[\delta]\to H^{2}(G,H^{0}_{dR}(Y)).\]
We have
\[H^{1}(G,H^{0}_{dR}(Y))\cong H^{2}(G,H^{0}_{dR}(Y))\cong M(\mathbb{Z}/p\mathbb{ Z}),\]
and these modules are etale, so taking multiplicative and local-local parts of the sequence yields isomorphisms
\[H^{1}_{dR}(X)_{m}\stackrel{{\pi^{*}}}{{\to}}H^{1}_{dR}(Y)_{m}[ \delta]\quad\text{and}\quad H^{1}_{dR}(X)_{ll}\stackrel{{\pi^{*}}}{{ \to}}H^{1}_{dR}(Y)_{ll}[\delta].\]
Taking the etale part yields an exact sequence
\[0\to M(\mathbb{Z}/p\mathbb{Z})\to H^{1}_{dR}(X)_{\acute{e}t}\stackrel{{ \pi^{*}}}{{\to}}H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\to M(\mathbb{Z}/p \mathbb{Z})\to 0, \tag{4.6}\]
where surjectivity on the right follows from a dimension count using part (2). Thus \(\eta_{X}\) spans the kernel of \(\pi^{*}\) in (4.6). We will prove the last claim in (3) after establishing (4).
(4) The Cartier dual of the isomorphism
\[H^{1}_{dR}(X)_{m}\stackrel{{\pi^{*}}}{{\to}}H^{1}_{dR}(Y)_{m}[\delta]\]
is the isomorphism
\[H^{1}_{dR}(Y)_{m}/\delta H^{1}_{dR}(Y)_{m}\stackrel{{\pi_{*}}}{{ \to}}H^{1}_{dR}(X)_{m}.\]
It follows from part (2) that the image of the natural map
\[H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\to H^{1}_{dR}(Y)_{\acute{e}t}/\delta H^{1} _{dR}(Y)_{\acute{e}t}\]
is a line, and that it is equal to
\[\operatorname{Ker}\left(H^{1}_{dR}(Y)_{\acute{e}t}/\delta H^{1}_{dR}(Y)_{ \acute{e}t}\stackrel{{\delta^{p-1}}}{{\to}}H^{1}_{dR}(Y)_{\acute {e}t}[\delta]\right),\]
and therefore equal to
\[\operatorname{Ker}\left(H^{1}_{dR}(Y)_{\acute{e}t}/\delta H^{1}_{dR}(Y)_{\acute{e}t}\stackrel{{\pi^{*}\pi_{*}}}{{\to}}H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\right).\]
This shows that \(\pi_{*}\) identifies this line with
\[\operatorname{Ker}\left(H^{1}_{dR}(X)_{\acute{e}t}\stackrel{{\pi^ {*}}}{{\to}}H^{1}_{dR}(Y)_{\acute{e}t}[\delta]\right),\]
and we observed above that this is the line spanned by \(\eta_{X}\), i.e., the line defined in part (1). This completes the proof of part (4). We also deduce that
\[\pi^{*}\left(H^{1}_{dR}(X)_{\acute{e}t}\right)=\pi^{*}\pi_{*}\left(H^{1}_{dR} (Y)_{\acute{e}t}\right)=\delta^{p-1}\left(H^{1}_{dR}(Y)_{\acute{e}t}\right),\]
and this establishes the last claim in part (3).
We now turn to an elegant result that holds under the assumption that \(X\) has a \(k\)-rational point.
**Proposition 4.3**.: _Assume that \(X\) has a \(k\)-rational point \(S\) and let \(T=\pi^{-1}(S)\). Let \(\mathcal{N}_{Y}\) be the hypercohomology group_
\[\mathcal{N}_{Y}:=\mathbb{H}^{1}\left(Y,\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T)\right).\]
_Then_
1. \(\mathcal{N}_{Y}\cong V^{2g_{X}}_{p}\) _as_ \(k[G]\)_-modules._
2. _For_ \(i=1,\ldots,p\)_, there are isomorphisms_ \[\frac{\mathcal{N}_{Y}[\delta^{i}]}{\mathcal{N}_{Y}[\delta^{i-1}]}=\frac{\delta^{ p-i}\mathcal{N}_{Y}}{\delta^{p-i+1}\mathcal{N}_{Y}}\cong H^{1}_{dR}(X)\] _of_ \(\mathbb{D}_{k}\)_-modules._
3. _There are exact sequences of_ \(\mathbb{D}_{k}[G]\)_-modules_ \[0\to H^{0}(Y,\mathcal{O}_{Y})\to H^{0}(Y,\mathcal{O}_{T})\to\mathcal{N}_{Y, \acute{e}t}\to H^{1}_{dR}(Y)_{\acute{e}t}\to 0,\] _and_ \[0\to H^{1}_{dR}(Y)_{m}\to\mathcal{N}_{Y,m}\to H^{0}(Y,\Omega^{1}_{Y}(T)/ \Omega^{1}_{Y})\to H^{1}(Y,\Omega^{1}_{Y})\to 0,\] _as well as an isomorphism_ \[\mathcal{N}_{Y,ll}\cong H^{1}_{dR}(Y)_{ll}.\]
The \(\mathbb{D}_{k}[G]\)-module structures and homomorphisms in part (3) will be made explicit in the proof.
_Remarks 4.4_.:
1. In parallel with Remark 4.2, each of the subquotients \(\mathcal{N}_{Y}[\delta^{i}]/\mathcal{N}_{Y}[\delta^{j}]\) for \(0\leq i<j\leq p\), is self-dual, but not in a way compatible with restrictions of pairings.
2. The exact sequences appearing in part (3) may also be interpreted as a 3-step filtration on \(\mathcal{N}_{Y}\) with \[\mathcal{N}_{Y}^{3} =\mathcal{N}_{Y},\] \[\mathcal{N}_{Y}^{2} =\ker\left(\mathcal{N}_{Y}\to H^{0}(Y,\Omega^{1}_{Y}(T)/\Omega^{1 }_{Y})\right),\] \[\mathcal{N}_{Y}^{1} =\operatorname{Im}\left(H^{0}(Y,\mathcal{O}_{T})\to\mathcal{N}_{Y }\right),\] and \[\mathcal{N}_{Y}^{0} =0.\] The subquotients are \[\operatorname{Ker}\left(H^{0}(Y,\Omega^{1}_{Y}(T)/\Omega^{1}_{Y}) \to H^{1}(Y,\Omega^{1}_{Y})\right),\quad H^{1}_{dR}(Y),\quad\text{and}\] \[\operatorname{Coker}\left(H^{0}(Y,\mathcal{O}_{Y})\to H^{0}(Y,\mathcal{O}_{T })\right).\]
3. The filtration \(\mathcal{N}_{Y}^{i}\) is self-dual in the sense that \(\mathcal{N}_{Y}^{2}\) and \(\mathcal{N}_{Y}^{1}\) are orthogonal complements to one another.
4. The filtration on \(\mathcal{N}_{Y}\) induces a filtration on each of the subquotients \(\mathcal{N}_{Y}[\delta^{i}]/\mathcal{N}_{Y}[\delta^{i+1}]\), and the induced filtration is the one on \(H^{1}_{dR}(X)\) in Remark 4.2.
Figure 2 illustrates the case \(g_{X}=5\), \(f_{X}=4\), and \(p=5\), with the same conventions as in the previous figure. In this case, \(\pi_{*}:\mathcal{N}_{Y}/\delta\mathcal{N}_{Y}\to H^{1}_{dR}(X)\) is an isomorphism, as is \(\pi^{*}:H^{1}_{dR}(X)\to\mathcal{N}_{Y}[\delta]\). The gray zone on the right represents the submodule \(\mathcal{N}_{Y}^{1}\) and the gray zone on the left represents the quotient module \(\mathcal{N}_{Y}^{3}/\mathcal{N}_{Y}^{2}\). Note that the classes \(\omega_{X}\), \(\eta_{Y}\), and \(\omega_{Y}\) are not canonically defined.
Proof of Proposition 4.3.: (1) By [12, Thm. 1], \(H^{0}(Y,\Omega^{1}_{Y}(T))\) is free over \(k[G]\) of rank \(g_{X}\). The same is true of \(H^{1}(Y,\mathcal{O}_{Y}(-T))\) by Serre duality. The modified de Rham exact sequence
\[0\to H^{0}(Y,\Omega^{1}_{Y}(T))\to\mathbb{H}^{1}\left(Y,\mathcal{O}_{Y}(-T) \stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T)\right)\to H^{1}(Y, \mathcal{O}_{Y}(-T))\to 0\]
shows that \(\mathcal{N}_{Y}=\mathbb{H}^{1}\left(Y,\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T)\right)\) is \(k[G]\)-free of rank \(2g_{X}\).
(2) The quotients appearing in the statement are all isomorphic (via a suitable power of \(\delta\)) to \(\mathcal{N}_{Y}[\delta]\), so it will suffice to prove that \(\mathcal{N}_{Y}[\delta]\cong H^{1}_{dR}(X)\) as \(\mathbb{D}_{k}\)-modules.
Note that \(\pi^{*}\mathcal{O}_{X}(-S)=\mathcal{O}_{Y}(-T)\) and \(\pi^{*}\Omega^{1}_{X}(S)=\Omega^{1}_{Y}(T)\). The exact sequence of low degree terms of the Hochschild-Serre spectral sequence for \(Y\to X\) and \(\mathcal{O}_{X}(-S)\) yields an isomorphism
\[\pi^{*}:H^{1}(X,\mathcal{O}_{X}(-S))\stackrel{{\sim}}{{\to}}H^{1 }(Y,\mathcal{O}_{Y}(-T))[\delta],\]
and since \(Y\to X\) is unramified, it is clear that we have an isomorphism
\[\pi^{*}:H^{0}(X,\Omega^{1}_{X}(S))\stackrel{{\sim}}{{\to}}H^{0}( Y,\Omega^{1}_{Y}(T))[\delta].\]
Using the modified de Rham sequences of \(\mathcal{O}_{X}(-S)\stackrel{{ d}}{{\to}}\Omega^{1}_{X}(S)\) and \(\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T)\) shows that \(\pi^{*}\) induces an isomorphism
\[\pi^{*}:\mathbb{H}^{1}\left(X,\mathcal{O}_{X}(-S)\stackrel{{ d}}{{\to}}\Omega^{1}_{X}(S)\right) \stackrel{{\sim}}{{\to}}\mathbb{H}^{1}\left(Y,\mathcal{O}_{Y}(-T )\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T)\right)[\delta].\]
Since \(\deg S=1\), we also have \(H^{1}(X,\mathcal{O}_{X}(-S))\cong H^{1}(X,\mathcal{O}_{X})\), and \(H^{0}(X,\Omega^{1}_{X}(S))\cong H^{0}(X,\Omega^{1}_{X})\), so
\[H^{1}_{dR}(X)=\mathbb{H}^{1}\left(\mathcal{O}_{X}\stackrel{{ d}}{{\to}}\Omega^{1}_{X}\right)\cong\mathbb{H}^{1}\left(X, \mathcal{O}_{X}(-S)\stackrel{{ d}}{{\to}}\Omega^{1}_{X}(S)\right).\]
This completes the proof of part (2).
(3) Note that there are exact sequences of complexes of coherent sheaves on \(Y\):
\[0\to\left(\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y }\right)\to\left(\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}(T) \right)\to\left(\Omega^{1}_{Y}(T)/\Omega^{1}_{Y}\right)[-1]\to 0,\]
and
\[0\to\left(\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y }\right)\to\left(\mathcal{O}_{Y}\stackrel{{ d}}{{\to}}\Omega^{1}_{Y }\right)\to\mathcal{O}_{T}\to 0.\]
Let \(\mathcal{N}^{2}_{Y}:=\mathbb{H}^{1}\left(Y,\mathcal{O}_{Y}(-T)\stackrel{{ d}}{{\to}}\Omega^{1}_{Y}\right)\) and \(\mathcal{N}^{1}_{Y}:=\operatorname{Coker}\left(H^{0}(Y,\mathcal{O}_{Y})\to H^{ 0}(T,\mathcal{O}_{T})\right)\).
Taking cohomology of the first exact sequence above, we have
\[0\to\mathcal{N}_{Y}^{2}\to\mathcal{N}_{Y}\to H^{0}(Y,\Omega_{Y}^{1}(T)/\Omega_{Y} ^{1})\to H^{1}(Y,\Omega_{Y}^{1})\to 0, \tag{4.7}\]
where the surjection on the right is the sum of residues map. Note that \(H^{0}(Y,\Omega_{Y}^{1}(T)/\Omega_{Y}^{1})\) is \(k[G]\)-free of rank 1, and \(H^{1}(Y,\Omega_{Y}^{1})\) is \(k\) with the trivial \(k[G]\) action. The action of \(F\) on both of these spaces is zero, and the \(V\) action is induced by the Cartier operator on differentials.
Taking cohomology of the second exact sequence above, we have
\[0\to H^{0}(Y,\mathcal{O}_{Y})\to H^{0}(Y,\mathcal{O}_{T})\to\mathcal{N}_{Y}^{2 }\to H^{1}_{dR}(Y)\to 0. \tag{4.8}\]
Note that \(H^{0}(Y,\mathcal{O}_{T})\) is \(k[G]\)-free of rank 1, and \(H^{0}(Y,\mathcal{O}_{Y})\) is \(k\) with the trivial \(k[G]\) action. The action of \(F\) on both of these spaces is the usual action of Frobenius, and the \(V\) action is trivial.
The last two displayed exact sequences give the asserted filtration on \(\mathcal{N}_{Y}\), and this completes the proof of part (3) of the proposition.
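As a dimension check, the sequences (4.7) and (4.8) are consistent with the freeness statement in part (1): since \(\dim_{k}H^{0}(Y,\mathcal{O}_{T})=\dim_{k}H^{0}(Y,\Omega_{Y}^{1}(T)/\Omega_{Y}^{1})=p\) and \(g_{Y}=p(g_{X}-1)+1\),

\[\dim_{k}\mathcal{N}_{Y}=\dim_{k}H^{1}_{dR}(Y)+2(p-1)=\bigl(2p(g_{X}-1)+2\bigr)+2(p-1)=2p\,g_{X}=\dim_{k}V_{p}^{2g_{X}}.\]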
_Remarks 4.5_.:
1. The hypercohomology group \(\mathcal{N}_{Y}\) is closely related to the de Rham cohomology of the singular curve associated to \(Y\) and \(T\) where \(T\) is viewed as a "modulus", as in [12, Ch. 4].
2. Suppose that \(g_{X}>1\) and choose a \(k\)-rational, effective, reduced divisor \(S\) on \(X\). A canonical divisor of \(X\) is \(k\)-rational and effective, and since \(k\) is perfect, the underlying reduced divisor of a \(k\)-rational divisor is again \(k\)-rational, so there always exists a divisor \(S\) as above with degree \(\leq 2g_{X}-2\). Let \(T=\pi^{-1}(S)\). Then the proof of Proposition 4.3 applies essentially verbatim and shows that \(\mathcal{N}_{Y}\) is free over \(k[G]\) and that \[\mathcal{N}_{Y}[\delta]\cong\mathcal{N}_{X}:=\mathbb{H}^{1}\left(X,\mathcal{ O}_{X}(-S)\stackrel{{ d}}{{\to}}\Omega_{X}^{1}(S)\right).\] Thus, at the expense of enlarging \(H^{1}_{dR}(X)\) and \(H^{1}_{dR}(Y)\) by certain simple etale and multiplicative \(\mathbb{D}_{k}\)-modules, we can always arrange that the cohomology associated to \(Y\) is \(k[G]\)-free with subquotients isomorphic to the cohomology associated to \(X\).
3. We may also choose two reduced, strictly effective divisors \(S_{1}\) and \(S_{2}\) on \(X\), set \(T_{i}=\pi^{*}(S_{i})\) and take hypercohomology of the complexes \[\mathcal{O}_{X}(-S_{1})\to\Omega_{X}^{1}(S_{2})\quad\text{and}\quad\mathcal{ O}_{Y}(-T_{1})\to\Omega_{Y}^{1}(T_{2}).\] Then the latter is a free \(k[G]\)-module with subquotients isomorphic to the former.
## 5. Proofs of Proposition 1.1 and Theorems 1.3 and 1.6
Proof of Proposition 1.1.: The group scheme homomorphisms in the first sentence of Definition-Proposition 1.1 are obtained by applying the Dieudonne functor to the \(\mathbb{D}_{k}\)-module homomorphisms in part (1) of Proposition 4.1. This establishes the claims in Definition-Proposition 1.1, and it identifies \(\mathcal{M}_{X}\) as the Dieudonne module of \(\mathcal{G}_{X}\).
Proof of Theorem 1.3.: (1) Splitting the 4-term exact sequence in part (3) of Proposition 4.1 into two parts and using the identification of the image of \(\pi_{*}\) there, we obtain
\[0\to M(\mathbb{Z}/p\mathbb{Z})\to H^{1}_{dR}(X)_{\text{\'{e}t}}\to\delta^{p-1} H^{1}_{dR}(Y)_{\text{\'{e}t}}\to 0,\]
and
\[0\to\delta^{p-1}H^{1}_{dR}(Y)_{\text{\'{e}t}}\to H^{1}_{dR}(Y)_{\text{\'{e}t}}[ \delta]\to M(\mathbb{Z}/p\mathbb{Z})\to 0.\]
The first of these yields an isomorphism \(\mathcal{M}_{X,\acute{e}t}\cong\delta^{p-1}H^{1}_{dR}(Y)_{\acute{e}t}.\) Using part (4) of Proposition 4.1, we obtain the diagram
Using the isomorphism \(\mathcal{M}_{X,\acute{e}t}\cong\delta^{p-1}H^{1}_{dR}(Y)_{\acute{e}t}\) and applying the Dieudonne functor yields the exact sequence (1.5) of Proposition 1.3 and an identification of it with the exact sequence (1.3). Similarly, the second exact sequence above yields (1.6).
Part (2) of Proposition 4.1 shows that the subquotients
\[\frac{\delta^{i}H^{1}_{dR}(Y)_{\acute{e}t}}{\delta^{i+1}H^{1}_{dR}(Y)_{\acute{e}t}}\quad\text{for $i=1,\ldots,p-1$}\quad\text{and}\quad\frac{H^{1}_{dR}(Y)_{\acute{e}t}[\delta^{i}]}{H^{1}_{dR}(Y)_{\acute{e}t}[\delta^{i-1}]}\quad\text{for $i=2,\ldots,p$}\]
are all isomorphic to one another via a suitable power of \(\delta.\) Since \(\delta^{p-1}H^{1}_{dR}(Y)_{\acute{e}t}\cong\mathcal{M}_{X,\acute{e}t},\) applying the Dieudonne functor yields the isomorphisms in the first sentence of part (1) of Theorem 1.3, and this completes the proof of this part.
Part (2) is equivalent to part (1) by Cartier duality.
For part (3), note that part (2) of Proposition 4.1 shows that the subquotients
\[\frac{\delta^{i}H^{1}_{dR}(Y)_{ll}}{\delta^{i+1}H^{1}_{dR}(Y)_{ll}}\quad\text{for $i=0,\ldots,p-1$}\quad\text{and}\quad\frac{H^{1}_{dR}(Y)_{ll}[\delta^{i}]}{H^{1}_{dR}(Y)_{ll}[\delta^{i-1}]}\quad\text{for $i=1,\ldots,p$}\]
are all isomorphic to one another via a suitable power of \(\delta.\) Moreover, parts (1) and (3) of Proposition 4.1 show that
\[\mathcal{M}_{X,ll}\cong H^{1}_{dR}(X)_{ll}\cong H^{1}_{dR}(Y)_{ll}[\delta],\]
so all of the subquotients above are isomorphic to \(\mathcal{M}_{X,ll}.\) Applying the Dieudonne functor yields part (3) of Theorem 1.3. This completes the proof of that theorem.
Proof of Theorem 1.6.: Let \(\mathcal{H}\) be the \(k\) group scheme with Dieudonne module \(\mathcal{N}_{Y}\) as defined in Proposition 4.3. By part (1) of that proposition, \(\mathcal{H}\) is \(G\)-free, and we have equalities \(\delta^{i}\mathcal{H}=\mathcal{H}[\delta^{p-i}]\) for \(i=1,\ldots,p.\) By part (2), there are canonical isomorphisms
\[\frac{\delta^{i}\mathcal{H}}{\delta^{i+1}\mathcal{H}}\cong\frac{\mathcal{H}[ \delta^{p-i}]}{\mathcal{H}[\delta^{p-i-1}]}\cong J_{X}[p]\quad\text{for $i=0,\ldots,p-1$}.\]
Note that
\[M\left(\operatorname{Res}_{T/S}\mathbb{Z}/p\mathbb{Z}\right)\cong H^{0}(Y, \mathcal{O}_{T})\]
where the right hand side is a \(k\) vector space on which \(F\) acts by the \(p\)-power Frobenius and \(V=0\), and that
\[M\left(\operatorname{Res}_{T/S}\mu_{p}\right)\cong H^{0}\left(Y,\Omega^{1}_{ Y}(T)/\Omega^{1}_{Y}\right)\]
where the right hand side is a \(k\) vector space on which \(F=0\) and \(V\) acts by the Cartier operator (which is essentially the inverse Frobenius on residues). Then applying the Dieudonne functor to part (3) of Proposition 4.3 yields the exact sequences asserted in part (3) of Theorem 1.6. This completes the proof of the theorem.
## 6. Comments on \(\mathbb{D}_{k}[G]\)-modules and examples
### Motivation
Suppose that \(N\) is a finite-dimensional \(k\)-vector space equipped with an action of \(\mathbb{D}_{k}\) and/or \(G=\mathbb{Z}/p\mathbb{Z}\). Then \(N\) is both Artinian and Noetherian, so the Krull-Schmidt theorem holds: \(N\) is the direct sum of indecomposable submodules, and the number and isomorphism types of the summands are uniquely determined. (See, for example, [1, §3.4].)
In the case where \(N=H^{1}_{dR}(Y)\) or \(\mathcal{N}_{Y}\), we have complete information on \(N\) as a \(k[G]\)-module from part (2) of Proposition 4.1 or part (1) of Proposition 4.3 respectively. If we take \(H^{1}_{dR}(X)\) as given, then we know the associated graded of \(N\) with respect to the two filtrations attached to the \(G\)-action by the proofs of Theorems 1.3 and 1.6 in Section 5.
The basic question that motivates Sections 7 and 8 is this: what restrictions on \(N\) as a \(\mathbb{D}_{k}\)-module are imposed by the information in the preceding paragraph? As we will see below, this question appears to be quite difficult, and we are only able to give satisfactory answers in the simplest cases. In the rest of this section, we explain some of the difficulties and give examples illustrating them.
### Extensions of \(BT_{1}\) modules
Propositions 4.1 and 4.3 tell us that the self-dual \(BT_{1}\) module \(H^{1}_{dR}(Y)_{ll}\) is a repeated extension of the self-dual \(BT_{1}\) module \(H^{1}_{dR}(X)_{ll}\). In this section, we make some comments (surely well-known to experts) about how ill-behaved such extensions may be, even when \(k\) is algebraically closed.
More precisely, consider the full subcategory of the category of \(\mathbb{D}_{k}\)-modules whose objects are \(BT_{1}\) modules. We use freely the Kraft classification of \(BT_{1}\) modules in terms of cyclic words on the two letter alphabet \(\{f,v\}\). (See [10] for an overview.)
It is a simple exercise to check that this category is closed under extension and quotient in the sense that if
\[0\to M_{1}\to M\to M_{2}\to 0\]
is an exact sequence of \(\mathbb{D}_{k}\)-modules, and if two of \(M,M_{1},M_{2}\) are \(BT_{1}\) modules, then so is the third.
However, this category has the unfortunate property that the image and kernel of a morphism between \(BT_{1}\) modules need not be \(BT_{1}\) modules. For example, if \(M_{1,1}\) is the module associated to the word \(fv\) (so one generator \(e\) and one relation \(Fe=Ve\)), then the map of modules \(M_{1,1}\to M_{1,1}\) determined by sending \(e\) to \(Fe\) has kernel and image isomorphic to the module \(\mathbb{D}_{k}/(F,V)\), and this is not a \(BT_{1}\) module.
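Explicitly, \(M_{1,1}\) has \(k\)-basis \(e,Fe\) with \(F^{2}e=FVe=pe=0\), and the map in question acts by

\[e\mapsto Fe,\qquad Fe\mapsto F^{2}e=0,\qquad\text{so}\qquad\operatorname{Ker}=\operatorname{Im}=k\,Fe.\]

On the line \(k\,Fe\) both \(F\) and \(V\) act by zero, so \(\operatorname{Ker}(F)=k\,Fe\neq 0=\operatorname{Im}(V)\) and \(k\,Fe\) is not a \(BT_{1}\) module.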
In [10], there is a determination of the simple \(BT_{1}\) modules, i.e., those that have no non-trivial \(BT_{1}\) submodule. It follows by standard arguments that every \(BT_{1}\) module is an iterated extension of simple \(BT_{1}\) modules.
Unfortunately, there is no Jordan-Holder theorem here: The list of simple \(BT_{1}\) modules appearing in a presentation of a given \(M\) as an extension of simple \(BT_{1}\) modules is not in general uniquely determined.
Here is an example: Let \(M\) be the module associated to \(f^{3}v^{3}\) (one generator \(e\) with relation \(F^{3}e=V^{3}e\)). Then it is not hard to check that \(M\) is a three-fold extension of \(M_{1,1}\). On the other hand, \(M\) also admits a surjection onto the module \(M_{2,1}\) corresponding to \(ffv\) (one generator \(a\) with relation \(F^{2}a=Va\)) with kernel isomorphic to \(M_{1,2}\), the module corresponding to the word \(fvv\) (one generator \(b\) with relation \(Fb=V^{2}b\)). Oort's results imply that \(M_{2,1}\) and \(M_{1,2}\) are simple, so we have no uniqueness of "Jordan-Holder" factors.
_Example 6.3_.: We show by example that \(H^{1}_{dR}(Y)\) is not determined by \(H^{1}_{dR}(X)\), even when \(k\) is algebraically closed; that is, \(H^{1}_{dR}(Y)\) is not determined as a \(\mathbb{D}_{k}[G]\)-module by its associated graded \(\mathbb{D}_{k}\)-module.
Assume that \(k\) is algebraically closed. Then, as explained in the proof of part (1) of Proposition 4.1, the data of the cover \(\pi:Y\to X\) and the isomorphism \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) determines and is determined by an element in the finite-dimensional \(\mathbb{F}_{p}\)-vector spaces
\[H^{1}_{et}(X,\mathbb{Z}/p\mathbb{Z})\cong H^{1}_{dR}(X)^{F=1}\cong H^{1}(X, \mathcal{O}_{X})^{F=1}.\]
Multiplying the element by a scalar \(\alpha\in\mathbb{F}_{p}^{\times}\) represents the same cover \(\pi:Y\to X\) with the isomorphism \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) multiplied by \(\alpha\). The set of unramified covers \(\pi:Y\to X\) (without the isomorphism \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\)) is thus in bijection with the projective space \(\mathbb{P}\left(H^{1}(X,\mathcal{O}_{X})^{F=1}\right)\). For general \(k\), elements of \(\mathbb{P}\left(H^{1}(X,\mathcal{O}_{X})^{F=1}\right)\) correspond to \(\overline{k}\)-isomorphism classes of covers which can be defined over \(k\). (In general, they are represented by several distinct \(k\)-isomorphism classes of covers. See Remark 9.2 (1) below.)
Take \(p=3\) and let \(X\) be the degree 9 hyperelliptic curve over \(k:=\mathbb{F}_{3}\) given by
\[X:\qquad y^{2}=x^{9}+x^{4}+x^{2}+1.\]
Then \(X\) has genus \(g_{X}=4\) and \(f_{X}=\nu_{X}=2\), and \(H^{1}(X,\mathcal{O}_{X})^{F=1}\) is \(2\)-dimensional over \(\mathbb{F}_{p}\). The 4 unramified covers of \(X\) over \(\overline{k}\) thus all arise from covers defined over \(k\). For each, we choose a cover representing it given by \(Y_{i}:\ z^{p}-z=f_{i}\), with \(f_{i}\) in the table below, and for each cover \(Y_{i}\) we record the EO-type of \(\mathbb{D}(J_{Y_{i}}[p]_{ll})\) (which determines the isomorphism class of \(J_{Y_{i}}[p]_{ll}\) over \(\overline{k}\), but not in general over \(k\)). For \(a\in\mathbb{F}_{3}\), we also consider the twisted curve \(Y_{i}(a):z^{p}-z=f_{i}+a\) and record the invariant factors of \(F\) acting on \(\mathbb{D}(J_{Y_{i}}(a)[p]_{et})\), which determine its isomorphism type as a \(k[F]\)-module. We emphasize that \(Y_{i}(a)\simeq Y_{i}\) over \(\overline{k}\) for each \(a\). This list of invariant factors will be of the form \((F-1)^{e_{1}},(F-1)^{e_{2}},\ldots\), with \(e_{1}\leq e_{2}\leq\ldots\), and for ease of notation we record it simply as \(e_{1},e_{2},\ldots\). Perhaps surprisingly, the isomorphism type varies among the three possible twists over \(k\).
| \(Y_{i}:\ z^{p}-z=\) | EO-type of \(\mathbb{D}(J_{Y_{i}}[p]_{ll})\) | Inv. factors of \(F\) on \(\mathbb{D}(J_{Y_{i}(a)}[p]_{et})\), \(a=0\) | \(a=1\) | \(a=-1\) |
| --- | --- | --- | --- | --- |
| \(-(x^{6}+x^{3}+x-1)y\) | \([0,0,0,1,2,3]\) | \(1,1,2\) | \(1,3\) | \(1,3\) |
| \((x^{6}+x+1)y\) | \([0,0,1,1,2,3]\) | \(1,1,2\) | \(1,3\) | \(1,3\) |
| \((x^{3}+1)y+1\) | \([0,1,1,2,3,4]\) | \(1,3\) | \(1,3\) | \(1,1,2\) |
| \((x^{6}-x^{3}+x)y\) | \([0,1,1,2,3,4]\) | \(1,1,2\) | \(1,3\) | \(1,3\) |
Note that the three group schemes \(J_{Y_{i}}[p]_{ll}\) for \(1\leq i\leq 3\) are pairwise non-isomorphic over \(\overline{k}\), and that the four \(k\)-group schemes \(J_{Y_{i}}[p]\) for \(1\leq i\leq 4\) are pairwise non-isomorphic over \(k\). However, \(J_{Y_{3}}[p]\) and \(J_{Y_{4}}[p]\) become isomorphic over \(\overline{k}\). Note as well that the EO-types of \(J_{Y_{3}}[p]\) and \(J_{Y_{4}}[p]\) show that the conclusion of Theorem 1.12 need not hold when \(g_{X}-f_{X}>1\).
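The invariants of the base curve quoted above are easy to recompute independently. The following is a minimal Sage sketch (not the code used to produce the table above; it assumes the `genus()`, `p_rank()`, and `a_number()` methods that recent versions of Sage provide for hyperelliptic curves over finite fields):

```python
# Recompute g_X and f_X for X : y^2 = x^9 + x^4 + x^2 + 1 over F_3.
from sage.all import GF, PolynomialRing, HyperellipticCurve

R = PolynomialRing(GF(3), 'x')
x = R.gen()
X = HyperellipticCurve(x**9 + x**4 + x**2 + 1)
print(X.genus())     # 4, the genus g_X quoted above
print(X.p_rank())    # expected: 2, the p-rank f_X quoted above
print(X.a_number())  # the a-number of X
```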
## 7. Applications to the etale part of \(J_{Y}[p]\)
We turn to a consideration of the etale part of \(J_{Y}[p]\). When \(k\) is algebraically closed, \(J_{Y}[p]_{\acute{e}t}\) is completely determined by the isomorphism (1.1). As we will see below, the situation is more interesting when \(k\) is only assumed to be perfect.
Proof of Proposition 1.8.: Applying the Dieudonne functor to parts (3) and (4) of Proposition 4.1 yields an exact sequence
\[0\to\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\acute{e}t}/\delta\xrightarrow{\delta^{p -1}}J_{Y}[p]_{\acute{e}t}[\delta]\to\mathbb{Z}/p\mathbb{Z}\to 0,\]
and an identification of \(\operatorname{Im}\delta^{p-1}\) with \(\mathcal{G}_{X}\) via the isomorphism \(\pi^{*}:J_{X}[p]_{\acute{e}t}\to J_{Y}[p]_{\acute{e}t}[\delta]\). Splitting this exact sequence into two short exact sequences yields sequences (1.5) and (1.6).
For part (1), assume that the sequence (1.5) splits. Then we have an inclusion
\[\mathbb{Z}/p\mathbb{Z}\hookrightarrow J_{Y}[p]_{\acute{e}t}[\delta] \hookrightarrow J_{Y}[p]_{\acute{e}t}\]
whose image does not lie in \(\operatorname{Im}\delta^{p-1}\). By part (2) of Proposition 4.1, the image therefore does not lie in the image of \(\delta\). This shows that the quotient \(\mathcal{Q}\) defined by the exactness of
\[0\to\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\acute{e}t}\to\mathcal{Q}\to 0 \tag{7.1}\]
is \(G\)-free.
Conversely, if we have the exact sequence (7.1) with \(\mathcal{Q}\) assumed to be \(G\)-free, then the image of \(\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]\) is killed by \(\delta\) and not in the image of \(\delta\), so it splits (1.5). This completes the proof of part (1) of the proposition.
For part (2), assume that the sequence (1.6) splits. Then we have a surjection
\[J_{Y}[p]_{\acute{e}t}\twoheadrightarrow J_{Y}[p]_{\acute{e}t}/\delta \twoheadrightarrow\mathbb{Z}/p\mathbb{Z}\]
whose kernel maps surjectively to \(\mathcal{G}_{X}\) via \(\delta^{p-1}\). This shows that the subgroup \(\mathcal{K}\) defined by the exactness of
\[0\to\mathcal{K}\to J_{Y}[p]_{\acute{e}t}\to\mathbb{Z}/p\mathbb{Z}\to 0 \tag{7.2}\]
is \(G\)-free.
Conversely, if we have the exact sequence (7.2) with \(\mathcal{K}\) assumed to be \(G\)-free, then the surjection \(J_{Y}[p]_{\acute{e}t}\to\mathbb{Z}/p\mathbb{Z}\) factors through \(J_{Y}[p]_{\acute{e}t}/\delta\), so it splits (1.6). This completes the proof of part (2) of the proposition.
For part (3), if the exact sequences (1.5) and (1.6) both split, then we have the sequences (7.1) and (7.2). It then follows easily from part (2) of Proposition 4.1 that the composed maps
\[\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\acute{e}t}\to\mathbb{Z}/p\mathbb{Z}\]
and
\[\mathcal{K}\to J_{Y}[p]_{\acute{e}t}\to\mathcal{Q}\]
are isomorphisms, so \(J_{Y}[p]_{\acute{e}t}\) is isomorphic to the direct sum of \(\mathbb{Z}/p\mathbb{Z}\) and a \(G\)-free group. (Moreover, sequences (7.1) and (7.2) both split.) The converse is straightforward.
This completes the proof of Proposition 1.8.
_Example 7.1_.: In general, the splittings of the exact sequences (1.5) and (1.6) appear to be independent conditions: Table 1 provides several examples of etale \(\mathbb{Z}/3\mathbb{Z}\)-covers of genus 3 hyperelliptic curves over \(\mathbb{F}_{3}\) with \(f_{X}=2\) exhibiting that all 4 splitting possibilities indeed occur. (The additional notations \(d_{1}\), \(d_{2}\), and \(\mu\) are explained in Example 7.5 below.) Note, however, that in certain special situations, there are implications: for example when \(k\) is finite and \(\nu_{X}=1\), the splitting of (1.5) implies that of (1.6) by Theorem 1.9 (4).
In some of the proofs below, it will be convenient to use the language of Galois representations. Recall (e.g., using [10, §5.8]) that there is an equivalence of categories between etale \(p\)-group schemes over \(k\) and representations of \(\operatorname{Gal}(\overline{k}/k)\) on finite-dimensional \(\mathbb{F}_{p}\)-vector spaces. The representation associated to a group \(\mathcal{G}\) is the \(\mathbb{F}_{p}\)-vector space \(\mathcal{V}:=\mathcal{G}(\overline{k})\) equipped with the natural action of \(\operatorname{Gal}(\overline{k}/k)\). In particular, when \(k\) is finite, \(\operatorname{Gal}(\overline{k}/k)\) is pro-cyclic, and \(\mathcal{G}\) is determined by a single endomorphism of \(\mathcal{V}\), namely Frobenius.
Proof of Theorem 1.9.: First note that if
\[0\to\mathcal{G}_{1}\to\mathcal{G}_{2}\to\mathcal{G}_{3}\to 0\]
is an exact sequence of \(p\)-torsion group schemes over \(k\), then
\[\nu(\mathcal{G}_{1})\leq\nu(\mathcal{G}_{2})\leq\nu(\mathcal{G}_{1})+\nu( \mathcal{G}_{3}).\]
If the sequence splits, then \(\nu(\mathcal{G}_{2})=\nu(\mathcal{G}_{1})+\nu(\mathcal{G}_{3})\).
By part (1) of Theorem 1.3,
\[J_{X}[p]_{\ell t}\cong J_{Y}[p]_{\ell t}[\delta]\subset J_{Y}[p]_{\ell t},\]
so we have \(\nu_{X}\leq\nu_{Y}\).
Again by part (1) of Theorem 1.3, we have exact sequences
\[0\to J_{Y}[p]_{\ell t}[\delta^{i-1}]\to J_{Y}[p]_{\ell t}[\delta^{i}]\to \mathcal{G}_{X}\to 0\]
for \(i=2,\ldots,p\).
Applying the observation in the first paragraph of the proof and induction on \(i\), we find that \(\nu_{Y}\leq\nu_{X}+(p-1)\nu(\mathcal{G}_{X})\). Since \(\mathcal{G}_{X}\) is a subgroup scheme of \(J_{X}[p]\), \(\nu(\mathcal{G}_{X})\leq\nu_{X}\) and we have \(\nu_{Y}\leq p\nu_{X}\). This establishes part (1).
If (1.3) splits, then \(\nu(\mathcal{G}_{X})=\nu_{X}-1\) and we find that \(\nu_{Y}\leq p(\nu_{X}-1)+1\). This yields part (2).
For part (3), if \(k\) is algebraically closed, then \(\nu_{X}=f_{X}\) and \(f_{X}\geq 1\) since we have assumed \(X\) has an etale \(\mathbb{Z}/p\mathbb{Z}\) cover. Now assume that \(k\) is finite. Note that by Definition-Proposition 1.1, the representation associated to \(J_{X}[p]_{\ell t}\) has the trivial representation as a quotient. This is equivalent to saying that the action of Frobenius on \(\mathcal{V}=J_{X}[p](\overline{k})\) has \(1\) as an eigenvalue. It follows that \(J_{X}[p](k)\) is non-trivial, i.e., \(\nu_{X}\geq 1\).
If \(k\) is finite, (1.5) is split, and (1.6) is non-split, then we claim that \(\operatorname{Fr}_{k}-1\) can not act bijectively on \(\mathcal{G}_{X,\acute{e}t}(\overline{k})\), where \(\operatorname{Fr}_{k}\) generates \(\operatorname{Gal}(\overline{k}/k)\). Indeed, assuming to the contrary that \(\operatorname{Fr}_{k}-1\) is bijective on \(\mathcal{G}_{X,\acute{e}t}(\overline{k})\), one sees via the snake lemma that in the exact sequence of \(\overline{k}\)-points associated to (1.6), the image of \(\operatorname{Fr}_{k}-1\) on \((J_{Y}[p]_{\acute{e}t}/\delta)(\overline{k})\) projects isomorphically onto the quotient \(\mathcal{G}_{X,\acute{e}t}(\overline{k})\); the inverse of this isomorphism splits (1.6), contradicting our hypothesis. We conclude that \(\operatorname{Fr}_{k}\) must have a nontrivial fixed vector on \(\mathcal{G}_{X,\acute{e}t}(\overline{k})\). Since (1.3) (equivalently (1.5)) splits by hypothesis, we deduce that the space of \(\operatorname{Fr}_{k}\)-fixed vectors in \(J_{X}[p]_{\acute{e}t}(\overline{k})\) is at least \(2\)-dimensional, _i.e._\(\nu_{X}\geq 2\). This completes the proof of part (4) of the theorem.
_Example 7.2_.: Table 1 shows that the bounds on \(\nu_{Y}\) in Theorem 1.9 (1)-(2) are sharp: if (1.5) is non-split, we must have \(\nu_{X}=1\) since \(f_{X}=2\) throughout the table (indeed, the alternative is \(\nu_{X}=2=f_{X}\), in which case \(J_{X}[p]_{\acute{e}t}\) would be completely split, implying the splitting of (1.5)) This forces \(\nu_{Y}>1\) by Theorem 1.11 (B) and Proposition 1.8 (3), which gives the bounds \(1=\nu_{X}<\nu_{Y}\leq 3\) in this situation. Lines 4-6 of the table show that both possibilities \(\nu_{Y}=2,3\) indeed occur.
When (1.5) is split, we have \(\nu_{X}\leq\nu_{Y}\leq 3\nu_{X}-2\); lines 1-3 show that the unique possibility \(\nu_{Y}=1\) indeed occurs when \(\nu_{X}=1\), while lines 7-10 show that all three possibilities \(\nu_{Y}=2,3,4\) occur when \(\nu_{X}=2\).
_Example 7.3_.: In part (3), if we do not assume that \(k\) is finite, \(\operatorname{Gal}(\overline{k}/k)\) may no longer be procyclic, and it may have \(\mathbb{F}_{p}\)-representations with trivial quotients, but no trivial subrepresentations. Here is an example of such a representation.
Take \(p=3\), let \(F=\mathbb{F}_{p}(t)\), let \(F^{\prime}=\mathbb{F}_{p}(v)\), and embed \(F\hookrightarrow F^{\prime}\) by \(t\mapsto(v^{3}-v)^{2}\). Then one checks that \(F^{\prime}/F\) is Galois with group \(S_{3}\). Let \(k\) and \(k^{\prime}\) be the perfections of \(F\) and \(F^{\prime}\) respectively. Then we have
\[\operatorname{Gal}(\overline{k}/k)\twoheadrightarrow\operatorname{Gal}(k^{ \prime}/k)\cong S_{3}.\]
Let \(\gamma_{1}\) and \(\gamma_{2}\) be two involutions generating \(S_{3}\), and let \(S_{3}\) act on \(\mathcal{V}:=\mathbb{F}_{3}^{2}\) by the matrices
\[\gamma_{1}\mapsto\left(\begin{smallmatrix}-1&1\\ 0&1\end{smallmatrix}\right)\quad\text{and}\quad\gamma_{2}\mapsto\left( \begin{smallmatrix}-1&0\\ 0&1\end{smallmatrix}\right).\]
Then one sees easily that the resulting representation of \(\operatorname{Gal}(\overline{k}/k)\) has the trivial representation as a quotient, but it does not have the trivial representation as a sub. Thus the corresponding group scheme \(\mathcal{G}\) admits a surjection to \(\mathbb{Z}/p\mathbb{Z}\), but has \(\nu(\mathcal{G})=0\).
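Spelled out: a vector fixed by \(\gamma_{2}\) must have the form \((0,b)^{T}\), and such a vector is fixed by \(\gamma_{1}\) only if \(b=0\), while both generators act trivially modulo the line spanned by \(e_{1}\):

\[\gamma_{2}\begin{pmatrix}a\\ b\end{pmatrix}=\begin{pmatrix}-a\\ b\end{pmatrix},\qquad\gamma_{1}\begin{pmatrix}0\\ b\end{pmatrix}=\begin{pmatrix}b\\ b\end{pmatrix},\qquad\gamma_{1}e_{2}=e_{1}+e_{2},\qquad\gamma_{2}e_{2}=e_{2}.\]

Thus \(\mathcal{V}^{S_{3}}=0\) while \(S_{3}\) acts trivially on \(\mathcal{V}/\mathbb{F}_{3}e_{1}\).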
To prove Theorems 1.10 and 1.11, we will use an approach via coordinates, so we begin with some preliminaries on \(\mathbb{F}_{p}[G]\)-modules.
Let \(S=\mathbb{F}_{p}[G]\cong\mathbb{F}_{p}[\delta]/(\delta^{p})\), and for \(i=1,\dots,p\), let \(W_{i}=S/(\delta^{i})\) be the \(i\)-dimensional indecomposable module over \(S\). Write \(a\mapsto\overline{a}\) for the natural reduction homomorphism \(S\to\mathbb{F}_{p}\) (the quotient modulo \(\delta\)). We have natural identifications \(\operatorname{Hom}_{S}(W_{p},W_{p})\cong S\), \(\operatorname{Hom}_{S}(W_{p},W_{1})\cong\mathbb{F}_{p}\), \(\operatorname{Hom}_{S}(W_{1},W_{p})\cong\mathbb{F}_{p}\), \(\operatorname{Hom}_{S}(W_{1},W_{1})\cong\mathbb{F}_{p}\). Under these identifications, the maps
\[\operatorname{Hom}_{S}(W_{p},W_{p})\to\operatorname{Hom}_{S}(W_{p}[\delta],W_ {p}[\delta])=\operatorname{Hom}_{S}(W_{1},W_{1})\]
and
\[\operatorname{Hom}_{S}(W_{p},W_{p})\to\operatorname{Hom}_{S}(W_{p}/\delta,W_ {p}/\delta)=\operatorname{Hom}_{S}(W_{1},W_{1})\]
are both the reduction map \(S\to\mathbb{F}_{p}\). The restriction map
\[\operatorname{Hom}_{S}(W_{p},W_{1})\to\operatorname{Hom}_{S}(W_{p}[\delta],W_ {1})=\operatorname{Hom}_{S}(W_{1},W_{1})\]
is zero, as is the reduction map
\[\operatorname{Hom}_{S}(W_{1},W_{p})\to\operatorname{Hom}_{S}(W_{1},W_{p}/\delta)= \operatorname{Hom}_{S}(W_{1},W_{1}).\]
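Both vanishing statements follow from a single chain of containments, valid since \(p\geq 2\):

\[\operatorname{Im}\bigl(W_{1}\to W_{p}\bigr)\subseteq W_{p}[\delta]=\delta^{p-1}W_{p}\subseteq\delta W_{p}\subseteq\operatorname{Ker}\bigl(W_{p}\to W_{1}\bigr),\]

where the last inclusion holds for every \(S\)-linear map \(W_{p}\to W_{1}\) because \(\delta W_{1}=0\).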
Now let \(\mathcal{V}_{X}\) and \(\mathcal{V}_{Y}\) be the representations of \(\operatorname{Gal}(\overline{k}/k)\) associated to \(J_{X}[p]_{\ell t}\) and \(J_{Y}[p]_{\ell t}\):
\[\mathcal{V}_{X}=J_{X}[p]_{\ell t}(\overline{k})\quad\text{and}\quad\mathcal{V }_{Y}=J_{Y}[p]_{\ell t}(\overline{k}).\]
By part (2) of Proposition 4.1, we have a (non-canonical) isomorphism of \(S\)-modules
\[\mathcal{V}_{Y}\cong W_{1}\oplus W_{p}^{f_{X}-1}.\]
Choosing such an isomorphism, we may represent the action of \(\phi\in\operatorname{Gal}(\overline{k}/k)\) by a matrix of the form
\[\left(\begin{array}{c|c}a_{0}&\alpha_{1}\,\cdots\,\alpha_{r}\\ \hline\beta_{1}&\\ \vdots&A=(a_{ij})\\ \beta_{r}&\end{array}\right), \tag{7.3}\]
where \(r=f_{X}-1\), \(a_{0}\in\operatorname{Hom}_{S}(W_{1},W_{1})\), \(\alpha_{j}\in\operatorname{Hom}_{S}(W_{p},W_{1})\), \(\beta_{i}\in\operatorname{Hom}_{S}(W_{1},W_{p})\), \(a_{ij}\in\operatorname{Hom}_{S}(W_{p},W_{p})\), and \(i\) and \(j\) run from 1 to \(r\).
Using the observations above on \(S\)-homomorphisms, we find that the induced action of \(\phi\) on \(\mathcal{V}_{Y}[\delta]\cong\mathcal{V}_{X}\) is given by the matrix
\[\left(\begin{array}{c|c}a_{0}&0\,\cdots\,0\\ \hline\beta_{1}&\\ \vdots&\overline{A}=(\overline{a}_{ij})\\ \beta_{r}&\end{array}\right), \tag{7.4}\]
and the induced action of \(\phi\) on \(\mathcal{V}_{Y}/\delta\) is given by the matrix
\[\left(\begin{array}{c|c}a_{0}&\alpha_{1}\,\cdots\,\alpha_{r}\\ \hline 0&\\ \vdots&\overline{A}=(\overline{a}_{ij})\\ 0&\end{array}\right).\]
The block triangular structure of the last two displayed matrices reflects the exact sequences (1.5) and (1.6) (i.e., the fact that \(\mathbb{Z}/p\mathbb{Z}\) is a quotient of \(\mathcal{V}_{Y}[\delta]\) and a sub of \(\mathcal{V}_{Y}/\delta\)), and we find that \(a_{0}=1\). Moreover, (1.5) splits if and only if we may choose the isomorphism \(\mathcal{V}_{Y}\cong W_{1}\oplus W_{p}^{r}\) such that the \(\beta_{i}\) all vanish, and (1.6) splits if and only if we may choose the isomorphism such that the \(\alpha_{j}\) all vanish. (This gives an alternate proof of Proposition 1.8.)
**Lemma 7.4**.: _Define \(U\), the 1-units of \(\operatorname{GL}_{r}(S)\), by the exactness of_
\[0\to U\to\operatorname{GL}_{r}(S)\to\operatorname{GL}_{r}(\mathbb{F}_{p})\to 0,\]
_where the surjection is \((a_{ij})\mapsto(\overline{a}_{ij})\). Then the group \(U\) has exponent \(p\)._
Proof.: Write an element of \(U\) in the form \(I+\Delta M\) where \(\Delta\) is the diagonal matrix with all diagonal entries equal to \(\delta\) and where \(M\in M_{r}(S)\). Since \(I\) and \(\Delta\) are in the center of the characteristic \(p\) ring \(M_{r}(S)\), we have
\[(I+\Delta M)^{p}=I+\Delta^{p}M^{p}=I.\]
With these preliminaries, we are ready to prove the two remaining theorems about \(J_{Y}[p]_{\ell t}\).
Proof of Theorem 1.10.: For part (1), note that if \(\phi\) has matrix of the shape (7.4) with \(a_{0}=1\), then by an inductive argument, \(\phi^{n}\) has matrix
\[\left(\begin{array}{c|c}1&0\,\cdots\,0\\ \hline n\beta_{1}&\\ \vdots&\overline{A}^{n}\\ n\beta_{r}&\end{array}\right).\]
Thus there is a finite Galois extension \(k^{\prime}\) of \(k\) with \(\operatorname{Gal}(k^{\prime}/k)\) of exponent \(p\) such that the representation of \(\operatorname{Gal}(\overline{k}/k^{\prime})\) on \(\mathcal{V}_{X}=\mathcal{V}_{Y}[\delta]\) has the trivial representation as a direct factor, i.e., such that the sequences (1.3) and (1.5) split over \(k^{\prime}\).
For part (2), to say that \(J_{X}[p]_{\ell t}\) is completely split is to say that for every \(\phi\in\operatorname{Gal}(\overline{k}/k)\), the matrix of \(\phi\) is of the form (7.3) with \(a_{0}=1\), \(\beta_{i}=0\) for all \(i\), and \(\overline{A}=I\), i.e., with \(A\in U\). Lemma 7.4 then shows that \(\phi^{p}\) acts trivially on \(\mathcal{V}_{Y}\). This shows that there is a finite Galois extension \(k^{\prime}\) of \(k\) with \(\operatorname{Gal}(k^{\prime}/k)\) of exponent \(p\) such that \(J_{Y}[p]_{\ell t}\) is completely split over \(k^{\prime}\).
This completes the proof of the theorem.
_Example 7.5_.: Table 1 (in which each base curve \(X\) has \(f_{X}=2\)) illustrates that the degree bounds of Theorems 1.10 and 1.11 are sharp: in the antepenultimate column we have listed the degree \(d_{1}\) of the unique minimal extension \(k_{1}/k\) with the property that \(J_{X}[p]_{\ell t}\) is completely split over \(k_{1}\). The penultimate column lists the degree \(d_{2}\) of the minimal extension \(k_{2}/k_{1}\) over which \(J_{Y}[p]_{\ell t}\) is completely split. When \(\nu_{X}=\nu_{Y}=1\), the final column gives the positive integer \(\mu\) specified in (1a) of Theorem 1.11; note that all possibilities indeed occur. The first three lines of the table provide examples in which the extension \(k^{\prime}\) guaranteed by (1a) of Theorem 1.11 is as large as possible; in the first line \(k^{\prime\prime}=k\), while in the second and third lines \(k^{\prime\prime}\) is the unique degree \(p=3\) extension of \(k\).
Proof of Theorem 1.11.: For part (A), if \(f_{X}=1\), then in equation (1.3), \(\mathcal{G}_{X}=0\) and \(J_{X}[p]_{\ell t}\cong\mathbb{Z}/p\mathbb{Z}\). It then follows from part (1) of Theorem 1.3 that \(J_{Y}[p]_{\ell t}\cong\mathbb{Z}/p\mathbb{Z}\).
Now consider part (B). The alternatives are mutually exclusive, and they exhaust the possibilities, so it will suffice to verify the additional claims in each case. Note that the subgroup \(\mathcal{G}_{X,\ell t}\) defined by exact sequence (1.3) corresponds to a line in \(\mathcal{V}_{X}\) which is invariant under \(\operatorname{Gal}(\overline{k}/k)\). Let \(\phi\in\operatorname{Gal}(\overline{k}/k)\) be Frobenius, and let \(\rho\in\mathbb{F}_{p}^{\ \times}\) be the eigenvalue of \(\phi\) on this line.
(Case (1a)) If \(\rho\neq 1\), then \(p>2\) since \(\mathbb{F}_{2}^{\times}=\{1\}\). Moreover, both (1.5) and (1.6) are split (by taking the kernel or image of a high power of \(\phi-\rho\)). Thus, we may choose the isomorphism \(\mathcal{V}_{Y}\cong W_{1}\oplus W_{p}\) so that \(\phi\) has the shape
\[\left(\begin{array}{c|c}1&0\\ \hline 0&a\end{array}\right),\]
where \(a\in S\) has \(\overline{a}=\rho\). It is then clear that \(\nu_{X}=\nu_{Y}=1\) and we are in case (1a). The group scheme \(\mathcal{Q}\) in the statement is the one corresponding to the representation of \(\operatorname{Gal}(\overline{k}/k)\) on \(W_{p}\) with Frobenius acting by \(a\). Since \(\overline{a}^{p-1}=\rho^{p-1}=1\), over an extension \(k^{\prime}\) of \(k\) of degree dividing \(p-1\), \(\phi\) acts on \(W_{p}\) via a 1-unit, so has a non-trivial space of invariants. This shows that \(|\mathcal{Q}(k^{\prime})|=p^{\mu}\)
with \(1\leq\mu\leq p\). Since \(a\) is \(\rho\) times a \(1\)-unit, Lemma 7.4 shows that \(a^{p}=\rho^{p}=\rho\in\mathbb{F}_{p}\subset S\), so that over the extension \(k^{\prime\prime}\) of degree \(p\), Frobenius (\(=\phi^{p}\)) acts on \(W_{p}\) by the scalar \(\rho\), and \(\mathcal{Q}\cong(\mathcal{G}^{\prime})^{p}\) for a rank \(1\), non-split group scheme \(\mathcal{G}^{\prime}\). Finally, \(\phi^{p(p-1)}\) acts trivially on \(\mathcal{V}_{Y}\), so \(J_{Y}[p]_{\acute{e}t}\) is completely split over an extension of degree dividing \(p(p-1)\).
(Case (1b), \(p>2\), and Case (1), \(p=2\)) Next, suppose that \(\rho=1\) and (1.3) is not split. Then \(\mathcal{G}_{X}\cong\mathbb{Z}/p\mathbb{Z}\) and \(\nu_{X}=1\). If \(p=2\), this is enough to conclude that we are in case (1). Since \(J_{X}[p]_{\acute{e}t}\) is not completely split, \(J_{Y}[p]_{\acute{e}t}\) is also not completely split and \(\nu_{Y}<p+1\). The action of \(\phi\) is given by a matrix of the form
\[\left(\begin{array}{c|cc}1&\alpha\\ \hline\beta&a\end{array}\right),\]
where \(\beta\neq 0\). If \(p>2\), we consider the matrix of \(\phi\) with respect to a suitable \(\mathbb{F}_{p}\)-basis of \(W_{1}\oplus W_{p}\), namely 1 for \(W_{1}\) and \(\delta^{p-1},\delta^{p-2},\ldots,1\) for \(W_{p}\). Then \(\phi\) takes the form
\[\left(\begin{array}{c|cccc}1&0&\ldots&0&\alpha\\ \hline\beta&1&*&*&*\\ 0&0&1&*&*\\ \vdots&\vdots&\ddots&1&*\\ 0&0&\ldots&0&1\end{array}\right).\]
It is then visible that \(\phi-1\) has rank \(<p\), so \(\nu_{Y}>1\). Thus we are in case (1b). For any \(p\), an inductive argument shows that \(\phi^{n}\) has matrix
\[\left(\begin{array}{c|cc}1&n\alpha\\ \hline n\beta&(1+2+\cdots+(n-1))\beta\alpha+a^{n}\end{array}\right).\]
Applying Lemma 7.4 shows that if \(p>2\), then \(J_{Y}[p]_{\acute{e}t}\) is completely split over the extension of \(k\) of degree \(p\), and if \(p=2\), then \(J_{Y}[p]_{\acute{e}t}\) is completely split over an extension of \(k\) of degree dividing \(4\).
(Case (2)) Finally, if \(\rho=1\) and (1.3) splits, then \(\nu_{X}=2\), \(J_{X}[p]_{\acute{e}t}\) splits completely, and we are in case (2) (for any \(p\)). The conclusions there follow from Theorem 1.9 and part (2) of Theorem 1.10. We can say a bit more about the structure of \(J_{Y}[p]_{\acute{e}t}\) in this case: By part (1) of Proposition 1.8, there is an exact sequence
\[0\to\mathbb{Z}/p\mathbb{Z}\to J_{Y}[p]_{\mathcal{E}t}\to\mathcal{Q}\to 0,\]
where \(\mathcal{Q}\) is \(G\)-free of rank \(1\). We have \(\nu(\mathcal{Q})\geq 1\), and both the extension above and the group scheme \(\mathcal{Q}\) split completely over an extension of degree dividing \(p\).
This completes the proof of Theorem 1.11.
_Example 7.6_.: Taking into account Theorem 1.9 (4), which implies that when \(k\) is finite and \(\nu_{X}=1\), the splitting of (1.5) implies that of (1.6), we see that Table 1 exhibits that all possibilities specified by Theorem 1.11 indeed occur when \(p=3\) and \(k=\mathbb{F}_{3}\).
### Dependence of \(\mathcal{H}\) and \(\mathcal{N}_{Y}\) on \(S\)
The group scheme \(\mathcal{H}\) in Theorem 1.6 and its Dieudonne module \(\mathcal{N}_{Y}\) (analyzed in Proposition 4.3) apparently depend on the choice of a rational point \(S\). Indeed, the subquotients \(\operatorname{Ker}\left(\operatorname{Res}_{T/S}\mathbb{Z}/p\mathbb{Z}\to \mathbb{Z}/p\mathbb{Z}\right)\) and \(\operatorname{Coker}\left(\mu_{p}\to\operatorname{Res}_{T/S}\mu_{p}\right)\) of \(\mathcal{H}\) and their Dieudonne modules
\[\operatorname{Coker}\left(k=H^{0}(Y,\mathcal{O}_{Y})\to H^{0}(Y,\mathcal{O}_{ T})\right)\quad\text{and}\quad\operatorname{Ker}\left(H^{0}(Y,\Omega_{Y}^{1}(T)/ \Omega_{Y}^{1})\to H^{1}(Y,\Omega_{Y}^{1})=k\right)\]
visibly depend on whether \(S\) splits in \(Y\) (and more precisely on the class of \(T=\pi^{-1}(S)\) in \(H^{1}(S,\mathbb{Z}/p\mathbb{Z})\)).
**Proposition 7.8**.: _If \(k\) is algebraically closed, then the isomorphism class of \(\mathcal{N}_{Y}\) as a \(\mathbb{D}_{k}[G]\)-module is independent of the choice of the rational point \(S\)._
Proof.: The exact sequences (4.7) and (4.8) show that the local-local part of \(\mathcal{N}_{Y}\) is independent of \(S\) (without any hypothesis on \(k\)). If \(k\) is algebraically closed, the etale part of \(\mathcal{N}_{Y}\) is completely split as a \(\mathbb{D}_{k}\)-module (isomorphic to \(M(\mathbb{Z}/p\mathbb{Z})^{pf_{X}}\)) and by Proposition 4.3, it is \(G\)-free of rank \(f_{X}\), so it is isomorphic to
\[M(\mathbb{Z}/p\mathbb{Z})^{f_{X}}\otimes\mathbb{F}_{p}[G],\]
and is thus independent of \(S\). The same follows for the multiplicative part by Cartier duality. Since \(\mathcal{N}_{Y}\) is the direct sum of its etale, multiplicative, and local-local parts, this establishes the proposition.
_Example 7.9_.: Surprisingly, when \(k\) is finite, \(\mathcal{N}_{Y}\) depends on \(S\), even when \(S\) splits in \(Y\). To emphasize the subtlety of this dependence on \(S\) and its fiber \(T:=\pi^{-1}(S)\) in \(Y\), let us write \(\mathcal{N}_{Y}(T)\) in place of \(\mathcal{N}_{Y}\). Consider the smooth projective genus 5 hyperelliptic curve over \(k=\mathbb{F}_{3}\) given by the affine equation
\[X:\quad y^{2}+x^{12}+x^{10}-x^{9}+x^{6}+x^{4}-x^{2}+x-1=0.\]
This curve has \(a\)-number 1 and arithmetic \(p\)-rank \(\nu_{X}=2\), so in particular has two independent unramified \(\mathbb{Z}/3\mathbb{Z}\)-covers. One such cover \(Y\) is given by the Artin-Schreier equation \(z^{3}-z=f\) where \(f\) is the rational function
\[f=\frac{x^{9}-x^{7}-x^{6}+x^{5}+x^{4}+x^{3}+x^{2}-x+1}{x^{15}}\\ -\frac{x^{15}+x^{14}-x^{13}+x^{12}+x^{11}-x^{10}+x^{9}+x^{6}-x^{3 }+1}{x^{15}}.\]
The projective curve \(X\) has exactly six \(k\)-rational points \((x,y)\):
\[S_{1}:=(0,-1),S_{2}:=(0,1),S_{3}:=(-1,-1),S_{4}:=(-1,1),S_{5}:=(1,-1),S_{6}:=( 1,1);\]
note that the point at infinity on \(X\) is of degree 2. Let \(\pi:Y\to X\) be the covering map and \(T_{i}:=\pi^{-1}S_{i}\) be the fiber in \(Y\) over \(S_{i}\). Each \(T_{i}\) is a \(\mathbb{Z}/3\mathbb{Z}\)-torsor (for the etale topology) over \(k\), so gives a class in \(H^{1}_{\acute{e}t}(\operatorname{Spec}(k),\mathbb{Z}/p\mathbb{Z})\). Via the canonical identifications of abelian groups
\[H^{1}_{\acute{e}t}(\operatorname{Spec}(k),\mathbb{Z}/p\mathbb{Z})\simeq H^{1} (\operatorname{Gal}(\overline{k}/k),\mathbb{Z}/p\mathbb{Z})\simeq k/\wp k= \mathbb{Z}/3\mathbb{Z}\]
each torsor \(T_{i}\) gives a class \([T_{i}]\in\mathbb{Z}/3\mathbb{Z}\). For example, \([T_{1}]=0=[T_{3}]\) as each of \(T_{1}\) and \(T_{3}\) consist of 3 distinct \(k\)-rational points of \(Y\), whereas \([T_{i}]=\pm 1\) for \(i=2,4,5,6\) since for these values of \(i\), the fiber \(T_{i}\) is a single degree 3 point on \(Y\).
Using Magma, we compute the matrix of \(V\) acting on the spaces \(H^{0}(\Omega^{1}_{Y}(T_{i}))\) for \(i=1,\dots,6\). We also compute the action of the Artin-Schreier automorphism of \(Y\to X\) on the residue field of \(T_{i}\), which determines \([T_{i}]\), and obtain the following table:
\begin{tabular}{r||r r r r r r} \(i\) & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline \([T_{i}]\) & 0 & 1 & 0 & 1 & -1 & -1 \\ \(\dim\ker(V-1)\) & 4 & 3 & 4 & 3 & 3 & 3 \\ \(\dim\ker(V-1)^{3}\) & 9 & 9 & 8 & 8 & 8 & 8 \\ \end{tabular}

It follows from this that the four \(k[V]\)-modules \(H^{0}(\Omega^{1}_{Y}(T_{i}))=\mathcal{N}_{Y}(T_{i})[F]\) for \(1\leq i\leq 4\) are pairwise non-isomorphic. For \(i=1,3\) we have short exact sequences of \(k[V]\)-modules
while for \(i=2,4\) we have short exact sequences
Noting that the kernel of \(V-1\) on \(H^{0}(\Omega^{1}_{Y})\) has dimension 3, our computations show that the above exact sequences are not only _non_-split for \(1\leq i\leq 4\), but that the two _extension_ classes of \(k[V]\)-modules provided by \(H^{0}(\Omega^{1}_{Y}(T_{i}))\) for \(i=1,3\) (respectively \(i=2,4\)) are non-isomorphic! This is rather surprising, as all four \(k[V]\)-modules \(H^{0}(\Omega^{1}_{Y}(T_{i}))\) for \(1\leq i\leq 4\) become isomorphic after a finite extension of the ground field. We conclude that the \(\mathbb{D}_{k}\)-modules \(\mathcal{N}_{Y}(T_{i})\) are non-isomorphic for \(1\leq i\leq 4\). On the other hand, further computation shows that \(\mathcal{N}_{Y}(T_{4})\simeq\mathcal{N}_{Y}(T_{5})\simeq\mathcal{N}_{Y}(T_{6})\) as \(\mathbb{D}_{k}\)-modules, which is again somewhat surprising as the torsors \(T_{4}\) and \(T_{5}\) are non-isomorphic, while the torsors \(T_{1},T_{3}\) _are_ isomorphic, as are \(T_{2},T_{4}\). Again, all six Dieudonne modules \(\mathcal{N}_{Y}(T_{i})\) become isomorphic after a suitable finite extension of \(k\).
## 8. Applications to \(J_{Y}[p]\): The local-local part
In this section, we consider \(J_{Y}[p]_{ll}\) and its Dieudonne module \(H^{1}_{dR}(Y)_{ll}\). Throughout, we assume \(k=\overline{k}\). We view the local-local part of \(H^{1}_{dR}(X)\) as given, and we exploit the \(G\)-module structure on \(H^{1}_{dR}(Y)_{ll}\) to find restrictions on its structure as a \(\mathbb{D}_{k}\)-module. We begin by recording the basic properties of \(H^{1}_{dR}(Y)_{ll}\).
**Proposition 8.1**.: _Write \(M\) for \(H^{1}_{dR}(Y)_{ll}\) and let \(h=g_{X}-f_{X}\). We have_
\[M[\delta]\cong M/\delta\cong H^{1}_{dR}(X)_{ll}.\]
_Moreover, \(M\) has the following properties:_
1. \(M\) _is a free_ \(k[G]\)_-module of rank_ \(2h\)_._
2. \(M\) _is a self-dual, local-local_ \(BT_{1}\) _module, i.e.,_ \(\operatorname{Im}F=\operatorname{Ker}V\)_,_ \(\operatorname{Im}V=\operatorname{Ker}F\)_,_ \(F\) _and_ \(V\) _act nilpotently on_ \(M\)_, and_ \(M\) _admits a perfect_ \(k\)_-bilinear pairing_ \(\langle\cdot,\cdot\rangle\) _which is alternating_ (_\(\langle m,m\rangle=0\) _for all_ \(m\in M\)_) and which satisfies_ \(\langle Fm,n\rangle=\langle m,Vn\rangle^{p}\) _for all_ \(m,n\in M\)_._
3. _The pairing is compatible with the_ \(G\) _action in that_ \(\langle gm,gn\rangle=\langle m,n\rangle\) _for all_ \(m,n\in M\)_._
4. \(\operatorname{Im}F=\operatorname{Ker}V\) _and_ \(\operatorname{Im}V=\operatorname{Ker}F\) _are free submodules of_ \(M\) _of rank_ \(h\)_._
Proof.: Proposition 4.1 shows that \(M[\delta]\cong M/\delta\cong H^{1}_{dR}(X)_{ll}\). Part (1) was proven in Proposition 4.1 part (2). For part (2), as we reviewed at the beginning of Section 4, \(H^{1}_{dR}(Y)\) is a self-dual \(BT_{1}\) module where the duality is induced by the de Rham pairing, and it is easy to see that
the restriction of this pairing makes \(M\) into a self-dual \(BT_{1}\) module. It is local-local by definition. Part (3) holds because \(g\) is an automorphism of \(Y\), so has degree 1. For part (4), note that \(VM=H^{0}(Y,\Omega^{1}_{Y})_{ll}\), and comparing the result of Tamagawa (equation (4.4)) to that of Nakajima (equation (4.5)) shows that \(VM\) is free over \(k[G]\) of rank \(h\). The same then follows for \(\operatorname{Im}F=\operatorname{Ker}V\) since \(VM\cong M/(\operatorname{Ker}V)\).
_Remark 8.2_.: For a module \(M\) with properties (1-3), the spaces \(\operatorname{Im}F=\operatorname{Ker}V\) and \(\operatorname{Im}V=\operatorname{Ker}F\) are all free over \(k[G]\) of rank \(h\) as soon as one of them is. Furthermore, in this situation the calculations
\[\langle Fm,Fn\rangle=\langle m,VFn\rangle^{p}=0\quad\text{and}\quad\langle Vm,Vn\rangle=\langle m,FVn\rangle^{1/p}=0\]
show that \(\operatorname{Im}F\) and \(\operatorname{Im}V\) are isotropic, and they are maximal isotropic since they have dimension \(ph=\frac{1}{2}\dim M\).
### Coordinates
We will introduce special coordinates on any \(\mathbb{D}_{k}[G]\)-module \(M\) with properties (1-4) as in Proposition 8.1. This allows for numerical experimentation and will lead to a full analysis in a significant case.
Write \(R\) for the group ring \(k[G]\) and introduce a \(k\)-linear involution \(a\mapsto\tilde{a}\) by requiring that \(\tilde{g}=g^{-1}\) for all \(g\in G\). This involution is trivial if \(p=2\) and is non-trivial with invariant subspace of dimension \((p+1)/2\) if \(p>2\). We extend it to vectors and matrices with entries in \(R\) by acting componentwise.
Recall that \(\gamma\in G\) is the element corresponding to 1 under \(G\cong\mathbb{Z}/p\mathbb{Z}\). For \(a=a_{0}+a_{1}\gamma+\cdots+a_{p-1}\gamma^{p-1}\in R\), define
\[(a)_{0}:=a_{0}.\]
Next, note that the function \(R\times R\to k\) given by
\[(a,b):=(a\tilde{b})_{0}\]
is \(k\)-bilinear and satisfies \((\gamma a,\gamma b)=(a,b)\). Let \(J\) be the \(2h\)-by-\(2h\) matrix
\[J=\begin{pmatrix}0_{h}&I_{h}\\ -I_{h}&0_{h}\end{pmatrix},\]
where \(0_{h}\) and \(I_{h}\) are the \(h\times h\) zero and identity matrices respectively. Regarding \(R^{2h}\) as a space of column vectors, we have a perfect, alternating, \(k\)-bilinear pairing on \(R^{2h}\) given by
\[\langle m,n\rangle=\left({}^{t}mJ\tilde{n}\right)_{0}.\]
Here, \({}^{t}m\) stands for the transpose of \(m\) and \(\tilde{n}\) is computed by applying the involution \(a\mapsto\tilde{a}\) to each entry of \(n\). If \(G\) acts coordinatewise on elements of \(R^{2h}\), then \(\langle\gamma m,\gamma n\rangle=\langle m,n\rangle\).
If \(M\) is a \(\mathbb{D}_{k}[G]\)-module which is free over \(R\) of rank \(2h\), and if \(m_{1},\ldots,m_{2h}\) is an ordered basis of \(M\), we write \([m]\) for the coordinate vector of \(m\):
\[[m]=\begin{pmatrix}r_{1}\\ \vdots\\ r_{2h}\end{pmatrix}\quad\text{if}\quad m=r_{1}m_{1}+\cdots+r_{2h}m_{2h}.\]
Since \(F\) acts \(R\)-semilinearly (i.e., \(F(am)=a^{(p)}Fm\) for \(a\in R\) and \(m\in M\)), we may represent \(F\) by a \(2h\)-by-\(2h\) matrix \(\mathcal{F}\) with entries in \(R\), namely, the matrix such that
\[[Fm]=\mathcal{F}[m]^{(p)}\]
where the right hand side is the matrix product of \(\mathcal{F}\) and \([m]^{(p)}\).
We say that a \(\mathbb{D}_{k}\)-module \(N\) of dimension \(2h\) over \(k\) is _superspecial_ if it is isomorphic to \(E[p]^{h}\) where \(E\) is a supersingular elliptic curve. Three equivalent characterizations are: (i) \(F^{2}=V^{2}=0\) on \(N\); (ii) in the Kraft-Oort classification, \(N\) corresponds to the word \(fv\) repeated \(h\) times; and (iii) in the Ekedahl-Oort classification, \(N\) corresponds to the elementary sequence \([0,0,\ldots,0]\).
**Proposition 8.4**.: _Suppose that \(M\) is a \(\mathbb{D}_{k}[G]\)-module with properties_ (1-4) _as in Proposition_ 8.1_._
1. _There exists an ordered basis_ \(m_{1},\ldots,m_{2h}\) _of_ \(M\) _such that the pairing_ \(\langle\cdot,\cdot\rangle\) _is given by_ \[\langle m,n\rangle=\begin{pmatrix}{}^{t}[m]\widetilde{J[n]}\end{pmatrix}_{0},\] _and such that the matrix of_ \(F\) _has the form_ \[\mathcal{F}=\begin{pmatrix}0&B\\ 0&D\end{pmatrix}\] _where_ \(B\) _and_ \(D\) _are_ \(h\)_-by-_\(h\) _matrices with coordinates in_ \(R\) _satisfying_ \({}^{t}\tilde{D}B={}^{t}\tilde{B}D\) _and the columns of_ \(\mathcal{F}\) _generate a free_ \(R\)_-module of rank_ \(h\)_._
2. _Conversely, any choice of_ \(B\) _and_ \(D\) _satisfying_ \({}^{t}\tilde{D}B={}^{t}\tilde{B}D\) _and such that the columns of_ \(\mathcal{F}=\begin{pmatrix}0&B\\ 0&D\end{pmatrix}\) _generate a free_ \(R\)_-module of rank_ \(h\) _and_ \(\mathcal{F}\) _is_ \(p\)_-nilpotent_2 _defines the structure of_ \(\mathbb{D}_{k}[G]\)_-module on_ \(R^{2h}\) _which satisfies properties_ (1-4) _of Proposition_ 8.1_._ Footnote 2: We say \(\mathcal{F}\) is “\(p\)-nilpotent” if \(\mathcal{FF}^{(p)}\cdots\mathcal{F}^{(p^{a})}=0\) for some \(a>0\).
3. _If_ \(M/\delta M\) _is superspecial, then we may choose the basis so that the matrix of_ \(F\) _has the form_ \[\mathcal{F}=\begin{pmatrix}0&I\\ 0&D\end{pmatrix}\] _where_ \(\delta\) _divides_ \(D\) _(i.e.,_ \(\delta\) _divides every entry of_ \(D\)_) and_ \({}^{t}\tilde{D}=D\)_._
4. _Conversely, any choice of_ \(D\) _which is divisible by_ \(\delta\) _and which satisfies_ \({}^{t}\tilde{D}=D\) _defines the structure of_ \(\mathbb{D}_{k}[G]\)_-module on_ \(R^{2h}\) _which satisfies properties_ (1-4) _of Proposition_ 8.1 _and which is superspecial modulo_ \(\delta\)_._
_Remarks 8.5_.:
1. In parts (1) and (3), we do not claim that \(B\) and \(D\) are uniquely determined by \(M\).
2. In parts (2) and (4), the action of \(V\) is determined by that of \(F\) and the pairing by the requirement that \(\langle Fm,n\rangle=\langle m,Vn\rangle^{p}\).
Proof.: By hypothesis, \(\operatorname{Ker}F\) is free of rank \(h\) over \(R\) and isotropic for the pairing. Choose an \(R\)-basis \(m_{1},\ldots,m_{h}\) for \(\operatorname{Ker}F\). Since the pairing is non-degenerate, we may choose elements \(n_{1},\ldots,n_{h}\) of \(M\) such that for \(0\leq i<p\) and \(1\leq j,\ell\leq h\) we have
\[\langle\gamma^{i}m_{j},n_{\ell}\rangle=\begin{cases}1&\text{if $i=0$ and $j=\ell$,}\\ 0&\text{otherwise.}\end{cases}\]
Since \(\langle\gamma^{i}m_{j},\gamma^{i^{\prime}}n_{\ell}\rangle=\langle\gamma^{i-i^{\prime}}m_{j},n_{\ell}\rangle\), we find that \(n_{1},\ldots,n_{h}\) generate a free \(R\)-module of rank \(h\) which is complementary to \(\operatorname{Ker}F\) and in \(k\)-duality with \(\operatorname{Ker}F\) via the pairing. We then inductively modify the \(n_{j}\) by elements of \(\operatorname{Ker}F\) to make their \(R\)-span isotropic. More precisely, set \(m_{h+1}=n_{1}\), choose \(m_{12}\in\operatorname{Ker}F\) such that \(\langle m_{h+1},m_{12}\rangle=\langle m_{h+1},n_{2}\rangle\) and set \(m_{h+2}=n_{2}-m_{12}\), etc. Then for \(1\leq i,j\leq h\) we have
\[\langle m_{i},m_{j}\rangle=\langle m_{h+i},m_{h+j}\rangle=0\quad\text{and} \quad\langle m_{i},m_{h+j}\rangle=\begin{cases}1&\text{if $i=j$,}\\ 0&\text{otherwise},\end{cases}\]
The pairing \(\langle\cdot,\cdot\rangle\) then has the desired form with respect to the basis \(m_{1},\ldots,m_{2h}\).
Since the first \(h\) basis elements span the kernel of \(F\), the matrix of \(F\) has the form
\[\mathcal{F}=\begin{pmatrix}0&B\\ 0&D\end{pmatrix}\]
where \(B\) and \(D\) are \(h\)-by-\(h\) matrices with coordinates in \(R\). Since \(\operatorname{Im}F\) is \(R\)-free of rank \(h\), the columns of \(\mathcal{F}\) generate a free \(R\)-module of rank \(h\). Let \(\mathcal{V}\) be the matrix of \(V\) with respect to the chosen basis. The compatibility \(\langle Fm,n\rangle=\langle m,Vn\rangle^{p}\) implies that \({}^{t}\mathcal{F}J=J\tilde{\mathcal{V}}^{(p)}\), so
\[\mathcal{V}=\begin{pmatrix}{}^{t}\tilde{D}^{(1/p)}&-{}^{t}\tilde{B}^{(1/p)}\\ 0&0\end{pmatrix},\]
and \(VF=0\) implies \(\mathcal{VF}^{(1/p)}=0\) which in turn implies \({}^{t}\tilde{D}B={}^{t}\tilde{B}D\). This completes the proof of part (1).
For part (2), given \(B\) and \(D\) satisfying the conditions, define a \(p\)-linear operator \(F\) and a \(p^{-1}\)-linear operator \(\mathcal{V}\) on \(R^{2h}\) by setting
\[Fm=\mathcal{F}m^{(p)}\quad\text{and}\quad Vm=\mathcal{V}m^{(1/p)}\]
where
\[\mathcal{F}=\begin{pmatrix}0&B\\ 0&D\end{pmatrix}\quad\text{and}\quad\mathcal{V}=\begin{pmatrix}{}^{t}\tilde{D }^{(1/p)}&-{}^{t}\tilde{B}^{(1/p)}\\ 0&0\end{pmatrix}.\]
Then \(\operatorname{Im}F=\operatorname{Ker}V\) and \(\operatorname{Im}V=\operatorname{Ker}F\), so we obtain a \(BT_{1}\)-module with a perfect alternating pairing, and it is straightforward to check that it has the properties (1-4) enumerated in Proposition 8.1. This completes the proof of part (2)
For part (3), first choose a basis as in part (1). Since \(M/\delta M\) is superspecial, the matrix \(D\) is divisible by \(\delta\), and the condition that the columns of \(\mathcal{F}\) span a free \(R\)-module of rank \(h\) implies that \(B\) is invertible.
Now consider changes of coordinates that preserve the matrix of the pairing. These are precisely the matrices \(S\in GL_{2h}(R)\) satisfying \({}^{t}SJ\tilde{S}=J\). In particular, we may take \(S\) of the form
\[S=\begin{pmatrix}T&0\\ 0&U\end{pmatrix}\]
where \(T,U\in\operatorname{GL}_{h}(R)\) and \({}^{t}T\tilde{U}=I\). In terms of the new basis, the matrix of \(F\) is
\[S^{-1}\mathcal{F}S^{(p)}=\begin{pmatrix}0&T^{-1}BU^{(p)}\\ 0&U^{-1}DU^{(p)}\end{pmatrix}=\begin{pmatrix}0&{}^{t}\tilde{U}BU^{(p)}\\ 0&U^{-1}DU^{(p)}\end{pmatrix}.\]
By the Lang-Steinberg theorem [12, Thm. 10.1] (applied to the endomorphism \(U\mapsto\left({}^{t}\tilde{U}^{-1}\right)^{(p)}\) of \(\operatorname{GL}_{h}(R)\)), we may choose \(U\) so that \({}^{t}\tilde{U}BU^{(p)}=I\). In these new coordinates, \(\mathcal{F}\) has the desired form, and this proves part (3).
Part (4) follows from the same argument as in part (2) and the observation that \(\mathcal{F}\) is \(p\)-nilpotent as soon as \(\delta\) divides \(D\).
The coordinates of Proposition 8.4 can be used to analyze the special case where \(h=1\). Recall that the \(a\)-number of a \(\mathbb{D}_{k}\)-module is \(a=\dim_{k}\left(\operatorname{Ker}F\cap\operatorname{Ker}V\right)\).
**Theorem 8.6**.: _Assume \(p>2\) and let \(M\) be a \(\mathbb{D}_{k}[G]\)-module satisfying the properties_ (1-4) _of Proposition 8.1 with \(h=1\). The \(a\)-number of \(M\) is in \(\{2,4,\ldots,p-1,p\}\). Define integers \(\ell\) and \(b\) by \(0\leq b<a\) and \(p=\ell a+b\). Then \(M\) can be presented as the \(\mathbb{D}_{k}\)-module with generators \(e_{1},\ldots,e_{a}\) and relations_
\[F^{\ell+1}e_{i}=V^{\ell+1}e_{i}\quad\text{for $1\leq i\leq b$}\quad\text{and} \quad F^{\ell}e_{i}=V^{\ell}e_{i}\quad\text{for $b<i\leq a$}.\]
We give other descriptions of \(M\) in the proof below.
Proof.: Since \(M/\delta M\) is \(2\)-dimensional over \(k\) and local-local, it is superspecial. We apply part (3) of Proposition 8.4 to identify \(M\) with \(R^{2}\) where \(F\) acts via a matrix of the form \(\begin{pmatrix}0&1\\ 0&D\end{pmatrix}\) with \(D\in R\) satisfying \(\tilde{D}=D\), and \(V\) acts via \(\begin{pmatrix}D^{(1/p)}&-1\\ 0&0\end{pmatrix}\).
We will use the Kraft-Oort and Ekedahl-Oort classifications of \(\mathbb{D}_{k}\)-modules to analyze \(M\). (See [11] for the original analysis and [11] for a more leisurely and detailed exposition.) First, we construct the canonical filtration on \(R^{2}\), and then we check that under the Ekedahl-Oort classification, \(M\) has elementary sequence \([0,\ldots,0,1,2,\ldots,p-a]\) (of length \(p\) with \(a\) zeroes), or equivalently under the Kraft-Oort classification, it corresponds to the words \(f^{\ell+1}v^{\ell+1}\) with multiplicity \(b\) and \(f^{\ell}v^{\ell}\) with multiplicity \(a-b\).
We first treat the edge case \(D=0\). If \(D=0\), then one easily checks that the canonical filtration on \(M\) has the form
\[0=M_{0}\subset M_{1}=R\begin{pmatrix}1\\ 0\end{pmatrix}\subset M_{2}=M,\]
with \(FM=M_{1}\) and \(FM_{1}=0\). Thus \(M\) has elementary sequence \([0,\ldots,0]\), and \(a\)-number \(p\). In the Kraft-Oort classification, this corresponds to the word \(fv\) with multiplicity \(p\).
Now assume that \(D\neq 0\). Then \(D\) has the form \(\delta^{a}u\) where \(0<a<p\) and \(u\in R^{\times}\). Moreover, \(a\) must be even since \(\tilde{\delta}=-\delta/(1+\delta)\). (Here we use \(p>2\).) Writing \(p=\ell a+b\) with \(b<a\), we
have that \(b\neq 0\). Define submodules \(M_{j}\subset M=R^{2}\) for \(0\leq j\leq 4\ell+2\) by
\[M_{2i} =R\delta^{b+a(\ell-i)}\begin{pmatrix}1\\ D\end{pmatrix} \text{for }0\leq i\leq\ell\] \[M_{1+2i} =R\delta^{a(\ell-i)}\begin{pmatrix}1\\ D\end{pmatrix} \text{for }0\leq i\leq\ell\] \[M_{2\ell+1+2i} =R\begin{pmatrix}1\\ D\end{pmatrix}+R\delta^{(p-ia)}\begin{pmatrix}0\\ 1\end{pmatrix} \text{for }0\leq i\leq\ell\] \[M_{2\ell+2+2i} =R\begin{pmatrix}1\\ D\end{pmatrix}+R\delta^{(p-b-ia)}\begin{pmatrix}0\\ 1\end{pmatrix} \text{for }0\leq i\leq\ell\]
Then we have
\[0=M_{0}\subset M_{1}\subset\cdots\subset M_{2\ell+1}=FM\subset M_{2\ell+2} \subset\cdots\subset M_{4\ell+2}=M.\]
Next, one checks that
\[FM_{j}=\begin{cases}M_{0}&\text{if }0\leq j\leq 2,\\ M_{j-2}&\text{if }2\leq j\leq 2\ell+1,\\ M_{2\ell-1}&\text{if }2\ell+1\leq j\leq 4\ell,\\ M_{2\ell}&\text{if }j=4\ell+1,\\ M_{2\ell+1}&\text{if }j=4\ell+2,\end{cases}\]
and
\[V^{-1}M_{j}=\begin{cases}M_{2\ell+1}&\text{if }j=0,\\ M_{2\ell+2}&\text{if }j=1,\\ M_{2\ell+3}&\text{if }2\leq j\leq 2\ell+1,\\ M_{j+2}&\text{if }2\ell+1\leq j\leq 4\ell,\\ M_{4\ell+2}&\text{if }4\ell\leq j\leq 4\ell+2.\end{cases}\]
This shows that the \(M_{j}\) give the canonical filtration of \(M\), and that the corresponding elementary sequence is \([0,\ldots,0,1,\ldots,p-a]\). Thus the \(a\)-number of \(M\) is \(a\). For the classification by words, we note that the cycles corresponding to the filtration are
\[0\stackrel{{ V^{-1}}}{{\longrightarrow}}2\ell+1\stackrel{{ V^{-1}}}{{\longrightarrow}}2\ell+3\stackrel{{ V^{-1}}}{{\longrightarrow}}\cdots\stackrel{{ V^{-1}}}{{\longrightarrow}}4\ell+1\stackrel{{ F}}{{\longrightarrow}}2\ell\stackrel{{ F}}{{\longrightarrow}}2\ell-2\stackrel{{ F}}{{\longrightarrow}}\cdots\stackrel{{ F}}{{\longrightarrow}}2\stackrel{{ F}}{{\longrightarrow}}0\]
(yielding the word \(f^{\ell+1}v^{\ell+1}\)) with multiplicity \(b=\dim_{k}(M_{1}/M_{0})\) and
\[1\stackrel{{ V^{-1}}}{{\longrightarrow}}2\ell+2\stackrel{{ V^{-1}}}{{\longrightarrow}}2\ell+4\stackrel{{ V^{-1}}}{{\longrightarrow}}\cdots\stackrel{{ V^{-1}}}{{\longrightarrow}}4\ell\stackrel{{ F}}{{\longrightarrow}}2\ell-1\stackrel{{ F}}{{\longrightarrow}}2\ell-3\stackrel{{ F}}{{\longrightarrow}}\cdots\stackrel{{ F}}{{\longrightarrow}}3\stackrel{{ F}}{{\longrightarrow}}1\]
(yielding the word \(f^{\ell}v^{\ell}\)) with multiplicity \(a-b=\dim_{k}(M_{2}/M_{1})\).
The presentation by generators and relations then follows from [11, Lemma 3.1]. This completes the proof of the theorem.
We can now give a significant application to Artin-Schreier covers:
**Corollary 8.7**.: _Suppose that \(p>2\), \(\pi:Y\to X\) and \(G=\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) are as usual and that \(f_{X}=g_{X}-1\). Then the \(a\)-number of \(J_{Y}\) is in \(\{2,4,\ldots,p-1,p\}\). Moreover, the Dieudonne module of \(J_{Y}[p]\) has the form_
\[L\oplus(L\otimes k[G])^{f_{X}-1}\oplus M_{a}\]
_where \(L=M(\mathbb{Z}/p\mathbb{Z}\oplus\mu_{p})\) and \(M_{a}\) is the module described in Theorem 8.6._
Proof.: Since \(k\) is algebraically closed, \(H^{1}_{dR}(Y)_{\acute{e}t}\) is completely split, and part (2) of Proposition 4.1 gives its \(G\)-module structure. In all we have an isomorphism of \(\mathbb{D}_{k}[G]\)-modules
\[H^{1}_{dR}(Y)_{\acute{e}t}\cong M(\mathbb{Z}/p\mathbb{Z})\oplus(M(\mathbb{Z}/p\mathbb{Z})\otimes_{k}k[G])^{f_{X}-1}\,.\]
Similarly,
\[H^{1}_{dR}(Y)_{m}\cong M(\mu_{p})\oplus(M(\mu_{p})\otimes_{k}k[G])^{f_{X}-1}\,.\]
Since \(H^{1}_{dR}(Y)_{ll}=M\) has the properties enumerated in Proposition 8.1, it is isomorphic to the \(\mathbb{D}_{k}[G]\)-module \(M_{a}\) described in Theorem 8.6. This completes the proof of the Corollary.
_Example 8.8_.: Let \(p=5\) and \(k=\mathbb{F}_{p}\). The table below illustrates that all possibilities listed in Corollary 8.7 occur, with \(X\) hyperelliptic of degree \(7\) and genus \(g_{X}=3\). Each base curve \(X\) has \(f_{X}=2=g_{X}-1\), and \(\nu_{X}=1\) so the specified cover \(Y\to X\) is the unique unramified \(\mathbb{Z}/p\mathbb{Z}\)-cover defined over \(k\).
\[\begin{array}{c|c|c}X:y^{2}=&Y:z^{5}-z=&a_{Y}\\ \hline x^{7}-x^{5}-2x^{3}-2x^{2}+x-1&(-2x^{4}-x^{2}+1)y+2&2\\ -x^{7}-x^{6}-x^{5}+x^{4}+x^{2}-2x&(-x^{4}-2x^{3}+2x^{2}+1)y&4\\ 2x^{7}-2x^{5}+2x^{4}-x&(-x^{9}+2x^{7}-2x^{6}-x^{5}+2x^{4}-x+1)y&5\end{array}\]
We now consider more general \(\mathbb{D}_{k}[G]\)-modules \(M\) with \(M/\delta M\) superspecial.
**Lemma 8.9**.: _Let \(M\) be a \(\mathbb{D}_{k}[G]\)-module with the properties_ (1-4) _of Proposition 8.1 and such that \(M/\delta M\) is superspecial. Then the Ekedahl-Oort structure_ (_elementary sequence_) of \(M\) starts with at least \(h\) zeroes, i.e., it has the form \([0,\ldots,0,\psi_{h+1},\ldots,\psi_{ph}]\)._
Proof.: Let \(M[\delta]\) be the kernel of \(\delta\) on \(M\). Multiplication by \(\delta^{p-1}\) induces an isomorphism \(M/\delta M\cong M[\delta]\) of \(\mathbb{D}_{k}\)-modules, so \(M[\delta]\) is also superspecial.
Now consider the canonical filtration of \(M\). Its elements are obtained by applying arbitrary words \(w\) in \(F\) and \(V^{-1}\) to \(M\). For any such word \(w\), we have
\[w\left(M[\delta]\right)\subset w(M)\cap M[\delta].\]
(On the left, we are applying \(F\) and \(V^{-1}\) using the \(\mathbb{D}_{k}\)-module structure of \(M[\delta]\).) Since \(M[\delta]\) is superspecial, there are only three possibilities for the left hand side in the display, namely \(0\), \(FM[\delta]\) and \(M[\delta]\).
If \(N\subset M\) is any \(R\)-submodule, \(N\cap M[\delta]=0\) implies that \(N=0\). Indeed, if \(0\neq m\in N\), let \(i\) be the smallest power of \(\delta\) that kills \(m\). Then \(0\neq\delta^{i-1}m\in M[\delta]\).
Applied to the submodules \(w(M)\), this shows that every non-zero element of the canonical filtration of \(M\) contains \(FM[\delta]\). Since \(F\) annihilates \(FM[\delta]\), which has dimension \(h\), we conclude that the E-O structure of \(M\) starts with at least \(h\) zeroes.
Note that the lemma reduces the number of possibilities for an E-O structure on an \(M\) of dimension \(2ph\) over \(k\) from \(2^{ph}\) to \(2^{(p-1)h}\), in other words, the \(R\)-module structure and superspecial hypothesis impose non-trivial restrictions on \(M\). It turns out that for \(p=2\), the lemma gives the only restrictions.
**Theorem 8.10**.: _Suppose \(p=2\) and let \(\Psi=[0,\ldots,0,\psi_{h+1},\ldots,\psi_{2h}]\) be an elementary sequence starting with at least \(h\) zeroes. Then there is a \(\mathbb{D}_{k}[G]\)-module \(M\) with properties \((1\!-\!4)\) of Proposition 8.1 and with \(M/\delta M\) superspecial such that the elementary sequence of \(M\) is \(\Psi\)._
Proof.: We will consider the \(\mathbb{D}_{k}\)-module \(M\) with E-O structure \(\Psi\) as constructed by Oort in [1, §9] and show that \(M\) admits a \(k[G]\)-module structure such that it has the properties \((1\!-\!4)\) of Proposition 8.1 and \(M/\delta M\) is superspecial.
Extend \(\Psi\) to a "final sequence" \([\psi_{1},\ldots,\psi_{4h}]\) by setting \(\psi_{4h}=2h\) and \(\psi_{4h-i}=\psi_{i}+2h-i\) for \(1\leq i\leq 2h\). Let \(1\leq m_{1}<m_{2}<\cdots<m_{2h}\leq 4h\) be the indices \(i\) such that \(\psi_{i-1}<\psi_{i}\) and let \(1\leq n_{2h}<n_{2h-1}<\cdots<n_{1}\leq 4h\) be the indices \(i\) such that \(\psi_{i-1}=\psi_{i}\). Our hypothesis on \(\Psi\) implies that \(m_{h+i}=3h+i\) for \(1\leq i\leq h\) and \(n_{2h+1-i}=i\) for \(1\leq i\leq h\). Moreover, \(m_{i}+n_{i}=4h+1\) for \(1\leq i\leq 2h\).
Now let \(M\) be the \(k\)-vector space with basis \(Z_{1},\ldots,Z_{4h}\) and for \(1\leq i\leq 2h\) define \(X_{i}=Z_{m_{i}}\) and \(Y_{i}=Z_{n_{i}}\). Define a \(\mathbb{D}_{k}\) module structure on \(M\) by setting
\[F(X_{i})=Z_{i},\quad F(Y_{i})=0,\quad V(Z_{i})=0,\quad\text{and}\quad V(Z_{4h+ 1-i})=Y_{i}\quad\text{for $1\leq i\leq 2h$}.\]
Introduce a bilinear pairing \(\langle\cdot,\cdot\rangle\) on \(M\) by setting
\[\langle X_{i},X_{j}\rangle=0,\quad\langle Y_{i},Y_{j}\rangle=0,\quad\text{and} \quad\langle X_{i},Y_{j}\rangle=\delta_{ij}\quad\text{for $1\leq i,j\leq 2h$}.\]
It is then straightforward to check that \(M\) is a self-dual, local-local \(BT_{1}\) module whose elementary sequence is the given \(\Psi\).
Let \(N\) be the subspace of \(M\) spanned by \(X_{i}+Y_{i}\) and \(Y_{h+i}\) for \(1\leq i\leq h\). Then \(N\) is a \(\mathbb{D}_{k}\)-submodule and we have
\[F(N)=V(N)=\text{Span of $Y_{h+i}$ for $1\leq i\leq h$}.\]
It follows that \(N\) is superspecial.
The quotient \(M/N\) is spanned by the classes of \(X_{j}\) for \(1\leq j\leq 2h\), and we have
\[F(M/N)=V(M/N)=\text{Span of the classes of $X_{i}$ for $1\leq i\leq h$}.\]
It follows that \(M/N\) is also superspecial, so there is an isomorphism of \(\mathbb{D}_{k}\)-modules \(M/N\cong N\). This isomorphism carries \(F(M/N)\) isomorphically onto \(F(N)\).
Define \(\delta:M\to M\) as the composition
\[M\twoheadrightarrow M/N\cong N\hookrightarrow M.\]
Since each of the constituent maps is a \(\mathbb{D}_{k}\)-module homomorphism, so is \(\delta\), and clearly we have \(\delta^{2}=0\), so we have given \(M\) the structure of a \(k[G]\)-module. Note that \(N\) is isotropic for the pairing, and the induced pairing on \(M/N\) is also zero. Therefore, the pairing induces a duality between \(M/N\) and \(N\). If we choose an isomorphism \(M/N\cong N\) which is self-adjoint (as we may, since \(N\) and \(M/N\) are self-dual \(\mathbb{D}_{k}\)-modules), then \(\delta\) will satisfy \(\langle\delta m_{1},m_{2}\rangle=\langle m_{1},\delta m_{2}\rangle\) which (because \(p=2\)) in turn implies that \(\langle gm_{1},gm_{2}\rangle=\langle m_{1},m_{2}\rangle\).
We have thus established that \(M\) enjoys properties (2) and (3) of Proposition 8.1. The remaining properties are easily checked: \(M\) is free over \(k[G]\) with basis \(\{X_{1},\ldots,X_{2h}\}\), \(\operatorname{Im}F\) is free with basis
\[\{Z_{h+i}|1\leq i\leq h\}=\{X_{i}\mid h<m_{i}\leq 2h\}\cup\{Y_{i}\mid h<n_{i} \leq 2h\}\,,\]
(each \(i\in\{1,\ldots,h\}\) appears exactly once on the right hand side), and \(\operatorname{Im}V\) is free with basis \(\{Y_{1},\ldots,Y_{h}\}\).
This completes the proof of the Theorem.
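The construction in the proof is concrete enough to test by machine. The sketch below (plain Python, not part of the proof) builds \(F\), \(V\), and the pairing for the sample values \(h=2\) and \(\Psi=[0,0,1,1]\), an illustrative choice of elementary sequence starting with \(h\) zeroes, and checks the self-dual \(BT_{1}\) properties; it does not verify that the elementary sequence of the resulting module is \(\Psi\).

```python
# Spot check of the proof's construction for p = 2; all numerical choices are
# illustrative, and the coordinate encoding of Z_1, ..., Z_{4h} is ours.
h = 2
psi = [0, 0, 1, 1]                       # elementary sequence with at least h zeroes
N = 4 * h

fs = [0] * (N + 1)                       # the extended "final sequence", fs[0] = 0
for i in range(1, 2 * h + 1):
    fs[i] = psi[i - 1]
fs[N] = 2 * h
for i in range(1, 2 * h + 1):
    fs[N - i] = fs[i] + 2 * h - i

m = [i for i in range(1, N + 1) if fs[i] > fs[i - 1]]          # m_1 < ... < m_{2h}
n = sorted(set(range(1, N + 1)) - set(m), reverse=True)        # n_1 > ... > n_{2h}
assert all(m[i] + n[i] == N + 1 for i in range(2 * h))

# F(X_i) = Z_i, F(Y_i) = 0;  V(Z_j) = 0 for j <= 2h, V(Z_{4h+1-i}) = Y_i
F = [[0] * N for _ in range(N)]
V = [[0] * N for _ in range(N)]
for i in range(1, 2 * h + 1):
    F[i - 1][m[i - 1] - 1] = 1
    V[n[i - 1] - 1][N - i] = 1

def apply(M, v):
    return [sum(M[r][c] * v[c] for c in range(N)) % 2 for r in range(N)]

def rank(vectors):                       # rank over F_2 by Gaussian elimination
    rows, rk = [v[:] for v in vectors], 0
    for col in range(N):
        piv = next((r for r in range(rk, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rk], rows[piv] = rows[piv], rows[rk]
        for r in range(len(rows)):
            if r != rk and rows[r][col]:
                rows[r] = [(a + b) % 2 for a, b in zip(rows[r], rows[rk])]
        rk += 1
    return rk

E = [[int(r == c) for r in range(N)] for c in range(N)]          # standard basis
imF = [apply(F, e) for e in E]
imV = [apply(V, e) for e in E]
assert rank(imF) == N - rank(imV) and all(apply(V, v) == [0] * N for v in imF)  # Im F = Ker V
assert rank(imV) == N - rank(imF) and all(apply(F, v) == [0] * N for v in imV)  # Im V = Ker F

def nilpotent(M):
    vs = E
    for _ in range(N):
        vs = [apply(M, v) for v in vs]
    return all(v == [0] * N for v in vs)
assert nilpotent(F) and nilpotent(V)

P = [[0] * N for _ in range(N)]                                  # <X_i, Y_i> = 1
for i in range(2 * h):
    P[m[i] - 1][n[i] - 1] = P[n[i] - 1][m[i] - 1] = 1

def bilinear(u, v):
    return sum(u[r] * P[r][c] * v[c] for r in range(N) for c in range(N)) % 2

# <Fm, n> = <m, Vn> (squaring is trivial on F_2)
assert all(bilinear(apply(F, u), v) == bilinear(u, apply(V, v)) for u in E for v in E)
print("construction of Theorem 8.10 passes the BT_1 self-duality checks")
```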
_Remarks 8.11_.:
1. It does not seem likely that there is a simple formula for an endomorphism \(\delta\) as in the theorem.
2. The words corresponding to the E-O structures appearing in the theorem can be rather elaborate. For example, the four possibilities when \(h=2\) are \[\left(f^{2}v^{2}\right)^{2},\quad\left(f^{2}v^{2}\right)\left(fv\right),\quad \left(fv\right)^{4},\quad\text{and}\quad\left(f^{2}vfv^{2}fv\right).\] The last word is associated to the elementary sequence \([0,0,1,1]\), and in the coordinates of Proposition 8.4, it corresponds to the case where \(D\) has rank \(1\) and its column span is not rational over \(\mathbb{F}_{p^{2}}\).
3. Numerical experiments suggest that the naive generalization of the theorem to \(p>2\) does not hold. There seem to be far fewer cases when \(h\) is small with respect to \(p\).
## 9. Explicit geometry of unramified \(\mathbb{Z}/p\mathbb{Z}\) covers
In this section, \(k\) will be an arbitrary field of characteristic \(p>0\), and \(\pi:Y\to X\) will be a Galois cover of geometrically connected, smooth, proper curves over \(k\) with a fixed isomorphism \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\). We consider a presentation of \(Y\) as a cover of \(X\) which will be useful for explicit calculations in the next section.
We write \(\wp\) for the function \(\wp(a)=a^{p}-a\) and \(\wp(k)\) for the set \(\{\wp(a)|a\in k\}\).
**Proposition 9.1**.: _With notation as above, we have:_
1. _The sheaf_ \(\mathcal{F}:=\pi_{*}\mathcal{O}_{Y}\) _is a locally free sheaf of_ \(\mathcal{O}_{X}\)_-algebras of rank_ \(p\) _with an action of_ \(\mathbb{Z}/p\mathbb{Z}\)_. We may recover_ \(Y\) _from_ \(\mathcal{F}\) _and_ \(X\) _as the global spectrum:_ \(Y\cong\underline{\operatorname{Spec}}_{\mathcal{O}_{X}}\mathcal{F}\)_._
2. \(\mathcal{F}\) _admits a filtration by_ \(\mathcal{O}_{X}\)_-submodules_ \(\operatorname{Fil}^{i}\mathcal{F}\subset\mathcal{F}\) _with_ \(\operatorname{Fil}^{i}\mathcal{F}\) _of rank_ \(i\) _and with graded pieces_ \[\operatorname{Fil}^{i}\mathcal{F}/\operatorname{Fil}^{i-1}\mathcal{F}\cong \mathcal{O}_{X}.\]
3. _The_ \(\mathcal{O}_{X}\)_-submodule_ \(\mathcal{E}:=\operatorname{Fil}^{2}\mathcal{F}\) _of rank 2 determines_ \(\mathcal{F}\) _as a locally free_ \(\mathcal{O}_{X}\)_-module:_ \[\mathcal{F}\cong\operatorname{Sym}^{p-1}\mathcal{E}.\]
4. _The class of_ \(\mathcal{E}\) _in_ \[\operatorname{Ext}^{1}_{\mathcal{O}_{X}}(\mathcal{O}_{X},\mathcal{O}_{X}) \cong H^{1}(X,\mathcal{O}_{X})\] _is fixed by Frobenius._
5. _Conversely, any class in_ \(H^{1}(X,\mathcal{O}_{X})\) _fixed by Frobenius arises from a_ \(\mathbb{Z}/p\mathbb{Z}\)_-cover_ \(Y\) _of_ \(X\)_, and the set of covers yielding a given class is a principal homogeneous space for_ \(k/\wp(k)\)_. In particular, if_ \(k\) _is algebraically closed,_ \(Y\) _is determined by the corresponding class in_ \(H^{1}(X,\mathcal{O}_{X})\)
We will prove the proposition in the rest of this section. Part (1) is of course well known since \(Y\to X\) is finite and Galois, but we will give an explicit presentation of \(\mathcal{F}\).
_Remarks 9.2_.:
1. The exact sequence \[0\to\mathbb{Z}/p\mathbb{Z}\to\mathcal{O}_{X}\xrightarrow{\wp}\mathcal{O}_{X}\to 0\] of etale sheaves on \(X\) yields an exact sequence \[0\to k/\wp(k)\to H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\to H^{1}(X,\mathcal{O}_{X})[\wp]\to 0.\] The data \(\pi:Y\to X\) and \(\operatorname{Gal}(Y/X)\cong\mathbb{Z}/p\mathbb{Z}\) determine a class \(\eta_{X}\) in \(H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\). The class of \(\mathcal{E}\) in part (4) is the image of \(\eta_{X}\) in \(H^{1}(X,\mathcal{O}_{X})\), and part (5) is a restatement of the exactness of the last displayed sequence.
2. The last displayed exact sequence can also be written \[0\to H^{1}(\operatorname{Spec}k,\mathbb{Z}/p\mathbb{Z})\to H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\to H^{1}(X,\mathcal{O}_{X})[\wp]\to 0.\] If \(X\) has a rational point \(\operatorname{Spec}k\hookrightarrow X\), then this sequence splits by pulling back a cover of \(X\) to a cover of \(\operatorname{Spec}k\). The splitting \(H^{1}(X,\mathcal{O}_{X})[\wp]\to H^{1}_{\acute{e}t}(X,\mathbb{Z}/p\mathbb{Z})\) sends a class to the unique cover with that class in which the given rational point splits completely.
3. Applying the Dieudonne functor to the inclusion \(k\eta_{X}\hookrightarrow H^{1}_{dR}(X)\) yields a surjection of group schemes \(J_{X}[p]\twoheadrightarrow\mathbb{Z}/p\mathbb{Z}\). Suppose that \(k\) is finite and that \(P\in X(k)\) is a rational point over which \(\pi:Y\to X\) splits completely. Then by geometric class field theory (see [12, Chap. VI]), giving a \(\mathbb{Z}/p\mathbb{Z}\)-cover in which the rational point \(P\) splits is the same as giving a surjective group homomorphism \(J_{X}(k)\twoheadrightarrow\mathbb{Z}/p\mathbb{Z}\). We may construct this homomorphism from the surjection \(J_{X}[p]\twoheadrightarrow\mathbb{Z}/p\mathbb{Z}\) as follows: Form the push-out diagram of group schemes over \(k\) \[\begin{CD}0@>{}>{}>J_{X}[p]@>{}>{}>J_{X}@>{}>{}>J_{X}@>{}>{}>0\\ 0@>{}>{}>\mathbb{Z}/p\mathbb{Z}@>{}>{}>J^{\prime}@>{}>{}>J_{X}@>{}>{}>0.\end{CD}\] Taking cohomology of the bottom row and using Lang's theorem (\(H^{1}(k,J^{\prime})=0\)) yields a surjection \[J_{X}(k)\to H^{1}(k,\mathbb{Z}/p\mathbb{Z}).\] Composing this with the isomorphisms \[H^{1}(k,\mathbb{Z}/p\mathbb{Z})\cong k/\wp(k)\stackrel{{\rm Tr}}{ {\longrightarrow}}\mathbb{Z}/p\mathbb{Z}\] yields the desired surjection \(J_{X}(k)\to\mathbb{Z}/p\mathbb{Z}\).
### Artin-Schreier theory
The field extension \(k(Y)/k(X)\) is Galois with group \(\mathbb{Z}/p\mathbb{Z}\), so by Artin-Schreier theory, there are elements \(y\in k(Y)\) and \(f\in k(X)\) such that \(k(Y)=k(X)[y]\), \(\wp(y)=f\), and the element \(1\in\mathbb{Z}/p\mathbb{Z}\) acts on \(y\) by \(y\mapsto y+1\).
If \(z=y-h\) with \(h\in k(X)\), then \(\wp(z)=g:=f-\wp(h)\) and the element \(1\in\mathbb{Z}/p\mathbb{Z}\) of the Galois group again acts by \(z\mapsto z+1\).
Because \(\pi\) is unramified, for any generator \(y\) as above and \(f=\wp(y)\), we have that for every place \(x\) of \(X\),
\[f\in\wp(h)+\mathcal{O}_{X,x}\]
for some element \(h\in k(X)\). In other words, the principal part of \(f\) at every place of \(X\) is of the form \(\wp(h)\) for a suitable \(h\).
### Trivializing \(\mathcal{F}\)
The Riemann-Roch theorem (in the form of its corollary "strong approximation"), implies that given a non-empty affine open subset \(U\subset X\), we may choose a generator \(y\) so that \(f=\wp(y)\) is regular on \(U\). Moreover, if \(D\) is a non-special divisor (i.e., \(H^{1}(X,\mathcal{O}_{X}(D))=0\)) supported on \(X\setminus U\), we may choose \(f\) so that its principal part at any closed point \(x\) in the support of \(D\) is in \(\wp(h)+\mathcal{O}_{X,x}\) where \(h\) has poles no worse than \(D\) at \(x\).
Let \(\{U,V\}\) be a cover of \(X\) by affine open subsets, and choose elements \(y\) and \(z\) such that \(\wp(y)=f\) is regular on \(U\) and \(\wp(z)=g\) is regular on \(V\). Let \(h=y-z\) and note that \(h\) is fixed by \(\mathbb{Z}/p\mathbb{Z}\), so lies in \(k(X)\). We have
\[\mathcal{F}(U)=\mathcal{O}_{Y}(\pi^{-1}(U))\cong\frac{\mathcal{O}_{X}(U)[y]}{ (\wp(y)-f)}\]
and
\[\mathcal{F}(V)=\mathcal{O}_{Y}(\pi^{-1}(V))\cong\frac{\mathcal{O}_{X}(V)[z]}{ (\wp(z)-g)}.\]
These presentations show that over \(U\), \(\mathcal{F}\) is a free \(\mathcal{O}_{X}\)-module with basis \((1,y,y^{2},\ldots,y^{p-1})\), and over \(V\) it is free with basis \((1,z,z^{2},\ldots,z^{p-1})\). Thus \(\mathcal{F}\) has the properties asserted in part (1).
### Additional structures on \(\mathcal{F}\)
Noting that on \(U\cap V\), \(y^{i}=(z+h)^{i}=\sum_{j=0}^{i}\binom{i}{j}h^{i-j}z^{j}\), we see that there is a subsheaf of \(\mathcal{F}\) of rank \(i\) generated by \((1,y,\ldots,y^{i-1})\) over \(U\) and by \((1,z,\ldots,z^{i-1})\) over \(V\). This is the \(\operatorname{Fil}^{i}\mathcal{F}\) in part (2), and it is clear that the graded pieces are all trivial \(\mathcal{O}_{X}\)-modules. We may also recover \(\operatorname{Fil}^{i}\mathcal{F}\) more invariantly as
\[\operatorname{Fil}^{i}\mathcal{F}=\ker\left(\delta^{i}:\mathcal{F}\to \mathcal{F}\right)\]
where \(\delta=\gamma-1\). This proves part (2).
It is also clear that \(\mathcal{E}:=\operatorname{Fil}^{2}\mathcal{F}\) satisfies \(\mathcal{F}\cong\operatorname{Sym}^{p-1}\mathcal{E}\) as \(\mathcal{O}_{X}\)-modules, i.e., we have part (3).
The class of \(\mathcal{E}\) (as an extension of \(\mathcal{O}_{X}\) by \(\mathcal{O}_{X}\)) in \(H^{1}(X,\mathcal{O}_{X})\) is represented by the alternating Cech cocycle for the cover \(\{U,V\}\) given by \(h\in\mathcal{O}_{X}(U\cap V)\). This class is fixed by Frobenius because \(\wp(h)=f-g\) is the difference of a regular function on \(U\) and a regular function on \(V\). This establishes part (4).
### Constructing \(Y\) from \(\mathcal{E}\)
Finally, suppose we have a class in \(H^{1}(X,\mathcal{O}_{X})\) fixed by Frobenius, and choose a representing cocycle. This amounts to giving a section \(h\in\mathcal{O}_{X}(U\cap V)\) such that there exist regular functions \(f\) on \(U\) and \(g\) on \(V\) with \(\wp(h)=f-g\). _Choose_ such functions. Then using \(h\) we construct the locally free \(\mathcal{O}_{X}\)-module \(\mathcal{E}\) of rank 2 with the given class by gluing the rank 2 trivial modules over \(U\) and \(V\) by the transition matrix \(\left(\begin{smallmatrix}1&0\\ h&1\end{smallmatrix}\right)\) over \(U\cap V\). Let \(\mathcal{F}=\operatorname{Sym}^{p-1}\mathcal{E}\)
so that \(\mathcal{F}\) is a locally free \(\mathcal{O}_{X}\)-module of rank \(p\). It remains to give \(\mathcal{F}\) the structure of an \(\mathcal{O}_{X}\)-algebra. We do this by requiring that
\[y^{i}y^{j}=\begin{cases}y^{i+j}&\text{if $i+j<p$,}\\ (y+f)y^{i+j-p}&\text{if $i+j\geq p$}\end{cases}\]
over \(U\) and
\[z^{i}z^{j}=\begin{cases}z^{i+j}&\text{if $i+j<p$,}\\ (z+g)z^{i+j-p}&\text{if $i+j\geq p$}\end{cases}\]
over \(V\). These requirements are easily seen to be compatible and give the desired algebra structure. Finally, we obtain a cover \(\pi:Y\to X\) by setting \(Y=\underline{\operatorname{Spec}}_{\mathcal{O}_{X}}\mathcal{F}\) and letting \(\mathbb{Z}/p\mathbb{Z}\) act via its action on \(y\) and \(z\).
Note that we chose \(f\) and \(g\) above to trivialize the class represented by \(\wp(h)\), and the ambiguity in that choice is exactly an element of \(H^{0}(X,\mathcal{O}_{X})=k\). Moreover, if we change \(f\) to \(f+a\) and \(g\) to \(g+a\) with \(a\in k\), the isomorphism class of \(k(Y)\) (and thus of \(Y\)) depends (and depends only) on the class of \(a\) in \(k/\wp(k)\). This completes the proof of part (5) of the proposition.
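The multiplication rule used above is just reduction modulo the Artin-Schreier relation, and this can be checked mechanically in the simplest possible model. In the following plain-Python sketch (not from the text) the base is taken to be \(\mathbb{F}_{p}\) itself and \(f\) is a sample scalar, so the sheaf-theoretic bookkeeping disappears and only the ring identity remains.

```python
p, f = 5, 2          # illustrative sample values

def polymul(a, b):
    """Multiply polynomials in y over F_p (coefficient lists, low degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

def reduce_AS(a):
    """Rewrite y^{p+e} as y^{e+1} + f*y^{e} until the degree is below p."""
    a = a[:]
    while len(a) > p:
        top, e = a.pop(), len(a) - p
        a[e + 1] = (a[e + 1] + top) % p
        a[e] = (a[e] + top * f) % p
    return a + [0] * (p - len(a))

def rule(i, j):
    """The multiplication rule stated in the text, as a coefficient vector."""
    out = [0] * p
    if i + j < p:
        out[i + j] = 1
    else:
        out[i + j - p] = f % p        # (y + f) * y^{i+j-p}: the f*y^{i+j-p} term
        out[i + j - p + 1] = 1        # and the y^{i+j-p+1} term
    return out

for i in range(p):
    for j in range(p):
        yi, yj = [0] * i + [1], [0] * j + [1]
        assert reduce_AS(polymul(yi, yj)) == rule(i, j)
print("the stated product rule agrees with reduction modulo y^p - y - f")
```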
## 10. Computing the Hasse-Witt triple of \(Y\).
In this section, we explain a method to compute explicitly the de Rham cohomology of an unramified Artin-Schreier cover over a perfect field together with its Frobenius and Verschiebung endomorphisms. We do this in the form of the "Hasse-Witt triple" as in [10].
### Hasse-Witt triples
As defined in [10], a Hasse-Witt triple consists of a \(g\)-dimensional vector space \(Q\) over \(k\), a \(p\)-linear endomorphism \(\Phi:Q\to Q\), and a \(p\)-linear injection \(\Psi:\operatorname{Ker}(\Phi)\to Q^{\vee}\) whose image is the orthogonal complement of \(\operatorname{Im}(\Phi)\).
Given a self-dual \(BT_{1}\) module \(M\) of dimension \(2g\) over \(k\) with pairing \(\langle\cdot,\cdot\rangle\), we obtain a Hasse-Witt triple by setting \(Q=M/\operatorname{Ker}F\), letting \(\Phi\) be the endomorphism of \(Q\) induced by \(F\) on \(M\), and defining \(\Psi\) by \(\Psi(\tau)=\langle\cdot,F\tilde{\tau}\rangle\) where \(\tilde{\tau}\) is a lift of \(\tau\in Q\) to \(M\). In [10, 2.5], Moonen explains how to recover the self-dual \(BT_{1}\) module \((M,F,V,\langle\cdot,\cdot\rangle)\) from the Hasse-Witt triple \((Q,\Phi,\Psi)\).
In the case at hand, where \(M=H^{1}_{dR}(Y)\), we see that \(Q=H^{1}(Y,\mathcal{O}_{Y})\), \(Q^{\vee}=H^{0}(Y,\Omega^{1}_{Y})\), \(\Phi\) is the usual Frobenius endomorphism of \(H^{1}(Y,\mathcal{O}_{Y})\), and \(\Psi\) is obtained as follows: Suppose \(\eta\in H^{1}(Y,\mathcal{O}_{Y})\) is killed by Frobenius and is represented by a Cech cocycle \(g_{ij}\) for an open cover \(\{U_{i}\}\); since \(F\eta=0\), \(g^{p}_{ij}\) is a coboundary: \(g^{p}_{ij}=f_{i}-f_{j}\) with \(f_{i}\in\mathcal{O}_{Y}(U_{i})\); then the differentials \(df_{i}\) patch together to give a global, regular, locally exact one-form \(\omega\). We define \(\Psi(\eta)=\omega\). Note that \(\Psi\) is injective, and its image is precisely the space of differentials killed by \(V\), which is the orthogonal complement in \(H^{0}(Y,\Omega^{1}_{Y})\) of \(FH^{1}(Y,\mathcal{O}_{Y})\).
### Computing Frobenius on \(H^{1}(X,\mathcal{O}_{X})\) (review)
Fix an effective, non-special divisor \(D\) on \(X\). (The most "efficient" choice is to take \(D\) of degree \(g_{X}\), but this may not be possible over a small ground field.) Thus we have \(H^{1}(X,\mathcal{O}_{X}(D))=0\), and the exact sequence
\[0\to\mathcal{O}_{X}\to\mathcal{O}_{X}(D)\to\mathcal{O}_{X}(D)/\mathcal{O}_{X}\to 0\]
is an acyclic resolution of \(\mathcal{O}_{X}\). Taking cohomology yields an isomorphism
\[\frac{H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(D))} \cong H^{1}(X,\mathcal{O}_{X}).\]
The divisor \(pD\) is also non-special, and we find an isomorphism
\[\frac{H^{0}(X,\mathcal{O}_{X}(pD)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(pD ))}\cong H^{1}(X,\mathcal{O}_{X}).\]
The composed isomorphism
\[\frac{H^{0}(X,\mathcal{O}_{X}(pD)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(pD ))}\cong\frac{H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_ {X}(D))}\]
can be computed explicitly by taking a meromorphic function \(t\) on a neighborhood of \(D\) representing an element of \(H^{0}(X,\mathcal{O}_{X}(pD)/\mathcal{O}_{X})\) and "correcting" it by the principal part of an element of \(H^{0}(X,\mathcal{O}_{X}(pD))\) so that it has poles no worse than \(D\), and therefore defines an element of \(H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})\). The result is well-defined up to an element of \(H^{0}(X,\mathcal{O}_{X}(D))\).
The Frobenius endomorphism
\[F:H^{1}(X,\mathcal{O}_{X})\to H^{1}(X,\mathcal{O}_{X})\]
can then be computed as the composition
\[H^{1}(X,\mathcal{O}_{X})\cong\frac{H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(D))}\to\frac{H^{0}(X,\mathcal{O}_{X}(pD)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(pD))}\\ \cong\frac{H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})}{H^{0}(X,\mathcal{O}_{X}(D))}\cong H^{1}(X,\mathcal{O}_{X})\]
where the second homomorphism is induced by \(t\mapsto t^{p}\).
Summarizing, to compute \(F\) on \(H^{1}(X,\mathcal{O}_{X})\), it suffices to know the principal parts of elements of \(H^{0}(X,\mathcal{O}_{X}(pD))\). It will transpire later in this section that to compute Frobenius on \(H^{1}(Y,\mathcal{O}_{Y})\), it will suffice to know the principal parts of elements of \(H^{0}(X,\mathcal{O}_{X}((2p-1)D))\) plus some simple linear algebra.
### Making \(\pi_{*}\mathcal{O}_{Y}\) explicit
We make the description of \(\mathcal{F}=\pi_{*}\mathcal{O}_{Y}\) given in the previous section more explicit.
Let \(D\) be an effective, non-special divisor on \(X\). Let \(U=X\setminus D\) and choose an affine open neighborhood \(V\) of \(D\). Then \(\{U,V\}\) is a cover of \(X\) by affine opens. Choose elements \(f,g\in k(X)\) with \(f\) regular on \(U\), \(g\) regular on \(V\), such that
\[k(Y)\cong\frac{k(X)[y]}{(\wp(y)-f)}\cong\frac{k(X)[z]}{(\wp(z)-g)}\]
and such that \(f-g=\wp(h)\) where at each closed point of \(D\), \(h\) has poles no worse than \(D\).
Over \(U\), the sections \(1,y,\dots,y^{p-1}\) are a basis of \(\mathcal{F}\) as \(\mathcal{O}_{X}\)-module. Similarly, over \(V\), the sections \(1,z,\dots,z^{p-1}\) are a basis. Over \(U\cap V\), we compute a transition matrix \(H\) as follows:
\[(\alpha_{0},\dots,\alpha_{p-1})\begin{pmatrix}1\\ y\\ \vdots\\ y^{p-1}\end{pmatrix}=(\alpha_{0},\dots,\alpha_{p-1})\begin{pmatrix}1\\ z+h\\ \vdots\\ (z+h)^{p-1}\end{pmatrix}=(\alpha_{0},\dots,\alpha_{p-1})\,H\begin{pmatrix}1\\ z\\ \vdots\\ z^{p-1}\end{pmatrix}\]
where
\[H=\begin{pmatrix}1&0&0&\dots&0\\ h&1&0&\dots&0\\ h^{2}&2h&1&\dots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ h^{p-1}&(p-1)h^{p-2}&{p-1\choose 2}h^{p-3}&\dots&1\end{pmatrix}. \tag{10.1}\]
Numbering the rows and columns of \(H\) from \(0\) to \(p-1\), the \((i,j)\) entry of \(H\) is \({i\choose j}h^{i-j}\).
Thus, if \(W\subset X\) is an open subset, a section of \(\mathcal{F}\) over \(W\) is determined by a tuple of functions \((\alpha_{0},\dots,\alpha_{p-1})\) such that each \(\alpha_{i}\) is regular on \(U\cap W\) and the tuple
\[(\beta_{0},\dots,\beta_{p-1}):=(\alpha_{0},\dots,\alpha_{p-1})\,H\]
has each \(\beta_{i}\) regular on \(V\cap W\).
The \(\mathcal{O}_{X}\)-module \(\mathcal{F}\) is self-dual. Indeed, letting \(A\) be the \(p\times p\) anti-diagonal matrix with non-zero entries equal to \(\pm 1\):
\[A=\begin{pmatrix}0&0&\dots&0&1\\ 0&0&\dots&-1&0\\ \vdots&\vdots&\iddots&\vdots&\vdots\\ 0&-1&\dots&0&0\\ 1&0&\dots&0&0\end{pmatrix},\]
one computes that \({}^{t}H^{-1}=AHA^{-1}=AHA\). The induced pairing
\[\mathcal{F}\otimes_{\mathcal{O}_{X}}\mathcal{F}\cong\mathcal{F}\otimes_{ \mathcal{O}_{X}}\mathcal{F}^{\vee}\to\mathcal{O}_{X}\]
is determined by
\[\langle y^{i},y^{j}\rangle=(-1)^{i}\delta_{p-1,i+j}\]
and satisfies \(\langle z^{i},z^{j}\rangle=(-1)^{i}\delta_{p-1,i+j}\) as well.
The \(\mathcal{O}_{X}\)-module \(\mathcal{F}\) also carries an action of \(\operatorname{Gal}(Y/X)\). Let
\[\gamma=\begin{pmatrix}1&0&0&\dots&0\\ 1&1&0&\dots&0\\ 1&2&1&\dots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ 1&(p-1)&{p-1\choose 2}&\dots&1\end{pmatrix}\]
be the matrix whose \((i,j)\) entry is \({i\choose j}\) (numbering the rows and columns from \(0\) to \(p-1\)). Then the element of \(\operatorname{Gal}(Y/X)\) corresponding to \(1\in\mathbb{Z}/p\mathbb{Z}\) acts as
\[(\alpha_{0},\dots,\alpha_{p-1})\mapsto(\alpha_{0},\dots,\alpha_{p-1})\, \gamma\quad\text{and}\quad(\beta_{0},\dots,\beta_{p-1})\mapsto(\beta_{0},\dots,\beta_{p-1})\,\gamma.\]
(Note that \(\gamma\) and \(H\) commute.)
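The commutation of \(\gamma\) and \(H\) is an instance of the identity \(P(c)P(d)=P(c+d)\) for the substitution matrices \(P(c)\) with \((i,j)\) entry \(\binom{i}{j}c^{i-j}\) (so \(H=P(h)\), \(\gamma=P(1)\), and \(H^{-1}=P(-h)\)). The short plain-Python spot check below is illustrative only: \(p\) and the value substituted for \(h\) are arbitrary sample choices, and \(h\) is treated as a scalar rather than a function.

```python
from math import comb

p, h = 5, 3   # illustrative sample values (h treated as a scalar)

def P(c):
    """Substitution matrix with (i, j) entry binom(i, j) c^{i-j}, reduced mod p."""
    return [[(comb(i, j) * pow(c, i - j, p)) % p if i >= j else 0
             for j in range(p)] for i in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) % p for j in range(p)]
            for i in range(p)]

H, gamma = P(h), P(1)
assert matmul(H, gamma) == matmul(gamma, H) == P(h + 1)   # gamma and H commute
assert matmul(H, P((-h) % p)) == P(0)                     # H^{-1} = P(-h)
print("substitution matrices compose additively: P(c)P(d) = P(c+d)")
```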
### Computing \(H^{1}(Y,\mathcal{O}_{Y})\)
We keep the notation of the preceding subsections. Since \(\pi\) is finite, we have
\[H^{1}(Y,\mathcal{O}_{Y})\cong H^{1}(X,\pi_{*}\mathcal{O}_{Y})=H^{1}(X,\mathcal{ F}).\]
We compute the latter using an acyclic resolution.
Since the \(\mathcal{O}_{X}\)-module \(\mathcal{F}\) is a repeated extension of copies of \(\mathcal{O}_{X}\), an easy inductive argument shows that \(H^{1}(X,\mathcal{F}(D))=0\) and \(h^{0}(X,\mathcal{F}(D))=ph^{0}(X,\mathcal{O}_{X}(D))\). Thus, the exact sequence
\[0\to\mathcal{F}\to\mathcal{F}(D)\to\mathcal{F}(D)/\mathcal{F}\to 0\]
is an acyclic resolution of \(\mathcal{F}\), and we have an isomorphism
\[\frac{H^{0}(X,\mathcal{F}(D)/\mathcal{F})}{H^{0}(X,\mathcal{F}(D))}\cong H^{1 }(X,\mathcal{F}).\]
We make the left hand side more explicit. The description of \(\mathcal{F}\) over the neighborhood \(V\) of \(D\) in the last section yields an identification
\[H^{0}(X,\mathcal{F}(D)/\mathcal{F})\cong H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{ O}_{X})^{p}\]
which sends the class of \((\beta_{0},\ldots,\beta_{p-1})\) on the right to the section
\[(\beta_{0},\ldots,\beta_{p-1})\begin{pmatrix}1\\ z\\ \vdots\\ z^{p-1}\end{pmatrix}\]
of \(\mathcal{F}(D)/\mathcal{F}\). To obtain \(H^{1}(X,\mathcal{F})\), we need to take the quotient by \(H^{0}(X,\mathcal{F}(D))\). The section \(s_{0}:=1\in H^{0}(X,\mathcal{F}(D))\) maps to zero in \(H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{O}_{X})^{p}\), and the section \(s_{1}:=y\) maps to
\[(\beta_{0},\ldots,\beta_{p-1})=(h,1,0,\ldots,0)=(h,0,\ldots,0).\]
Other elements of \(H^{0}(X,\mathcal{F}(D))\) are somewhat less explicit, but can be constructed as follows. For \(\ell=2,\ldots,p-1\), consider a section \(s_{\ell}\) of \(\mathcal{F}(D)\) corresponding to a tuple \((\alpha_{0},\ldots,\alpha_{p-1})\) with \(\alpha_{j}=0\) for \(j>\ell\), \(\alpha_{\ell}=1\), \(\alpha_{\ell-1}=0\), and \(\alpha_{j}\) with \(j\leq\ell-2\) to be chosen. In order to obtain a section of \(\mathcal{F}(D)\), the tuple
\[(\beta_{0},\ldots,\beta_{p-1})=(\alpha_{0},\ldots,\alpha_{p-1})\,H\]
should have entries with poles no worse than \(D\) on \(V\). This is automatic for \(\beta_{j}\) with \(j>\ell\) (since \(\beta_{j}=0\)) as well as for \(j=\ell\) (since \(\beta_{\ell}=1\)) and \(j=\ell-1\) (since \(\beta_{\ell-1}=\ell h\)). There is a function \(\alpha_{\ell-2}\in H^{0}(X,\mathcal{O}_{X}(2D))\) (unique up to addition of a scalar) such that
\[\beta_{\ell-2}=\alpha_{\ell-2}+\binom{\ell}{2}h^{2}\alpha_{\ell}\]
has poles no worse that \(D\) on \(V\). We continue to choose \(\alpha_{j}\in H^{0}(X,\mathcal{O}_{X}((\ell-j)D))\) with \(j<\ell-2\) in descending order to satisfy the condition that \(\beta_{j}\) have poles no worse than \(D\) on \(V\), thus obtaining a section \(s_{\ell}\) in \(H^{0}(X,\mathcal{F}(D))\).
It remains to consider the images of the \(s_{j}\) in \(H^{0}(X,\mathcal{F}(D)/\mathcal{F})\). Inspection of the process for choosing the \(\alpha_{j}\) in the last paragraph then shows that
\[s_{0} \mapsto(1,0,\ldots,0)=(0,0,\ldots,0)\] \[s_{1} \mapsto(h,1,0,\ldots,0)=(h,0,\ldots,0)\] \[s_{2} \mapsto(*,2h,1,0\ldots,0)=(*,2h,0,\ldots,0)\] \[\vdots\] \[s_{\ell} \mapsto(*,\ldots,*,\ell h,0,\ldots,0)\]
where the last non-zero entry in the image of \(s_{\ell}\) occurs in column \(\ell-1\) (numbering from \(0\)).3
Footnote 3: We know nothing about the entries \(*\) other than that they have poles no worse than \(D\) on \(V\). They can of course be computed for a given \(Y\), but it is not clear how to say anything explicit about them in general, except for \(s_{2}\).
### Computing Frobenius on \(H^{1}(Y,\mathcal{O}_{Y})\)
With notation as before, we have isomorphisms
\[H^{1}(Y,\mathcal{O}_{Y})\cong H^{1}(X,\mathcal{F})\cong\frac{H^{0}(X, \mathcal{F}(D)/\mathcal{F})}{H^{0}(X,\mathcal{F}(D))},\]
and the Frobenius endomorphism of \(H^{1}(Y,\mathcal{O}_{Y})\) transported to the right hand group above is given by
\[\frac{H^{0}(X,\mathcal{F}(D)/\mathcal{F})}{H^{0}(X,\mathcal{F}(D))}\to\frac{H ^{0}(X,\mathcal{F}(pD)/\mathcal{F})}{H^{0}(X,\mathcal{F}(pD))}\cong\frac{H^{0} (X,\mathcal{F}(D)/\mathcal{F})}{H^{0}(X,\mathcal{F}(D))}\]
where the first arrow is \(s\mapsto s^{p}\) and the second is the isomorphism obtained by noting that
\[0\to\mathcal{F}\to\mathcal{F}(pD)\to\mathcal{F}(pD)/\mathcal{F}\to 0\]
is another acyclic resolution of \(\mathcal{F}\). Note that making this explicit requires understanding the principal parts along \(D\) of sections of \(\mathcal{F}(pD)\).
We want to further transport Frobenius via the isomorphism
\[H^{0}(X,\mathcal{F}(D)/\mathcal{F})\cong H^{0}(X,\mathcal{O}_{X}(D)/\mathcal{ O}_{X})^{p}\]
as in the previous section. Note that if \(s=\sum\beta_{i}z^{i}\), then
\[s^{p}=\sum\beta_{i}^{p}z^{pi}=\sum\beta_{i}^{p}(z+g)^{i}=\left(\beta_{0}^{p}, \ldots,\beta_{p-1}^{p}\right)G\left(\begin{array}{c}1\\ z\\ z^{2}\\ \vdots\\ z^{p-1}\end{array}\right)\]
where
\[G=\begin{pmatrix}1&0&0&\ldots&0\\ g&1&0&\ldots&0\\ g^{2}&2g&1&\ldots&0\\ \vdots&\vdots&\vdots&\ddots&0\\ g^{p-1}&(p-1)g^{p-2}&\binom{p-1}{2}g^{p-3}&\ldots&1\end{pmatrix}\]
(Numbering the rows and columns of \(G\) from \(0\) to \(p-1\), the \((i,j)\) entry of \(G\) is \(\binom{i}{j}g^{i-j}\).)
Thus to compute the image of the class of \((\beta_{0},\ldots,\beta_{p-1})\), we should form
\[\left(\beta_{0}^{p},\ldots,\beta_{p-1}^{p}\right)G\in H^{0}(V,\mathcal{O}_{X}(pD) )^{p}\]
and then use elements of \(H^{0}(X,\mathcal{F}(pD))\) to "reduce" this quantity so that it lies in \(H^{0}(V,\mathcal{O}_{X}(D))^{p}\).
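As a sanity check on the matrix formulation, the plain-Python sketch below verifies, in the toy situation where \(g\) and the \(\beta_{i}\) are scalars in \(\mathbb{F}_{p}\) rather than functions (an illustrative simplification), that the row vector \((\beta_{0}^{p},\ldots,\beta_{p-1}^{p})G\) is indeed the coordinate vector of \(s^{p}\) in the basis \(1,z,\ldots,z^{p-1}\) of \(\mathbb{F}_{p}[z]/(z^{p}-z-g)\).

```python
import random
from math import comb

p = 5
random.seed(1)

def mul_mod_AS(a, b, g):
    """Product in F_p[z]/(z^p - z - g); coefficient lists, low degree first."""
    c = [0] * (2 * p - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    while len(c) > p:                  # z^{p+e} = z^{e+1} + g*z^{e}
        top, e = c.pop(), len(c) - p
        c[e + 1] = (c[e + 1] + top) % p
        c[e] = (c[e] + top * g) % p
    return c

for _ in range(20):
    g = random.randrange(p)
    beta = [random.randrange(p) for _ in range(p)]
    # s^p computed directly in the quotient ring, where s = sum_i beta_i z^i
    sp = [1] + [0] * (p - 1)
    for _ in range(p):
        sp = mul_mod_AS(sp, beta, g)
    # the row vector (beta_0^p, ..., beta_{p-1}^p) G with G_{ij} = binom(i,j) g^{i-j}
    G = [[(comb(i, j) * pow(g, i - j, p)) % p if i >= j else 0
          for j in range(p)] for i in range(p)]
    via_G = [sum(pow(beta[i], p, p) * G[i][j] for i in range(p)) % p
             for j in range(p)]
    assert sp == via_G
print("(beta^(p)) G recovers the z-coordinates of s^p in F_p[z]/(z^p - z - g)")
```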
We end by examining what needs to be known to write down global sections of \(\mathcal{F}(pD)\): They are given by tuples
\[(\alpha_{0},\ldots,\alpha_{p-1})\in\mathcal{O}_{X}(U)^{p}\]
such that
\[(\beta_{0},\ldots,\beta_{p-1})=(\alpha_{0},\ldots,\alpha_{p-1})\,H\]
has components lying in \(\mathcal{O}_{X}(pD)(V)\). To find them, choose \(\alpha_{p-1}\in H^{0}(X,\mathcal{O}_{X}(pD))\) arbitrarily. Then choose \(\alpha_{p-2}\in H^{0}(X,\mathcal{O}_{X}((p+1)D))\) such that \(\alpha_{p-2}+(p-1)h\alpha_{p-1}\) has poles no worse than \(pD\) in \(V\). (The set of such choices is a homogeneous space for \(H^{0}(X,\mathcal{O}_{X}(pD))\).) Iterating, one sees that \(\alpha_{i}\) lies in \(H^{0}(X,\mathcal{O}_{X}((2p-1-i)D))\) and is uniquely determined up to addition of an element of \(H^{0}(X,\mathcal{O}_{X}(pD))\). Thus, if we have good control on the principal parts of elements of \(H^{0}(X,\mathcal{O}_{X}((2p-1)D))\), we can compute Frobenius on \(H^{1}(Y,\mathcal{O}_{Y})\).
### Completing the Hasse-Witt triple of \(Y\)
Since \(\pi\) is finite and etale, we have \(\pi^{*}\Omega^{1}_{X}\cong\Omega^{1}_{Y}\) and
\[\pi_{*}\Omega^{1}_{Y}\cong\Omega^{1}_{X}\otimes_{\mathcal{O}_{X}}\pi_{*}\mathcal{O}_{Y}=\Omega^{1}_{X}\otimes_{\mathcal{O}_{X}}\mathcal{F}.\]
We write \(\mathcal{F}^{1}\) for \(\Omega^{1}_{X}\otimes_{\mathcal{O}_{X}}\mathcal{F}\). The auto-duality of \(\mathcal{F}\) induces a bilinear map of \(\mathcal{O}_{X}\)-modules
\[\mathcal{F}\otimes_{\mathcal{O}_{X}}\mathcal{F}^{1}\to\Omega^{1}_{X}.\]
Let \(Q=H^{1}(Y,\mathcal{O}_{Y})\cong H^{1}(X,\mathcal{F})\) and let \(\Phi\) be the Frobenius endomorphism as computed in the preceding section. Then Serre duality says that \(Q^{\vee}\cong H^{0}(Y,\Omega^{1}_{Y})\) and since \(\pi\) is finite, \(H^{0}(Y,\Omega^{1}_{Y})\cong H^{0}(X,\mathcal{F}^{1})\).
Recall that we have fixed an effective, non-special divisor \(D\). We have
\[H^{0}(X,\Omega^{1}_{X}(-D))\cong H^{1}(X,\mathcal{O}_{X}(D))^{\vee}=0,\]
and an easy argument by induction shows that \(H^{0}(X,\mathcal{F}^{1}(-D))=0\) as well. We find an injection4
Footnote 4: Note that the right hand side here is an explicit \(pg\)-dimensional vector space, and we can compute in it, rather than worrying about writing down an explicit basis of \(H^{0}(X,\mathcal{F}^{1})\)
\[H^{0}(X,\mathcal{F}^{1})\to H^{0}(X,\mathcal{F}^{1}/\mathcal{F}^{1}(-D)).\]
Its image is easily seen to be orthogonal to
\[\operatorname{Im}\left(H^{0}(X,\mathcal{F}(D))\to H^{0}(X,\mathcal{F}(D)/ \mathcal{F})\right)=\operatorname{Ker}\left(H^{0}(X,\mathcal{F}(D)/\mathcal{F} )\twoheadrightarrow H^{1}(X,\mathcal{F})\right)\]
under the pairing
\[H^{0}(X,\mathcal{F}(D)/\mathcal{F})\times H^{0}(X,\mathcal{F}^{1}/\mathcal{F} ^{1}(-D))\to H^{0}(X,\Omega^{1}_{X}(D)/\Omega^{1}_{X})\stackrel{{ \operatorname{Res}}}{{\longrightarrow}}k\]
where the first arrow is induced by the bilinear map mentioned above, and the second arrow is the sum of residues along \(D\).
Now suppose \(s\in H^{0}(X,\mathcal{F}(D)/\mathcal{F})\) maps to an element in \(H^{1}(X,\mathcal{F})\) which is killed by Frobenius. This means that there is a global section \(t\in H^{0}(X,\mathcal{F}(pD))\cong H^{0}(Y,\mathcal{O}_{Y}(\pi^{*}(pD)))\) whose principal parts along \(\pi^{*}D\) are given by \(s^{p}\). It is then immediate that \(\omega=dt\) is a _regular_ 1-form on \(Y\), i.e., an element of \(H^{0}(Y,\Omega^{1}_{Y})\cong H^{0}(X,\mathcal{F}^{1})\). The map \(\Psi\) is then given by
\[\Psi:\operatorname{Ker}(\Phi)\to Q^{\vee}\qquad[s]\mapsto\omega. \tag{10.2}\]
Since \(\omega\) is exact, it is orthogonal to \(\operatorname{Im}(\Phi)\), so our \(\Psi\) has the required properties.
Summing up, we have proven:
**Proposition 10.7**.: _The Hasse-Witt triple associated to \(H^{1}_{dR}(Y)\) is \((Q,\Phi,\Psi)\) with \(Q=H^{1}(X,\mathcal{F})\) computed explicitly in Section 10.4, with Frobenius \(\Phi\) defined in Section 10.5, and with \(\Psi\) defined in the paragraph before equation (10.2)._
|
2305.17621 | On groups with same number of centralizers | In this paper, among other results, we give some sufficient conditions for
every non-abelian subgroup of a group to be isoclinic with the group itself. It
is also seen that under certain conditions, two groups have same number of
element centralizers implies they are isoclinic. We prove that if $G$ is any
group having $4, 5, 7$ or $9$ element centralizers and $H$ is any non-abelian
subgroup of $G$, then $\mid \Cent(G)\mid=\mid \Cent(H)\mid$ and $ G' \cong H'
\cong C_2, C_3, C_5$ or $C_7$ respectively. Furthermore, it is proved that if
$G$ is any group having $n \in \lbrace 4, 5, 6, 7, 9 \rbrace$ element
centralizers, then $\mid G' \mid=n-2$. | Sekhar Jyoti Baishya | 2023-05-28T03:45:26Z | http://arxiv.org/abs/2305.17621v1 | # On groups with same number of centralizers
###### Abstract.
In this paper, among other results, we give some sufficient conditions for every non-abelian subgroup of a group to be isoclinic with the group itself. It is also seen that under certain conditions, two groups have same number of element centralizers implies they are isoclinic. We prove that if \(G\) is any group having \(4,5,7\) or \(9\) element centralizers and \(H\) is any non-abelian subgroup of \(G\), then \(|\operatorname{Cent}(G)\ |=\mid\operatorname{Cent}(H)\mid\) and \(G^{\prime}\cong H^{\prime}\cong C_{2},C_{3},C_{5}\) or \(C_{7}\) respectively. Furthermore, it is proved that if \(G\) is any group having \(n\in\{4,5,6,7,9\}\) element centralizers, then \(|\ G^{\prime}\ |=n-2\).
Key words and phrases:Finite group, Centralizer, Isoclinic groups 2010 Mathematics Subject Classification: 20D60, 20D99
## 1. Introduction
Given any group \(G\), let \(\operatorname{Cent}(G)\) and \(nacent(G)\) denotes, respectively, the set of centralizers and the set of non-abelian centralizers of elements of \(G\). A group \(G\) is said to be \(n\)-centralizer if \(|\operatorname{Cent}(G)\ |=n\). In 1994 Belcastro and Sherman [13] introduced the notion of \(n\)-centralizer groups and since then the influence of \(\operatorname{Cent}(G)\) on the structure of group have been studied extensively. See [7, 11, 12, 17, 19, 23] for recent advances on this and related areas. Perhaps motivated by the impact of \(|\operatorname{Cent}(G)\ |\) on the group, Ashrafi and Taeri [5] in 2005 asked the following question which was disproved by Zarrin [30]: Let \(G\) and \(H\) be finite simple groups. Is it true that if \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(H)|\), then \(G\) is isomorphic to \(H\)? Amiri and Rostami [3] in 2015 put forward the following analogue question which was also disproved by Khoramshahi and Zarrin [23]: Let \(G\) and \(H\) be finite simple groups. Is it true that if \(|nacent(G)|=|nacent(H)|\), then \(G\) is isomorphic to \(H\)? In this context we have the following natural question:
**Question 1.1**.: _What can be said about the relationship between two groups if they have the same number of element centralizers._
It may be mentioned here that if an \(n\)-centralizer group \(G\) is isoclinic with a group \(H\), then \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(H)|\) (see [24, 31]). However, the converse is not true in general. For example, if \(G\) is a non-abelian group of order \(27\), then \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(S_{3})|=5\), but \(G\) and \(S_{3}\) are not isoclinic. The authors in [23] studied and obtained some conditions under which the converse of this statement holds. In this paper, we continue with Question 1.1 and improve some earlier
results. We obtain some sufficient conditions for every non-abelian subgroup of a group to be isoclinic with the group itself. In particular, it is seen that any non-abelian subgroup of a \(4\) or \(5\)-centralizer group is isoclinic with the group itself, which improves [23, Theorem 3.5]. It is also proved that any two arbitrary \(4\)-centralizer groups are isoclinic and any two arbitrary nilpotent \(5,7\) or \(9\)-centralizer groups are isoclinic. We obtain that if \(H\) is any non-abelian subgroup of an \(n\)-centralizer group \(G\), where \(n=4,5,7\) or \(9\), then \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(H)|\) and \(G^{\prime}\cong H^{\prime}\cong C_{2},C_{3},C_{5}\) or \(C_{7}\) respectively. For any subgroup \(H\) of an arbitrary \(8\)-centralizer group \(G\), it is observed that \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(H)|\) implies \(G\) is isoclinic with \(H\). Given any \(n\)-centralizer group \(G\) with \(n\in\{4,5,6,7,9\}\), we see that \(|\)\(G^{\prime}\)\(|=n-2\). A finite group is said to be of conjugate type \((m,1)\) if every proper element centralizer is of index \(m\). For any two finite groups \(G\) and \(H\) of conjugate type \((p,1)\), \(p\) a prime, it is proved that \(|\operatorname{Cent}(G)|=|\operatorname{Cent}(H)|\) implies \(G\) is isoclinic with \(H\). Among other results, we prove that if \(G\) is any finite \((n+2)\)-centralizer group of conjugate type \((n,1)\), then \(G\) is a CA-group (i.e., every proper element centralizer of \(G\) is abelian) and \(\frac{G}{Z(G)}\) is elementary abelian of order \(n^{2}\), which improves [4, Theorem 3.3].
Throughout this paper, for a group \(G\), \(Z(G)\) and \(G^{\prime}\) denotes its center and commutator subgroup respectively, \(C_{G}(x)\) denotes the centralizer of \(x\in G\) (however, if there is no confusion in the context then we simply write \(C(x)\) in place of \(C_{G}(x)\)), \(C_{n}\) denotes the cyclic group of order \(n\) and \(D_{2n}\) denotes the dihedral group of order \(2n\). Some results of this paper holds for finite groups only and we have specifically mentioned it whenever necessary.
## 2. Definitions and basic results
We begin with the notion of isoclinism between two groups introduced by P. Hall [18] in 1940. Two groups \(G\) and \(H\) are said to be isoclinic if there are two isomorphisms \(\varphi:G/Z(G)\longrightarrow H/Z(H)\) and \(\phi:G^{\prime}\longrightarrow H^{\prime}\) such that if
\[\varphi(g_{1}Z(G))=h_{1}Z(H)\;\;\text{and}\;\;\varphi(g_{2}Z(G))=h_{2}Z(H)\]
with \(g_{1},g_{2}\in G,h_{1},h_{2}\in H\), then
\[\phi([g_{1},g_{2}])=[h_{1},h_{2}].\]
Isoclinism is an equivalence relation weaker than isomorphism having many family invariants. Here we list some of the invariants concerning the element centralizers of two isoclinic groups.
Recall that a group \(G\) is called an F-group if every non-central element centralizer contains no other element centralizer and a CA-group if all non-central element centralizers are abelian. Finite groups having exactly two class sizes are called I-groups which are direct product of an abelian group and a group of prime power order [22]. Two elements of a group are said to be \(z\)-equivalent or in the same \(z\)-class if their centralizers are conjugate in the group. Being \(z\)-equivalent is an equivalence relation which is weaker than conjugacy relation. A \(z\)-equivalence class is called a
\(z\)-class. In the following result, \(\omega(G)\) denotes the size of a maximal set of pairwise non-commuting elements of a group \(G\).
**Proposition 2.1**.: _If an \(n\)-centralizer group \(G\) is isoclinic with a group \(H\), then_
1. \(\omega(G)=\omega(H)\) _(_ _[_31_, Lemma 2.1]__)._
2. \(z\)_-classes in_ \(G\)_=_\(z\)_-classes in_ \(H\) _(_ _[_25_, Theorem 2.2]__)._
3. \(|\operatorname{Cent}(G)\mid=|\operatorname{Cent}(H)\mid\) _(_ _[_31_, Lemma 3.2]__,_ _[_24_, Theorem A]__)._
4. \(|\operatorname{\mathit{nacent}}(G)\mid=|\operatorname{\mathit{nacent}}(H)\mid\)_._
5. \(G\) _is a CA-group implies_ \(H\) _is also a CA-group._
6. \(G\) _is an F-group implies_ \(H\) _is also an F-group._
7. \(G\) _is an I-group implies_ \(H\) _is also an I-group (_ _[_18_]__,_ _[_21_, Proposition 2.2]__)._
Proof.: d) Let \(\varphi:G/Z(G)\longrightarrow H/Z(H)\) be the isomorphism. Then \(\varphi\) induces a bijection between the subgroups of \(G\) containing \(Z(G)\) and the subgroups of \(H\) containing \(Z(H)\) and the corresponding subgroups are isoclinic [18, pp. 134]. For any \(x\in G\), consider its centralizer \(C_{G}(x)\) which contains \(Z(G)\). In view of proof of [24, Theorem A], the corresponding subgroup of \(H\) containing \(Z(H)\) is \(C_{H}(y)\), where \(yZ(H)=\varphi(xZ(G))\). Therefore \(C_{G}(x)\) is isoclinic with \(C_{H}(y)\). Hence the result follows.
e) It follows from part (d)
f) Let \(\varphi:G/Z(G)\longrightarrow H/Z(H)\) be the isomorphism. Suppose \(H\) is not an F-group. Then \(C_{H}(a)<C_{H}(b)\) for some \(a,b\in H\setminus Z(H)\). Therefore \(\frac{C_{H}(a)}{Z(H)}<\frac{C_{H}(b)}{Z(H)}\) and consequently, in view of proof of [24, Theorem A], we have \(\frac{C_{G}(x)}{Z(G)}<\frac{C_{G}(y)}{Z(G)}\) for some \(x,y\in G\setminus Z(G)\), where \(\varphi(\frac{C_{G}(x)}{Z(G)})=\frac{C_{H}(a)}{Z(H)}\) and \(\varphi(\frac{C_{G}(y)}{Z(G)})=\frac{C_{H}(b)}{Z(H)}\). It now follows that \(C_{G}(x)<C_{G}(y)\), which implies \(G\) is not an F-group.
The following theorems will be used to obtain some of our results.
**Theorem 2.2**.: _(p.135 [18]) Every group is isoclinic to a group whose center is contained in the commutator subgroup._
**Theorem 2.3**.: _( [18], [21, Proposition 2.2]) Let \(G\) and \(H\) be finite \(p\)-groups (\(p\) a prime). Suppose \(G\) is isoclonic with \(H\). Then \(G\) and \(H\) are groups of the same conjugate type._
**Theorem 2.4**.: _(Theorem 11 [24], Theorem 3.3 [31]) The representatives of the families of isoclinic groups with \(n\)-centralizers (\(n\neq 2,3\)) can be chosen to be finite groups._
For any subgroup \(H\) of \(G\), it is easy to see that \(C_{H}(x)=C_{G}(x)\cap H\), for any \(x\in H\). This gives the following result:
**Lemma 2.5**.: _Let \(H\) be a subgroup of \(G\) such that \(H\cap Z(G)\lneq Z(H)\). Then the number of centralizers of \(G\) produced by the elements of \(H\) is atleast \(|\operatorname{Cent}(H)\mid+1\)._
Proof.: Clearly, the number of centralizers of \(G\) produced by elements of \(H\) is equal to the number of centralizers of \(G\) produced by the elements of \(H\cap Z(G)+\) the number of centralizers of \(G\) produced by the elements of \(H\setminus(H\cap Z(G))\geq 1+|\) Cent\((H)\mid\) (note that elements of \(H\) that have the same centralizers in \(H\) may have different centralizers in \(G\)).
**Lemma 2.6**.: _Let \(G\) be a finite group and \(p\) be a prime. If \(G\) has a non-central element of order \(p\), then \(|\) Cent\((G)\mid\geq p+2\)._
Proof.: Let \(x\in G\setminus Z(G)\) be an element of order \(p\). Let \(a\in G\setminus C(x)\). Clearly, \(ax^{i}\in G\setminus C(x)\) for any \(i\). Consider the set \(X=\{x,a,ax,ax^{2},\ldots,ax^{p-1}\}\). Observe that if \(ax^{i}ax^{j}=ax^{j}ax^{i}\) for some \(0\leq i<j\leq p-1\), then \(a\in C(x^{j-i})=C(x)\), a contradiction (noting that \(gcd((j-i),o(x))=1\)). Therefore \(X\) is a set of pairwise non-commuting elements of \(G\) and \(|\)\(X\mid=p+1\). Hence \(|\) Cent\((G)\mid\geq p+2\).
For any finite group \(G\), the author in [4, Lemma 3.1] proved that if \(G^{\prime}\cap Z(G)=\{1\}\), then \(|\) Cent\((G)\mid=\mid\) Cent\((\frac{G}{Z(G)})\mid\). However, for any arbitrary \(n\)-centralizer group we have the following general result:
**Proposition 2.7**.: _Let \(G\) be any \(n\)-centralizer group and \(N\unlhd G\). If \(N\cap G^{\prime}=\{1\}\), then \(|\) Cent\((G)\mid=\mid\) Cent\((\frac{G}{N})\mid\)._
Proof.: In view of [18, pp. 134] and Proposition 2.1 we have the result.
In response to a question raised by Belcastro and Sherman [13], namely, whether there exists an \(n(\neq 2,3)\)-centralizer group, Ashrafi showed [4, Proposition 2.1] that there exists \(n\)-centralizer groups for \(n\neq 2,3\). In this connection, we have the following result which implies we can say something more than Ashrafi's result. It also improves [1, Proposition 2.2] and [6, Lemma 2]. Furthermore, it improves [19, Example 16], namely, there exists a \(2^{r}\)-centralizer CA-group for every \(r>1\). In the following result \(C_{n}{\rtimes_{\theta}}C_{p}\) denotes semidirect product of \(C_{n}\) and \(C_{p}\), where \(\theta:C_{p}\longrightarrow\) Aut\((C_{n})\) is a homomorphism.
**Proposition 2.8**.: _Given any group \(G\), suppose \(\frac{G}{Z(G)}\) be non-abelian and \(p\) be a prime. If \(\frac{G}{Z(G)}\cong C_{n}{\rtimes_{\theta}}C_{p}\), then \(G\) is an \((n+2)\)-centralizer CA-group._
Proof.: In view of Theorem 2.4, \(G\) is isoclinic with a finite group. Now, the result follows using Proposition 2.1 and [8, Proposition 2.9 and Lemma 2.10].
Recall that the generalized quaternion group \(Q_{4m}\) has the presentation \(\langle a,b\mid a^{2m}=1,b^{2}=a^{m},bab^{-1}=a^{-1}\rangle,m\geq 2\).
**Corollary 2.9**.: _There exists \(n\)-centralizer CA-groups for \(n\geq 4\)._
Proof.: In view of Proposition 2.8, \(Q_{4(n-2)},n\geq 4\) is an \(n\)-centralizer CA-group by noting that \(\frac{Q_{4(n-2)}}{Z(Q_{4(n-2)})}\cong D_{2(n-2)}\) for any \(n\geq 4\)
**Remark 2.10**.: Let \(p\) be a prime. A finite \(p\)-group \(G\) is said to be a special \(p\)-group of rank \(k\) if \(G^{\prime}=Z(G)\) is elementary abelian of order \(p^{k}\) and \(\frac{G}{G^{\prime}}\) is elementary abelian. Furthermore, a finite group \(G\) is extraspecial if \(G\) is a special \(p\)-group and \(\mid G^{\prime}\mid=\mid Z(G)\mid=p\). It is well known that every extraspecial \(p\)-group has order \(p^{2a+1}\) for some positive integer \(a\). Furthermore, for every prime \(p\) and every positive integer \(a\), there exists, upto isomorphism, exactly two extraspecial groups of order \(p^{2a+1}\). Moreover, any two extraspecial groups of same order are isoclinic (see [28, pp. 7]). Again, if \(G\) is any group and \(A\) is an abelian group, then \(G\) and \(G\times A\) are isoclinic (see [18, pp. 135]).
In this context we have the following result.
**Proposition 2.11**.: _There exists \(2^{2n}\)-centralizer F-groups which are not CA-groups for \(n>1\)._
Proof.: Let \(G\) be an extraspecial group of order \(2^{2n+1},n>1\). Then in view of [12, Proposition 2.26] and [11, Proposition 3.13], \(G\) is an \(2^{2n}\)-centralizer F-group which is not a CA-group.
## 3. Main results
The following key result helps in determining whether a given group is CA or not.
**Proposition 3.1**.: _An arbitrary group \(G\) is a CA-group if and only if \(Z(H)=Z(G)\cap H\) for any non-abelian subgroup \(H\) of \(G\). In particular, \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\) for any non-abelian subgroup \(H\) of a CA-group \(G\)._
Proof.: Let \(G\) be a CA-group and \(H\) be a non-abelian subgroup of \(G\). If \(a,b\in H\) be such that \(ab\neq ba\), then \(a,b\in H\setminus(Z(G)\cup Z(H))\). It is easy to verify that \(Z(H)=C_{H}(a)\cap C_{H}(b)=C_{G}(a)\cap H\cap C_{G}(b)\cap H=Z(G)\cap H\). Conversely, if \(Z(H)=Z(G)\cap H\) for any non-abelian subgroup \(H\) of \(G\), then \(G\) is a CA-group. For if \(C_{G}(x)\) is non-abelian for some \(x\in G\setminus Z(G)\), then \(C_{G}(x)\cap Z(G)=Z(G)\subsetneq Z(C_{G}(x))\), which is a contradiction. Last part is trivial.
**Corollary 3.2**.: _If \(Z(G^{\prime})=\{1\}\) for any CA-group \(G\), then \(G\) is isoclinic with \(\frac{G}{Z(G)}\). In particular, if \(G\) is \(n\)-centralizer, then \(\frac{G}{Z(G)}\) is also an \(n\)-centralizer CA-group._
Proof.: Using Proposition 3.1, [18, pp. 134] and Proposition 2.1 we have the result.
As an application of Proposition 3.1, we also have the following result.
**Proposition 3.3**.: _If \(H\) is a non-abelian subgroup of \(G\) with \(\frac{G}{Z(G)}\cong D_{2n}\), then_
1. \(G\) _is an_ \((n+2)\)_-centralizer CA-group._
2. \(\frac{H}{Z(H)}\cong D_{2n/d}\) _for some divisor_ \(d\) _of_ \(n\)_._
3. \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) _implies_ \(G\) _isoclinic with_ \(H\)
Proof.: a) It follows from Proposition 2.8.
b) By part (a) \(G\) is a CA-group and consequently, using Proposition 3.1 we have \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\). Now, using [16, Theorem 3.1] we have the result.
(c) In view of part (b), \(\frac{H}{Z(H)}\cong\frac{HZ(G)}{Z(G)}\cong D_{2n/d}\) for \(d\mid n\). Now, if \(\mid\operatorname{Cent}(H)\mid=n+2\), then by part (a), we have \(HZ(G)=G\) and consequently \(G\) is isoclinic with \(H\) by [26, Lemma 2.7].
Let \(H\) be a subgroup of \(G\). The author in [26, Lemma 2.7] proved that if \(G=HZ(G)\), then \(G\) and \(H\) are isoclinic, and if \(H\) is finite then the converse is also true. We have the following general result for an arbitrary \(n\)-centralizer group.
**Proposition 3.4**.: _Let \(G\) be any \(n\)-centralizer group and \(H\leq G\). Then \(G=HZ(G)\) iff \(G\) is isoclinic with \(H\)._
Proof.: If \(G=HZ(G)\), then by [26, Lemma 2.7], we have \(G\) is isoclinic with \(H\). Conversely, if \(G\) is isoclinic with \(H\), then \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) by Proposition 2.1. Therefore using Lemma 2.5, we have \(Z(H)=H\cap Z(G)\), which implies \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}=\frac{HZ(G)}{Z(G)}\cong\frac{G}{Z(G)}\) and thus \(G=HZ(G)\).
Using arguments similar to Proposition 3.4, we also have the following result:
**Proposition 3.5**.: _Let \(G\) be any \(n\)-centralizer group and \(H\leq G\) be such that \(\frac{G}{Z(G)}\cong\frac{H}{Z(H)}\). Then \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) iff \(G\) is isoclinic with \(H\)._
For groups \(G_{1}\) and \(G_{2}\) in [1, pp.56], we have \(\frac{G_{1}}{Z(G_{1})}\cong\frac{G_{2}}{Z(G_{2})}\cong C_{2}\times C_{2}\times C _{2}\), \(\mid\operatorname{Cent}(G_{1})\mid=6\) and \(\mid\operatorname{Cent}(G_{2})\mid=8\); which implies \(G_{1}\) and \(G_{2}\) are not isoclinic by Proposition 2.1. However, for some special situations we have the following result:
**Proposition 3.6**.: _Let \(G\) be any \(n\)-centralizer CA-group and \(H\leq G\). Then \(\frac{G}{Z(G)}\cong\frac{H}{Z(H)}\) iff \(G\) is isoclinic with \(H\). In particular, \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\)._
Proof.: In view of Proposition 3.1, \(Z(H)=Z(G)\cap H\) and so \(\frac{G}{Z(G)}\cong\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}=\frac{HZ(G)}{Z(G)}\). Therefore \(G=HZ(G)\), and hence \(G\) is isoclinic with \(H\) by [26, Lemma 2.7]. Converse is trivial. Last part follows from Proposition 2.1.
**Theorem 3.7**.: _If \(H\) is a non-abelian subgroup of \(G\) with \(\mid\frac{G}{Z(G)}\mid=pq\) (\(p\leq q\) are primes), then \(G\) is isoclinic with \(H\). In particular, \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid=q+2\)._
Proof.: In view of Proposition 3.1, \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\) and so \(HZ(G)=G\) by noting that \(G\) is a CA-group. Now, using [26, Lemma 2.7], \(G\) is isoclinic with \(H\). Last part follows using Theorem 2.4, Proposition 2.1 and [8, Corollary 2.5].
The following result will be used in the next theorem.
**Proposition 3.8**.: _Let \(G\) be a finite group such that \(\frac{G}{Z(G)}=\frac{K}{Z(G)}\rtimes\frac{H}{Z(G)}\) is a Frobenius group with \(K\) and \(H\) abelian. Then \(\mid\operatorname{Cent}(G)\mid=\mid G^{\prime}\mid+2\)._
Proof.: Using the third isomorphic theorem, we get \(\frac{G}{K}\cong\frac{H}{Z(G)}\). Consequently, we have \(K\) is an abelian normal subgroup of \(G\) such that \(\frac{G}{K}\) is cyclic. In the present scenario, in view of [20, Lemma 12.12], \(\mid K\mid=\mid G^{\prime}\mid\mid K\cap Z(G)\mid\) which forces \(\mid\frac{K}{Z(G)}\mid=\mid G^{\prime}\mid\). Therefore by [2, Proposition 3.1], \(\mid\operatorname{Cent}(G)\mid=\mid G^{\prime}\mid+2\).
In the following result, which improves [23, Theorem 3.5], \((C_{m},C_{n})\) denotes the Frobenius group with complement \(C_{m}\) and kernel \(C_{n}\).
**Theorem 3.9**.: _Let \(H\) be a non-abelian subgroup of an \(n\)-centralizer group \(G\)._
1. _If_ \(n=4\) _or_ \(5\)_, then_ \(G\) _is isoclinic with_ \(H\)_._
2. _If_ \(n=4,5,7\) _or_ \(9\)_, then_ \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) _and_ \(G^{\prime}\cong H^{\prime}\cong C_{2},C_{3},C_{5}\) _or_ \(C_{7}\) _respectively._
3. _If_ \(n=8\)_, then_ \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) _implies_ \(G\) _is isoclinic with_ \(H\)_._
Proof.: a) We have \(\mid\frac{G}{Z(G)}\mid=4,6\) or \(9\) by [31, Theorem 3.5] and hence the result follows using Theorem 3.7.
b) Following [31, Theorem 3.5] and applying similar arguments to [9, Theorem 2.6], we have \(\frac{G}{Z(G)}\cong(C_{4},C_{5})\), \((C_{6},C_{7})\) or \(\mid\frac{G}{Z(G)}\mid\in\{4,6,9,10,14,21,25,49\}\). In the present scenario, using [10, Lemma 2.1] and Proposition 3.1, we have \(Z(H)=Z(G)\cap H\) and hence \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\).
Now, if \(\mid\frac{G}{Z(G)}\mid\in\{4,6,9,10,14,21,25,49\}\), then using Theorem 3.7 we have \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\).
Next, suppose \(\frac{G}{Z(G)}\cong(C_{4},C_{5})\). If \(\frac{HZ(G)}{Z(G)}<\frac{G}{Z(G)}\), then \(\mid\frac{H}{Z(H)}\mid=10\) (by noting that \(2\)-Sylow subgroup of \(\frac{G}{Z(G)}\) is cyclic) and so \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) using [31, Theorem 3.5]. On the other hand if \(\frac{HZ(G)}{Z(G)}=\frac{G}{Z(G)}\), then \(HZ(G)=G\) and hence \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) using Proposition 3.4 and Proposition 2.1.
Finally, suppose \(\frac{G}{Z(G)}\cong(C_{6},C_{7})\). If \(\frac{HZ(G)}{Z(G)}<\frac{G}{Z(G)}\), then \(\mid\frac{H}{Z(H)}\mid=14\) or \(21\) by noting that \(\frac{G}{Z(G)}\) cannot have a non-abelian subgroup of order \(6\) and consequently, applying arguments of [31, Theorem 3.5] to [9, Theorem 2.6] we have \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\). On the other hand if \(\frac{HZ(G)}{Z(G)}=\frac{G}{Z(G)}\), then \(HZ(G)=G\) and hence \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) using Proposition 3.4 and Proposition 2.1.
Second part follows using Proposition 2.1, Theorem 2.4, [8, Theorem 2.3] and Proposition 3.8.
c) In view of [31, Theorem 3.5], we have \(\mid\frac{G}{Z(G)}\mid=8\) or \(12\). In the present scenario, by [10, Lemma 2.1] and Proposition 3.1, we have \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\)
Now using [31, Theorem 3.5] again, we have \(HZ(G)=G\) and so by Proposition 3.4, \(G\) is isoclinic with \(H\).
Note that the \(6\)-centralizer group \(Q_{16}\) has a \(4\)-centralizer subgroup, namely \(Q_{8}\), which is not isoclinic with \(Q_{16}\). The \(7\)-centralizer group \((C_{4},C_{5})\) has a \(7\)-centralizer subgroup of order \(10\) which is not isoclinic with \((C_{4},C_{5})\). The \(8\)-centralizer group \(Q_{24}\) has a \(4\)-centralizer subgroup, namely \(Q_{8}\). Again, the \(9\)-centralizer group \((C_{6},C_{7})\) has a \(9\)-centralizer subgroup of order \(21\) which is not isoclinic with \((C_{6},C_{7})\).
A finite \(p\)-group (\(p\) a prime) \(G\) is semi-extraspecial if for every maximal subgroup \(N\) in \(Z(G)\) the quotient \(\frac{G}{N}\) is extraspecial. It is known that every semi-extraspecial \(p\)-group is special. Furthermore, a group \(G\) is said to be ultraspecial if \(G\) is semi-extraspecial and \(\mid G^{\prime}\mid=\sqrt{\mid G:G^{\prime}\mid}\).
**Proposition 3.10**.: _If \(G\) is an \(n\)-centralizer group with \(n\in\{4,5,6,7,9\}\), then \(\mid G^{\prime}\mid=n-2\)._
Proof.: In view of Proposition 2.1 and Theorem 2.4 without any loss we may assume that \(G\) is a finite group.
Now, suppose \(n=6\). Using [31, Theorem 3.5], we have \(\frac{G}{Z(G)}\cong D_{8},A_{4},C_{2}\times C_{2}\times C_{2}\) or \(C_{2}\times C_{2}\times C_{2}\times C_{2}\). If \(\mid\frac{G}{Z(G)}\mid=8\), then in view of [10, Proposition 2.14], \(G\) has an abelian centralizer of index \(2\) and consequently, using [8, Theorem 2.3], we have \(\mid G^{\prime}\mid=4\). Again, if \(\frac{G}{Z(G)}\cong A_{4}\), then by [10, Proposition 2.12], \(G\) has an abelian normal centralizer of index \(3\) and consequently, using [8, Theorem 2.3], we have \(\mid G^{\prime}\mid=4\). Finally, if \(\mid\frac{G}{Z(G)}\mid=16\), then in view of [11, Proposition 3.21], \(G\) is isoclinic with an ultraspecial group of order \(64\) and hence \(\mid G^{\prime}\mid=4\). Now, the result follows using Theorem 3.9.
Note that for the group \(G_{2}\) in [1, pp.56], we have \(\frac{G_{2}}{Z(G_{2})}\cong C_{2}\times C_{2}\times C_{2}\) and \(\mid\operatorname{Cent}(G_{2})\mid=8\). In view of [25, Lemma 3.1], \(G_{2}\) is isoclinic with a finite \(2\)-group and hence \(\mid G^{\prime}_{2}\mid\neq 6\). From the above result, we can also see that if \(G\) and \(H\) are \(n\)-centralizer groups with \(n\in\{4,5,7,9\}\), then \(G^{\prime}\cong H^{\prime}\). However, \(D_{16}\) and \(A_{4}\) are \(6\)-centralizer groups with \(D^{\prime}_{16}\cong C_{4}\) and \(A^{\prime}_{4}\cong C_{2}\times C_{2}\).
**Proposition 3.11**.: _Let \(G\) and \(H\) be two finite groups of conjugate type \((p,1)\), \(p\) a prime. If \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\), then \(G\) is isoclinic with \(H\)._
Proof.: In view of Ito [22], without any loss we may assume that \(G\) is a \(p\)-group. Furthermore, using Theorem 2.2, \(G\) is is isoclinic with a group \(G_{1}\) such that \(Z(G_{1})\subseteq G^{\prime}_{1}\). Note that since \(Z(G_{1})\) is finite, therefore \(G_{1}\) is finite. It now follows using Theorem 2.3 and [21, Proposition 3.1] that \(G_{1}\) is an extraspecial \(p\)-group. Similarly, we can see that \(H\) is isoclinic with an extraspecial \(p\)-group \(G_{2}\). Now, suppose \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\). In the present scenario, using Proposition 2.1, we have \(\mid\operatorname{Cent}(G_{1})\mid=\mid\operatorname{Cent}(G_{2})\mid\) and consequently, applying [11, Proposition 3.13] if follows that \(\mid G_{1}\mid=\mid G_{2}\mid\). Now the result follows using Remark 2.10.
**Proposition 3.12**.: _Let \(G\) be an extraspecial \(p\)-group of order \(p^{k}\) for some \(k\) and prime \(p\). If \(H\) is a subgroup of \(G\) such that \(|\operatorname{Cent}(G)\ |=\mid\operatorname{Cent}(H)\mid\), then \(G=H\)._
Proof.: Since \(\mid\operatorname{Cent}(G)\ |=\mid\operatorname{Cent}(H)\mid\), therefore applying Lemma 2.5, we have \(H\cap Z(G)=Z(H)=Z(G)\) and consequently, \(\frac{H}{Z(H)}=\frac{H}{Z(G)\cap H}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\). It now follows that \(H\) is an extraspecial \(p\)-group of order \(p^{l}\) for some \(l\). Therefore by [11, Proposition 3.13], \(k=l\) and hence \(G=H\).
**Proposition 3.13**.: _Let \(G\) be any group such that \(\frac{G}{Z(G)}\cong C_{p}\times C_{p}\), \(p\) a prime. Then \(G\) is isoclinic with an extraspecial group of order \(p^{3}\)._
Proof.: Using Theorem 2.2 and arguments in the proof of Theorem 2.4, \(G\) is isoclinic with a finite group \(N\) of order \(p^{n}\) with \(Z(N)\subseteq N^{\prime}\). Moreover, since \(\frac{N}{Z(N)}\cong C_{p}\times C_{p}\), therefore \(Z(N)=N^{\prime}\). Also note that any proper centralizer of \(N\) is abelian normal of index \(p\) in \(N\). Hence using [14, Lemma 4, pp. 303], we have \(\mid N\mid=p.\mid Z(N)\mid.\mid N^{\prime}\mid\) and consequently, \(N\) is an extraspecial group of order \(p^{3}\).
**Corollary 3.14**.: _Let \(G\) and \(H\) be any two groups such that \(\frac{G}{Z(G)}\cong\frac{H}{Z(H)}\cong C_{p}\times C_{p}\), \(p\) a prime. Then \(G\) is isoclinic with \(H\)._
Proof.: The result follows from Proposition 3.13 and Remark 2.10.
The following result shows that any two \(4\)-centralizer groups are isoclinic.
**Proposition 3.15**.: _Any \(4\)-centralizer group is isoclinic with \(Q_{8}\)._
Proof.: It follows using [31, Theorem 3.5], Proposition 3.13 and Remark 2.10.
**Lemma 3.16**.: _If \(M\) is a maximal non-abelian subgroup of a CA-group \(G\), then either \(Z(G)=Z(M)\) or \(G\) is isoclinic with \(M\)._
Proof.: If \(Z(G)\subseteq M\), then using Proposition 3.1 we have \(Z(G)=Z(M)\). On the otherhand, if \(Z(G)\nsubseteq M\), then \(MZ(G)=G\) and hence \(G\) is isoclinic with \(M\) by [26, Lemma 2.7].
**Proposition 3.17**.: _Let \(H\) be a non-abelian subgroup of \(G\) with \(\mid\frac{G}{Z(G)}\mid=p^{3}\) (\(p\) a prime).Then \(\mid\operatorname{Cent}(G)\mid=\mid\operatorname{Cent}(H)\mid\) implies \(G\) is isoclinic with \(H\)._
Proof.: In view of [10, Lemma 2.1] and Proposition 3.1, \(\frac{H}{Z(H)}\cong\frac{HZ(G)}{Z(G)}\leq\frac{G}{Z(G)}\). In the present scenario using [10, Proposition 2.14], we have \(HZ(G)=G\) by noting that if \(\mid\frac{H}{Z(H)}\mid=p^{2}\), then \(\mid\operatorname{Cent}(H)\mid=p+2\). Hence by [26, Lemma 2.7], \(G\) is isoclinic with \(H\).
Combining Corollary 3.14 and [9, 31] it is easy to see that any two nilpotent \(n\in\{5,7,9\}\)-centralizer groups are isoclinic. For \(5\)-centralizer groups we have the following result:
**Proposition 3.18**.: _Any \(5\)-centralizer group \(G\) is isoclinic with \(G_{m}=\langle a,b\mid a^{3}=b^{2^{m}}=1,bab^{-1}=a^{-1}\rangle\), where \(m\geq 1\) or an extraspecial group of order \(27\)._
Proof.: In view of Theorem 2.4 without any loss we may assume that \(G\) is finite. Moreover, by [31, Theorem 3.5] we have \(\frac{\dot{G}}{Z(G)}\cong S_{3}\) or \(C_{3}\times C_{3}\). Now, if \(\frac{G}{Z(G)}\cong S_{3}\), then using [27, Corollary 2.2], we have \(G=G_{m}\times A\), where \(m\geq 1\) and \(A\) is an abelian group. Hence by Remark 2.10, \(G\) is isoclinic with \(G_{m}\). Again, if \(\frac{G}{Z(G)}\cong C_{3}\times C_{3}\), then by Proposition 3.13, we have the result.
Let \(p\) be a prime. The author in [4, Theorem 3.3] proved that if \(G\) is a finite \((p^{2}+2)\)-centralizer group of conjugate type \((p^{2},1)\) and two of the proper element centralizers are normal in \(G\), then \(\frac{G}{Z(G)}\) is elementary abelian of order \(p^{4}\). We conclude the paper with the following generalization of this result.
**Theorem 3.19**.: _Let \(G\) be any finite \((n+2)\)-centralizer group of conjugate type \((n,1)\). Then \(G\) is a CA-group and \(\frac{G}{Z(G)}\) is elementary abelian of order \(n^{2}\)._
Proof.: In view of Ito [22], without any loss we may assume that \(G\) is a \(p\)-group for some prime \(p\). Let \(X_{i}=C(x_{i}),1\leq i\leq n+1\) where \(x_{i}\in G\setminus Z(G)\). We have \(G=\underset{i=1}{\cup}X_{i}\) and \(\mid G\mid=\underset{i=2}{\sum}\mid X_{i}\mid\). In the present scenario, interchanging \(X_{i}\)'s and applying [15, Cohn's Theorem], we have \(G=X_{i}X_{j}\) and \(X_{i}\cap X_{j}=Z(G)\) for any \(1\leq i,j\leq n+1,i\neq j\). It is easy to verify that \(\mid\frac{\dot{G}}{Z(G)}\mid=n^{2}\) and \(G\) is a CA-group. Moreover, using [29, Proposition 2] we have \(\frac{G}{Z(G)}\) is elementary abelian.
## Acknowledgment
I would like to thank Prof. Mohammad Zarrin for carefully reading the manuscript and giving his valuable suggestions and comments on it.
|
2310.02924 | Quantum forgery attacks against OTR structures based on Simon's
algorithm | Classical forgery attacks against Offset Two-round (OTR) structures require
some harsh conditions, such as some plaintext and ciphertext pairs need to be
known, and the success probability is not too high. To solve these problems, a
quantum forgery attack on OTR structure using Simon's algorithm is proposed.
The attacker intercept the ciphertext-tag pair $(C,T)$ between the sender and
receiver, while Simon's algorithm is used to find the period of the tag
generation function in OTR, then we can successfully forge new ciphertext $C'$
($C'\ne C$) for intercepted tag $T$. For a variant of OTR structure
(Pr{/o}st-OTR-Even-Mansour structure), a universal forgery attack, in which it
is easy to generate the correct tag of any given message if the attacker is
allowed to change a single block in it, is proposed. It first obtains the
secret parameter L using Simon's algorithm, then the secret parameter L is used
to find the keys $k_1$ and $k_2$, so that an attacker can forge the changed
messages. It only needs several plaintext blocks to help obtain the keys to
forge any messages. Performance analysis shows that the query complexity of our
attack is $O(n)$, and its success probability is very close to 1. | Wenjie Liu, Mengting Wang, Zixian Li | 2023-10-01T15:16:43Z | http://arxiv.org/abs/2310.02924v1 | # Quantum forgery attacks against OTR structures based on Simon's algorithm
###### Abstract
Classical forgery attacks against Offset Two-round (OTR) structures require some harsh conditions, such as some plaintext and ciphertext pairs need to be known, and the success probability is not too high. To solve these problems, a quantum forgery attack on OTR structure using Simon's algorithm is proposed. The attacker intercept the ciphertext-tag pair \((C,T)\) between the sender and receiver, while Simon's algorithm is used to find the period of the tag generation function in OTR, then we can successfully forge new ciphertext \(C^{\prime}\) (\(C^{\prime}\neq C\)) for intercepted tag \(T\). For a variant of OTR structure (Prost-OTR-Even-Mansour structure), a universal forgery attack, in which it is easy to generate the correct tag of any given message if the attacker is allowed to change a single block in it, is proposed. It first obtains the secret parameter \(L\) using Simon's algorithm, then the secret parameter \(L\) is used to find the keys \(k_{1}\) and \(k_{2}\), so that an attacker can forge the changed messages. It only needs several plaintext blocks to help obtain the keys to forge any messages. Performance analysis shows that the query complexity of our attack is \(O(n)\), and its success probability is very close to \(1\).
OTR structure; Prost-OTR-Even-Mansour structure; Simon's algorithm; Quantum forgery attack. PACS Nos.: 03.67.Dd.
\({}^{*}\)Corresponding Author.
## 1 Introduction
In terms of cryptographic security research, the authentication encryption algorithm can realize the confidentiality and integrity verification of information at the same time, and it has been widely used in various network security systems. The authentication encryption working mode is a cryptographic scheme that encrypts messages to generate ciphertext and calculates authentication labels to solve practical problems such as privacy and authenticity of user information. At present, a large amount of information not only needs to be kept confidential during the transmission process, but also needs to be authenticated after the receiver receives the information to ensure the confidentiality, integrity and authenticity of the information during the transmission process. Therefore, it is very necessary to design and study the authentication encryption algorithm. The goal of the CAESAR competition is to identify reliable, efficient, secure, authenticated cryptographic algorithm combinations with unique properties for different application scenarios. A total of 57 algorithms were collected in the initial stage of this encryption competition.
Offset Two-round (OTR)[1] is an online, one-pass, authenticated encryption block cipher mode that can be processed in parallel for every two consecutive blocks. The OTR mode is similar in structure to the OCB mode[2], but the OTR mode only uses the forward function of the block cipher for encryption and decryption algorithms. Its instantiation using an AES block cipher with 128 key bits (called AES-OTR) has become a CAESAR candidate. The Prost-OTR authenticated encryption algorithm (v1.0/1.1) is also a CAESAR candidate submitted by Kavun _et al.[3, 4]_ inspired by OTR. It incorporates a newly designed efficient permutation, the Prost permutation. The Prost-OTR variant (Prost-OTR-Even-Mansour) uses Prost arrangement in a single-key Even-Mansour structure as block cipher[5]. The Even-Mansour method[6, 7, 8, 9, 10, 11] has been extensively studied, and it has been proven to be secure under different security concepts. Besides, detailed security level and key length bounds are also given, but it is inherently vulnerable to related key. OTR structure is an authentication encryption structure, which can ensure the confidentiality and integrity of information at the same time, and has certain research significance. There are also many scholars who are constantly studying forgery attacks on these two structures. Christoph _et al.[5]_ suggested that the related key properties constructed by Even-Mansour are not well covered by classical security concepts, and they can lead to powerful forgery attacks on Prost-OTR-Even-Mansour structure. Hassan _et al.[12]_ showed that some primitive polynomials can cause collisions between the masking coefficients used in the current instantiation, allowing forgery on OTR structure. However, as far as we know, these forgery attacks have harsh constraints, which makes the forgery scenario very strict.
On the other hand, in the quantum world, since the Shor algorithm[13] was proposed, it has been announced that quantum computers will pose a serious threat to public key cryptography. More and more researchers have begun to use quantum algorithms to crack symmetric cryptosystems, such as Simon's algorithm[14, 15, 16, 17, 18],
Grover's algorithm [19, 20] and Bernstein-Vazirani algorithm [21, 22]. In addition, they also proposed some new quantum algorithms [23, 24], and even extended classical cryptanalysis methods to the quantum domain [25, 26]. It is worth mentioning that there have also been new breakthroughs in the field of quantum image encryption. Feng _et al._[27] proposed an image encryption scheme based on the chaotic random behavior characteristics of Boson sampling (BS) probability distribution, which has achieved certain results in many cryptographic applications. Shi _et al._[28] proposed a novel quantum image encryption scheme based on quantum cellular neural network with quantum operations and hyper-chaotic systems, aiming to optimize security, computation complexity, and decrypted image definition. In 2021, Simon's algorithm was first used to break the 3-round Feistel construction [14] and proved that the Even-Mansour construction [15] is insecure with superposition queries. Inspired by them, Kaplan _et al._[17] showed several classical attacks based on finding collisions, which can be greatly accelerated using Simon's algorithm. Shi _et al._[18] also adopted a similar method to implement a collision attack on the authenticated encrypted AEZ in the CAESAR competition. Recently, according to the parallel and serial structure characteristics of AES-OTR algorithm in processing the associated data, Chang _et al._[29] constructed periodic function of associated data multiple times based on Simon's algorithm to forge associated data. But as far as we know, there is no quantum forgery attack method for OTR structure ciphertext, and existing quantum attack methods cannot solve this problem. In order to improve the success probability of classical forgery attack on OTR, a quantum forgery attack against OTR structure based on Simon's algorithm is proposed. We conducted quantum forgery attacks directly from the perspective of ciphertext, while Chang _et al._[29] did so from the perspective of correlated data. Therefore, although the final efficiency is the same, the cost of obtaining information is lower.
In this paper, first, Simon's algorithm is used to obtain the period of the tag generation function in OTR. When the period is obtained, we can forge new messages with known tags. In order to lower the threshold of forgery attack, we propose to conduct forgery attack from the perspective of ciphertext, which makes the attacker only need to intercept the information \((C,T)\) during the communication process between the sender and the receiver, and does not need to know the hard-to-obtain plaintext messages. In addition, for the Prost-OTR-Even-Mansour structure, we also propose an attack method that only requires some plaintexts to perform a universal forgery attack, which produce the correct ciphertext and tag for any specified message whose ciphertext and tag are not given. Although some conditions are relaxed but it can perform very powerful universal forgery attack. It obtains the secret parameter \(L\) using Simon's algorithm and then uses \(L\) to obtain keys \(k_{1}\) and \(k_{2}\). The attacker can calculate the tag value for any message, so this attack is the most thorough. The number of queries and the success probability of our quantum forgery attack are mainly reflected in the number of executions of Simon's algorithm and the success probability of finding periods. That is to say, our quantum
forgery attack is not only more realistic, but also has a high probability of success. In addition, the query complexity of our attack is O (\(n\)).
The rest of this paper is organized as follows. Sect.2 briefly introduces the OTR authentication encryption algorithm and the Simon's algorithm. Our quantum forgery attack on the OTR structure based on Simon's algorithm is introduced in Sect.3. Sect.4 specifically presents a universal forgery attack against the Prost-OTR-Even-Mansour structure. In Sect.5 we give the performance analysis of two attacks. Then the conclusions is given in Sect.6.
## 2 Preliminaries
### OTR structure
The OTR structure[1] accepts the following inputs, key \(K\in\{0,1\}^{|K|}\), the random number \(N\in\{0,1\}^{j}\) (\(1\leq j\leq N-1\)), associated data \(A\in\{0,1\}^{*}\) (a binary string of any finite length), plaintext messages \(M\in\{0,1\}^{*}\). And the outputs are ciphertext messages \(C\in\{0,1\}^{*}\) and tag \(T\in\{0,1\}^{\tau}\). The OTR structure divides the plaintext message \(M\) into multiple blocks, each containing two plaintext blocks. Then, each block is encrypted using two different masks. These two masks are doubled to get the other two masks for the next block, and so on. The encryption process of OTR removing the last group is shown in Fig. 1, and the special encryption process of the last group and the part of generating the label are shown in Fig. 2. The OTR algorithm is described in detail by Minematsu.[1]
The authentication token \(Tag\_OTR\) in the OTR is obtained by \(TE\) XOR \(TA\) (\(Tag\_OTR=TE\oplus TA\)). If the associated data \(A\) is equal to the empty string, then the final tag \(Tag\_OTR\) will be equal to \(TE\).
### Simon's algorithm
Simon's problem: Given a boolean function \(f:\{0,1\}^{n}\rightarrow\{0,1\}^{n}\), there exists \(s\in\{0,1\}^{n}\), such that \(f\left(x\right)=f\left(y\right)\) for all \((x,y)\in\{0,1\}^{2n}\), where \(x\oplus y\in\{0^{n},s\}\)
Figure 1: The OTR encryption process except for the last set of plaintext
and the goal is to find \(s\).
This problem can be solved by looking for collisions. So the best time to solve it is \(\Theta(2^{n/2})\). On the other hand, Simon's algorithm[16] solves this problem with quantum complexity \(O(n)\), and it repeats the following five steps.
1. Initialized with \(2n\) qubits \(\left|0\right\rangle\left|0\right\rangle\), one of the registers applies the Hadamard transformation \(\mathrm{H}^{\otimes n}\) to obtain a quantum superposition. \[\frac{1}{\sqrt{2^{n}}}\sum_{x\in\{0,1\}^{n}}\left|x\right\rangle\left|0\right\rangle\] (1)
2. A quantum query on a function \(f\) maps it to the state, \[\frac{1}{\sqrt{2^{n}}}\sum_{x\in\{0,1\}^{n}}\left|x\right\rangle\left|f(x)\right\rangle\] (2)
3. Measure the second register to get a value \(f(z)\) based on the calculation and fold the first register to the state, \[\frac{1}{\sqrt{2^{n}}}(\left|z\right\rangle+\left|z\oplus s)\right\rangle\] (3)
4. Applying the Hadamard transformation \(H^{\otimes n}\) again to the first register yields, \[\frac{1}{\sqrt{2}}\frac{1}{\sqrt{2^{n}}}\sum_{y\in\{0,1\}^{n}}(-1)^{y.z}(1+(- 1)^{y.z})\left|y\right\rangle\] (4)
5. The vectors \(y\) such that \(y\cdot s=1\) has an amplitude of \(0\). Therefore, measuring the state in the computational base yields a random vector \(y\) such that \(y\cdot s=0\).
By repeating this subroutine \(O(n)\) times, one obtains \(n-1\) independent vectors with high probability orthogonal to \(s\), which can be recovered with basic linear algebra.
Figure 2: The encryption process of the last set of plaintext and the label generation process by OTR
## 3 Quantum Forgery Attack on OTR Using Simon's Algorithm
Our attack excludes the interference of the associated data, that is, setting the associated data equal to the empty string. Now, according to the label calculation formula \(T=TE\), we found that if \(d\geq 4\), then Simon's algorithm can be used to find the period value of \(TE\). Because of some characteristics of OTR tail processing, the forgery results are different when \(d=4\) and \(d>4\). So, we discuss it in two cases below. First let's look at the general case, if \(d>4\), then according to the OTR encryption algorithm, we can know that,
\[TE=E\left(3L^{*}\oplus\delta\oplus M[2]\oplus M[4]\oplus...\oplus M[d]\right) \tag{5}\]
where \(L^{*}=L\oplus\delta\). When \(d>4\), suppose \(|M[\text{d}]|=n\), then
\[M[2]=E\left(4\delta\oplus E\left(5\delta\oplus C[1]\right)\oplus C[2]\right) \oplus C[1] \tag{6}\]
\[M[4]=E\left(8\delta\oplus E\left(9\delta\oplus C[3]\right)\oplus C[4]\right) \oplus C[3] \tag{7}\]
By substituting Eq.6 and Eq.7 into Eq.5, we can get the relevant formulas of the ciphertext and \(TE\), as follows,
\[\begin{split} TE=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(5 \delta\oplus C[1])\oplus C[2])\oplus C[1]\oplus\\ E(8\delta\oplus E(9\delta\oplus C[3])\oplus C[4])\oplus C[3] \oplus...\oplus M[d])\end{split} \tag{8}\]
By Eq.8, we find that whether \(d\) is odd or even and whether the length of the last block of plaintext is \(n\), our falsification is only related to \(C[1],C[2],C[3],C[4]\), so we assume \(d=5\), then
\[\begin{split} TE=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(5 \delta\oplus C[1])\oplus C[2])\oplus C[1]\oplus\\ E(8\delta\oplus E(9\delta\oplus C[3])\oplus C[4])\oplus C[3] \oplus E(16\delta)\oplus C[5])\end{split} \tag{9}\]
We define the following function:
\[\begin{split} f_{a}:\{0,1\}^{n}&\rightarrow\{0,1\}^{ n}\\ x&\to Tag\_OTR(x||x\oplus\theta,\alpha||\beta)\\ &=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(5\delta\oplus x) \oplus\alpha)\oplus E(8\delta\oplus E(9\delta\oplus x\oplus\theta)\oplus \beta)\oplus\theta\oplus E(16\delta)\oplus C[5])\end{split} \tag{10}\]
where \(\theta=C[1]\oplus C[3]\), and \(\alpha\),\(\beta\) (\(C[2]\),\(C[4]\)) are constants. We only need one call to the cryptographic oracle to complete the construction of the function \(f_{a}\). A quantum circuit is built for \(f\), as shown in Fig. 3. We find that \(f_{a}\) satisfies the requirements of Simon's algorithm. It is obvious to see that \(f_{a}\left(x\right)=f_{a}\left(x\oplus s\right)\) with \(s=13\delta\oplus\theta\).
\[\begin{split} f_{a}(x)=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E (5\delta\oplus x)\oplus\alpha)\oplus E(8\delta\\ \oplus E(9\delta\oplus x\oplus\theta)\oplus\beta)\oplus\theta \oplus E(16\delta)\oplus C[5])\end{split} \tag{11}\]
\[f_{a}(x\oplus s) =E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(5\delta\oplus x\oplus s )\oplus\alpha)\oplus E(8\delta\oplus E(9\delta\oplus x\oplus s\oplus\theta) \tag{12}\] \[\oplus\beta)\oplus\theta\oplus E(16\delta)\oplus C[5])\] \[=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(5\delta\oplus x \oplus 13\delta\oplus\theta)\oplus\alpha)\oplus E(8\delta\oplus E(9\delta\oplus x \oplus 13\delta\oplus\theta\oplus\theta)\] \[\oplus\beta)\oplus\theta\oplus E(16\delta)\oplus C[5])\] \[=E(3L^{*}\oplus\delta\oplus E(4\delta\oplus E(9\delta\oplus x \oplus\theta)\oplus\alpha)\oplus E(8\delta\oplus E(5\delta\oplus x)\oplus \beta)\oplus\theta\oplus E(16\delta)\oplus C[5])\] \[=f_{a}(x)\]
We set \(C[1]\) and \(C[3]\) as two constants \(\chi\) and \(\varpi\), then \(E\left(5\delta\oplus C[1]\right)\) and \(E\left(9\delta\oplus C[3]\right)\) also as two constants (both are set \(\mu\)) and do not affect the period value. The function can also be defined through the tag generation function,
\[f_{b}:\left\{0,1\right\}^{n} \rightarrow\left\{0,1\right\}^{n}\] (13) \[x \rightarrow Tag_{o}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!
communication process.
When \(d=4\), because of the special handling of parity at the end of the OTR, the function will be slightly different from the function design of \(d>4\). Now,
\[TE=E\left(26\delta\oplus E\left(4\delta\oplus E\left(5\delta\oplus C[1]\right) \oplus C[2]\right)\oplus C[1]\oplus E\left(8\delta\oplus E\left(9\delta\oplus C [4]\right)\oplus C[3]\right)\oplus C[4]\right) \tag{16}\]
It is not difficult to find that when \(d=4\), the positions of \(C[4]\) and \(C[3]\) are swapped in the function. It can be deduced that when \(d>4\), the tag of \(C=C[1]||C[2]||C[3]||C[4]||...||C[d]\) is also works for \(C^{\prime}=13\delta\oplus C[4]||12\delta\oplus C[3]||12\delta\oplus C[2]||13 \delta\oplus C[1]|.........||C[d]\). It is worth noting that due to space limitations, the forgery of \(d\leq 4\) is not introduced in this example. And in the encryption process, due to the limitation of the calculation structure of the tag, the OTR structure is not suitable using Simon's algorithm for quantum forgery attack when \(d\leq 4\).
## 4 Quantum Forgery Attack on Prost-OTR-Even-Mansour
The Prost-OTR authenticated encryption algorithm is a CAESAR candidate submitted by Kavun et al. [3, 4], which is inspired by OTR and combines new efficient permutations. Its variant uses the Prost permutation in the Even Mansour structure for cryptographic operations, and we call this variant the Prost-OTR-Even-Mansour structure. As shown in the Fig. 4, because the associated data has no effect on our attack, we assume that the associated data \(A\) is an empty string, and we won't introduce it here.
Prost-OTR-Even-Mansour still retains the special calculation method of the OTR structure for the tag, that is, the calculation method of the tag is affected by the parity of \(d\) and the length of the last plaintext block. For convenience, we first
Figure 4: Core part of the Prost-OTR-Even Mansour structure
assume that \(|M[d]|=n\), then
\[TE=k_{2}\oplus P\left(\Sigma\oplus\left(3\left(2^{\left\lceil d/2\right\rceil+1}+ 1\right)+1\right)L\oplus k_{1}\right). \tag{17}\]
where \(\Sigma=M[2]\oplus M[4]\oplus...\oplus M[d]\). Note that almost all forgery attacks allow a given message \(M\) to be known to the forger. Through Eq.17, we find that if the parameter \(L\) can be known, Simon's algorithm can be used to carry out a very powerful universal forgery attack on this structure. The following method is proposed by us to obtain the parameter \(L\).
First we intercept the plaintext \(d=2\) and \(4\). If \(d=2\), we can get Eq.18, and if \(d=4\), then we can get Eq.19.
\[TE=k_{2}\oplus P\left(\mathrm{M}[2]\oplus 16L\oplus k_{1}\right) \tag{18}\]
\[TE=k_{2}\oplus P\left(\mathrm{M}[2]\oplus\mathrm{M}[4]\oplus 26L\oplus k_{1}\right) \tag{19}\]
Next, we set \(M[2]\) as the input \(x\) and \(M[4]\) as the constant \(c\), then we can construct the function \(f_{c}\):
\[\begin{split} f_{c}:\left\{0,1\right\}^{n}& \rightarrow\left\{0,1\right\}^{n}\\ x&\to Tag\_OTR\left(x\right)\oplus Tag\_OTR \left(x||c\right)\\ &=P\left(x\oplus 16L\oplus k_{1}\right)\oplus P\left(x\oplus c\oplus 2 6L\oplus k_{1}\right)\end{split} \tag{20}\]
Now, we can apply Simon algorithm on this function \(f_{c}\). It is obvious to see that \(f_{c}\left(x\right)=f_{c}\left(x\oplus s\right)\) with \(s=c\oplus 10L\).
\[f_{c}\left(x\right)=P\left(x\oplus 16L\oplus k_{1}\right)\oplus P\left(x\oplus c \oplus 26L\oplus k_{1}\right) \tag{21}\]
\[\begin{split} f_{c}\left(x\oplus s\right)&=P\left(x \oplus s\oplus 16L\oplus k_{1}\right)\oplus P\left(x\oplus s\oplus c \oplus 26L\oplus k_{1}\right)\\ &=P\left(x\oplus c\oplus 10L\oplus 16L\oplus k_{1}\right)\oplus P \left(x\oplus c\oplus 10L\oplus c\oplus 26L\oplus k_{1}\right)\\ &=P\left(x\oplus c\oplus 26L\oplus k_{1}\right)\oplus P\left(x \oplus 16L\oplus k_{1}\right)\\ &=f_{c}\left(x\right)\end{split} \tag{22}\]
We can find the parameters \(L\) through \(10L=s\oplus c\). Now, \(\Sigma\oplus\left(3\left(2^{\left\lceil d/2\right\rceil-1}\right)+1\right)L\) can be used as input \(x\). \(f_{d}\) can be constructed as,
\[\begin{split} f_{d}:\left\{0,1\right\}^{n}& \rightarrow\left\{0,1\right\}^{n}\\ x&\to Tag\_OTR\left(x\right)\oplus P\left(x \right)=k_{2}\oplus P\left(x\oplus k_{1}\right)\oplus P\left(x\right)\end{split} \tag{23}\]
where \(P\) is the Prost permutation operation, which is public. When \(s=k_{1}\), \(f_{d}\left(x\right)=f_{d}\left(x\oplus s\right)\) can be obtained by Simon's algorithm. Knowing the key \(k_{1}\), we can calculate the key \(k_{2}\) in the following way.
\[k_{2}=P\left(x\oplus k_{1}\right)\oplus Tag\_OTR\left(x\right)=P\left(x\oplus k _{1}\right)\oplus k_{2}\oplus P\left(x\oplus k_{1}\right) \tag{24}\]
Now that we have the key \(k_{1}\) and \(k_{2}\), and we can perform a universal forgery attack, in which the attacker can calculate the ciphertext (\(C\)) and tag value (\(T\)) for
any message. So that the attacker can send \((C,T)\) of a forge message to the receiver, and the receiver cannot identify the sender. Also when \(d\) is an even number and \(|M[d]|=n\), we find that the parity affects the calculation method of \(\Sigma\) (see Fig. 2), but this does not affect our input and label calculation method. It is also feasible to use Simon's algorithm to attack. For the length of the last block of plaintext, OTR adopts pad zero-filling method, which also has no effect on the feasibility of our attack. To sum up, regardless of the value of \(d\) and the length of the last block of plaintext, we can perform a universal forgery attack.
## 5 Performance analysis
In this section, we analyze the performance of the two attacks proposed in Sect.3 (Attack 1) and Sect.4 (Attack 2) for two different structures. Since the attacks only require several blocks of ciphertext or plaintext (\(d>4\)), its contribution on performance can be ignored. Therefore, our analysis mainly includes success probability, query complexity and qubit number.
(1) Attacks analysis on OTR structure
Zheng's attack [30] proposed a forgery attack method in the case of only knowing one pair of plaintext-ciphertext (Zheng's attack 1) and multiple pairs of plaintext-ciphertext (Zheng's attack 2) for the OTR structure. They require \(O(n^{2})\) queries and \(O(n^{r})\) queries, respectively. The success probability of two situations is \(r^{2}2^{-n}\) and \(s^{2}(s+1)2^{r-n}\) respectively, where \(r=(d-1)/2\), \(n\) is the size of the block and \(s+1\) is the number of known plaintext and ciphertext pairs [30]. It can be seen that \(2^{n}\) (\(n\) is usually taken as 128) is a very large number, so their success probability has room for improvement.
Figure 5: Success probability curve
Attack 1 only needs to know some ciphertexts that are about to be forged. As an application of Simon's algorithm, Attack 1's success probability may be slightly lower than that of the strict Simon's algorithm, since the OTR structure does not necessarily meet the strict Simon's problem. In 2019, Shi _et al._ proved that even if the condition of Simon's algorithm is not strictly satisfied, it will still return the correct result with \(cn\) queries, with probability \(P\geq 1-2^{n}\times(0.6454)^{cn}\)[31]. If \(c\geq 4\), then the success probability of Attack 1 is
\[\begin{array}{l}P=1-2^{n}\times(0.6454)^{cn}\\ =1-2^{-n(-c\log_{2}(0.6454)-1)}\\ \approx 1-2^{-n(0.6317c-1)}\\ \geq 1-2^{-n},\end{array} \tag{25}\]
which easily approaches 1 and does not depend on the query complexity[31]. The success probability curve is shown in the Fig. 5. Therefore, its query complexity is \(cn=O(n)\), and the qubit number is \(O(n)\). The attack comparison of OTR is shown in Table 1.
As shown in Table 1, Zheng's attacks 1 and 2 need plaintext-ciphertext pairs, while Attack 1 only needs some ciphertexts, so the scenario of Attack 1 is more realistic. Attack 1 also has the lowest query complexity, \(O(n)\), and its success probability is higher than that of Zheng's attacks 1 and 2.
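As a quick numerical illustration of the bound in (25), the following Python snippet evaluates \(1-2^{n}\times(0.6454)^{cn}\) for the typical block size \(n=128\) and a few values of \(c\).

```python
import math

def success_lower_bound(n: int, c: int) -> float:
    """Lower bound 1 - 2^n * 0.6454^(c*n) on the success probability, computed via log2."""
    exponent = n + c * n * math.log2(0.6454)      # log2 of the failure term 2^n * 0.6454^(cn)
    return 1.0 - 2.0 ** exponent

for c in (4, 5, 6):
    print(c, success_lower_bound(128, c))
# For n = 128 and c >= 4 the failure term is below 2^(-195), so the printed values
# are indistinguishable from 1 in double precision.
```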
(2) Attacks analysis on Prost-OTR-Even-Mansour structure
Christoph's attack [5] requires the ciphertext and tag of any two messages under two related keys; it can then forge and modify the ciphertext and tag of a message. Their attack is a selective forgery attack with success probability \(2^{-n/2}\) [32]. Its query complexity is not discussed because it is a related-key attack; the query complexity of Attack 2 is not high either, namely \(O(n)\). Attack 2 only needs some plaintexts, and the attacker can forge all messages, which makes it a universal forgery attack. Like Attack 1, Attack 2 requires \(O(n)\) queries and succeeds with probability \(1-2^{n}\times(0.6454)^{cn}\). That is to say, our quantum forgery attack not only relies on a weaker attack scenario but also has a higher success probability. The attack comparison for Prost-OTR-Even-Mansour is shown in Table 2.
\begin{table}
\begin{tabular}{c c c c} \hline Attack & Plaintext(P)/Ciphertext(C) & Query & Success probability \(P\) \\ \hline Zheng’s attack 1[30] & P+C & \(O(n^{2})\) & \(r^{2}2^{-n}\) \\ Zheng’s attack 2[30] & P+C & \(O(n^{r})\) & \(s^{2}(s+1)2^{r-n}\) \\ Attack 1 & C & \(O(n)\) & \(1-2^{n}\times(0.6454)^{cn}\) \\ \hline \end{tabular}
\end{table}
Table 1: Attack comparison of OTR.

## 6 Conclusion

In this paper, based on Simon's algorithm, two forgery attacks (a quantum forgery attack on the OTR structure and a quantum forgery attack on the Prost-OTR-Even-Mansour structure) are proposed. The former attack is based on ciphertexts: the attacker does not know the content of the messages but can interfere with message transmission. Different from the former, the latter attack is a universal forgery attack based on several plaintext blocks: the attacker can obtain the content of the messages and forge messages purposefully. In view of this, we believe that quantum forgery attacks also hold great promise against other authenticated encryption modes. However, if the attacker can only make classical queries, our quantum forgery attack loses its premise. Therefore, we consider using quantum algorithms offline to improve the efficiency of breaking authenticated encryption modes, which will be one of our future research directions.
## Acknowledgments
This work is supported by the National Natural Science Foundation of China (62071240), the Innovation Program for Quantum Science and Technology (2021ZD0302902), and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
|
2307.06155 | Relative Fractional Independence Number and Its Applications | We define the relative fractional independence number of a graph $G$ with
respect to another graph $H$, as
$$\alpha^*(G|H)=\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)},$$
where the maximum is taken over all graphs $W$, $G\boxtimes W$ is the strong
product of $G$ and $W$, and $\alpha$ denotes the independence number. We give a
non-trivial linear program to compute $\alpha^*(G|H)$ and discuss some of its
properties. We show that $\alpha^*(G|H)\geq \frac{X(G)}{X(H)} \geq
\frac{1}{\alpha^*(H|G)},$ where $X(G)$ can be the independence number, the
zero-error Shannon capacity, the fractional independence number, the Lov\'{a}sz
number, or the Schrijver's or Szegedy's variants of the Lov\'{a}sz number of a
graph $G$. This inequality is the first explicit non-trivial upper bound on the
ratio of the invariants of two arbitrary graphs, as mentioned earlier, which
can also be used to obtain upper or lower bounds for these invariants. As
explicit applications, we present new upper bounds for the ratio of the
zero-error Shannon capacity of two Cayley graphs and compute new lower bounds
on the Shannon capacity of certain Johnson graphs (yielding the exact value of
their Haemers number). Moreover, we show that $\alpha^*(G|H)$ can be used to
present a stronger version of the well-known No-Homomorphism Lemma. | Sharareh Alipour, Amin Gohari, Mehrshad Taziki | 2023-07-12T13:27:37Z | http://arxiv.org/abs/2307.06155v4 | # Relative Fractional Independence Number and Its Applications
###### Abstract
We define the relative fractional independence number of two graphs, \(G\) and \(H\), as
\[\alpha^{*}(G|H)=\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)},\]
where the maximum is taken over all graphs \(W\), \(G\boxtimes W\) is the strong product of \(G\) and \(W\), and \(\alpha\) denotes the independence number. We give a non-trivial linear program to compute \(\alpha^{*}(G|H)\) and discuss some of its properties. We show that
\[\alpha^{*}(G|H)\geq\frac{X(G)}{X(H)},\]
where \(X(G)\) can be the independence number, the zero-error Shannon capacity, the fractional independence number, the Lovasz number, or the Schrijver's or Szegedy's variants of the Lovasz number of a graph \(G\). This inequality is the first explicit non-trivial upper bound on the ratio of the invariants of two arbitrary graphs, as mentioned earlier, which can also be used to obtain upper or lower bounds for these invariants. As explicit applications, we present new upper bounds for the ratio of the zero-error Shannon capacity of two Cayley graphs and compute new lower bounds on the Shannon capacity of certain Johnson graphs (yielding the exact value of their Haemers number). Moreover, we show that the relative fractional independence number can be used to present a stronger version of the well-known No-Homomorphism Lemma. The No-Homomorphism Lemma is widely used to show the non-existence of a homomorphism between two graphs and is also used to give an upper bound on the independence number of a graph. Our extension of the No-Homomorphism Lemma is computationally more accessible than its original version.
## 1 Introduction and related works
### Preliminaries
Let \(G\) be a finite, undirected graph without a loop or multiple edges. The set \(\mathcal{V}(G)\) denotes the vertex set of \(G\), and \(\mathcal{E}(G)\) denotes the edge set of \(G\). A set \(\mathcal{S}\subset\mathcal{V}(G)\) is said to be an independent set of \(G\) if the induced graph on \(\mathcal{S}\) has no edges, _i.e.,_\(uv\notin\mathcal{E}(G)\) for every \(u,v\in\mathcal{S}\). Let \(\alpha(G)\) be the cardinality of the largest independent set of \(G\). The quantity \(\alpha(G)\) is called the _independence number_ (or the _packing number_) of a graph \(G\). Computing \(\alpha(G)\) is an NP-hard problem [10].
The strong product of two graphs, \(G\boxtimes H\), is a graph whose vertex set is the Cartesian product of the vertex sets of \(G\) and \(H\). Distinct vertices \((u,u^{\prime})\) and \((v,v^{\prime})\) are adjacent in \(G\boxtimes H\) if and only if either \(u=v\) and \(u^{\prime}v^{\prime}\in\mathcal{E}(H)\), or \(uv\in\mathcal{E}(G)\) and \(u^{\prime}=v^{\prime}\), or \(uv\in\mathcal{E}(G)\) and \(u^{\prime}v^{\prime}\in\mathcal{E}(H)\).
The zero-error Shannon capacity (or the Shannon number) of a graph \(G\) is defined as
\[\mathscr{C}(G)=\lim_{n\to\infty}\alpha(G^{n})^{\frac{1}{n}}, \tag{1}\]
where \(G^{n}\) is the strong graph product of \(G\) with itself \(n\) times. It is known that the above limit exists and furthermore \(\mathscr{C}(G)\geq\alpha(G^{n})^{\frac{1}{n}}\) for every \(n\geq 1\). In general, computing the exact value of \(\mathscr{C}(G)\) is a challenging problem, even for simple graphs. Shannon number is an essential concept in information theory (See [1]). Lovasz proved that \(\mathscr{C}(C_{5})=\sqrt{5}\), where \(C_{5}\) is a cycle of length \(5\). However \(\mathscr{C}(C_{7})\) is not known. For more results on the Shannon capacities of odd cycles, see [1, 2]. Alon and Lubetzky in [1] show that the series of independence numbers in strong powers of a fixed graph can exhibit a complex structure, implying that the Shannon Capacity of a graph cannot be approximated (up to a sub-polynomial factor of the number of vertices) by any arbitrarily large, yet fixed, prefix of the series of independence numbers in strong powers. Nonetheless, various upper bounds on the Shannon capacity are obtained in the literature. As an example, note that
\[\alpha(G^{n})=\prod_{i=1}^{n}\frac{\alpha(G^{i})}{\alpha(G^{i-1})}=\prod_{i=1 }^{n}\frac{\alpha(G\boxtimes G^{i-1})}{\alpha(G^{i-1})}\leq\left[\sup_{W}\frac {\alpha(G\boxtimes W)}{\alpha(W)}\right]^{n} \tag{2}\]
where \(G^{0}\) is a trivial graph with one vertex, and the supremum is taken over all graphs \(W\). Letting
\[\alpha^{*}(G)=\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(W)} \tag{3}\]
we deduce \(\mathscr{C}(G)\leq\alpha^{*}(G)\). Hales in [1] showed that the supremum in (3) is a maximum and moreover, \(\alpha^{*}(G)\) is equal to the _fractional independence number_ of a graph \(G\), which is defined via a linear program as follows:
\[\max\sum_{v\in\mathcal{V}(G)}w_{v} \tag{4}\]
where the maximum is over all weights \(w_{v}\geq 0\) such that \(\sum_{v\in\mathcal{S}}w_{v}\leq 1\) for every clique \(\mathcal{S}\subset\mathcal{V}(G)\). It is known that \(\alpha^{*}(G)\) is also equal to the fractional chromatic number of \(G^{c}\). The fractional independence number is strictly larger than \(\mathscr{C}(G)\) for some graphs \(G\). There are also other upper bounds on \(\mathscr{C}(G)\), such as the Lovasz number of a graph [15] or the Haemers number [1]. An upper bound on the Shannon capacity of a graph via a linear programming variation is given in [10]. In [1], a fractional version of the Haemers bound is presented, and it is shown that this fractional version outperforms the Haemers bound. In general, it is challenging to compute the exact values of the Lovasz number, the Haemers number, and the fractional Haemers number of a given graph \(G\). However, for some special graphs, such as cycle graphs, Kneser graphs, and Johnson graphs with certain parameters, these values have been computed exactly [1, 2]. The Lovasz number can be formulated as a semidefinite program and numerically approximated by the ellipsoid method in time bounded by a polynomial in the number of vertices of \(G\) [1]. The Lovasz number can be used as an approximation of the independence number of sparse graphs (see [1]).
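For concreteness, the following Python sketch (the graph representation and helper names are ours) builds the strong product of two small graphs given as adjacency dictionaries and computes the independence number by brute force; it recovers \(\alpha(C_{5}\boxtimes C_{5})=5\), hence the lower bound \(\mathscr{C}(C_{5})\geq\sqrt{5}\) mentioned above. The search is exponential and is only meant for very small instances.

```python
import math

def cycle(n):
    """Adjacency sets of the cycle C_n on vertices 0..n-1."""
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}

def strong_product(G, H):
    """Strong product of two graphs given as {vertex: set_of_neighbours} dictionaries."""
    V = [(u, v) for u in G for v in H]
    adj = {x: set() for x in V}
    for (u1, v1) in V:
        for (u2, v2) in V:
            if (u1, v1) != (u2, v2) and (u1 == u2 or u2 in G[u1]) and (v1 == v2 or v2 in H[v1]):
                adj[(u1, v1)].add((u2, v2))
    return adj

def independence_number(adj):
    """Exact maximum independent set size via simple branching (only for small graphs)."""
    def mis(active):
        if not active:
            return 0
        v = max(active, key=lambda x: len(adj[x] & active))
        if not adj[v] & active:                     # v has no neighbour left: always take it
            return 1 + mis(active - {v})
        return max(mis(active - {v}),               # branch: either drop v ...
                   1 + mis(active - {v} - adj[v]))  # ... or take v and drop its neighbours
    return mis(frozenset(adj))

C5 = cycle(5)
a = independence_number(strong_product(C5, C5))
print(a, math.sqrt(a))   # expected: 5 2.236..., so C(C_5) >= alpha(C_5^2)^(1/2) = sqrt(5)
```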
### Our contribution
The relaxation in equation (2) yields a weak bound in general because the left-hand side considers the expression \(\alpha(G\boxtimes W)/\alpha(W)\) when \(W=G^{i}\) whereas the right-hand side takes a maximum over _all_ auxiliary graphs \(W\). We consider two graphs, \(G\) and \(H\), to obtain other bounds and tighten this relaxation. Instead of writing individual upper bounds on the Shannon number of \(G\) and \(H\)
we write a bound on the ratio of their capacities as follows:
\[\frac{\mathscr{C}(G)}{\mathscr{C}(H)} =\lim_{n\to\infty}\left(\frac{\alpha(G^{n})}{\alpha(H^{n})}\right)^ {\frac{1}{n}} \tag{5}\] \[=\lim_{n\to\infty}\left(\prod_{i=1}^{n}\frac{\alpha(G^{i}\boxtimes H ^{n-i})}{\alpha(G^{i-1}\boxtimes H^{n-i+1})}\right)^{\frac{1}{n}}\] \[=\lim_{n\to\infty}\left(\prod_{i=1}^{n}\frac{\alpha(G\boxtimes W_{ i})}{\alpha(H\boxtimes W_{i})}\right)^{\frac{1}{n}}\] (6) \[\leq\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)} \tag{7}\]
where in (6) we set \(W_{i}=G^{i-1}\boxtimes H^{n-i}\) and the supremum in (7) is taken over all graphs \(W\).1
Footnote 1: The expansion used here mimics the following one widely used in network information theory (and in particular in information-theoretic security). Assuming \(I(\cdot;\cdot)\) is Shannon’s mutual information, for arbitrarily distributed random variables \((M,Y_{1},Y_{2},\cdots,Y_{n},Z_{1},Z_{2},\cdots,Z_{n})\) we have
\[I(M;Y^{n})-I(M;Z^{n})=\sum_{i=1}^{n}\left(I(M;Y^{i},Z_{i+1}^{n})-I(M;Y^{i-1},Z _{i}^{n})\right)=\sum_{i=1}^{n}\left(I(M;Y_{i}|Y^{i-1},Z_{i+1}^{n})-I(M;Z_{i} |Y^{i-1},Z_{i+1}^{n})\right)\]
The above expression is of the form \(I(M;Y_{i}|T_{i})-I(M;Z_{i}|T_{i})\) where \(T_{i}=Y^{i-1},Z_{i+1}^{n}\). It is then common in network information theory to bound the term \(I(M;Y_{i}|T_{i})-I(M;Z_{i}|T_{i})\) from above by relaxing the structure of \(T_{i}\) and computing the maximum of \(I(M;Y_{i}|T)-I(M;Z_{i}|T)\) over a wide class of auxiliary random variables \(T\).
Let us introduce the following notation for the term on the right-hand side of (7). Given two graphs \(G\) and \(H\), define
\[\alpha^{*}(G|H)\triangleq\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H \boxtimes W)}, \tag{8}\]
where the supremum is over all graphs \(W\). For the special case when \(H\) is a graph with just one vertex, \(\alpha^{*}(G|H)\) reduces to \(\alpha^{*}(G)\), the _fractional independence number_ (or the _Rosenfeld number_) of a graph \(G\). For this reason, we name \(\alpha^{*}(G|H)\) the _relative fractional independence number_ of graphs \(G\) and \(H\).
Having introduced the notion of relative fractional independence number, we present our results about this quantity, its applications, and its relation to other graph invariants. Throughout the rest of this paper, we explore the details of this concept.
In particular, we show that the supremum in (7) is a maximum attained by a finite graph \(W\). We also give a computable characterization of \(\alpha^{*}(G|H)\) in terms of a linear program. While the linear program generalizes the one in (4), its form is non-trivial and more involved than the one given in [1] (the linear program is based on the concept of pseudo-homomorphism-inverse defined in this paper). To the best of our knowledge, this characterization leads to the first explicit non-trivial upper bound on the ratio of the zero-error capacity of two arbitrary graphs, as discussed later in Section 3.3. Next, we discuss the relation of \(\alpha^{*}(G|H)\) with other invariants defined on graphs, such as the Lovasz number of a graph [13]. For a given graph \(G\), using the fractional independence number of \(G\) and another graph \(H\), we can give upper or lower bounds on the value of these invariants, or we can even compute some invariants of a graph that were unknown before. Moreover, while the typical approach to proving lower bounds on graphs' independence number (or the Shannon number) is by explicitly exhibiting an independent set, \(\alpha^{*}(G|H)\) provides an indirect approach to proving such lower bounds. In Section 3.4, we present a stronger version of the
No-Homomorphism Lemma utilizing the relative fractional independence number. Determining whether there exists a homomorphism between two graphs is a fundamental problem in graph theory. The No-Homomorphism Lemma is a well-known tool in graph theory used to demonstrate the non-existence of a homomorphism between two graphs. We show that there are cases where the original version of the lemma fails to establish the non-existence of a homomorphism, whereas our new version is capable of doing so.
**Notation:** Throughout this paper, we use capital letters such as \(G,H\), and \(W\) to denote graphs (finite, undirected graphs with no loops or multiple edges). As mentioned, \(\mathcal{V}(G)\) and \(\mathcal{E}(G)\) denote the vertex and edge set of \(G\). The complement of a graph \(G\), denoted by \(G^{c}\), is a graph with the same vertices as in \(G\), such that two distinct vertices of \(G^{c}\) are adjacent if and only if they are not adjacent in \(G\). We use \(C_{k}\) to denote a cycle graph of length \(k\). We show sets in calligraphic letters. We use the lowercase letters to either denote vertices of a graph (as in \(u,v,v_{1},v_{2},x,y\)) or real numbers (as in \(w_{1},w_{2},...\)). The bold letter \(\mathbf{w}\) denotes a vector of real numbers. For graphs \(G_{1}\) and \(G_{2}\), \(G=G_{1}+G_{2}\) is the graph obtained from the disjoint union between \(G_{1}\) and \(G_{2}\) by adding the edges \(\{xy:x\in\mathcal{V}(G_{1}),y\in\mathcal{V}(G_{2})\}\).
## 2 Characterizations and properties of \(\alpha^{*}(G|H)\)
This section discusses characterizations and properties of \(\alpha^{*}(G|H)\). Applications of \(\alpha^{*}(G|H)\) are discussed in Section 3.
### Characterizations of \(\alpha^{*}(G|H)\)
In order to provide our characterizations of \(\alpha^{*}(G|H)\), we define the notion of a _pseudo-homomorphism-inverse_.
**Definition 1**.: Given a graph \(G\), we use \(\mathcal{I}(G)\) to denote the set of all independent sets of the graph \(G\). We include the empty set in \(\mathcal{I}(G)\). Next, we say that \(\mathcal{S},\mathcal{T}\subset\mathcal{V}(G)\) are disconnected in \(G\) if \(\mathcal{S}\cap\mathcal{T}=\emptyset\) and there is no \(u\in\mathcal{S}\) and \(v\in\mathcal{T}\) such that \(uv\in\mathcal{E}(G)\). For example, if \(G=C_{7}\), with vertex set \(\{v_{1},\ldots,v_{7}\}\) and \(\mathcal{S}=\{v_{1},v_{3}\}\) and \(\mathcal{T}=\{v_{5}\}\), then \(\mathcal{S}\) and \(\mathcal{T}\) are disconnected.
A homomorphism \(\mathsf{g}:H\to G\) from a graph \(H\) to a graph \(G\) is a map \(\mathsf{g}:\mathcal{V}(H)\to\mathcal{V}(G)\) such that \(uv\in\mathcal{E}(H)\) implies \(\mathsf{g}(u)\mathsf{g}(v)\in\mathcal{E}(G)\). Assume that a homomorphism \(\mathsf{g}\) from \(H\) to \(G\) exists. Then, for any vertex \(w\in\mathcal{V}(G)\), the set \(\{v\in\mathcal{V}(H):\mathsf{g}(v)=w\}\) must be an independent set in \(H\) (empty set is considered to be an independent set). Thus, the "inverse" of the homomorphism is a mapping \(\mathsf{g}^{-1}:\mathcal{V}(G)\to\mathcal{I}(H)\) with the following properties:
1. \(\mathsf{g}^{-1}(v_{1})\) and \(\mathsf{g}^{-1}(v_{2})\) are disconnected in \(H\) if there is no edge between \(v_{1}\) and \(v_{2}\) in \(G\),
2. The sets \(\{\mathsf{g}^{-1}(v)\}\) for \(v\in\mathcal{V}(G)\) form a partition of \(\mathcal{V}(H)\).
Given two arbitrary graphs, \(H\) and \(G\), a homomorphism from \(H\) to \(G\) may not exist. Nonetheless, we can define a _pseudo-homomorphism-inverse_ by relaxing the condition (ii) above:
**Definition 2**.: Given two arbitrary graphs \(H\) and \(G\), a mapping \(f:\mathcal{V}(G)\to\mathcal{I}(H)\) is called a pseudo-homomorphism-inverse for graphs \(G\) and \(H\) provided that \(f(v_{1})\) and \(f(v_{2})\) are disconnected in \(H\) if there is no edge between \(v_{1}\) and \(v_{2}\) in \(G\). Let \(\mathcal{F}(H,G)\) be the class of all pseudo-homomorphism-inverses from \(G\) to \(H\).
Now, we are ready to present our main theorem.
**Theorem 1**.: _Assume that \(G\) is a graph with \(k\) vertices and \(\mathcal{V}(G)=\{v_{1},v_{2},\cdots,v_{k}\}\). Then_
1. _We have_ \[\alpha^{*}(G|H)=\max_{\mathbf{w}}\sum_{i=1}^{k}w_{i},\] (9) _where the maximization over_ \(\mathbf{w}=(w_{1},w_{2},\cdots,w_{k})\) _is subject to_ \(w_{i}\geq 0\) _and_ \(\sum_{i=1}^{k}w_{i}|f(v_{i})|\leq 1\) _for any pseudo-homomorphism-inverse_ \(f\in\mathcal{F}(H,G)\)_. In other words, we require_ \(\sum_{i=1}^{k}w_{i}|\mathcal{T}_{i}|\leq 1\) _for any collection of sets_ \(\mathcal{T}_{1},\cdots,\mathcal{T}_{k}\in\mathcal{I}(H)\) _such that_ \(\mathcal{T}_{i}\) _and_ \(\mathcal{T}_{j}\) _are disconnected in_ \(H\) _if there is no edge between_ \(v_{i}\) _and_ \(v_{j}\) _in_ \(G\)_._
2. _We have_ \[\frac{1}{\alpha^{*}(G|H)}=\min_{w_{i}\geq 0,\sum_{i=1}^{k}w_{i}=1}\max_{f} \sum_{i=1}^{k}w_{i}|f(v_{i})|,\] _where the maximum is over all pseudo-homomorphism-inverse_ \(f\in\mathcal{F}(H,G)\)_._
3. _Given a probability distribution_ \(p(f)\) _over_ \(f\in\mathcal{F}(H,G)\)_, we can consider a random pseudo-homomorphism-inverse_ \(F\in\mathcal{F}(H,G)\)_. We have_ \[\alpha^{*}(G|H)=\min\max_{v\in\mathcal{V}(G)}\frac{1}{\mathbb{E}[|F(v)|]},\] _where the minimum is over all possible distributions over pseudo-homomorphism-inverses._
4. _For every pair of graphs_ \(G\) _and_ \(H\)_, there is some graph_ \(W\) _such that_ \[\alpha^{*}(G|H)=\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}.\] _In other words, the supremum in the definition of_ \(\alpha^{*}(G|H)\) _is a maximum._
We give two different proofs for the above theorem. The first proof is given in Section 4.1, while the second proof is given in Appendix D. The two proofs complement each other. While the first proof is longer, it provides insights into the form of the linear program and the reason for the appearance of the pseudo-homomorphism-inverse. On the other hand, if we already know the form of the linear program, its correctness can be verified directly via the second proof (which is an extension of the proof presented in [12]).
An outline of the first proof is in order. We would like to maximize \(\alpha(W\boxtimes G)/\alpha(W\boxtimes H)\). Take some arbitrary graph \(W\). Let \(\mathcal{B}\) be a maximum independent set for \(W\boxtimes G\), _i.e._, \(|\mathcal{B}|=\alpha(W\boxtimes G)\). Then, for every vertex \(u\) of \(W\), we form the set \(\mathcal{B}_{u}=\{v\in\mathcal{V}(G):(u,v)\in\mathcal{B}\}\). Next, we partition the vertices of \(W\) according to \(\mathcal{B}_{u}\) (vertices with the same set \(\mathcal{B}_{u}\) will be placed in the same class). Next, we show that without loss of generality, we can add some edges to the graph \(W\) which do not impact the independent set \(\mathcal{B}\) for \(W\boxtimes G\) but can potentially decrease \(\alpha(W\boxtimes H)\). Once the edges are added, the graph \(W\) will have a particular structure from which we can explicitly compute \(\alpha(W\boxtimes H)\) and optimize over \(W\). In Appendix A, we compute the \(\alpha^{*}(C_{7}|C_{7}^{c})\) as a warm-up example.
As a sanity check, one can verify that when \(H\) is a single vertex, the linear program in (9) reduces to the one in (4).
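To make the linear program concrete, the following Python sketch builds the constraints of part (i) by brute-force enumeration of all pseudo-homomorphism-inverses and solves the resulting LP with scipy's linprog (the helper names are ours, and the enumeration is only feasible for very small graphs). For \(H\) a single vertex and \(G=C_{5}\) it returns \(2.5=\alpha^{*}(C_{5})\), matching this sanity check.

```python
from itertools import combinations, product
from scipy.optimize import linprog

def independent_sets(adj):
    """All independent sets (including the empty set) of a graph given as {v: neighbours}."""
    V = list(adj)
    return [frozenset(S) for r in range(len(V) + 1) for S in combinations(V, r)
            if all(u not in adj[v] for u, v in combinations(S, 2))]

def disconnected(S, T, adj):
    return S.isdisjoint(T) and all(t not in adj[s] for s in S for t in T)

def alpha_star(G, H):
    """alpha*(G|H) via the linear program of Theorem 1(i); brute force, only for tiny graphs."""
    VG = list(G)
    IH = independent_sets(H)
    A_ub, b_ub = [], []
    for f in product(IH, repeat=len(VG)):          # candidate pseudo-homomorphism-inverse
        valid = all(v in G[u] or disconnected(f[i], f[j], H)
                    for (i, u), (j, v) in combinations(list(enumerate(VG)), 2))
        if valid:
            A_ub.append([len(T) for T in f])       # constraint sum_i w_i |T_i| <= 1
            b_ub.append(1.0)
    res = linprog(c=[-1.0] * len(VG), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return -res.fun

C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
K1 = {0: set()}
print(alpha_star(C5, K1))   # expected: 2.5, i.e. alpha*(C_5), as in the sanity check above
```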
### Properties of \(\alpha^{*}(G|H)\)
From the definition of \(\alpha^{*}(G|H)\) in (8), some immediate observations can be made: removing an edge from \(G\) does not decrease \(\alpha^{*}(G|H)\); similarly, adding an edge to \(H\) does not decrease \(\alpha^{*}(G|H)\). Observe that for any graph \(G\), we have \(\alpha^{*}(G|H)=\alpha^{*}(G)\) if \(H\) is a complete graph. Also, \(\alpha^{*}(G|G)=1\). As another example, assume that the graph H is universal [11], _i.e., \(\alpha(H\boxtimes W)=\alpha(H)\alpha(W)\)_ for all \(W\).2 For a universal graph \(H\), we have
Footnote 2: Examples of universal graphs are perfect graphs. A graph is perfect iff it contains neither an odd cycle of length at least five nor the complement of such a cycle as an induced subgraph. Examples of perfect graphs are complete graphs and cycles of even length, \(C_{2k}\).
\[\alpha^{*}(G|H)=\frac{1}{\alpha(H)}\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha( W)}=\frac{\alpha^{*}(G)}{\alpha(H)}.\]
Also, if \(G=G_{1}\boxtimes G_{2}\) where \(G_{1}\) is universal, then \(\alpha^{*}(G|H)=\alpha(G_{1})\,\alpha^{*}(G_{2}|H)\). And if \(H=H_{1}\boxtimes H_{2}\) where \(H_{1}\) is universal, then \(\alpha^{*}(G|H)=\frac{\alpha^{*}(G|H_{2})}{\alpha(H_{1})}\). Other properties of \(\alpha^{*}(G|H)\) are given in the following theorem.
**Theorem 2**.: _For any graphs \(G_{1},G_{2},H_{1}\) and \(H_{2}\) we have:_
1. \(\alpha^{*}(G_{1}\boxtimes G_{2}|H_{1}\boxtimes H_{2})\leq\alpha^{*}(G_{1}|H_ {1})\alpha^{*}(G_{2}|H_{2}).\)__
2. \(\alpha^{*}(G_{1}+G_{2}|H)\leq\alpha^{*}(G_{1}|H)+\alpha^{*}(G_{2}|H).\)__
3. \(\alpha^{*}((G_{1}+G_{2})^{c}|H)=\max(\alpha^{*}(G_{1}^{c}|H),\alpha^{*}(G_{2} ^{c}|H)).\)__
Proof of the above theorem is given in Section 4.2.
To continue, we need a property of the fractional independence number given in the following lemma.
**Lemma 1**.: _For any arbitrary graphs \(G_{1},G_{2},\cdots,G_{r}\), there is some graph \(W\) such that_
\[\alpha^{*}(G_{i})=\frac{\alpha(G_{i}\boxtimes W)}{\alpha(W)},\qquad\forall i.\]
_In other words, a common maximizer \(W\) for the optimization problems involving \(\alpha^{*}(G_{i})\) exists._
Proof of the above lemma is given in Section 4.3.
**Corollary 1**.: _Given two arbitrary graphs \(G\) and \(H\), Lemma 1 implies the existence of some \(W\) such that_
\[\frac{\alpha^{*}(G)}{\alpha^{*}(H)}=\frac{\frac{\alpha(G\boxtimes W)}{\alpha( W)}}{\frac{\alpha(H\boxtimes W)}{\alpha(W)}}=\frac{\alpha(G\boxtimes W)}{ \alpha(H\boxtimes W)}.\]
_Thus, from the definition of \(\alpha^{*}(G|H)\) we obtain_
\[\alpha^{*}(G|H)\geq\frac{\alpha^{*}(G)}{\alpha^{*}(H)}. \tag{10}\]
_Also, we have equality in the following case:_
\[\alpha^{*}(G\boxtimes H|H)=\alpha^{*}(G). \tag{11}\]
Proof.: One direction follows from (10). The other direction follows from
\[\alpha^{*}(G\boxtimes H|H)=\max_{W}\frac{\alpha(G\boxtimes(H\boxtimes W))}{\alpha(H \boxtimes W)}\leq\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(W)}=\alpha^{*}(G)\]
where the inequality follows from the fact that we are relaxing the set of graphs of the form \(H\boxtimes W\) to all graphs \(W\).
The inequality (10) motivates us to ask whether \(\alpha^{*}(G|H)\) serves as upper bounds on the ratio of other graph invariants for \(G\) and \(H\). The following theorem addresses this question.
**Theorem 3**.: _For any graphs \(G\) and \(H\),_
\[\alpha^{*}(G|H)\geq\frac{X(G)}{X(H)}\]
_where \(X(G)\) can be the independence number of \(G\), the fractional independence number of \(G\), the Lovasz number of \(G\)[13], Schrijver's variant of the Lovasz number of \(G\)[13, 14] or Szegedy's variant of the Lovasz number of \(G\)[15]._
Proof of the above theorem is given in Section 4.4. Theorem 3 does not hold when \(X(G)\) is the Haemers number of a graph \(G\), \(\mathcal{H}(G)\). As a counter-example, see Appendix B, where we give two graphs \(G\) and \(H\), such that \(\frac{\mathcal{H}(G)}{\mathcal{H}(H)}>\alpha^{*}(G|H)\). We do not know if Theorem 3 holds when \(X(G)\) is the fractional Haemers number of a graph \(G\).
### Some notes on the computation of \(\alpha^{*}(G|H)\)
As shown in part (i) of Theorem 1, to compute \(\alpha^{*}(G|H)\), we need to maximize a linear function subject to linear inequality constraints. These inequalities depend on the structure of \(G\) and \(H\) and on the set of independent sets of the graph \(H\). We must compute the independent sets of \(H\) to assign valid subsets \(\mathcal{T}_{i}\) to the vertices \(v_{i}\). Computing the maximum independent set is an NP-hard problem, and the number of constraints can be exponential, so computing \(\alpha^{*}(G|H)\) is generally a difficult problem. At the end of this section, we show that computing \(\alpha^{*}(G|H)\) is an NP-hard problem. For some special cases, we can simplify the calculation of \(\alpha^{*}(G|H)\).
A graph is called a vertex-transitive graph (also sometimes called a node-symmetric graph) if and only if, for any pair of its nodes \(v\) and \(u\), there exists an automorphism3 of the graph that maps \(v\) to \(u\) [12].
Footnote 3: An automorphism of a graph is a graph isomorphism with itself, i.e., a mapping from the vertices of the given graph \(G\) back to vertices of \(G\) such that the resulting graph is isomorphic with \(G\).
**Lemma 2**.: _Suppose that \(G\) is a vertex-transitive graph with \(k\) vertices. Then,_
\[\alpha^{*}(G|H)=k\min\frac{1}{\sum_{i=1}^{k}|\mathcal{T}_{i}|}=\frac{k}{\alpha (G^{c}\boxtimes H)},\]
_where the minimum is over any collection of sets \(\mathcal{T}_{1},\cdots,\mathcal{T}_{k}\in\mathcal{I}(H)\) such that \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) are disconnected in \(H\) if there is no edge between \(v_{i}\) and \(v_{j}\) in \(G\)._
Proof.: Assume \(\mathcal{V}(G)=\{v_{1},\cdots,v_{k}\}\) and \(\mathcal{T}_{1},\cdots,\mathcal{T}_{k}\in\mathcal{I}(H)\) be a collection for \(v_{1},\cdots,v_{k}\) such that \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) are disconnected in \(H\) if there is no edge between \(v_{i}\) and \(v_{j}\) in \(G\). Thus, we impose \(\sum_{i=1}^{k}w_{i}|\mathcal{T}_{i}|\leq 1\). Let \(\pi\) be an automorphism \(\pi:G\to G\). Then, assigning \(\mathcal{T}_{i}\) to the vertex \(\pi(v_{i})\) is also a valid choice. Therefore, we also impose \(\sum_{i=1}^{k}w_{\pi(v_{i})}|\mathcal{T}_{i}|\leq 1\) for any automorphism \(\pi\).
Suppose that \(G\) is a vertex-transitive graph, i.e., for any pair of vertices \(v_{r}\) and \(v_{s}\), there exists an automorphism \(\pi_{r,s}:G\to G\) that maps vertex \(v_{r}\) to vertex \(v_{s}\). Then, \(\sum_{i=1}^{k}w_{\pi_{r,s}(v_{i})}|\mathcal{T}_{i}|\leq 1\) holds for any such automorphism \(\pi_{r,s}\). Averaging over all possible automorphisms \(\pi_{r,s}\), we obtain \(\bar{w}\sum_{i=1}^{k}|\mathcal{T}_{i}|\leq 1\), where \(\bar{w}=(w_{1}+w_{2}+\cdots+w_{k})/k\). Therefore, without loss of generality, we can restrict the maximization to vectors \((w_{1},w_{2},\cdots,w_{k})\) whose entries are all equal. Then, the linear program reduces to the one given in the lemma statement.
It remains to show that \(\max\sum_{i=1}^{k}|\mathcal{T}_{i}|=\alpha(G^{c}\boxtimes H)\). Let \(\mathcal{T}_{i}^{*}\) be a valid assignment of independent sets of \(H\) to the vertices of \(G\) such that \(\sum_{i=1}^{k}|\mathcal{T}_{i}^{*}|=\max\sum_{i=1}^{k}|\mathcal{T}_{i}|\). Consider all the pairs of vertices \(v_{i},u_{j}\), such that \(v_{i}\in\mathcal{V}(G)\) and \(u_{j}\in\mathcal{T}_{i}^{*}\). The set of these pairs is an independent set in \(G^{c}\boxtimes H\), so \(\max\sum_{i=1}^{k}|\mathcal{T}_{i}|\leq\alpha(G^{c}\boxtimes H)\). On the other hand, if \(\mathcal{I}^{*}\) is a maximum independent set of \(G^{c}\boxtimes H\), then defining \(\mathcal{T}_{i}\triangleq\{u\in\mathcal{V}(H):(v_{i},u)\in\mathcal{I}^{*}\}\) gives a valid assignment. This shows \(\max\sum_{i=1}^{k}|\mathcal{T}_{i}|\geq\alpha(G^{c}\boxtimes H)\). Thus, \(\max\sum_{i=1}^{k}|\mathcal{T}_{i}|=\alpha(G^{c}\boxtimes H)\).
_Remark 1_.: Assume that \(G\) is a vertex-transitive graph such that \(G^{c}\) is a universal graph (for example \(G\) can be a graph with only one vertex or \(C_{2k}\)), and \(H\) is an arbitrary graph, then by Lemma 2, we have
\[\alpha^{*}(G|H)=\frac{k}{\alpha(G^{c}\boxtimes H)}=\frac{k}{\alpha(G^{c}) \times\alpha(H)}.\]
So, in this case, computing \(\alpha^{*}(G|H)\) is equivalent to computing \(\alpha(H)\), which is a well-known NP-hard problem. So, computing \(\alpha^{*}(G|H)\) is also an NP-hard problem.
## 3 Applications of the relative fractional independence number
In the following sections, we explore several applications of \(\alpha^{*}(G|H)\).
### New lower bounds on the Shannon capacity of certain Johnson graphs
Let \(J(n,3)\) be the Johnson graph whose vertices are the \(3\)-subsets of \(\{1,\ldots,n\}\), and two vertices are adjacent if their intersection has one element.4
Footnote 4: We follow the definition in [1]. In some papers, the complement of this graph is called the Johnson graph.
For a graph \(G\), the Haemers number of \(G\), denoted by \(\mathcal{H}(G)\), is an upper bound on the Shannon number of \(G\). The Haemers number considers the rank of particular matrices associated with the graph \(G\)[1]. The author in [1] shows that \(\mathcal{H}(J(n,3))\leq n\). This shows that the Shannon capacity of \(J(n,3)\) is at most \(n\).
Using the relative fractional independence number, we prove a new lower bound on the Shannon capacity of \(J(n,3)\) for \(n=14,18,22,26\). This new lower bound implies that \(\mathcal{H}(J(n,3))=n\) for \(n=14,18,22,26\) which is unknown in the literature.5
Footnote 5: For \(n=4k\), if we partition the underlying \(n\)-set into classes of size four, then all \(3\)-subsets, which are subsets of one of these classes, form an independent set of size \(n\) in \(J(n,3)\). Hence, \(\alpha(J(n,3))\geq n\) when \(n\) is divisible by \(4\). Since \(\mathcal{H}(J(n,3))\leq n\), [1] deduces that \(\alpha(J(n,3))=\mathscr{C}(J(n,3))=\mathcal{H}(J(n,3))\). However, when \(n\) is not divisible by \(4\), the exact value of \(\mathcal{H}(J(n,3))\) is unknown.
We have \(\frac{\mathscr{C}(C_{2k+1})}{\mathscr{C}(J(4k+2,3))}\leq\alpha^{*}(C_{2k+1}|J(4k +2,3))=\frac{2k+1}{\max\sum_{i=1}^{2k+1}|\mathcal{T}_{i}|}\) where \(C_{2k+1}\) is a cycle of length \(2k+1\). Now we give a valid assignment of the independent sets of \(J(4k+2,3)\) to the vertices of \(C_{2k+1}\). We assign the set \(\mathcal{T}_{i}^{*}\) to \(v_{i}\) for \(1\leq i\leq 2k+1\) as follows: for \(i\neq 2k+1\), let \(\mathcal{T}_{i}^{*}=\) all the \(3\)-subsets of \(\{2i-1,2i,2i+1,2i+2\}\) and for \(i=2k+1\), \(\mathcal{T}_{i}^{*}=\) all the \(3\)-subsets of \(\{2i-1,2i,1,2\}\). Then, we have \(\alpha^{*}(C_{2k+1}|J(4k+2,3))=\frac{2k+1}{\max\sum_{i=1}^{2k+1}|\mathcal{T}_ {i}|}\leq\frac{2k+1}{\sum_{i=1}^{2k+1}|\mathcal{T}_{i}^{*}|}=\frac{2k+1}{4(2k+1) }=\frac{1}{4}\).
This implies \(\mathscr{C}(C_{2k+1})\leq\frac{1}{4}\mathscr{C}(J(4k+2,3))\leq\frac{1}{4}\mathcal{H}(J(4k+2,3))\), yielding the lower bounds \(\mathscr{C}(J(4k+2,3))\geq 4\mathscr{C}(C_{2k+1})\) and \(\mathcal{H}(J(4k+2,3))\geq\lceil 4\mathscr{C}(C_{2k+1})\rceil\) (we take the ceiling as \(\mathcal{H}(J(n,3))\) is an integer). Since \(\mathscr{C}(C_{7})\geq 3.2578\) [19], \(\mathscr{C}(C_{9})\geq 4.32\) [2], \(\mathscr{C}(C_{11})\geq 5.2895\) [3], \(\mathscr{C}(C_{13})\geq 6.2743\) [1], we get new lower bounds on the Shannon capacity and are able to conclude that \(\mathcal{H}(J(n,3))=n\) for \(n=14,18,22,26\). Note that we found an indirect lower bound on the Shannon capacity of the Johnson graph without exhibiting an explicit independent set for this graph.
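The assignment \(\mathcal{T}_{i}^{*}\) above can also be checked mechanically. The following Python sketch (helper names are ours) verifies, for the smallest case \(k=3\) (i.e., \(C_{7}\) and \(J(14,3)\)), that every \(\mathcal{T}_{i}^{*}\) is an independent set of \(J(4k+2,3)\) and that \(\mathcal{T}_{i}^{*}\) and \(\mathcal{T}_{j}^{*}\) are disconnected whenever \(v_{i}\) and \(v_{j}\) are non-adjacent in \(C_{2k+1}\).

```python
from itertools import combinations

k = 3
n_cycle = 2 * k + 1                       # C_7; the ground set of J(4k+2,3) is {1,...,14}

def johnson_adjacent(A, B):
    """Adjacency in J(n,3): two distinct 3-subsets are adjacent iff they share exactly one element."""
    return A != B and len(A & B) == 1

def window(i):
    """The four 3-subsets of the 4-element window assigned to cycle vertex i (1-based)."""
    if i < n_cycle:
        base = [2 * i - 1, 2 * i, 2 * i + 1, 2 * i + 2]
    else:                                 # wrap-around window for i = 2k+1
        base = [2 * i - 1, 2 * i, 1, 2]
    return [frozenset(S) for S in combinations(base, 3)]

T = {i: window(i) for i in range(1, n_cycle + 1)}

# (a) every T_i^* is an independent set of J(4k+2,3)
assert all(not johnson_adjacent(A, B) for i in T for A, B in combinations(T[i], 2))

# (b) T_i^* and T_j^* are disconnected whenever v_i and v_j are non-adjacent in C_(2k+1)
for i, j in combinations(range(1, n_cycle + 1), 2):
    if (j - i) % n_cycle not in (1, n_cycle - 1):        # non-adjacent in the cycle
        assert all(A != B and not johnson_adjacent(A, B) for A in T[i] for B in T[j])

print("valid assignment, total size =", sum(len(T[i]) for i in T))   # 4*(2k+1) = 28
```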
### Lower bound on the independence number
Given a graph \(G\), the common approach to proving lower bounds on the independence number of \(G\) is by exhibiting an independent set. For instance, in [1], Bohman computes a lower bound on the independence number of powers of an odd cycle. The author in [1] states that "Here we give a simpler proof... that uses a clever expansion process given in a paper of Baumert, McEliece, Rodemich, Rumsey, Stanley, and Taylor." The construction is used to show that for any natural number \(d\), \(m\geq 3\) and even number \(\beta>0\), we have
\[\alpha(C_{m+\beta}^{d})\geq\alpha(C_{m}^{d})\bigg{(}\frac{m+\beta}{m}\bigg{)}^ {d} \tag{12}\]
where \(C_{m}^{d}\) is the \(d\)-th power of a cycle of length \(m\). The above inequality is non-trivial when \(m\) is an odd number. Note that for any two graphs \(G\) and \(H\) and any natural number \(d\), (7) yields
\[\frac{\alpha(G^{d})}{\alpha(H^{d})}\leq\alpha^{*}(G|H)^{d}.\]
In Appendix C, we prove that \(\alpha^{*}(C_{n}|C_{m})=n/m\) for odd \(m\) and \(n\) satisfying \(n<m\). Thus, we recover the result of [1]. However, the more important point is that the structure of the cycle graphs is not a fundamental constraint in our calculation; we can prove a lower bound on \(\alpha(H^{d})\) in terms of \(\alpha(G^{d})\) if we can compute \(\alpha^{*}(G|H)\) for any arbitrary graphs \(G\) and \(H\).
### The ratio of zero-error capacities of two graphs
By Theorem 3, we obtain
\[\frac{\mathscr{C}(G)}{\mathscr{C}(H)}\leq\alpha^{*}(G|H). \tag{13}\]
This inequality yields a non-trivial upper bound on the ratio of Shannon number of \(G\) and \(H\). To compare this bound with the previously known bounds, observe that one can use individual bounds on \(\mathscr{C}(G)\) and \(\mathscr{C}(H)\) to write the following upper bound. In particular, we can write
\[\frac{\mathscr{C}(G)}{\mathscr{C}(H)}\leq\frac{\mathscr{C}(G)}{(\alpha(H^{d}) )^{1/d}} \tag{14}\]
for any natural number \(d\) and then use a known upper bound on \(\mathscr{C}(G)\) (such as the Lovasz theta function) to compute an upper bound on the ratio of the capacities of \(G\) and \(H\). However, calculating \(\alpha(H^{d})\) for large values of \(d\) is computationally difficult, and choosing small values of \(d\) can lead to weak bounds. As an example, for a small graph like \(C_{7}\) it is known that \(\alpha(C_{7}^{3})=33\) and for the larger values of \(d\) we only know the bounds \(108\leq\alpha(C_{7}^{4})\leq 115\) and \(367\leq\alpha(C_{7}^{5})\leq 401\)[19].
Also, as we show in Appendix C, \(\frac{\mathscr{C}(C_{9})}{\mathscr{C}(C_{11})}\leq\alpha^{*}(C_{9}|C_{11})=\frac{9}{11}\), while the best we can obtain using the previously known capacity results, namely \(\mathscr{C}(C_{9})\leq 4.3601\) and \(\mathscr{C}(C_{11})\geq 148^{\frac{1}{3}}>5.2895\) [1], is \(\frac{\mathscr{C}(C_{9})}{\mathscr{C}(C_{11})}\leq\frac{4.3601}{5.2895}\). Since \(\frac{9}{11}<\frac{4.3601}{5.2895}\), the direct bound on the ratio is tighter. This illustrates the importance of directly finding upper bounds on the ratio of the Shannon capacities of graphs \(G\) and \(H\) rather than studying the capacities of \(G\) and \(H\) separately. As a concrete example, in this section, we compute \(\alpha^{*}(G|H)\) for two Cayley graphs obtained from cyclic groups and obtain a novel result that (to the best of our knowledge) cannot be derived using known results.
**Definition 3**.: Let \(G\) be an abelian group and \(S\) be a subset of elements of \(G\) such that \(S=-S\) and \(0\not\in S\). The Cayley graph \(Cay(G,S)\) is a graph with the vertex set \(G\) such that \(a,b\in G\) are connected by an edge iff \(a-b\in S\). Cayley graphs are basic examples of vertex-transitive graphs.
**Proposition 1**.: _Let_
\[G =Cay(\mathbb{Z}_{n},\pm 1,\pm 2,\cdots\pm k), \tag{15}\] \[H =Cay(\mathbb{Z}_{m},\pm 1,\pm 2,\ldots,\pm k), \tag{16}\]
_where \(1\leq 2k<n<m\) are integers. Then, \(\alpha^{*}(G|H)\geq\frac{n}{m}\). Moreover, \(\alpha^{*}(G|H)=\frac{n}{m}\) if there are integers \(\ell,s\geq 0\) such that \(m=\ell n+s(k+1)\)._
Proof of the above proposition is given in Section 4.5. It utilizes the idea of defining a homomorphism from \(H\) to \(G\).
Proposition 1 implies
\[\frac{\mathscr{C}(Cay(\mathbb{Z}_{n},\pm 1,\pm 2,\cdots\pm k))}{\mathscr{C}(Cay( \mathbb{Z}_{m},\pm 1,\pm 2,\cdots\pm k))}\leq\frac{n}{m}\]
for \(m=\ell n+s(k+1)\) for some integers \(\ell,s\geq 0\). This inequality is an explicit and non-trivial upper bound on the ratio of the capacities of Cayley graphs. We believe this result is novel. To compare this bound with the previously known bounds, observe that one can use individual bounds on \(\mathscr{C}(G)\) and \(\mathscr{C}(H)\) to write the upper bound \(\frac{\mathscr{C}(Cay(\mathbb{Z}_{n},\pm 1,\pm 2,\cdots,\pm k))}{\mathscr{C}(Cay(\mathbb{Z}_{m},\pm 1,\pm 2,\cdots,\pm k))}\leq\frac{\alpha^{*}(Cay(\mathbb{Z}_{n},\pm 1,\pm 2,\cdots,\pm k))}{\alpha(Cay(\mathbb{Z}_{m},\pm 1,\pm 2,\cdots,\pm k))}=\frac{n/(k+1)}{\lfloor m/(k+1)\rfloor}\), which is strictly greater than \(\frac{n}{m}\) if \(m\) is not divisible by \(k+1\).
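As a quick numerical comparison, the following Python snippet evaluates the direct bound \(n/m\) against the bound \(\frac{n/(k+1)}{\lfloor m/(k+1)\rfloor}\) obtained from the individual invariants, for a few parameter triples (chosen by us) satisfying \(m=\ell n+s(k+1)\).

```python
def direct_bound(n, m):
    return n / m                                   # the bound from Proposition 1

def individual_bound(n, m, k):
    return (n / (k + 1)) / (m // (k + 1))          # alpha*(G) / alpha(H)

# parameter triples chosen so that m = l*n + s*(k+1) for some integers l, s >= 0
for (n, m, k) in [(7, 10, 2), (7, 17, 2), (11, 26, 4)]:
    print((n, m, k), direct_bound(n, m), individual_bound(n, m, k))
# e.g. for (7, 10, 2): 0.700 versus 0.778, so the direct bound on the ratio is
# strictly tighter whenever m is not divisible by k+1.
```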
### Homomorphism and the relative fractional independence number
A homomorphism \(\mathsf{g}:H\to G\) from a graph \(H\) to a graph \(G\) is a map \(\mathsf{g}:\mathcal{V}(H)\to\mathcal{V}(G)\) such that \(uv\in\mathcal{E}(H)\) implies \(\mathsf{g}(u)\mathsf{g}(v)\in\mathcal{E}(G)\). The well-known No-Homomorphism Lemma states that:6
Footnote 6: See also [11, Exercise 2.12] for an extension of the No-Homomorphism Lemma based on a notion of the maximum number of vertices in an induced subgraph of \(G\) that is homomorphic to an auxiliary graph \(K\).
**Lemma 3**.: _[_1_]_ _If there is a homomorphism from \(H\) to \(G\), and \(G\) is vertex transitive, then \(\frac{\alpha(G)}{\alpha(H)}\leq\frac{|\mathcal{V}(G)|}{|\mathcal{V}(H)|}\)._
The No-Homomorphism Lemma can be used to show the non-existence of a homomorphism from \(H\) to \(G\). Also, if we can show that there is a homomorphism from \(H\) to \(G\), then we can use the No-Homomorphism Lemma to give an upper bound on \(\alpha(G)\) (see [1]).
This section gives another extension of the No-Homomorphism Lemma using the relative fractional independence number.
**Lemma 4**.: _If \(G\) and \(H\) are two graphs such that there is a homomorphism \(\mathsf{g}:H\to G\), then \(\alpha(G^{c}\boxtimes H)\geq|\mathcal{V}(H)|\)._
Proof.: Note that the collection of vertices \((\mathsf{g}(u),u)\) in \(G^{c}\boxtimes H\) for all \(u\in\mathcal{V}(H)\) is an independent set of size \(|\mathcal{V}(H)|\).
**Lemma 5**.: _If there is a homomorphism from \(H\) to \(G\), and \(G\) is vertex transitive, then \(\alpha^{*}(G|H)\leq\frac{|\mathcal{V}(G)|}{|\mathcal{V}(H)|}\)._
Proof.: Since \(G\) is vertex-transitive, according to Lemma 2,
\[\alpha^{*}(G|H)=\frac{|\mathcal{V}(G)|}{\alpha(G^{c}\boxtimes H)}.\]
By the previous lemma, since there is a homomorphism from \(H\) to \(G\), we have \(\alpha(G^{c}\boxtimes H)\geq|\mathcal{V}(H)|\). So the claim follows.
We have the following corollary by Lemma 5 and Theorem 3.
**Corollary 2**.: _If there is a homomorphism from \(H\) to \(G\), and \(G\) is vertex transitive, then \(\frac{X(G)}{X(H)}\leq\frac{|\mathcal{V}(G)|}{|\mathcal{V}(H)|}\), where \(X(G)\) can be the zero-error Shannon capacity of \(G\), the fractional independence number of \(G\), the Lovasz number of \(G\), Schrijver's or Szegedy's variants of the Lovasz number._
Now we give two examples to show that our extension of the No-Homomorphism Lemma is stronger than Lemma 3.
**Example 1**.: _Suppose that \(G\) is the Petersen graph with \(10\) vertices, so \(\alpha(G)=4\) and \(\alpha^{*}(G)=5\). Take any arbitrary graph \(H\) such that \(2.5\alpha(H)\geq|\mathcal{V}(H)|>2\alpha^{*}(H)\). As a simple example, we can take \(H\) as a cycle graph \(C_{3}\) with a path on two vertices connected to one of the vertices of \(C_{3}\). So, \(H\) has \(5\) vertices and \(\alpha(H)=\alpha^{*}(H)=2\). By Lemma 5 combined with (10), there is no homomorphism from \(H\) to \(G\) because \(\frac{\alpha^{*}(G)}{\alpha^{*}(H)}=\frac{5}{\alpha^{*}(H)}>\frac{10}{|\mathcal{V}(H)|}\). However, we cannot deduce this from the standard No-Homomorphism Lemma because \(\frac{\alpha(G)}{\alpha(H)}=\frac{4}{\alpha(H)}\leq\frac{10}{|\mathcal{V}(H)|}\)._
**Example 2**.: _Suppose that \(H=Cay(Z_{7},\{\pm 1,\pm 2\})\), i.e., \(H\) is a graph with vertex set \(\{v_{1},\ldots,v_{7}\}\) and \(v_{i}\) is connected to \(v_{j}\) if \(i-j\mod 7=\pm 1\) or \(\pm 2\), and \(G=Cay(Z_{8},\{\pm 1,\pm 2\})\). Then \(|\mathcal{V}(H)|=7\) and \(|\mathcal{V}(G)|=8\) and \(\alpha(H)=\alpha(G)=2\). There is no homomorphism from \(H\) to \(G\), though this cannot be deduced from the No-Homomorphism Lemma, because \(\frac{2}{2}\leq\frac{8}{7}\). But, according to Lemma 5, if there exists a homomorphism from \(H\) to \(G\) then we should have \(\alpha^{*}(G|H)\leq\frac{8}{7}\). We know that \(\alpha^{*}(G|H)=\frac{8}{\alpha(G^{c}\boxtimes H)}\). By inspection \(\alpha(G^{c}\boxtimes H)=6\) and since \(8/6>8/7\), Lemma 5 implies that there exists no homomorphism from \(H\) to \(G\)._
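The value \(\alpha(G^{c}\boxtimes H)=6\) found by inspection in Example 2 can be confirmed by a brute-force computation. The following self-contained Python sketch (helper names are ours; the search is exponential in the worst case but fast on this 56-vertex product graph) computes it directly and compares \(8/\alpha(G^{c}\boxtimes H)\) with \(8/7\).

```python
def cayley(m, shifts):
    """Cayley graph Cay(Z_m, shifts) as an adjacency dictionary on vertices 0..m-1."""
    return {i: {(i + s) % m for s in shifts} for i in range(m)}

def complement(G):
    return {v: {u for u in G if u != v and u not in G[v]} for v in G}

def strong_product(G, H):
    V = [(u, v) for u in G for v in H]
    return {(u1, v1): {(u2, v2) for (u2, v2) in V
                       if (u1, v1) != (u2, v2)
                       and (u1 == u2 or u2 in G[u1]) and (v1 == v2 or v2 in H[v1])}
            for (u1, v1) in V}

def independence_number(adj):
    """Exact maximum independent set size by branching; fine for graphs of this size."""
    def mis(active):
        if not active:
            return 0
        v = max(active, key=lambda x: len(adj[x] & active))
        if not adj[v] & active:
            return 1 + mis(active - {v})
        return max(mis(active - {v}), 1 + mis(active - {v} - adj[v]))
    return mis(frozenset(adj))

G = cayley(8, (1, -1, 2, -2))   # Cay(Z_8, {+-1, +-2})
H = cayley(7, (1, -1, 2, -2))   # Cay(Z_7, {+-1, +-2})
a = independence_number(strong_product(complement(G), H))
print(a, 8 / a, 8 / 7)          # expected: 6 1.333... 1.142..., so alpha*(G|H) = 8/6 > 8/7
```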
## 4 Proofs
### Proof of Theorem 1
Observe that the theorem's second and third parts follow from the first part. For the third part, the dual of the linear program in the first part is as follows: assigning a non-negative weight \(\beta_{f}\) to \(f\in\mathcal{F}(H,G)\) and multiplying the equation \(\sum_{i=1}^{k}w_{i}|f(v_{i})|\leq 1\) by \(\beta_{f}\); we obtain the following dual to the linear program of the first part: \(\alpha^{*}(G|H)\) equals the minimum of \(\sum_{f}\beta_{f}\) such that
\[\sum_{f}\beta_{f}|f(v)|\geq 1,\qquad\forall v\in\mathcal{V}(G).\]
Equivalently, \(\alpha^{*}(G|H)\) equals the minimum over \(\beta_{f}\) of
\[\frac{\sum_{f}\beta_{f}}{\min_{v\in\mathcal{V}(G)}\sum_{f}\beta_{f}|f(v)|}.\]
By scaling \(\beta_{f}\), we can impose the constraint \(\sum_{f}\beta_{f}=1\) and view \(\beta_{f}\) as a probability distribution over \(f\in\mathcal{F}\). Hence the third part of the theorem follows. The third part implies the second part, as the third part can be rewritten as
\[\frac{1}{\alpha^{*}(G|H)}=\max\min_{v\in\mathcal{V}(G)}\mathbb{E}[|F(v)|].\]
Now, using the minimax theorem and by exchanging the order of minimum and maximum, we have
\[\frac{1}{\alpha^{*}(G|H)}=\min_{w_{v}\geq 0,\sum_{v}w_{v}=1}\max_{f}\sum_{v \in V(G)}w_{v}|f(v)|,\]
It remains to prove the first and fourth parts of the theorem. Below, we provide a proof for this. An alternative proof is given in Appendix D.
We begin with the following lemma.
**Lemma 6**.: _To compute \(\alpha^{*}(G|H)\), it suffices to take supremum over graphs \(W\) with the following structure: the vertex set of \(W\) partitions as follows: \(\mathcal{V}(W)=\bigcup_{\mathcal{S}\in\mathcal{I}(G)}\mathcal{W}_{\mathcal{S}}\) for some disjoint sets \(\mathcal{W}_{\mathcal{S}}\). The edge structure in \(W\) is as follows: for any \(\mathcal{S}_{1},\mathcal{S}_{2}\in\mathcal{I}(G)\) vertex \(u_{1}\in\mathcal{W}_{\mathcal{S}_{1}}\) is connected to \(u_{2}\in\mathcal{W}_{\mathcal{S}_{2}}\) if and only if \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are disconnected in \(G\). In particular, if \(\mathcal{S}_{1}=\mathcal{S}_{2}\) and \(u_{1},u_{2}\in\mathcal{W}_{\mathcal{S}}\), then \(u_{1}\) and \(u_{2}\) are not connected. Moreover, we have_
\[\alpha(W\boxtimes G)=\sum_{\mathcal{S}\in\mathcal{I}(G)}|\mathcal{W}_{ \mathcal{S}}|\times|\mathcal{S}|.\]
Proof.: Take some arbitrary graph \(W\). Let \(\mathcal{B}\subseteq\mathcal{V}(W)\times\mathcal{V}(G)\) be an independent set for \(W\boxtimes G\) of maximum size. For every \(u\in\mathcal{V}(W)\), let
\[\mathcal{B}_{u}=\{v\in\mathcal{V}(G):(u,v)\in\mathcal{B}\}.\]
Note that if \(\mathcal{B}_{u}=\emptyset\) for some vertex \(u\) of \(W\), we can simply remove this vertex from \(W\). This removal would not affect \(\alpha(W\boxtimes G)\) but can potentially reduce \(\alpha(W\boxtimes H)\). Therefore, without loss of generality, we can assume that \(\mathcal{B}_{u}\neq\emptyset\) for all vertices \(u\) of \(W\).
Since \(\mathcal{B}\) is an independent set for \(W\boxtimes G\), for every \(u\in\mathcal{V}(W)\), \(\mathcal{B}_{u}\) must be an independent set for \(G\). Thus, \(\mathcal{B}_{u}\in\mathcal{I}(G)\). This leads to the following partition of the vertex set \(\mathcal{V}(W)\): for every non-empty \(\mathcal{S}\in\mathcal{I}(G)\), let
\[\mathcal{W}_{\mathcal{S}}=\{u\in\mathcal{V}(W):\mathcal{B}_{u}=\mathcal{S}\}.\]
With this definition, the condition that \(\mathcal{B}\subseteq\mathcal{V}(W)\times\mathcal{V}(G)\) is an independent set for \(W\boxtimes G\) is equivalent to the following condition: take \(u_{1}\in\mathcal{W}_{\mathcal{S}_{1}}\) and \(u_{2}\in\mathcal{W}_{\mathcal{S}_{2}}\). If \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are not disconnected (_i.e.,_ either \(\mathcal{S}_{1}\cap\mathcal{S}_{2}\neq\emptyset\) or there is an edge in \(G\) between a vertex in \(\mathcal{S}_{1}\) and a vertex in \(\mathcal{S}_{2}\)) then there should not be any edge between \(u_{1},u_{2}\) in \(W\). If \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are disconnected, there may or may not be edges between the vertices in \(\mathcal{W}_{\mathcal{S}_{1}}\) and \(\mathcal{W}_{\mathcal{S}_{2}}\). We claim that, without loss of generality, we can add all edges between the vertices in \(\mathcal{W}_{\mathcal{S}_{1}}\) and \(\mathcal{W}_{\mathcal{S}_{2}}\) whenever \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are disconnected, since their addition would not affect \(\alpha(W\boxtimes G)\) but could potentially decrease \(\alpha(W\boxtimes H)\), which is desirable
as we want to maximize the ratio \(\alpha(W\boxtimes G)/\alpha(W\boxtimes H)\). Thus, the graph \(W\) will have the form given in the lemma statement. We also have
\[|\mathcal{B}|=\sum_{u\in\mathcal{V}(W)}|\mathcal{B}_{u}|=\sum_{\mathcal{S}\in \mathcal{I}(G)}|\mathcal{W}_{\mathcal{S}}|\times|\mathcal{S}|.\]
Consider a graph \(W\) with the structure in Lemma 6. We claim that
\[\alpha(W\boxtimes H)=\max_{f\in\mathcal{F}}\sum_{\mathcal{S}\in\mathcal{I}(G)} |\mathcal{W}_{\mathcal{S}}|\times|f(\mathcal{S})| \tag{17}\]
where \(\mathcal{F}\) is the set of all functions \(f:\mathcal{I}(G)\mapsto\mathcal{I}(H)\) such that for any distinct \(\mathcal{S}_{1},\mathcal{S}_{2}\in\mathcal{I}(G)\), if \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) are disconnected in \(G\) then \(f(\mathcal{S}_{1})\) and \(f(\mathcal{S}_{2})\) are disconnected in \(H\). First, observe that
\[\alpha(W\boxtimes H)\geq\max_{f\in\mathcal{F}}\sum_{\mathcal{S}\in\mathcal{I}( G)}|\mathcal{W}_{\mathcal{S}}|\times|f(\mathcal{S})|.\]
To see this, for every function \(f\in\mathcal{F}\), the following
\[\{(u,v)\big{|}u\in\mathcal{W}_{\mathcal{S}},v\in f(\mathcal{S})\text{ for some }\mathcal{S}\in\mathcal{I}(G)\}\]
is an independent set. On the other hand, take an independent set \(\mathcal{A}\) for \(W\boxtimes H\) of maximum size. Take some \(\mathcal{S}\in\mathcal{I}(G)\) and for every \(u\in\mathcal{W}_{\mathcal{S}}\) consider
\[\mathcal{A}_{u}=\{v\in\mathcal{V}(H):(u,v)\in\mathcal{A}\}.\]
Since \(\mathcal{A}\) is an independent set for \(W\boxtimes H\), we have \(\mathcal{A}_{u}\in\mathcal{I}(H)\). Consider \(|\mathcal{A}_{u}|\) for \(u\in\mathcal{W}_{\mathcal{S}}\), and let \(u^{*}\in\mathcal{W}_{\mathcal{S}}\) have maximum \(|\mathcal{A}_{u^{*}}|\). For all \(u\in\mathcal{W}_{\mathcal{S}}\), let us replace \(\mathcal{A}_{u}\) by \(\mathcal{A}_{u^{*}}\). This transformation will not decrease the size of \(\mathcal{A}\) (as \(|\mathcal{A}_{u^{*}}|\) had maximum size) and will keep \(\mathcal{A}\) as an independent set for \(W\boxtimes H\). If we make this transformation on all \(\mathcal{S}\in\mathcal{I}(G)\), we can force all \(u\in\mathcal{W}_{\mathcal{S}}\) mapped to the same independent set in \(\mathcal{I}(H)\), which we can call \(f(\mathcal{S})\in\mathcal{I}(H)\). Thus, the size of the independent set \(\mathcal{A}\) is no more than \(\sum_{\mathcal{S}\in\mathcal{I}(G)}|\mathcal{W}_{\mathcal{S}}|\times|f( \mathcal{S})|\). Hence,
\[\alpha(W\boxtimes H)\leq\max_{f\in\mathcal{F}}\sum_{\mathcal{S}\in\mathcal{I}( G)}|\mathcal{W}_{\mathcal{S}}|\times|f(\mathcal{S})|.\]
This implies (17). To sum this up, we obtain
\[\frac{\alpha(W\boxtimes G)}{\alpha(W\boxtimes H)}=\frac{\sum_{\mathcal{S}\in \mathcal{I}(G)}|\mathcal{W}_{\mathcal{S}}|\times|\mathcal{S}|}{\max_{f\in \mathcal{F}}\sum_{\mathcal{S}\in\mathcal{I}(G)}|\mathcal{W}_{\mathcal{S}}| \times|f(\mathcal{S})|}.\]
Observe that \(|\mathcal{W}_{\mathcal{S}}|\) can be set to any arbitrary natural number. Letting \(\omega_{\mathcal{S}}=|\mathcal{W}_{\mathcal{S}}|\) we obtain
\[\sup_{W}\frac{\alpha(W\boxtimes G)}{\alpha(W\boxtimes H)}=\sup_{\{\omega_{\mathcal{S}}\in\mathbb{Z}^{+}\cup\{0\}:\mathcal{S}\in\mathcal{I}(G)\}}\ \min_{f\in\mathcal{F}}\frac{\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}|\mathcal{S}|}{\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}|f(\mathcal{S})|}.\]
In particular, if the supremum on the right-hand side is a maximum, the supremum on the left-hand side is also a maximum.
Next, we claim that to compute
\[\sup_{\{\omega_{\mathcal{S}}\in\mathbb{Z}^{+}\cup\{0\}:\mathcal{S}\in\mathcal{I}(G)\}}\ \min_{f\in\mathcal{F}}\frac{\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}|\mathcal{S}|}{\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}|f(\mathcal{S})|}, \tag{18}\]
without loss of generality, one can assume that \(\omega_{\mathcal{S}}=0\) if \(|\mathcal{S}|\neq 1\). To show this, we first show that without loss of generality, to compute (18), we can restrict the minimization over \(f\in\mathcal{F}\) satisfying
\[|f(\mathcal{S})|\geq\sum_{i\in\mathcal{S}}|f(\{i\})|,\]
for any set \(\mathcal{S}\in\mathcal{I}(G)\). To show this, take some arbitrary function \(f\) and some \(\mathcal{S}\in\mathcal{I}(G)\) such that
\[|f(\mathcal{S})|<\sum_{i\in\mathcal{S}}|f(\{i\})|.\]
Observe that \(f(\{i\})\) and \(f(\{j\})\) for \(i,j\in\mathcal{S}\) are disconnected independent sets in \(\mathcal{I}(H)\), since \(\{i\}\) and \(\{j\}\) are disconnected independent sets in \(\mathcal{I}(G)\). Thus,
\[\sum_{i\in\mathcal{S}}|f(\{i\})|=\big{|}\cup_{i\in\mathcal{S}}f(\{i\})\big{|}.\]
We claim that replacing \(f(\mathcal{S})\) by \(\cup_{i\in\mathcal{S}}f(\{i\})\) would not violate conditions on \(f\). This change would increase \(|f(\mathcal{S})|\) and hence would also increase the term \(\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}\cdot|f(\mathcal{S})|\). To show that the replacement would not violate conditions on \(f\), take some arbitrary set \(\mathcal{T}\in\mathcal{I}(G)\) where \(\mathcal{T}\) and \(\mathcal{S}\) are disconnected in \(G\). Then, the set \(\{i\}\) and \(\mathcal{T}\) are disconnected in \(G\), implying that \(f(\mathcal{T})\in\mathcal{I}(H)\) is disconnected from \(f(\{i\})\). Hence, \(f(\mathcal{T})\) is disconnected from \(\cup_{i\in\mathcal{S}}f(\{i\})\). Thus, the desired condition is satisfied by this replacement. To sum this up, without loss of generality, we can restrict the minimization over \(f\) to those satisfying
\[|f(\mathcal{S})|\geq\sum_{i\in\mathcal{S}}|f(\{i\})|.\]
Now, take some arbitrary set of \(\{\omega_{\mathcal{S}}\geq 0:\mathcal{S}\in\mathcal{I}(G)\}\). Then, consider the following:
\[\omega^{\prime}(\{i\}) =\sum_{\mathcal{S}:i\in\mathcal{S}}\omega_{\mathcal{S}}. \tag{19}\] \[\omega^{\prime}(\mathcal{S}) =0\qquad\text{if }|\mathcal{S}|>1. \tag{20}\]
Then, we have
\[\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega^{\prime}_{\mathcal{S}} \cdot|S| =\sum_{i}\sum_{\mathcal{S}:\ i\in\mathcal{S}}\omega_{\mathcal{S}} \tag{21}\] \[=\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}\cdot| \mathcal{S}| \tag{22}\]
and
\[\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega^{\prime}_{\mathcal{S}} \cdot|f(\mathcal{S})| =\sum_{i}\sum_{\mathcal{S}:\ i\in\mathcal{S}}\omega_{\mathcal{S}} \cdot|f(\{i\})| \tag{23}\] \[=\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}\sum_{i\in \mathcal{S}}|f(\{i\})|\] (24) \[\leq\sum_{\mathcal{S}\in\mathcal{I}(G)}\omega_{\mathcal{S}}\cdot| f(\mathcal{S})|. \tag{25}\]
Since this holds for every arbitrary function \(f\), this implies that without loss of generality, we can assume that \(\omega_{\mathcal{S}}\) is non-zero only when \(|\mathcal{S}|=1\). Assuming that \(\mathcal{V}(G)=\{v_{1},v_{2},\cdots,v_{k}\}\), let \(w_{i}=\omega_{\{i\}}\), and \(\mathcal{T}_{i}=f(\{i\})\). Then,
\[\sup_{W}\frac{\alpha(W\boxtimes G)}{\alpha(W\boxtimes H)}=\sup_{\{w_{i}\in\mathbb{Z}^{+}\cup\{0\}\}}\ \min_{\mathcal{T}_{1},\cdots,\mathcal{T}_{k}}\frac{\sum_{i=1}^{k}w_{i}}{\sum_{i=1}^{k}w_{i}|\mathcal{T}_{i}|}\]
where the minimum is over any collection of sets \(\mathcal{T}_{1},\cdots,\mathcal{T}_{k}\in\mathcal{I}(H)\) such that \(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) are disconnected in \(H\) if there is no edge between \(v_{i}\) and \(v_{j}\) in \(G\).
Observe that scaling \(w_{i}\) would not change the above expression. Therefore,
\[\sup_{W}\frac{\alpha(W\boxtimes G)}{\alpha(W\boxtimes H)}=\sup_{\{w_{i}\in \mathbb{Q},w_{i}\geq 0\}}\ \sum_{i=1}^{k}w_{i}\]
subject to \(\sum_{i=1}^{k}w_{i}|\mathcal{T}_{i}|\leq 1\) for any collection of sets \(\mathcal{T}_{1},\cdots,\mathcal{T}_{k}\in\mathcal{I}(H)\) satisfying the property on \(T_{i}\). Relaxing \(w_{i}\in\mathbb{Q}\) to \(w_{i}\in\mathbb{R}\) would not change the value of this linear program because the solution of this linear program is rational. The first part of the theorem thus follows. To show the last part, observe that the domain of the linear program is compact since \(w_{i}\leq 1\) for all \(i\). To see this, take \(\mathcal{T}_{i}\) as an independent set in \(H\) and \(\mathcal{T}_{j}=\emptyset\) for \(j\neq i\). The compactness of the domain of the linear program implies that the solution is obtained at some finite \((w_{1},\cdots,w_{k})\). Thus, the supremum in the definition of \(\alpha^{*}(G|H)\) is a maximum.
### Proof of Theorem 2
For the first part of the theorem, observe that
\[\alpha^{*}(G_{1}\boxtimes G_{2}|H_{1}\boxtimes H_{2}) =\sup_{W}\frac{\alpha(W\boxtimes G_{1}\boxtimes G_{2})}{\alpha(W \boxtimes H_{1}\boxtimes H_{2})}\] \[=\sup_{W}\frac{\alpha(W\boxtimes G_{1}\boxtimes G_{2})}{\alpha(W \boxtimes H_{1}\boxtimes G_{2})}\cdot\frac{\alpha(W\boxtimes H_{1}\boxtimes G _{2})}{\alpha(W\boxtimes H_{1}\boxtimes H_{2})}\] \[\leq\sup_{W}\frac{\alpha((W\boxtimes G_{2})\boxtimes G_{1})}{ \alpha((W\boxtimes G_{2})\boxtimes H_{1})}\cdot\sup_{W}\frac{\alpha((W \boxtimes H_{1})\boxtimes G_{2})}{\alpha((W\boxtimes H_{1})\boxtimes H_{2})}\] \[\leq\alpha^{*}(G_{1}|H_{1})\alpha^{*}(G_{2}|H_{2}).\]
For the second part, consider the linear programs in the max form (first part of Theorem 1) for computing \(\alpha^{*}(G_{1}+G_{2}|H)\), \(\alpha^{*}(G_{1}|H)\) and \(\alpha^{*}(G_{2}|H)\), and let us denote the sets of constraints in these linear programs by \(Cons(G_{1}+G_{2}|H),Cons(G_{1}|H)\) and \(Cons(G_{2}|H)\), respectively. Thus, for instance, \(\alpha^{*}(G_{1}+G_{2}|H)=\max\sum_{v_{i}\in\mathcal{V}(G_{1})}w_{i}+\sum_{v_{i}\in\mathcal{V}(G_{2})}w_{i}\) subject to \(Cons(G_{1}+G_{2}|H)\). Also, \(\alpha^{*}(G_{1}|H)=\max\sum_{v_{i}\in\mathcal{V}(G_{1})}w_{i}\) subject to \(Cons(G_{1}|H)\), and \(\alpha^{*}(G_{2}|H)=\max\sum_{v_{i}\in\mathcal{V}(G_{2})}w_{i}\) subject to \(Cons(G_{2}|H)\). It is clear that \(Cons(G_{1}|H)\subset Cons(G_{1}+G_{2}|H)\) and \(Cons(G_{2}|H)\subset Cons(G_{1}+G_{2}|H)\), so
\[\alpha^{*}(G_{1}+G_{2}|H)\leq\alpha^{*}(G_{1}|H)+\alpha^{*}(G_{2}|H).\]
For the third part, it is clear that \(\alpha^{*}((G_{1}+G_{2})^{c}|H)\geq\max(\alpha^{*}(G_{1}^{c}|H),\alpha^{*}(G_ {2}^{c}|H))\). Consider the characterization in the third part of Theorem 1 to show the reverse direction. Take the optimal random mappings for \(\alpha^{*}(G_{1}^{c}|H)\) and \(\alpha^{*}(G_{2}^{c}|H)\) and combine them to construct a random mapping for \((G_{1}+G_{2})^{c}\). This random mapping implies that
\[\alpha^{*}((G_{1}+G_{2})^{c}|H)\leq\max(\alpha^{*}(G_{1}^{c}|H),\alpha^{*}(G_ {2}^{c}|H)).\]
### Proof of Lemma 1
Take some \(W\) such that
\[\alpha^{*}(G_{1}\boxtimes G_{2}\boxtimes\cdots\boxtimes G_{r})=\frac{\alpha(G_{1} \boxtimes G_{2}\boxtimes\cdots\boxtimes G_{r}\boxtimes W)}{\alpha(W)}.\]
We show that this choice of \(W\) works for us. Note that
\[\prod_{i=1}^{r}\alpha^{*}(G_{i}) =\alpha^{*}(G_{1}\boxtimes G_{2}\boxtimes\cdots\boxtimes G_{r}) \tag{26}\] \[=\frac{\alpha(G_{1}\boxtimes G_{2}\boxtimes\cdots\boxtimes G_{r} \boxtimes W)}{\alpha(W)}\] \[=\frac{\alpha(G_{2}\boxtimes\cdots\boxtimes G_{r}\boxtimes G_{1} \boxtimes W)}{\alpha(G_{1}\boxtimes W)}\cdot\frac{\alpha(G_{1}\boxtimes W)}{ \alpha(W)}\] \[\leq\max_{W^{\prime}}\frac{\alpha(G_{2}\boxtimes\cdots\boxtimes G_ {r}\boxtimes W^{\prime})}{\alpha(W^{\prime})}\cdot\max_{W^{\prime}}\frac{ \alpha(G_{1}\boxtimes W^{\prime})}{\alpha(W^{\prime})}\] \[=\alpha^{*}(G_{2}\boxtimes\cdots\boxtimes G_{r})\alpha^{*}(G_{1})\] \[=\prod_{i=1}^{r}\alpha^{*}(G_{i}).\]
Therefore, equality must hold in (26). Hence
\[\alpha^{*}(G_{1})=\frac{\alpha(G_{1}\boxtimes W)}{\alpha(W)}.\]
A similar argument establishes the desired equality for \(G_{i}\), \(i>1\).
### Proof of Theorem 3
Observe that
\[\alpha^{*}(G|H)=\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}\geq \frac{\alpha(G)}{\alpha(H)}\]
by choosing \(W\) as a trivial graph with just one node and no edges.
Next, assume that we have two quantities \(X(\cdot)\) and \(Y(\cdot)\) on a graph such that
\[\alpha(G\boxtimes W)\leq X(G)Y(W) \tag{27}\]
for any two graphs \(G\) and \(W\). Then, we have
\[\alpha^{*}(G|H) =\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}\] \[\geq\sup_{W}\frac{\alpha(G\boxtimes W)}{X(H)Y(W)}\] \[=\frac{1}{X(H)}\sup_{W}\frac{\alpha(G\boxtimes W)}{Y(W)}.\]
Consider the choice of \(X(G)=Y(G)=\vartheta(G)\), the Lovasz number of \(G\). Then (27) is satisfied. Moreover, as shown in [1, Theorem 2],
\[\sup_{W}\frac{\alpha(G\boxtimes W)}{\vartheta(W)}=\vartheta(G).\]
Thus, the desired inequality holds for the Lovasz number of \(G\).
Similarly, consider the choices \(X(G)=\vartheta^{-}(G),Y(G)=\vartheta^{+}(G)\) and \(X(G)=\vartheta^{+}(G),Y(G)=\vartheta^{-}(G)\). Then, by [1, Lemma 7], (27) is satisfied. From [1, Theorem 8], we have
\[\sup_{W}\frac{\alpha(G\boxtimes W)}{\vartheta^{-}(W)}=\vartheta^{+}(G),\]
\[\sup_{W}\frac{\alpha(G\boxtimes W)}{\vartheta^{+}(W)}=\vartheta^{-}(G).\]
Therefore, we obtain the desired results for Schrijver's and Szegedy's variants of the Lovasz number.
### Proof of Proposition 1
Note that for a vertex transitive graph \(G\) of size \(n\), \(\alpha^{*}(G)=\frac{n}{\omega(G)}\), where \(\omega(G)\) is the clique number of \(G\). Thus, \(\alpha^{*}(G)=\frac{n}{k+1}\) and \(\alpha^{*}(H)=\frac{m}{k+1}\). It follows that \(\alpha^{*}(G|H)\geq\frac{n}{m}\) and hence \(\alpha(G^{c}\boxtimes H)\leq m\). If there is a homomorphism from \(H\) to \(G\) then by Lemma 5, we have \(\alpha(G^{c}\boxtimes H)\geq m\) and hence we find that \(\alpha^{*}(G|H)=\frac{n}{m}\). We show that such a homomorphism exists if there are integers \(\ell,s\geq 0\) such that
\[m=\ell n+s(k+1).\]
Let \(\{1,\ldots,n\}\) and \(\{1,\ldots,m\}\) be the vertex sets for \(G\) and \(H\). If \(\ell=0\) define \(\mathsf{g}(i(k+1)+j)=j\) where \(1\leq j\leq k+1\) and \(i\geq 0\). If \(\ell>0\), define \(\mathsf{g}(in+j)=j\) for \(1\leq j\leq n\) and \(i=0,\ldots,\ell-1\) and define \(\mathsf{g}(\ell n+i(k+1)+j)=j\) for \(1\leq j\leq k+1\) and \(i\geq 0\). It can be verified that this is a homomorphism from \(H\) to \(G\).
## 5 Open problems and future directions
We showed that \(\alpha^{*}(G|H)\) is computable and found its value for certain classes of graphs. Finding more efficient algorithms for computing \(\alpha^{*}(G|H)\) for arbitrary graphs \(G\) and \(H\) is left as future work. Understanding the connections (if any) between \(\alpha^{*}(G|H)\) and the fractional Haemers numbers of \(G\) and \(H\) is also an exciting problem to pursue.
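For very small instances, the max-form linear program of Theorem 1 can be evaluated directly by brute force. The following Python sketch (using networkx and scipy) is only an illustration of that computation, under the assumption that "\(\mathcal{T}_{i}\) and \(\mathcal{T}_{j}\) disconnected in \(H\)" means that no vertex of \(\mathcal{T}_{i}\) coincides with or is adjacent in \(H\) to a vertex of \(\mathcal{T}_{j}\); it enumerates all constraint collections explicitly and therefore scales only to toy graphs.

```python
import itertools
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def independent_sets(H):
    """All independent sets of H, including the empty set."""
    nodes = list(H.nodes)
    sets = []
    for r in range(len(nodes) + 1):
        for comb in itertools.combinations(nodes, r):
            if all(not H.has_edge(u, v) for u, v in itertools.combinations(comb, 2)):
                sets.append(comb)
    return sets

def disconnected(H, A, B):
    # assumed meaning: no vertex of A equals or is adjacent (in H) to a vertex of B
    return all(a != b and not H.has_edge(a, b) for a in A for b in B)

def alpha_star_relative(G, H):
    """Brute-force evaluation of the max-form LP of Theorem 1 (toy sizes only)."""
    gv = list(G.nodes)
    k = len(gv)
    IH = independent_sets(H)
    non_edges = [(i, j) for i in range(k) for j in range(i + 1, k)
                 if not G.has_edge(gv[i], gv[j])]
    A_ub, b_ub = [], []
    # one constraint sum_i w_i |T_i| <= 1 per admissible collection (T_1, ..., T_k)
    for Ts in itertools.product(IH, repeat=k):
        if all(disconnected(H, Ts[i], Ts[j]) for i, j in non_edges):
            A_ub.append([len(T) for T in Ts])
            b_ub.append(1.0)
    res = linprog(c=-np.ones(k), A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * k)
    return -res.fun

# toy usage: G a 5-cycle, H a single edge
print(alpha_star_relative(nx.cycle_graph(5), nx.complete_graph(2)))
```

Since the number of enumerated collections grows as \(|\mathcal{I}(H)|^{|\mathcal{V}(G)|}\), such a brute-force approach is far from the more efficient algorithms asked for above.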
In this paper, we considered the following expansions:
\[\alpha(G^{n})=\prod_{i=1}^{n}\frac{\alpha(G^{i})}{\alpha(G^{i-1})}=\prod_{i=1 }^{n}\frac{\alpha(G\boxtimes G^{i-1})}{\alpha(G^{i-1})}\leq\left[\sup_{W} \frac{\alpha(G\boxtimes W)}{\alpha(W)}\right]^{n}, \tag{28}\]
and
\[\frac{\alpha(G^{n})}{\alpha(H^{n})}=\prod_{i=1}^{n}\frac{\alpha(G\boxtimes G ^{i-1}\boxtimes H^{n-i})}{\alpha(H\boxtimes G^{i-1}\boxtimes H^{n-i})}\leq \left[\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}\right]^{n}. \tag{29}\]
Observe that we can also write other expansions, such as:
\[\alpha(G^{2n})^{s}\alpha(G^{n}) =\prod_{i=1}^{n}\left(\frac{\alpha(G^{2i})}{\alpha(G^{2i-2})} \right)^{s}\left(\frac{\alpha(G^{i})}{\alpha(G^{i-1})}\right)\] \[\leq\left[\sup_{W}\frac{\alpha(G\boxtimes G\boxtimes W\boxtimes W )^{s}\alpha(G\boxtimes W)}{\alpha(W\boxtimes W)^{s}\alpha(W)}\right]^{n}, \tag{30}\] \[\frac{\alpha(G^{n})^{1+t}}{\alpha(H^{n})^{1+s}} =\prod_{i=1}^{n}\left(\frac{\alpha(G\boxtimes G^{i-1}\boxtimes H ^{n-i})}{\alpha(H\boxtimes G^{i-1}\boxtimes H^{n-i})}\frac{\alpha(G\boxtimes G ^{i-1})^{t}}{\alpha(G^{i-1})^{t}}\frac{\alpha(H^{n-i})^{s}}{\alpha(H\boxtimes H ^{n-i})^{s}}\right)\] \[\leq\left[\sup_{W_{1},W_{2}}\frac{\alpha(G\boxtimes W_{1} \boxtimes W_{2})}{\alpha(H\boxtimes W_{1}\boxtimes W_{2})}\frac{\alpha(G \boxtimes W_{1})^{t}}{\alpha(W_{1})^{t}}\frac{\alpha(W_{2})^{s}}{\alpha(H \boxtimes W_{2})^{s}}\right]^{n}. \tag{31}\]
However, proving that the supremum is equal to a maximum in the above expressions seems challenging.
Next, we obtain a weak upper bound in (28) and (29) since we relax the supremum over all possible graphs \(W\). Our first open problem is the following conjecture. Proving or disproving the conjecture helps understand or improve the bounds in (28) and (29).
**Conjecture 1**.: _The limit \(\lim_{i\to\infty}\frac{\alpha(G\boxtimes G^{i-1})}{\alpha(G^{i-1})}\) exists for any graph \(G\). Moreover, for any \(c\in(0,1)\), the limit_
\[\lim_{n\to\infty}\frac{\alpha(G\boxtimes G^{nc}\boxtimes H^{n(1-c)})}{\alpha (H\boxtimes G^{nc}\boxtimes H^{n(1-c)})}\]
_exists for any graph \(G\) and \(H\)._
One approach to strengthen the relaxations in (28) and (29) is via the _tensorizing_ properties of graphs. Let \(T(\cdot)\) be a function that maps an arbitrary graph to a real number. We say that \(T\) satisfies a tensorization property if \(T(G^{i})=T(G)\) for all natural number \(i\). More generally, we may require that for any two graphs \(G\) and \(H\), the property \(T(G\boxtimes H)=\max(T(G),T(H))\) holds.7 Note that this definition of tensorization is similar to the one that has been studied and utilized for communication channels in the literature, e.g., see [1]. A tensorizing function allows us to restrict the supremum in (28) to all graphs \(W\) satisfying \(T(W)=T(G)\). In other words, we can write the following upper bound
Footnote 7: Or we may just impose the weaker condition \(T(G\boxtimes H)\leq\max(T(G),T(H))\).
\[\mathscr{C}(G)\leq\sup_{W:T(W)=T(G)}\frac{\alpha(G\boxtimes W)}{\alpha(W)}. \tag{32}\]
Some examples of tensorizing functions \(T(\cdot)\) are as follows:
* \(T(G)\) equals the diameter of \(G\).
* \(T(G)=\frac{\lambda_{2}(G)+1}{\lambda_{1}(G)+1}\) where \(\lambda_{1}\geq\lambda_{2}\geq\lambda_{3}\geq...\) are the eigenvalues of the adjacency matrix of \(G\).
* Ratio of any two additive expressions, e.g., \[T(G)=\frac{\log(\Theta(G))}{\log(\alpha^{*}(G))}\] where \(\Theta(G)\) is the Lovasz number of \(G\).
Unfortunately, evaluating (32) for these choices of \(T(\cdot)\) is hard. Moreover, other constraints might be needed to reflect the common structure of graphs \(G^{i}\) for \(i=1,2,\cdots\). For instance, for any arbitrary graph \(W\), if we add a new vertex to \(W\) and connect it to all existing vertices in \(W\), the terms \(\alpha(G\boxtimes W)\) and \(\alpha(W)\) will remain invariant, but the diameter and spectral decomposition of the adjacency matrix can change. Moreover, observe that (11) implies that when evaluating
\[\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(W)} \tag{33}\]
we can find a maximizer \(W\) that includes an arbitrary given graph \(H\) (see also Lemma 1). We do not know if such properties hold for the maximizers of
\[\max_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)}. \tag{34}\]
Such properties are useful to study because in (29), we identify the graph \(W\) as \(G^{i-1}\boxtimes H^{n-i}\). If we write the bound in (29) for two graph pairs \((G_{1},H_{1})\) and \((G_{2},H_{2})\) where \(G_{1}\) is a subgraph of \(G_{2}\) and \(H_{1}\) is a subgraph of \(H_{2}\), then \(G_{1}^{i-1}\boxtimes H_{1}^{n-i}\) will also be a subgraph of \(G_{2}^{i-1}\boxtimes H_{2}^{n-i}\) and this constraint may be added when we consider the two optimization problems jointly.
Another observation is that the upper bound
\[\frac{\mathscr{C}(G)}{\mathscr{C}(H)}\leq\alpha^{*}(G|H) \tag{35}\]
may be loose in general since for any graph \(T\), we have
\[\frac{\mathscr{C}(G\boxtimes T)}{\mathscr{C}(H\boxtimes T)} =\lim_{n\to\infty}\left(\frac{\alpha(G^{n}\boxtimes T^{n})}{ \alpha(H^{n}\boxtimes T^{n})}\right)^{\frac{1}{n}}\] \[=\lim_{n\to\infty}\left(\prod_{i=1}^{n}\frac{\alpha(G^{i} \boxtimes H^{n-i}\boxtimes T^{n})}{\alpha(G^{i-1}\boxtimes H^{n-i+1} \boxtimes T^{n})}\right)^{\frac{1}{n}}\] \[=\lim_{n\to\infty}\left(\prod_{i=1}^{n}\frac{\alpha(G\boxtimes W _{i})}{\alpha(H\boxtimes W_{i})}\right)^{\frac{1}{n}} \tag{36}\] \[\leq\sup_{W}\frac{\alpha(G\boxtimes W)}{\alpha(H\boxtimes W)} \tag{37}\]
where we set \(W_{i}=G^{i-1}\boxtimes H^{n-i}\boxtimes T^{n}\). Therefore,
\[\alpha^{*}(G|H)\geq\sup_{T}\frac{\mathscr{C}(G\boxtimes T)}{\mathscr{C}(H \boxtimes T)}. \tag{38}\]
Observe that for any two graphs \(G\) and \(T\) we have \(\mathscr{C}(G\boxtimes T)\geq\mathscr{C}(G)\mathscr{C}(T)\) and the strict inequality may occur [10]. Therefore, the supremum in the above inequality might be attained for some non-trivial graph \(T\), implying that (35) is loose.
Finally, one can study other properties of \(\alpha^{*}(G|H)\). In particular, we do not know if strict inequality can hold in the first part of Theorem 2, i.e., if
\[\alpha^{*}(G_{1}\boxtimes G_{2}|H_{1}\boxtimes H_{2})<\alpha^{*}(G_{1}|H_{1}) \alpha^{*}(G_{2}|H_{2})\]
can occur.
## Acknowledgment
The authors want to thank Dr. Omid Etesami for constructive discussions.
|
2309.01396 | Design and Testing of Cesium Atomic Concentration Detection System Based
on TDLAS | In order to better build the Neutral Beam Injector with Negative Ion Source
(NNBI), the pre-research on key technologies has been carried out for the
Comprehensive Research Facility for Fusion Technology (CRAFT). Cesium seeding
into negative-ion sources is a prerequisite to obtain the required negative
hydrogen ion. The performance of ion source largely depends on the cesium
conditions in the source. It is very necessary to quantitatively measure the
amount of cesium in the source during the plasma on and off periods (vacuum
stage). This article uses the absorption peak of cesium atoms near 852.1nm to
build a cesium atom concentration detection system based on Tunable Diode Laser
Absorption Spectroscopy (TDLAS) technology. The test experiment based on the
cesium cell is carried out, obtained the variation curve of cesium
concentration at different temperatures. The experimental results indicate
that: the system detection range is within 5*10E6-2.5*10E7 pieces/cm3 and the
system resolution better than 1*10E6 pieces/cm3. | LZ. Liang, SH. Liu, ZY. Song, Y. Wu, JL. Wei, YJ. Xu, YH. Xie, YL. Xie, CD. Hu | 2023-09-04T06:48:02Z | http://arxiv.org/abs/2309.01396v2 | ## Design and Testing of Cesium Atomic Concentration Detection System Based on TDLAS
_L.z. Liang,\({}^{a,b,}\)1 SH. Liu,\({}^{b,a}\) ZY. Song,\({}^{c,a}\) Y. Wu,\({}^{d,a}\) JL. Wei,\({}^{a}\) YJ. Xu,\({}^{a}\) YH. Xie,\({}^{a}\) YL. Xie \({}^{a}\)and CD. Hu\({}^{a}\)_
Footnote 1: Corresponding author.
\({}^{a}\)_Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, 230031, China_
\({}^{b}\)_School of Science, Shandong Jianzhu University, Jinan, 250101, China_
\({}^{c}\)_Institute of Physical Science and Information Technology, Anhui University, Hefei, 230601, China_
\({}^{d}\)_School of Electronic and Information Engineering, Anhui Jianzhu University, Hefei, 230601, China_
### _E-mail_: [email protected]
ABSTRACT: In order to better build the Neutral Beam Injector with Negative Ion Source (NNBI), pre-research on key technologies has been carried out for the Comprehensive Research Facility for Fusion Technology (CRAFT). Cesium seeding into negative-ion sources is a prerequisite to obtain the required negative hydrogen ions. The performance of the ion source largely depends on the cesium conditions in the source, so it is necessary to quantitatively measure the amount of cesium in the source during the plasma-on and plasma-off periods (vacuum stage). This article uses the absorption peak of cesium atoms near 852.1 nm to build a cesium atom concentration detection system based on Tunable Diode Laser Absorption Spectroscopy (TDLAS). A test experiment based on a cesium cell was carried out, and the variation of the cesium concentration with temperature was obtained. The experimental results indicate that the system detection range is within \(5\times 10^{6}\)-\(2.5\times 10^{7}\) atoms/cm\({}^{3}\) and the system resolution is better than \(1\times 10^{6}\) atoms/cm\({}^{3}\).
KEYWORDS: TDLAS, second harmonic, cesium concentration detection, negative ion source
## 1 Introduction
The negative ions in a negative hydrogen ion source are mainly formed when hydrogen atoms and positive ions from the plasma collide with a low-work-function surface, and the conversion rate depends on the surface work function. In an ion source, cesium deposition on the metal surface reduces the surface work function, thereby increasing the production of negative hydrogen ions in the hydrogen plasma\({}^{[1]}\). Since cesium kinetics strongly influence the performance of the ion source, it is necessary to measure the cesium concentration quantitatively\({}^{[2]}\). TDLAS is currently an advanced gas-measurement technique that exploits the narrow linewidth of tunable semiconductor lasers, whose wavelength can be tuned through the injection current, and determines the gas concentration by analyzing the selective absorption of the laser by the gas to be measured. It has the advantages of high measurement accuracy, strong selectivity, and fast response speed\({}^{[3]}\).
This article introduces the design and functional testing of a cesium atom concentration detection system based on TDLAS technology, which detects the concentration and changes of cesium atoms using a cesium vapor pool. Sec.2 will introduce the TDLAS detection principle. Sec.3 will introduce the design principle and structure of the cesium vapor pool. Sec.4 introduces system testing, including qualitative and quantitative detection of cesium.
## 2 TDLAS detection principle
According to the Lambert-Beer law, when a single-frequency laser at frequency \(\nu\) passes through an absorbing sample cell, its intensity can be described by the following formula\({}^{[4]}\):
\[I\left(\nu\right)=I_{0}\left(\nu\right)\exp\left[-\alpha\left(\nu\right)L \right]=I_{0}\left(\nu\right)\exp\left[-\sigma\left(\nu\right)cL\right] \tag{1}\]
where \(I\) is the output light intensity, \(I_{0}\) is the incident light intensity, \(\alpha\left(\nu\right)\) is the absorption coefficient of the gas at frequency \(\nu\), \(\sigma\left(\nu\right)\) is the absorption cross section of the gas at frequency \(\nu\), \(L\) is the optical path length of the sample cell, and \(c\) is the concentration of the absorbing gas.
In order to reduce the interference of noise signals, wavelength modulation is usually used to modulate the output wavelength of the laser. When a laser with center frequency \(\nu_{e}\) is modulated by a modulation wave of frequency \(\omega\), its instantaneous frequency can be expressed as:
\[\nu=\nu_{e}+\delta_{v}\cos\omega t \tag{2}\]
where \(\delta_{v}\) is the modulation amplitude. The intensity of the light transmitted through the sample cell can then be expanded as a cosine Fourier series:
\[I\left(\nu_{e},t\right)=\sum\nolimits_{n=0}^{\infty}A_{n}\left(\nu_{e} \right)\cos\left(n\omega t\right) \tag{3}\]
Each harmonic component \(A_{n}\) can be measured with a lock-in amplifier:
\[A_{n}\left(\nu_{e}\right)=\frac{2I_{0}cL}{\pi}\int_{0}^{\pi}-\sigma\left(\nu_ {e}+\delta_{v}\cos\theta\right)\cos n\theta d\theta \tag{4}\]
In the above equation, \(\theta=\omega t\). Each harmonic component is therefore directly proportional to the gas concentration, and the second harmonic is usually used for detection in experiments.
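As an illustration of Eqs. (1)-(4), the following Python sketch simulates wavelength-modulated absorption through a Lorentzian line and extracts the second-harmonic component by numerical lock-in demodulation. All line-shape and modulation parameters are placeholders chosen for illustration and are not the values used in this work; the amplitude of the extracted 2f signal scales with the assumed concentration-path-length product, which underlies the concentration calibration used in the experiments below.

```python
import numpy as np

# illustrative parameters only (not the experimental values of this work)
nu0      = 0.0      # line center, in units of the Lorentzian half-width
gamma    = 1.0      # Lorentzian half-width
sigma0   = 1.0      # peak absorption cross-section (arbitrary units)
cL       = 0.05     # concentration * path length (optically thin case)
f_mod    = 4e3      # sinusoidal modulation frequency, Hz
delta_nu = 2.2      # modulation amplitude; ~2.2*gamma maximizes the 2f amplitude

def sigma(nu):
    """Lorentzian absorption cross-section."""
    return sigma0 * gamma**2 / ((nu - nu0)**2 + gamma**2)

def second_harmonic(nu_c, n_periods=50, n_per_period=200):
    """Lock-in style extraction of the 2f component at scan point nu_c."""
    t = np.arange(n_periods * n_per_period) / (f_mod * n_per_period)
    nu = nu_c + delta_nu * np.cos(2 * np.pi * f_mod * t)   # Eq. (2)
    I = np.exp(-sigma(nu) * cL)                            # Eq. (1) with I0 = 1
    ref = np.cos(2 * 2 * np.pi * f_mod * t)                # 2f reference
    return 2.0 * np.mean(I * ref)                          # cosine Fourier coefficient A_2

scan = np.linspace(-6.0, 6.0, 121)          # slow sawtooth scan across the line
wms_2f = np.array([second_harmonic(nu) for nu in scan])
print("peak |2f| amplitude:", np.abs(wms_2f).max())        # grows with cL
```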
## 3 Design of cesium vapor pool
### Design principle and structure of cesium cell
The system uses a controllable cesium vapor pool for the experimental research. The design of the controllable cesium vapor cell is based on the saturated vapor pressure and the ideal gas law. In a closed volume, the pressure of the vapor in phase equilibrium with the solid or liquid at a given temperature is called the saturated vapor pressure; it differs from substance to substance and increases with temperature. The ideal gas law is an idealized model of a real gas: the higher the temperature and the lower the pressure, the closer the vapor is to an ideal gas, and the ideal conditions cannot be fully realized in practice. A cold spot is therefore included in the design, and its temperature, adjustable from -20\({}^{\circ}\)C to 30\({}^{\circ}\)C, is used to compensate for the resulting deviation from the ideal gas law. The following empirical equation is used to calculate the saturated vapor pressure[7]:
\[\log\left(p/Pa\right)=5.006+A+BT^{-1}+C\log T+DT^{-3} \tag{5}\]
The empirical equation can calculate the vapor pressure with an accuracy of \(\pm\) 5% or better. In the equation, \(A\), \(B\), \(C\) and \(D\) are constants for calculating the cesium vapor pressure and \(T\) is the absolute temperature; for cesium, \(A\) is 4.165, \(B\) is -3830, and the \(C\) and \(D\) terms are absent. The empirical equation obtained by substituting these constants is:
\[\log\left(p/Pa\right)=9.171-\frac{3830}{T} \tag{6}\]
The ideal gas law is an equation of state that describes the relationship between pressure, volume, amount of substance and temperature of an ideal gas in equilibrium. The expression is:
\[pV=nRT \tag{7}\]
The number of particles is calculated as follows:
\[N=\frac{pVN_{A}}{RT} \tag{8}\]
where \(N\) is the number of particles, \(p\) is the pressure, \(V\) is the volume, \(N_{A}\) is the Avogadro constant, \(T\) is the absolute temperature, and \(R\) is the gas constant.
Combining the saturated vapor pressure relation above with the ideal gas law, the concentration of cesium atoms in the closed cavity can be calculated as a function of temperature.
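For illustration, the expected cesium number density as a function of cell temperature follows directly from Eqs. (6) and (8); the short Python sketch below reproduces this calculation and is not the data-analysis code of this work.

```python
import numpy as np

R  = 8.314       # gas constant, J/(mol K)
NA = 6.022e23    # Avogadro constant, 1/mol

def cs_vapor_pressure(T):
    """Saturated cesium vapor pressure in Pa from Eq. (6); T in kelvin."""
    return 10.0 ** (9.171 - 3830.0 / T)

def cs_number_density(T):
    """Cesium atom number density (atoms per cm^3) from the ideal gas law, Eq. (8)."""
    p = cs_vapor_pressure(T)            # Pa
    n_per_m3 = p * NA / (R * T)         # atoms per m^3
    return n_per_m3 * 1e-6              # atoms per cm^3

for T_C in (30, 40, 52):
    T = T_C + 273.15
    print(f"{T_C} degC: p = {cs_vapor_pressure(T):.3e} Pa, "
          f"n = {cs_number_density(T):.3e} atoms/cm^3")
```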
Figure 1 shows the structure of the cesium vapor pool. Cesium is a light golden-yellow active metal that is solid at room temperature, has a low melting point, reacts strongly with oxygen and water, and can form compounds with various elements. Therefore, stainless steel is chosen as the cesium cell material. The cesium cell consists of a cross-shaped vacuum chamber with a length of 150.8 mm and a volume of 7.70823\(\times\)10\({}^{-5}\) m\({}^{3}\). Cesium is stored at position A of the vacuum chamber, and the cesium vapor is evaporated into the whole chamber by heating. A thermocouple is set at position B to monitor the temperature of the cavity and facilitate control of the amount of cesium vapor. Ruby windows are used at positions C at both ends of the optical path in the cell, which increases the transmittance of the infrared laser. Position D is a detachable flange used to load cesium into the cavity. The valve at position E is used to evacuate the vacuum chamber. Position F holds the semiconductor cooling elements for cooling the cold spot.
### Temperature control system
In the experiment, to obtain cesium vapor it is necessary to heat the cesium so that it evaporates and fills the cesium cell as vapor[8]. This system uses the SPEC-TE2 intelligent temperature controller from Chuangpu Instrument Technology Co., Ltd. (Part 4 in Figure 3) to control the amount of cesium vapor via heating wires wrapped around the outer wall of the cesium vapor cell. The temperature controller can be programmed with up to 60 temperature-control segments as needed to heat, hold, and cool the cesium cell, and it monitors the real-time temperature of the cell through thermocouples.
## 4 Systems design
### Experimental setup
The schematic diagram of the cesium atomic concentration detection system structure is shown in Figure 2, which mainly consists of three parts: a signal generation driving unit, an optical transmission testing unit, and a signal acquisition and processing unit.
Figure 1: Structure diagram of cesium vapor cell
The signal generation unit includes a signal generator, a laser controller, and a laser, and is mainly used to generate the laser wavelength required for the experiments. The signal generator outputs a low-frequency sawtooth scanning signal superimposed with a high-frequency sine-wave signal, which is applied to the laser through the laser controller to scan and modulate the required laser wavelength. The optical transmission unit includes a collimator, a filter, and the controllable cesium vapor cell. After being collimated by the collimator, the modulated laser is absorbed and attenuated by the cesium vapor cell and then transmitted to the photodetector. The cesium vapor is obtained by heating the cesium cell with the temperature controller. The signal acquisition and processing unit includes a photodetector, a signal amplifier, a lock-in amplifier, a data acquisition card, and signal-analysis software. The photodetector converts the attenuated optical signal into an electrical signal, which is amplified by the signal amplifier and then transmitted to the lock-in amplifier. The lock-in amplifier filters the noise and extracts the second harmonic of the signal. Finally, the data acquisition card converts the electrical signal into a digital signal and transmits it to the computer software for data processing. Figure 3 shows the overall view of the constructed experimental system.
Figure 2: Schematic diagram of the cesium atom concentration detection system structure

Figure 3: Experimental diagram of cesium atom concentration detection system

### Qualitative testing of the system

Main experimental parameters: the low-frequency sawtooth scanning signal has a frequency of 50 Hz and an amplitude of 1000 mVpp; the high-frequency sinusoidal modulation signal has a frequency of 4 kHz and an output amplitude of 150 mV. The driving current of the laser is 76 mA and the working temperature of the laser is 35.7\({}^{\circ}\)C. The phase of the lock-in amplifier is about -127.6\({}^{\circ}\), the output voltage sensitivity is 10 mV, and the time constant is 1 ms.
By controlling the temperature of the cesium vapor cell, cesium is evaporated so that the cell contains a certain concentration of cesium vapor[9]. The signal generator outputs a low-frequency sawtooth scanning signal superimposed with a high-frequency sine-wave signal, which is applied to the laser through the laser controller to scan and modulate the 852.1 nm laser wavelength. The absorption optical path is then set up: the modulated laser is collimated by a collimator before passing through the cesium vapor cell, where the cesium vapor attenuates the laser and generates an absorption signal, as shown in Channel 1 of Figure 4. The photodetector converts the attenuated optical signal into an electrical signal, which is amplified by the signal amplifier and transmitted to the lock-in amplifier. The lock-in amplifier filters the noise and extracts the second harmonic of the signal, which is proportional to the concentration, as shown in Channel 2 of Figure 4. Based on these experimental results, it can be concluded that the system can qualitatively detect cesium.
### Quantitative detection of the system
The set point of the temperature controller is increased in a gradient, starting from 30 \({}^{\circ}\)C and rising in steps of 2 \({}^{\circ}\)C up to 52 \({}^{\circ}\)C. For each step, a rise time of 10 minutes and a holding time of 30 minutes are used so that the cesium vapor in the cell stabilizes[10]. The cesium concentration is calibrated from the amplitude of the second-harmonic signal, and the resulting trend of the cesium concentration with the cesium cell temperature is shown in Figure 5.
The experimental results show that the concentration of cesium atoms increases with temperature, and the two follow the same trend. This indicates that the temperature and concentration in the cesium vapor cell can be controlled and that the system can quantitatively detect the concentration of cesium in the cell[11][12]. However, when the temperature of the cesium cell exceeds 52 \({}^{\circ}\)C, that is, when the concentration in the cavity increases beyond a certain range, the measured concentration no longer changes: the absorption signal tends to saturation and the absorption peak widens and flattens, as shown in Figure 6. Therefore, it can be concluded that the system detection range is within 5\(\times\)10\({}^{6}\)-2.5\(\times\)10\({}^{7}\) N/cm\({}^{3}\) and the system resolution is better than 1\(\times\)10\({}^{6}\) N/cm\({}^{3}\).
Figure 4: (a) Original signal map after modulation, (b) Absorption signal map
Figure 5: Trend chart of cesium concentration changing with cesium cell temperature.
Figure 6: Signal under absorption saturation
Figure 7: System Stability Experiment Results
### Stability of the system
The main purpose of this experiment is to verify the stability of the concentration (harmonic signal) under a continuous temperature cycle. The cesium cell is heated from 30 \({}^{\circ}\)C to 52 \({}^{\circ}\)C over 1 hour, held at 52 \({}^{\circ}\)C for 45 minutes, and then cooled back to 30 \({}^{\circ}\)C over 1 hour. The experiment is repeated twice consecutively, and the results are shown in Figure 7. The trends of the concentration increase and decrease in the two runs are basically the same, and the error on the highest concentration between the two runs is within 1.44%. From these results it can be concluded that the system has good stability.
## 5 Conclusion
This article has introduced the design and functional testing of a cesium atom concentration detection system based on TDLAS technology, which detects the concentration of cesium atoms and its changes using a cesium vapor cell. The tests show that the system can perform both qualitative and quantitative detection of cesium. The gradient curve of concentration variation with temperature shows that the detection range of the system is within 5\(\times\)10\({}^{6}\)-2.5\(\times\)10\({}^{7}\) N/cm\({}^{3}\) and the system resolution is better than 1\(\times\)10\({}^{6}\) N/cm\({}^{3}\). Two consecutive repeated experiments were conducted, and the trend of the concentration change was basically the same, with a maximum concentration error between the two runs within 1.44%, indicating that the system has good stability. The cesium atom concentration detection technique of this system provides guidance for optimizing the performance of negative ion sources.
###### Acknowledgements.
This work was supported by the HFIPS Director's Fund (YZJJQY202204 and 2021YZGH02), Comprehensive Research Facility for Fusion Technology Program of China under Contract No. 2018-000052-73-01-001228 and National Key R&D Program of China (2017YFE300103, 2017YFE300503)
|
2307.11429 | Emergence of Debye scaling in the density of states of liquids under
nanoconfinement | In the realm of nanoscience, the dynamic behaviors of liquids at scales
beyond the conventional structural relaxation time, $\tau$, unfold a
fascinating blend of solid-like characteristics, including the propagation of
collective shear waves and the emergence of elasticity. However, in classical
bulk liquids, where $\tau$ is typically of the order of 1 ps or less, this
solid-like behavior remains elusive in the low-frequency region of the density
of states (DOS). Here, we provide evidence for the emergent solid-like nature
of liquids at short distances through inelastic neutron scattering measurements
of the low-frequency DOS in liquid water and glycerol confined within graphene
oxide membranes. In particular, upon increasing the strength of confinement, we
observe a transition from a liquid-like DOS (linear in the frequency $\omega$)
to a solid-like behavior (Debye law, $\sim\omega^2$) in the range of $1$-$4$
meV. Molecular dynamics simulations confirm these findings and reveal
additional solid-like features, including propagating collective shear waves
and a reduction in the self-diffusion constant. Finally, we show that the onset
of solid-like dynamics is pushed towards low frequency along with the
slowing-down of the relaxation processes upon confinement. This
nanoconfinement-induced transition, aligning with k-gap theory, underscores the
potential of leveraging liquid nanoconfinement in advancing nanoscale science
and technology, building more connections between fluid dynamics and materials
engineering. | Yuanxi Yu, Sha Jin, Xue Fan, Mona Sarter, Dehong Yu, Liang Hong, Matteo Baggioli | 2023-07-21T08:45:10Z | http://arxiv.org/abs/2307.11429v3 | # Unveiling the solid-like dynamics of liquids at low-frequency via nano-confinement
###### Abstract
At frequencies higher than the inverse of the structural relaxation time \(\tau\), the dynamics of liquids display several solid-like properties, including propagating collective shear waves and emergent elasticity. However, in classical bulk liquids, where \(\tau\) is typically of the order of 1 ps or less, this solid-like behavior cannot be observed in the low-frequency region of the vibrational density of states (VDOS), below a few meV. In this work, we provide compelling evidence for the emergent solid-like nature of liquids at short distances through inelastic neutron scattering measurements of the low-frequency vibrational density of states (VDOS) in liquid water and glycerol confined within graphene oxide membranes. In particular, upon increasing the strength of confinement, we observe a continuous evolution from a liquid-like VDOS (linear in the frequency \(\omega\)) to a solid-like behavior (Debye law, \(\sim\omega^{2}\)) in the range of 1-4 meV. Molecular dynamics simulations confirm these findings and reveal additional solid-like features, including propagating collective shear waves and a reduction in the self-diffusion constant. Finally, we show that the onset of solid-like dynamics is pushed towards low frequency because of the slowing-down of the relaxation processes upon confinement, and that the scale at which solidity emerges is qualitatively compatible with k-gap theory and the concept of gapped momentum states. Our results provide a convincing experimental proof of the continuity between liquids and solids, as originally advocated by Frenkel and Maxwell, and a deeper understanding of the dynamics of liquids across a wide range of length scales.
## Introduction
From a structural point of view, liquids are profoundly different from solids. They do not display long-range order and they cannot be defined using the theoretical concept of spontaneous symmetry breaking and its direct consequences, such as the existence of long-wavelength phonon modes and the elasticity that emerges from that [1]. This poses fundamental questions in the description of the low-energy collective dynamics of liquids which are predominantly relaxational, rather than vibrational, and usually described with the theory of hydrodynamics [2]. It comes then as no surprise that the dynamical properties of liquids and solids at large distances and over long timescales - the hydrodynamic regime - are substantially different. To make this distinction more precise, it is instructive to introduce the concept of structural relaxation time \(\tau\), which determines the speed of atomic re-arrangements in a liquid and the extent of the hydrodynamic regime. For bulk water at room temperature \(\tau\) is of the order of 1 ps [3], close to the value for the Maxwell relaxation time [4, 5]. In the rest of the manuscript, we will use the term low-frequency to indicate the range of energies below the inverse of the structural relaxation time for the bulk liquid, in which we do not expect any solid-like features. This corresponds to an energy of approximately \(\hbar\omega<4\)meV for water at room temperature.
A direct manifestation of the contrast between solids and liquids can be observed in the low-frequency behavior of their vibrational density of states (VDOS), \(g(\omega)\). 3D bulk solids display a characteristic quadratic scaling at low frequency \(g(\omega)\propto\omega^{2}\), which is known as the Debye law [6]. The law indicates that the low-frequency dynamics of solids are governed by collective vibrational and propagating modes known as phonons, which can be identified with the Goldstone modes for the spontaneously broken translational symmetry [7]. On the contrary, the low-frequency VDOS of liquids does not obey Debye law, but it rather exhibits two distinctive characteristics. First, the zero frequency value of \(g(\omega)\) does not vanish and directly relates to the liquid relaxational dynamics, _i.e._, a non-zero self-diffusion constant [8]. Second, the low-frequency scaling of the VDOS for 3D bulk liquids is linear in frequency, \(g(\omega)\propto\omega\), as derived from theoretical arguments [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19], confirmed by instantaneous normal mode (INM) simulations [9, 20], and demonstrated by neutron scattering experiments [21, 22, 23, 24]. For simplicity, in the rest of the manuscript, we will refer to the quadratic Debye behavior as solid-like and to the linear scaling in frequency as liquid-like (see insets in Figure 1). Since the dynamics of liquids for frequencies \(\omega<1/\tau\) do not involve any solid-like motion, their DOS cannot display any Debye-like behavior at low frequency, as experimentally confirmed in [21, 22, 23].
After acknowledging that the low-frequency dynamics of bulk liquids are drastically different from those in solids, a question arises as to whether this sharp contrast persists when such a regime is abandoned, _i.e._, at high frequencies or short scales. It was realized early on by Frenkel [25] (see also Zwanzig [26], or the more recent reviews [27, 28]) that liquid dynamics involve solid-like oscillatory motion around a position of local equilibrium interrupted by diffusive jumps toward different potential minima. These jumps constitute the fundamental origin of fluidity (liquid flow) and they happen at an average rate \(1/\tau\) (see Figure 1), where \(\tau\) is the structural Maxwell relaxation time or the average time for a particle to jump out of the cage formed by its neighbors. As a consequence, the dynamics of liquids are expected to be solid-like for frequencies faster than this relaxation rate, \(\omega\gg 1/\tau\), simply because relaxation has no time to take place. In other words, one does expect the appearance of solid-like vibrational modes in the high-frequency dynamics of liquids. This is indeed the case, as reported by many works [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41].
Another simple way to model a continuous transition between a propagating oscillatory large frequency behavior and a low-frequency liquid-like collective diffusive dynamics is by using the so-called telegrapher equation, or k-gap theory [28]:
\[\omega^{2}+i\omega\tau_{g}^{-1}=v^{2}k^{2}, \tag{1}\]
Figure 1: **The short scale solid-like nature of liquids.**_At short distances and short times, liquids exhibit solid-like properties. First, below a critical length-scale, propagating shear waves are expected in liquids, instead of the large wavelength shear diffusion. Second, for times below a structural relaxation scale \(\tau\), the dynamics is oscillatory and confined at the bottom of the potential landscape basin. Hence, the vibrational density of states of liquids is expected to be liquid-like at large scales and to become solid-like (Debye law) at short scales._
where \(v\) is the asymptotic speed of propagation of the collective waves at large frequency, and \(k,\omega\) are respectively the wave-vector and the frequency of the collective mode. In the expression above, \(\tau_{g}\) is a timescale governing relaxation processes and expected to decrease with temperature. By using Maxwell interpolation, one can make the assumption of identifying \(\tau_{g}\) with the Maxwell relaxation time \(\tau_{M}\equiv\eta/G_{\infty}\), where \(\eta\) and \(G_{\infty}\) are respectively the shear viscosity and the instantaneous shear modulus [27, 28]. Although this assumption looks reasonable, a formal derivation of this equivalence does not exist (see discussion in [42]). On top of that, we are aware of only one instance, liquid Ga [43], in which this equivalence is qualitatively confirmed. For these reasons, at this stage we will consider \(\tau_{g}\) as an independent parameter which, as we will see, can be extracted from the dispersion relation of gapped shear waves in liquids (see for example [44]). Back to Eq.(1), in the low frequency limit, the first term can be neglected and the dynamics is purely diffusive (liquid-like), with macroscopic diffusion constant \(\mathcal{D}=v^{2}\tau_{g}\) (not to be confused with the single-particle self-diffusion constant \(D\)). In the opposite limit, the second term can be neglected and the dynamics is purely vibrational and solid-like. A more careful analysis still reveals a crucial difference between the large frequency behavior arising from Eq.(1) and the dynamics of phonons in solids. Indeed, expanding the solution of Eq.(1) at large frequency, we obtain a dispersion of the type \(\omega=vk-i/(2\tau_{g})\), whose imaginary part is wave-vector independent and therefore qualitatively different from that of phonons in solids [1] (where at finite temperature and low wave-vector it is quadratic in \(k\), _i.e._, Akhiezer damping). This distinction is compatible with the observed broadening of high-frequency solid-like modes in liquids, which arises physically from the distribution of the liquid local structures [35].
Equation (1) appears in the description of several physical systems [28] (_e.g._, Cattaneo heat conduction equation) and plays a fundamental role in the understanding of the dynamics of collective shear waves in liquids, where it has been relabeled as the k-gap equation [27]. Besides modelling the high-frequency solid-like collective motion in liquids, Eq.(1) presents another striking prediction. In particular, it suggests the existence of shear waves with finite real frequency in liquids above a certain wave-vector cutoff, known as k-gap, and given by \(k_{g}=1/(2v\tau_{g})\). This prediction has been verified in simulations of classical liquids [44, 45, 46, 47, 48, 49, 50, 51, 52] and plasmas [53, 54, 55, 56, 57, 58], but experimentally confirmed only in a two-dimensional Yukawa dusty plasma [59]. In the literature, it is often claimed that the presence of a k-gap is tantamount to saying that below a critical length-scale \(L_{g}=2\pi/k_{g}\), and independently of the value of the frequency, the collective dynamics of liquids is akin to that of solids as it presents propagating shear waves. As we will see, this argument is incorrect from a theoretical perspective since the emergence of a finite real frequency does not guarantee the presence of propagating modes. A more conservative approach requires that the real part of the frequency is at least larger than its imaginary part, or in other words, that the wave is able to propagate over a few wavelengths before decaying. This second criterion applied to the k-gap equation qualitatively agrees with Frenkel's original argument \(\omega>1/\tau_{g}\) (see Supplementary Information). As we will argue, this liquid-like to overdamped to propagating dynamics is in perfect agreement with the previous experimental analysis for water presented in [60]. There, it has been shown that the dynamics of room temperature water is liquid-like (hydrodynamic) for \(q<2\)nm\({}^{-1}\) (to be compared with our value for \(k_{g}\) given by \(\approx 2.2\)nm\({}^{-1}\)), overdamped in the range \(2\)nm\({}^{-1}<q<4\)nm\({}^{-1}\), and solid-like above that scale.
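To make the distinction between the opening of the k-gap and the onset of genuinely propagating (underdamped) shear waves explicit, the short Python sketch below solves Eq. (1) for its complex frequency and locates both thresholds numerically; the values of \(v\) and \(\tau_{g}\) are placeholders chosen only for illustration and are not fitted parameters of this work.

```python
import numpy as np

# placeholder parameters (illustration only)
v     = 1.0e3     # asymptotic shear-wave speed, m/s
tau_g = 1.0e-12   # relaxation time entering Eq. (1), s

k_gap = 1.0 / (2.0 * v * tau_g)              # wave-vector gap, Eq. (2)
k = np.linspace(0.0, 4.0 * k_gap, 2000)

# roots of omega^2 + i*omega/tau_g - v^2 k^2 = 0:
# omega = +/- sqrt(v^2 k^2 - 1/(4 tau_g^2)) - i/(2 tau_g)
# above the gap both roots have |Im(omega)| = 1/(2 tau_g); below it Re(omega) = 0
re = np.sqrt(np.clip((v * k) ** 2 - 1.0 / (2.0 * tau_g) ** 2, 0.0, None))
im = 1.0 / (2.0 * tau_g)

k_prop = k[np.argmax(re > im)]               # first k with Re(omega) > Im(omega)

print(f"k-gap (finite Re)     : {k_gap * 1e-10:.3f} 1/Angstrom")
print(f"underdamped (Re > Im) : {k_prop * 1e-10:.3f} 1/Angstrom")
```

Within this simple model the underdamped threshold always lies at \(\sqrt{2}\,k_{g}\), i.e., above the k-gap itself, consistent with the existence of the intermediate overdamped window discussed above.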
In parallel to this discussion, in recent years there has been intense progress in revealing and understanding the emergent short-scale elasticity of liquids [61, 62, 63, 64, 65, 66, 67, 68], which has ultimately led to the question of where the hydrodynamic limit really is [69, 70]. The solid-like vibrational dynamics at short-scale are obviously connected to the emergence of macroscopic rigidity at short distances. A large part of this research program has been focused on disclosing the solid-like nature of liquids by considering confined systems, and therefore directly probing their short-scale dynamics. Confined liquid systems include liquids in nanoporous materials, droplets, liquid films, and liquids affected by interfacial interactions on the microscopic scale. Under these circumstances, the interactions between atoms or molecules in confined liquids undergo significant changes due to geometric constraints, resulting in distinct manifestations in the VDOS. The effects of confinement on liquid structure, transport and dynamics have been explored in several works, specially for the case of water [71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86].
In confined liquids, the mechanical relaxation times have been shown to increase dramatically as compared to bulk behavior [83]. A strong enhancement of the thermal conductivity has also been observed under confinement [77] and attributed, following Frenkel's idea [25], to the presence of additional transverse oscillations occurring because of the larger liquid relaxation time. Closer to the topic of this manuscript, the low-energy behavior of the VDOS of confined water has been investigated in [78] and more recently in [86]. In [78] a distinctive difference between fragile confined water, where \(g(\omega)\propto\omega^{1.5}\), and strong water, where \(g(\omega)\propto\omega^{2}\), was observed and explained using mode-coupling theory and the idea of a crossover from heterogeneity-dominated dynamics to phonon-like excitations. In [86], a reduction of the fraction of unstable instantaneous normal modes and of the diffusion constant as a function of the confinement width, indicating the emergent solid-like dynamics, has been reported using simulations. More importantly, the low-frequency power-law of the instantaneous normal mode VDOS has been studied as a function of the width and the direction. Interestingly, [86] found that the power-law in the direction of confinement grows gradually upon reducing the width and reaches a value around \(\approx 1.7\), close to the Debye law.
In summary, despite the effects of confinement on the structure and dynamics of liquids being largely explored in the past, a convincing experimental proof of the emergence of a solid-like collective dynamics under confinement is still missing. Furthermore, it is not yet clear how to connect the emergence of solid-like dynamics in bulk liquids at high-frequency to
the same phenomenon in confined liquids at low frequency. In this study, we employed a combination of inelastic neutron scattering and molecular dynamics simulations to analyze two liquids nano-confined within graphene oxide membranes (GOM): nano-confined water at 280K and nano-confined glycerol at 300K. We discovered that the scaling of the low-frequency (below 4 meV) vibrational density of states gradually transitions from a liquid-like linear behavior to higher powers, close to the Debye solid-like quadratic scaling, as the confinement size decreases. Through the analysis of the current correlation functions within the two confined systems, we attribute this behavior to the emergence of collective transverse modes within the confined liquids and the slowing down of the structural relaxation processes. The validation of this phenomenon in two different systems suggests that these emergent solid-like vibrational modes induced by confinement may be a universal feature for confined liquids, consistent with the observation of elastic behavior in confined liquids previously reported in the literature. Using k-gap theory, and the physical picture of liquids originally introduced by Frenkel and Maxwell, we explain the observed phenomenology from simple principles, and we provide a qualitative estimate for the onset of solid-like features which is in good agreement with the experimental and simulation results. Our findings might provide a theoretical basis and guidance for the design and application of confined liquid systems, such as nanoporous materials, droplets, and liquid films, in the fields of energy storage, catalysts, and biomedicine. More fundamentally, they provide direct evidence of the idea of continuity between the solid and liquid phases of matter.
## Experimental and simulation results
Using inelastic neutron scattering, we measured the vibrational density of states (VDOS) of water confined within GOM at 280K and glycerol confined within GOM at 300K, for various confinement sizes. Details regarding sample preparation and experimental methods are provided in the Methods section. See also [87] for a similar setup.
As a reference, we present the data for bulk water at 280K and glycerol at 300K in Figure 1(a) and Figure 1(b) respectively. Both systems exhibit a linear scaling of the VDOS at low frequencies, consistent with previous experimental results [21, 22, 23]. Notice that the results for water are consistent with previous estimates for the structural relaxation time, or equivalently the Maxwell relaxation time, which at room temperature is around 1 ps [3, 4, 5]. This implies that the dynamics of water at room temperature is expected to be fully liquid-like (linear in frequency) below \(\approx\) 4.13 meV, consistent with our fits. These findings are further supported by molecular dynamics simulations (see next section). In stark contrast, under extremely confined conditions, where the water (or glycerol) to GOM weight ratio reaches 0.1 (0.2) respectively, the scaling of the low-frequency VDOS deviates significantly from the linear liquid-like scaling. The VDOS displays a power law behavior \(\omega^{\alpha}\) where the exponent \(\alpha\) lies between the quadratic Debye law for solids and the linear in frequency law for liquids. The relationship between the weight ratio (liquid to GOM) and the confinement size is examined by X-ray diffraction in Figure 1(c), and the detailed results are presented in Table 1, where the inter-layer distance in GOM varies from 7 Å to 1.3 nm for water when this ratio changes from 0.1 to 0.7, while it varies from 1 nm to 1.6 nm for glycerol when the ratio changes from 0.2 to 0.8. The extreme confinement size provided by GOM for both liquids is approximately 7 Å (see Figure 1(c)). At that scale, the power \(\alpha\) reaches the value of \(\approx\) 1.58 and \(\approx\) 1.89 respectively for water and glycerol. In our previous work [87], we have demonstrated that the VDOS of GOM itself does not contribute significantly to the VDOS of the confined system in this frequency range. Therefore, the low-frequency scaling just reported must arise from the changes in the dynamics within the confined liquids themselves.
By fitting with a simple power law function, we found that the scaling of the low-frequency VDOS of confined water and glycerol gradually deviates from the linear scaling of the bulk liquid as the confinement size is reduced. This suggests that under unidirectional confinement, the influence on low-frequency vibrational modes in liquids positively correlates with the degree of confinement, and the scaling tends to approach the Debye value typical of solids. The small angle X-ray scattering (SAXS) for water confined in GOM, shown in Figure 1(d), is employed to provide further proof that the liquid remains in the liquid phase under confinement and no crystalline structure is formed. The Bragg peaks, which would indicate crystalline solid ice, only appear at lower temperatures, below 270K. At room temperature, due to the absence of the Bragg peaks characteristic of crystalline ice, water confined within GOM remains in the liquid state. The diffusive behavior of the water confined in GOM, as shown in Figure 1(b), has also been reported in [88]. Therefore, the additional modes observed in the low-frequency VDOS do not originate from a solid phase but rather from the confined liquid water itself.
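The exponent extraction mentioned above amounts to a simple power-law fit on a fixed low-energy window; a minimal Python sketch of such a fit is given below, applied to synthetic data for illustration (it is not the analysis code used for the experimental VDOS).

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(w, A, alpha):
    return A * w**alpha

def low_freq_exponent(omega, g, window=(0.8, 4.5)):
    """Fit g(omega) = A * omega**alpha on a low-energy window (meV) and return alpha."""
    mask = (omega >= window[0]) & (omega <= window[1])
    popt, pcov = curve_fit(power_law, omega[mask], g[mask], p0=(1.0, 1.0))
    return popt[1], float(np.sqrt(pcov[1, 1]))

# synthetic test: data generated with an intermediate exponent plus small noise
rng = np.random.default_rng(0)
omega = np.linspace(0.5, 6.0, 60)
g_demo = 0.02 * omega**1.6 * (1.0 + 0.02 * rng.normal(size=omega.size))
alpha, alpha_err = low_freq_exponent(omega, g_demo)
print(f"fitted exponent: {alpha:.2f} +/- {alpha_err:.2f}")
```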
To confirm the universality of this faster-than-linear scaling in confined liquids, we conducted molecular dynamics (MD) simulations to replicate the experimental setup for water and glycerol. A snapshot of the two setups is provided in the top panels of Figure 3. In the MD simulations, we utilized additional confinement sizes to validate the experimental results in depth. More details regarding the MD simulations can be found in the Methods and Supplementary Information sections. For the water and glycerol systems, the confinement sizes ranged respectively from 10 Å to 100 Å and 15 Å to 100 Å (see Figures 2(a) and 2(b)). In both of the systems, the largest size is considered as a bulk sample. As shown in Figures 2(c) and 2(d), the results of confined water and glycerol from the MD simulations are consistent with the experimental observations. More precisely, the low-frequency VDOS exhibits a larger scaling as the confinement size is reduced, strongly deviating from the linear scaling of the bulk state. For the maximum confinement simulated, we observe respectively a power law scaling \(\omega^{1.42}\) for water and \(\omega^{1.55}\) for glycerol,
which are compatible with, but slightly smaller than, the experimental values. This indicates that the simulations can qualitatively reproduce the experimental conditions but underestimate the confinement strength. Interestingly, both in experiments and simulations we find that for the same level of confinement the power law for glycerol is larger than that of water. This implies that glycerol exhibits solid-like dynamics on larger scales than water. Assuming that the critical scale for the emergence of solidity is roughly given by \(L_{c}\propto v\tau\), we can, at least qualitatively, explain this observation. In particular, by following Maxwell's idea and identifying \(\tau\) with the Maxwell relaxation time, the critical length becomes proportional to the viscosity of the liquid. That said, glycerol is more viscous than water (by a factor of \(\approx 1400\) at room temperature), and therefore the solid-like dynamics are expected to emerge on larger scales, as observed in both experiments and simulations. Equivalently, under the same confinement conditions, glycerol should display dynamics closer to the ideal solid limit than water, as it does.
Figure 2: **Liquid to solid dynamics under confinement.****(a)**_The experimental non-normalized VDOS of confined water at different hydration level. All datasets are fitted in the same energy interval (from \(0.8\) meV to \(4.5\) meV) and rescaled such that the first data points overlap for better comparison. **(b)**_The experimental non-normalized VDOS of confined glycerol._ **(c)**_The XRD data of GOM sample with different hydration levels. **(d)**_The small angle X-ray scattering for water confined in GOM signaling the absence of Bragg peaks in the room temperature sample._
## Explaining the low-frequency solidity of liquids under confinement
In the previous section, by means of experiments and simulations, we have shown that by decreasing the confinement size from several nanometers to 7 Å the low-frequency behavior of the vibrational density of states of water and glycerol drastically changes. In particular, we observed a continuous transition in the vibrational density of states (VDOS) from a linear-in-frequency scaling in bulk liquids to a solid-like Debye law in strongly nanoconfined liquids. In order to explain this behavior, we rely on the interpretation of liquid dynamics originally proposed by Maxwell and Frenkel [25] (see also Zwanzig [26]), and later re-interpreted by Trachenko and collaborators [27, 28]. The main idea, as previously introduced, is that the dynamics of liquids are characterized by solid-like quasi-harmonic vibrations around local minima of the potential landscape interrupted by thermally activated re-arrangements corresponding to jumps over the barrier towards a different minimum (see inset in Figure 1). These relaxational processes happen at an average rate \(\tau^{-1}\), where \(\tau\) is a so far undetermined re-arrangement timescale. Following this simple picture, one could immediately argue that for times shorter than \(\tau\) a liquid should not be too different from a solid, _i.e._, the Frenkel criterion. Mathematically, this framework can be described using the telegrapher equation, Eq.(1), which determines the dynamics of the collective (shear) modes in liquids, up to the lowest order in the frequency \(\omega\) and the wave-vector \(k\), and provides a simple interpolation between a solid-like behavior at large frequencies and a liquid-like dynamics at low frequencies.

Figure 3: **The MD simulations for confined water and confined glycerol.**_The snapshot of the confined system for water **(a)** and glycerol **(b)**. **(c)**: The simulation low-frequency VDOS for confined water under different confinement sizes \(L\) (reported in the inset). **(d)**: The simulation low-frequency VDOS for confined glycerol under different confinement sizes \(L\) (reported in the inset). In both panels, the dashed lines indicate the fitted values for the low-frequency scaling at the lowest and highest degree of confinement._
By solving Eq.(1), and focusing on the real part of the frequency, one finds the simple expression:
\[\mathrm{Re}(\omega)=v\sqrt{k^{2}-k_{g}^{2}}\qquad\mathrm{where}\qquad k_{g}\equiv \left(2v\tau_{g}\right)^{-1}, \tag{2}\]
where we have kept a distinction between the structural relaxation time \(\tau\) (\(\approx 1\mathrm{ps}\) for bulk water) and the timescale \(\tau_{g}\) appearing in the k-gap equation above. In bulk liquids, \(\tau_{g}\) decreases with temperature, and therefore \(k_{g}\) can be taken as a measure of fluidity. The larger the k-gap the stronger the relaxational dynamics compared to the solid-like vibrations.
Using MD simulations, we obtained the dispersion relation of the collective shear waves in liquid water, as shown in Figure 4a. For the bulk sample at 280K, we find that \(k_{g}\approx 0.23\mathrm{\AA}^{-1}\), which is of the same order as liquid gallium at 313K [45] and liquid sodium at 393K [35]. Most importantly, this value is in quantitative agreement with a previous estimate for water at room temperature, where the system has been proven to be fully hydrodynamic below a value of \(\approx 0.2\mathrm{\AA}^{-1}\)[60]. It should be noted that, as explained in [60], the dynamics of bulk water at room temperature becomes underdamped, as is the case in solids, only above \(\approx 0.4\mathrm{\AA}^{-1}\). This implies that the requirement of having the wave-vector larger than the k-gap \(k_{g}\) is not enough to have a well-defined underdamped solid-like dynamics in liquids. As explained in the Supplementary Information, this is consistent with the theoretical model when the latter is properly analyzed. More precisely, above but near the k-gap, \(\mathrm{Re}(\omega)<\mathrm{Im}(\omega)\), meaning that the dynamics are not liquid-like but still overdamped. This is consistent with the range of \(0.2\mathrm{\AA}^{-1}<k<0.4\mathrm{\AA}^{-1}\) found experimentally in [60]. More interestingly, at a quantitative level, \(0.4\mathrm{\AA}^{-1}\) is approximately twice the k-gap and, converted into a frequency scale, is very close to the Frenkel criterion (see Supplementary Information for more details about this).
In order to prove the emergence of solidity under confinement, in Figure 4a, we analyze the dispersion relation of the collective shear waves in the water sample by reducing the confinement size. The detailed results of the fit can be found in Table 2 in the SI. We observe that the momentum gap of the collective modes decreases as the confinement size \(L\) is made smaller. In other words, under strong confinement the gap in the wave-vector closes and the dispersion relation approaches the solid-like one \(\mathrm{Re}(\omega)=vk\). Before analyzing this behavior in detail, we seek further confirmation of this trend by examining the experimental and simulation diffusion constants as a function of the confinement size. As shown in Figure 4b, the diffusion constant obtained from the mean square displacement (see Figure 9) increases monotonically with the confinement size \(L\) and approaches the value for bulk water for larger \(L\). A decrease of the self-diffusion constant under confinement, and a dramatic increase of the viscous forces, have already been reported in simulations and experiments [88, 89, 90, 82, 86]. For comparison, in the same panel, we also show the behavior of the diffusion constant for confined glycerol at 300K. The diffusion constant in glycerol is approximately two orders of magnitude smaller than that of water, as the dynamics therein are much slower and the effective viscosity much larger. Nevertheless, apart from the different magnitude, the functional dependence of the diffusion constant on the confinement length \(L\) is very similar to that of water, hinting towards a possible universal behavior.
Using Eq.(2), one can extract the parameters \(v\) and \(\tau_{g}\). Although the precision of the fitting is limited, especially for very low values of \(L\), the analysis can still provide a useful qualitative estimate. In Figure 4c, we show the behavior of \(k_{g}\) and \(\tau_{g}\) as a function of the confinement size \(L\). As explicitly verified in the SI (see Table 2), the shear wave speed (as well as the speed of longitudinal sound) is, to a first approximation, constant with respect to \(L\), and therefore its effect can be neglected. On the contrary, the relaxation time \(\tau_{g}\) and the momentum gap \(k_{g}\) are very sensitive to the confinement scale. The relaxation time \(\tau_{g}\) grows approximately linearly with the inverse confinement size \(1/L\), and the momentum gap follows approximately the inverse behavior \(k_{g}\propto 1/\tau_{g}\). From a physical perspective, these results in turn imply that, upon increasing the strength of the confinement, collective shear waves in the liquid start to propagate at lower frequencies and the solid-like dynamics extend to lower frequencies and longer distances. Moreover, the dynamics and the local rearrangements become extremely slow, further proof that the fluidity of the system is hindered at short distances. At this point, we notice that for the bulk sample the relaxation time \(\tau_{g}\) is \(\approx 2\mathrm{ps}\). This is about a factor of two larger than the structural relaxation time and the Maxwell relaxation time reported for bulk water at room temperature [3, 4, 5]. This implies that both \(\tau\) and \(\tau_{g}\) are measures of relaxation processes but they do not quantitatively agree, at least in water. We suspect that the quantitative agreement in liquid Ga [45] is a fortunate coincidence due to the simplicity of the system. Nevertheless, it is interesting to notice that the two timescales are very close to each other and indeed of the same order.
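As an illustration of this fitting step, a minimal Python sketch is given below; the dispersion points, parameter values, and units are synthetic placeholders rather than the values extracted from our simulations.

```python
# Minimal sketch: fitting the k-gap dispersion Re(omega) = v*sqrt(k^2 - k_g^2),
# Eq. (2), to (k, omega) points such as the peak positions of the transverse
# current spectra. The data below are synthetic placeholders, not measured values.
import numpy as np
from scipy.optimize import curve_fit

def kgap_dispersion(k, v, k_g):
    # only modes with k > k_g propagate; clip to avoid negative arguments
    return v * np.sqrt(np.clip(k**2 - k_g**2, 0.0, None))

k_data = np.linspace(0.3, 1.0, 8)                       # wave-vector, 1/Angstrom
w_data = kgap_dispersion(k_data, 12.0, 0.23) * (1.0 + 0.05 * np.random.randn(k_data.size))

(v_fit, kg_fit), _ = curve_fit(kgap_dispersion, k_data, w_data, p0=[10.0, 0.2])
tau_g = 1.0 / (2.0 * v_fit * kg_fit)                    # from k_g = 1/(2 v tau_g)
print(f"v = {v_fit:.1f} A/ps, k_g = {kg_fit:.3f} 1/A, tau_g = {tau_g:.2f} ps")
```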
In Figure 4d, we reveal a direct linear correlation between the size of the momentum gap for collective shear waves and the single-particle self-diffusion constant. The two quantities show an approximate linear relation of the type \(k_{g}\propto D\) confirming that they are both direct probes of the fluidity of a system. The larger the gap of shear waves, the larger the self-diffusion constant and the ability of the system to flow and be a fluid.
At this point, it is reasonable to ask how the transition from the linear-in-frequency liquid-like behavior to the quadratic Debye law happens upon changing the confinement size \(L\) and how it is related to the re-appearance of collective shear waves and the behavior of the momentum gap \(k_{g}\). In order to obtain a qualitative argument, a simplified toy model based on the k-gap equation (1) can be constructed (see SI for more details). By neglecting the effects of the other excitations and concentrating on
Figure 4: **The emergence of solid-like dynamics in confined water and confined glycerol in MD simulations.** _(a) Dispersion curves for transverse waves in confined water under different confinement sizes._ _(b)_ _Self-diffusion constant increasing with the confinement size in confined water and confined glycerol._ _(c)_ _The momentum gap extracted from the dispersion curves in panel_ _(a)_ _and the corresponding relaxation time_ \(\tau_{g}\) _as a function of the confinement size L._ _(d)_ _The linear correlation between_ \(k_{g}\) _and the diffusion constant._
the gapped shear waves, and by assuming a spherical phase space for their wave-vector unaffected by the confinement size, one can derive a simple expression for the contribution of the shear waves to the VDOS given by:
\[g_{\text{shear}}(\omega)\propto\omega\sqrt{\omega^{2}+\frac{1}{4\tau_{g}^{2}}}\,. \tag{3}\]
Let us emphasize that this derivation considers only the dynamics of shear waves and ignores the presence of other excitations (_e.g._, longitudinal waves). For this reason, it should only serve as a qualitative model to show analytically the crossover between the two regimes. The exact value of this crossover, which in this simple scenario is \(\approx 1/\tau_{g}\), should not be taken too literally but rather as a qualitative guide. As we will indeed see, the crossover scale extracted from the experimental data is in qualitative, but not quantitative, agreement with \(1/\tau_{g}\) obtained from simulations and shown in Figure 4c. Despite the simplicity of this derivation, one striking prediction emerges. Precisely, one would expect a transition from the liquid-like linear VDOS to a Debye law at frequencies \(\omega\gg 1/\tau_{g}\), which is another manifestation of the Frenkel criterion. In other words, we expect that the deviations from the purely linear behavior will appear only above the inverse relaxation time \(\tau_{g}^{-1}\). Since \(\tau_{g}\) increases by reducing the confinement size \(L\), as shown in Figure 4c, one would expect the Debye scaling to reach lower and lower frequencies upon confining the liquid. In order to check this prediction, in Figure 5a, we revisit the experimental data for confined water under this new perspective and we estimate the frequency at which the linear behavior is lost as a function of the confinement size \(L\). We denote such an energy scale as the crossover frequency \(\omega_{\times}\). In bulk water at 280K we do not observe any crossover and the low-frequency VDOS is perfectly linear below 4meV. This is in perfect agreement with the idea that the solid-like dynamics can appear only above \(1/\tau\). For bulk water, \(\tau\approx 1\) ps and therefore the onset of solid-like vibrations is around 4.15 meV, too high to fall within this window. Notice that our results do not suggest that solid-like dynamics cannot appear but rather that they cannot be visible in the low-frequency VDOS. The reason is that they must appear at frequencies which are too high, where other optical and high-energy modes (_e.g._, the prominent peak found in water at around 7meV) will hide their Debye-like contribution. As proved in Figure 4, under confinement the relaxation time strongly increases, moving the onset for solid-like vibrations in confined liquids to energies below the bulk cutoff of 4meV. Indeed, for higher degrees of confinement - shorter \(L\) - we start observing a crossover between a linear behavior and a quadratic Debye-like one in the window between 1 and 4meV (blue region in Figure 5a). The crossover frequency becomes smaller upon decreasing the confinement size \(L\), ranging from \(\approx 2.8\) meV at \(L=12.8\) Å to \(\approx 1.9\) meV at \(L=7.8\) Å. This is compatible with the relaxation time becoming longer and longer under confinement and the onset of solid-like dynamics, \(\sim 1/\tau\), appearing at lower frequencies, below the experimental cutoff of \(\approx 4\)meV (see Figure 5b for a representation of this phenomenon).
This simple argument also suggests that the fractional power laws extracted in the previous section from the experimental data using a single power-law fitting function are the result of a combination of a liquid-like linear term and a Debye quadratic contribution. The smaller the confinement size \(L\), the more the Debye contribution enters at low frequency and the larger the power extracted from a single power-law fit. This also explains why a perfect Debye scaling is not recovered for small values of \(L\). This would happen only in the unrealistic limit \(\tau\to\infty\) in which the crossover frequency goes to zero and the relaxational dynamics are completely frozen. For the shortest confinement scale we can achieve, \(L\approx 7.8\) Å, we see a crossover frequency of \(\omega_{\times}\approx 1.9\)meV, implying that below such a scale a residual linear-in-frequency liquid-like contribution is still present. Hence, the total power is still smaller than the ideal Debye value equal to two.
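The effect of this mixing on a single power-law fit can be illustrated with a minimal numerical sketch of Eq. (3); the frequency window and the values of \(\tau_{g}\) used below are arbitrary illustrative choices.

```python
# Minimal sketch: apparent low-frequency exponent of the toy-model VDOS of Eq. (3),
# g(w) ~ w*sqrt(w^2 + 1/(4*tau_g^2)), from a single power-law fit over a fixed
# window; the apparent exponent grows from ~1 towards the Debye value 2 as tau_g
# increases, mimicking the effect of stronger confinement.
import numpy as np

w = np.linspace(1.0, 4.0, 200)          # fitting window, arbitrary frequency units
for tau_g in [0.1, 0.3, 1.0, 3.0, 10.0]:
    g = w * np.sqrt(w**2 + 1.0 / (4.0 * tau_g**2))
    exponent = np.polyfit(np.log(w), np.log(g), 1)[0]   # g ~ w**exponent
    print(f"tau_g = {tau_g:5.1f}  ->  apparent exponent = {exponent:.2f}")
```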
## Discussion
In this study, we investigated the vibrational density of states (VDOS) of water and glycerol liquids nano-confined in multiple-layer graphene oxide membranes (GOM) using inelastic neutron scattering and molecular dynamics simulations. We focused on the low-frequency (below 4meV) scaling of the confined liquid's VDOS, its variation with the confinement length and its relation with the relaxational dynamics and the dispersion of gapped shear waves. Experimental observations revealed that the low-frequency VDOS of the confined liquid evolves from a linear behavior in the bulk state to a Debye-like quadratic behavior under strong confinement, resembling the phenomenology of a solid at short distances.
Through molecular dynamics simulations, we explored a broad range of confinement sizes and confirmed the gradual solid-like evolution of both liquids as the confinement size decreases. Analyzing the vibrational modes in the different directions, we found that emergent solid-like vibrational modes primarily arise along the confined direction, in agreement with previous simulation results [86], and that spatial confinement strongly affects the dispersion relation of transverse waves in the liquid. As the confinement size decreases, both the diffusive processes and the structural rearrangements are slowed down, and the transverse dispersion curves progressively resemble those of a solid, with no wave-vector gap. Upon confinement, the longitudinal waves of the liquid are not significantly affected. On the contrary, the momentum gap for collective shear waves, typical of liquids, closes by decreasing \(L\) and beautifully correlates with the behavior of the self-diffusion constant \(D\). In a nutshell, decreasing the confinement size \(L\) is akin to decreasing the temperature \(T\) and going towards the solid phase in a
continuous fashion. In addition, we have found that the relaxation time \(\tau_{\rm g}\) extracted from the k-gap equation grows strongly upon confinement, i.e., the relaxational dynamics are slowed down.
Building upon Frenkel-Maxwell's ideas [25] and the more recent k-gap theory [28], we provided a simple physical picture for the experimental and simulation observations. In particular, we have corroborated that the appearance of solid-like dynamics at low frequency, below 4meV, is a result of the slowing down of the relaxational dynamics (the growth of the relaxation time) under confinement. This confirms the idea that at frequencies above the inverse of the relaxation time \(1/\tau\), solid-like vibrational modes appear. Under nanoconfinement conditions, such an energy scale becomes smaller than 4meV and strongly alters the low-frequency VDOS of liquids. This mechanism is accompanied by the reduction of the gap for transverse shear waves, which is again controlled, according to k-gap theory, by the inverse of the relaxation scale. In other words, we have shown that the closing of the k-gap under confinement is the microscopic origin behind the solid-like low-frequency VDOS of liquids under confinement and that it perfectly correlates with the decrease of the self-diffusion constant. This full picture gives a direct verification of the Frenkel criterion and of the idea of continuity between the liquid and solid phases of matter.
In conclusion, our study elucidates the impact of spatial confinement on the vibrational dynamics and on the onset of solid-like vibrational modes in liquids. Our findings contribute to a better understanding of the behavior of liquids under confinement, and of the short-scale dynamics of liquids in general, providing important insights into the fundamental aspects of their vibrational properties. Further investigations are necessary to explore different confinement geometries, liquid systems, and environmental conditions to deepen our knowledge of liquid dynamics and their implications for various fields such as materials science, nanotechnology, and energy storage.
## Methods
**Sample preparation.** The GOM sample was synthesized using the modified Hummers' method [91]. Initially, the sample was dehydrated by heating it from room temperature to 313 K and subsequently annealed at this temperature for 12 hours under vacuum conditions to achieve a dry state. The oxidation rate of the GOM sample was determined to be 28% using X-ray Photoelectron Spectroscopy (XPS). After dehydration, the GOM sample was sealed in a desiccator and exposed to water vapor to facilitate the adsorption of water molecules onto the surface and between the interlayers of the GOM sheets. The exposure time of the sample was adjusted to control the hydration levels. The final hydration levels were determined by measuring the
Figure 5: **Pushing the onset of solid-like dynamics in liquids upon confinement.** _(a) A re-visitation of the experimental data in Figure 2. The colored region represents the liquid-like regime, where solid-like vibrational modes cannot exist. **(b)** A pictorial representation of the underlying physics based on k-gap theory (see Supplementary information). The lines from black to red correspond to smaller confinement sizes \(L\). The colored disks locate the position of the crossover between the linear and quadratic behavior – the onset of solid-like dynamics – upon confinement. The orange region indicates the low-frequency regime, where the power-law scalings can be experimentally detected, which for water at room temperature extends up to \(\approx 4\)meV. For bulk water at room temperature, solid-like vibrations are expected only above that scale, within the magenta region._
weight of the sample before and after water adsorption, providing a quantitative assessment of the absorbed water content. The dehydrated GOM was soaked in glycerol for varying amounts of time and, after removal, dried at 40 \({}^{\circ}C\) for 24 hours to thoroughly remove the water. The glycerol content of the samples is determined by the soaking time.
**Powder X-ray Diffraction (PXRD).** The powder X-ray diffraction (PXRD) data for GOM at various hydration levels were obtained using a Rigaku Mini Flex600 X-ray diffractometer. The instrument was equipped with a Cu K\(\alpha\) source (\(\lambda\) = 1.5406 A) and operated at 40 kV and 15 mA. The data were collected over a scanning range of 10\({}^{\circ}\) to 60\({}^{\circ}\) at a scan rate of 10\({}^{\circ}\)/min. The analysis of the PXRD data was performed using MDI Jade software.
**Small Angle X-ray Scattering (SAXS).** SAXS measurements were utilized to track the changes in interlayer distance within graphene-oxide membranes (GOM) during temperature reduction and ice formation. The experiments were conducted at the BL16B1 beamline of the Shanghai Synchrotron Radiation Facility (SSRF) using X-rays with a wavelength of 1.24A. The SAXS patterns were captured using a Pilatus 2M detector, featuring a resolution of 1475 pixels \(\times\) 1679 pixels and a pixel size of 172\(\mu m\times 172\mu m\). To ensure accurate data collection, each SAXS frame had an acquisition time of 10 s. The sample-to-detector distance for SAXS measurements was maintained at 258 mm.
**Scanning Electron Microscopy (SEM).** The SEM images were acquired using a MIRA 3 FE-SEM operating at a 5 kV accelerating voltage.
**Inelastic Neutron Scattering (INS).** Due to the significantly larger incoherent scattering cross section of the hydrogen atom [92], the measured neutron intensity is primarily determined by the incoherent scattering function, which predominantly reflects the self-motion of the water molecules within the sample (the contribution from the graphene oxide is very small [87]). The experimental vibrational density of states (DOS), denoted as \(g(\omega)\), can be derived from the dynamic structure factor, \(S(q,\omega)\), using the approach of [93]:
\[g(\omega)\,=\,\int\frac{\hbar\omega}{q^{2}}S(q,\omega)\left(1-e^{-\frac{\hbar\omega}{k_{B}T}}\right)dq, \tag{4}\]
where \(\hbar\) is the reduced Planck constant, \(q\) is the scattering wavevector, \(\omega\) is the frequency, which is related to the energy transfer, \(k_{B}\) is the Boltzmann constant, and T is the temperature. For water confined in GOM, three samples were fabricated with different weight ratios \(h\) (gram water/gram GOM): 0.1, 0.4 and 0.7. The inelastic neutron scattering experiments were conducted on the time-of-flight cold neutron spectrometer PELICAN at the Australian Nuclear Science and Technology Organisation (ANSTO). The incident neutron energy is 14.9 meV with an energy resolution \(\Delta E=0.5\) meV at the elastic peak [94]. The \(q\) range covered is from 0.08 Å\({}^{-1}\) to 4.5 Å\({}^{-1}\). The samples were contained inside aluminum foils which were sealed in aluminum sample cans under a helium atmosphere. The empty-can signal was subtracted as background at each temperature. The detector efficiency was normalized using a vanadium standard. Data reduction and DOS extraction were performed with the LAMP software package [95] and the scripts are available upon request. The experiment for glycerol confined in GOM was conducted using the cold neutron multi-chopper spectrometer LET at ISIS in the UK [96]. The confined glycerol samples were also fabricated with three different weight ratios \(h\) (gram glycerol/gram GOM): 0.2, 0.4 and 0.8. The measurement was done with an incident energy of 22.78 meV, which covers the energy transfer range up to 13.5 meV and the \(q\) range from 0.34 Å\({}^{-1}\) to 7.08 Å\({}^{-1}\). Data reduction was performed with the Mantid software packages [97].
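A minimal sketch of the DOS extraction of Eq. (4) is given below; the \(S(q,\omega)\) array, grids, and temperature are placeholders, and the reduction of the actual data was performed with the LAMP and Mantid packages cited above.

```python
# Minimal sketch of Eq. (4): weight S(q, w) by hbar*w/q^2 and the detailed-balance
# factor, then integrate over q. S below is a random placeholder, not measured data.
import numpy as np

kB = 0.08617                           # Boltzmann constant in meV/K
T = 280.0                              # temperature in K

q = np.linspace(0.5, 4.5, 100)         # 1/Angstrom
E = np.linspace(0.5, 10.0, 200)        # energy transfer hbar*w in meV
S = np.random.rand(E.size, q.size)     # placeholder for the measured S(q, w)

balance = 1.0 - np.exp(-E / (kB * T))  # detailed-balance factor
integrand = (E[:, None] / q[None, :]**2) * S * balance[:, None]
g = np.trapz(integrand, q, axis=1)     # DOS g(E), one value per energy bin
```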
**Molecular dynamics (MD) simulations.** The classical MD simulations were performed with the large-scale atomic/molecular massively parallel simulator (LAMMPS) [98] to simulate water and glycerol at 280 K and 300 K, respectively. Water molecules are simulated with the widely used four-point TIP4P/2005 [99] model, where the energy parameter \(\epsilon_{O-O}\) is 0.008 eV and the size parameter \(\sigma_{O-O}\) is 3.16 Å. The interaction potential of the water molecules is modeled by a 12-6 Lennard-Jones (LJ) potential with a cutoff radius of 12 Å, and the electrostatic forces are calculated with the PPPM algorithm with an accuracy of 10\({}^{-4}\). The glycerol system was modeled using Material Studio and simulated using the AMBER force field parameters, which have been proven to accurately describe the dynamics and structure of glycerol [100]. All initial structures were relaxed at the given temperatures in the isothermal-isobaric (NPT) ensemble for 300 ps, using the Nosé-Hoover thermostat and the Parrinello-Rahman barostat to control the temperature and pressure; the systems were then switched to the NVT ensemble, with a 100 ps equilibration period, to calculate the dynamical properties. Newton's equations of motion were integrated using the velocity-Verlet algorithm with a time step of 1 fs. We freeze the bottom and top layers (\(\sim\)3 Å) of the slab systems to force the structure to remain two-dimensionally confined during the whole simulation. A snapshot of the slab geometry can be found in Figure 3 a,b in the main text. The VDOS is calculated from the Fourier transform of the oxygen velocity autocorrelation function:
\[C_{v}(\omega)=\int_{-\infty}^{\infty}C_{v}(t)\exp(-i\omega t)dt\,. \tag{5}\]
The velocity auto-correlation function (VAF) is defined as:
\[C_{v}(t)=<\mathbf{v}(0)\cdot\mathbf{v}(t)> \tag{6}\]
where \(\mathbf{v}(t)\) is the oxygen velocity at time \(t\) and the angle brackets denote an ensemble average.
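Equivalently (up to normalization), the VDOS can be obtained from the velocity power spectrum via the Wiener-Khinchin theorem; a minimal sketch with a placeholder trajectory array is:

```python
# Minimal sketch of Eqs. (5)-(6): VDOS from the oxygen velocities. `vel` stands in
# for the MD trajectory, shape (n_frames, n_atoms, 3); dt is the sampling step (ps).
# The power spectrum of the velocities equals, up to normalization, the Fourier
# transform of the velocity autocorrelation function (Wiener-Khinchin theorem).
import numpy as np

def vdos_from_velocities(vel, dt):
    n_frames = vel.shape[0]
    v = vel.reshape(n_frames, -1)                  # flatten atoms and components
    spectrum = np.abs(np.fft.rfft(v, axis=0))**2   # per-component power spectrum
    dos = spectrum.sum(axis=1)                     # sum over atoms and components
    freq = np.fft.rfftfreq(n_frames, d=dt)         # frequency axis, 1/ps
    return freq, dos / dos.max()

vel = np.random.randn(4096, 128, 3)                # placeholder velocities
freq, dos = vdos_from_velocities(vel, dt=0.01)
```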
The diffusion constant is calculated from the mean square displacement of the water molecules:
\[D=\frac{1}{6}\frac{d}{dt}\left\langle\left|\mathbf{r}_{i}(t)-\mathbf{r}_{i}(0) \right|^{2}\right\rangle, \tag{7}\]
where \(\mathbf{r}_{i}(t)\) is the position of the \(i^{th}\) particle at time \(t\).
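A corresponding minimal sketch for Eq. (7), again with a placeholder (unwrapped) trajectory in place of the simulation data, is:

```python
# Minimal sketch of Eq. (7): diffusion constant from the slope of the mean square
# displacement. `pos` stands in for the unwrapped trajectory, shape
# (n_frames, n_atoms, 3); dt is the frame spacing in ps; D comes out in A^2/ps.
import numpy as np

def diffusion_constant(pos, dt, fit_range=(0.25, 0.75)):
    lags = np.arange(1, pos.shape[0] // 2)
    msd = np.array([np.mean(np.sum((pos[lag:] - pos[:-lag])**2, axis=-1))
                    for lag in lags])
    t = lags * dt
    i0, i1 = (int(f * t.size) for f in fit_range)   # interior fitting window
    slope = np.polyfit(t[i0:i1], msd[i0:i1], 1)[0]  # MSD ~ 6 D t in three dimensions
    return slope / 6.0

pos = np.cumsum(np.random.randn(2000, 64, 3), axis=0)   # placeholder random walk
D = diffusion_constant(pos, dt=0.01)
```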
The current density is a vector quantity which can be decomposed into a longitudinal part containing the component parallel to the \(\mathbf{q}\)-vector and a transverse part containing the perpendicular component, according to
\[\mathbf{j}(\mathbf{q},t)=\mathbf{j}_{L}(\mathbf{q},t)+\mathbf{j}_{T}(\mathbf{q},t) \tag{8}\]
where we have defined
\[\mathbf{j}_{L}(\mathbf{q},t)=\sum_{i}^{N}(\mathbf{v}_{i}(t)\cdot\hat{\mathbf{q}})\hat{\mathbf{q}}\,e^{i\mathbf{q}\cdot\mathbf{r}_{i}(t)}, \tag{9}\] \[\mathbf{j}_{T}(\mathbf{q},t)=\sum_{i}^{N}\left[\mathbf{v}_{i}(t)-(\mathbf{v}_{i}(t)\cdot\hat{\mathbf{q}})\hat{\mathbf{q}}\right]e^{i\mathbf{q}\cdot\mathbf{r}_{i}(t)}, \tag{10}\]
and use \(\hat{\mathbf{q}}\) to denote the unit vector along \(\mathbf{q}\).
The current correlation functions can now be computed as:
\[C_{L}(\mathbf{q},t)=\frac{1}{N}\left\langle(\mathbf{j}_{L}(\mathbf{q},t)\cdot\mathbf{j}_{L}(- \mathbf{q},0))\right\rangle \tag{11}\]
\[C_{T}(\mathbf{q},t)=\frac{1}{N}\left\langle(\mathbf{j}_{T}(\mathbf{q},t)\cdot\mathbf{j}_{T}(- \mathbf{q},0))\right\rangle \tag{12}\]
The velocity current spectra can then be derived as:
\[C_{L,T}(\mathbf{q},\omega)=\int dte^{i\omega t}Re(C_{L,T}(\mathbf{q},t)). \tag{13}\]
Since simple fluids are isotropic, we average \(C_{L,T}(\mathbf{q},\omega)\) over all directions of the wavevector \(\mathbf{q}\).
\[C_{L,T}(q,\omega)=\frac{1}{N_{q}}\sum_{|\mathbf{q}|=q}C_{L,T}(\mathbf{q},\omega)\,, \tag{14}\]
where \(N_{q}\) is the number of directions used for averaging.
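A minimal sketch of the transverse current analysis, Eqs. (9)-(14), for a single wave-vector is given below; the trajectory arrays are placeholders, and for a periodic simulation box the wave-vector is assumed to be commensurate with the box.

```python
# Minimal sketch of Eqs. (10), (12) and (13): transverse current spectrum C_T(q, w)
# for one wave-vector. `pos` and `vel` are placeholder arrays of shape
# (n_frames, n_atoms, 3); q must be commensurate with the periodic box.
import numpy as np

def transverse_current_spectrum(pos, vel, q_vec, dt):
    q_hat = q_vec / np.linalg.norm(q_vec)
    phase = np.exp(1j * np.einsum('tak,k->ta', pos, q_vec))   # e^{i q . r_i(t)}
    v_par = np.einsum('tak,k->ta', vel, q_hat)                 # v_i . q_hat
    v_perp = vel - v_par[..., None] * q_hat                    # transverse velocity
    jT = np.einsum('tak,ta->tk', v_perp, phase)                # Eq. (10)
    # power spectrum of j_T(q, t) ~ Fourier transform of C_T(q, t), Eq. (13)
    spectrum = np.abs(np.fft.fft(jT, axis=0))**2 / pos.shape[1]
    freq = np.fft.fftfreq(pos.shape[0], d=dt)
    return freq, spectrum.sum(axis=1)

box = 20.0                                                     # box length, Angstrom
pos = np.random.rand(1024, 64, 3) * box                        # placeholder positions
vel = np.random.randn(1024, 64, 3)                             # placeholder velocities
freq, CT = transverse_current_spectrum(pos, vel, np.array([2*np.pi/box, 0.0, 0.0]), dt=0.01)
```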
## Data availability
The datasets generated and analysed during the current study are available upon reasonable request by contacting the corresponding authors.
## Code availability
The code that support the findings of this study is available upon reasonable request by contacting the corresponding authors.
## Acknowledgements
We would like to thank H. Xu, A. Zaccone, K. Trachenko, L. Noirez and T. Keyes for fruitful discussions on the topic of liquids. We thank Dr. Victoria Garcia Sakai from the ISIS Neutron and Muon Facility for assistance with the inelastic neutron scattering. We acknowledge the Instrumental Analysis Center of Shanghai Jiao Tong University for assistance with structural characterization via SEM and XRD. We thank Dr. Xiaran Miao from the BL16B1 beamline of the Shanghai Synchrotron Radiation Facility (SSRF) for help with synchrotron X-ray measurements. This work was supported by NSF China (11974239), the Innovation Program of Shanghai Municipal Education Commission (2019-01-07-00-02-E00076), and the student innovation center at SJTU. M.B. acknowledges the support of the Shanghai Municipal Science and Technology Major Project (Grant No. 2019SHZDZX01) and the sponsorship from the Yangyang Development Fund. We acknowledge the support of the Australian Centre for Neutron Scattering, ANSTO and the Australian Government through the National Collaborative Research Infrastructure Strategy, in supporting the neutron research infrastructure used in this work via ACNS proposal P7273.
## Author contributions
Y.Y. performed the experimental measurements with the help of D.Y. and M.S.; M.B., D.Y. and L.H. conceived the idea of this work; Y.Y., S.J, X.F. implemented the MD simulations; Y.Y. performed the analysis of the experimental and simulation data with the help of S.J. and X.F.; M.B. and Y.Y. wrote the manuscript with the help of D.Y. and L.H.
## Competing interests
The authors declare that no competing interests exist.
|
2306.07162 | Odd elastohydrodynamics: non-reciprocal living material in a viscous
fluid | Motility is a fundamental feature of living matter, encompassing single cells
and collective behavior. Such living systems are characterized by
non-conservativity of energy and a large diversity of spatio-temporal patterns.
Thus, fundamental physical principles to formulate their behavior are not yet
fully understood. This study explores a violation of Newton's third law in
motile active agents, by considering non-reciprocal mechanical interactions
known as odd elasticity. By extending the description of odd elasticity to a
nonlinear regime, we present a general framework for the swimming dynamics of
active elastic materials in low-Reynolds-number fluids, such as wave-like
patterns observed in eukaryotic cilia and flagella. We investigate the
non-local interactions within a swimmer using generalized material elasticity
and apply these concepts to biological flagellar motion. Through simple
solvable models and the analysis of {\it Chlamydomonas} flagella waveforms and
experimental data for human sperm, we demonstrate the wide applicability of a
non-local and non-reciprocal description of internal interactions within living
materials in viscous fluids, offering a unified framework for active and living
matter physics. | Kenta Ishimoto, Clément Moreau, Kento Yasuda | 2023-06-12T14:48:46Z | http://arxiv.org/abs/2306.07162v2 | # Odd elastohydrodynamics: non-reciprocal living material in a viscous fluid
###### Abstract
Motility is a fundamental feature of living matter, encompassing single cells and collective behavior. Such living systems are characterized by non-conservativity of energy and a large diversity of spatio-temporal patterns. Thus, fundamental physical principles to formulate their behavior are not yet fully understood. This study explores a violation of Newton's third law in motile active agents, by considering non-reciprocal mechanical interactions known as odd elasticity. By extending the description of odd elasticity to a nonlinear regime, we present a general framework for the swimming dynamics of active elastic materials in low-Reynolds-number fluids, such as wave-like patterns observed in eukaryotic cilia and flagella. We investigate the non-local interactions within a swimmer using generalized material elasticity and apply these concepts to biological flagellar motion. Through simple solvable models and the analysis of _Chlamydomonas_ flagella waveforms and experimental data for human sperm, we demonstrate the wide applicability of a non-local and non-reciprocal description of internal interactions within living materials in viscous fluids, offering a unified framework for active and living matter physics.
## I Introduction
Motility is one of the main features of living matter, from a single cell to a swarm of birds or a human crowd [1; 2; 3]. In the last few decades, the dynamics of motile active agents, both individual and collective behavior, have been intensively studied, giving rise to a rapidly expanding research field in physics bridging non-equilibrium statistical physics, biophysics, and continuum mechanics, now known as active matter and living matter physics. A crucial feature of these systems is that inner activity units convert energy into mechanical forces. In turn, Newton's third law may be violated when we regard it as an open system, with its mechanical energy being injected from microscopic active units. Therefore, the mechanical interactions between the units can be non-reciprocal [4; 5].
The concept of reciprocity is also widely used in continuum mechanics. Recently, violation of the Maxwell-Betti reciprocity in elasticity has been discovered in an active system, and termed odd elasticity [6; 7; 8]. The elastic matrix in the constitutive stress-strain relation is then allowed to contain non-symmetric components, and it generates a self-sustained propagating wave. Odd elasticity reflects the non-conservative forces generated by microscopic active units and provides an effective material constitutive relation for active and living matter. This formulation was shown to effectively describe active locomotion as an autonomous system without controlled, tuned actuation [9].
Motile agents at the cellular scale are usually immersed in viscous fluids and are self-propelled by their deformation, as seen in swimming microorganisms [10; 11]. The motility of microswimmers, a term used for active agents in a low-Reynolds-number fluid, is, however, only possible when their deformations are non-reciprocal, which is known as the scallop theorem [12; 13; 11].
Recent theoretical studies on the swimming dynamics of odd-elastic materials [14; 15] revealed the relations between the violation of Maxwell-Betti reciprocity and the non-reciprocal deformation for microswimming around an equilibrium configuration, demonstrating that the swimming velocity is proportional to the magnitude of odd elasticity.
A traveling wave is a typical example of non-reciprocal deformation ubiquitously observed in biological microswimmers. Indeed, many eukaryotic cells use a flexible slender appendage, called a flagellum or cilium, for propulsion by generating a wave. Examples include tail motions of sperm cells and breaststrokes of _Chlamydomonas_ green algae [16]. This evolutionarily conserved filament is actuated by inner molecular motors in coordination, resulting in a periodic traveling wave with a self-organized nature. The flagellar whip-like motion is therefore regarded as a limit cycle oscillator, and the generic form of flagellar swimming is provided by Hopf bifurcation [17]. Recent theoretical and numerical studies using elaborate elastohydrodynamic models also found the emergence of the various flagellar waveform patterns via Hopf bifurcation [18; 19; 20; 21]. Moreover, refinements of videomicroscopy of biological flagella have enabled the detailed analyses of waveforms, and found that the flagellar shape dynamics are well described by a noisy limit cycle that reflects internal activity [22; 23; 24; 25; 26; 27].
The self-sustained wave for an odd-elastic material, however, is insufficient to describe the flagellar waveform, because the odd-elastic waves are dissipated rather than sustained by the fluid viscosity, similar to the classical (passive) elastic response in a viscous medium [28]. Hence, nonlinearity is required for an odd-elastic system to exhibit a stable limit cycle [15], calling for a more
general, nonlinear odd constitutive relation to deal with biological flagellar swimming. In fact, the importance of nonlinear odd elasticity has been reported as a topical challenge within the field of active matter studies [8].
The primary aim of this study is therefore to extend the odd-elastic description of microswimmers to a nonlinear regime to deal with stable periodic deformations, as seen in biological flagellar motion. This theory, which we call _odd elastohydrodynamics_, therefore provides a unified framework for the study of non-local, non-reciprocal interactions of an elastic material in a viscous fluid.
Using this generic formulation, we can access the interactions inside an active elastic material, while these are masked by fluid dynamic coupling when observing flagellar motion under a microscope. To distinguish the non-reciprocal activity from the passive elastic response, we introduce a new concept, the _odd-elastic modulus_, as a spatial Fourier transform in an extended space. The real and imaginary parts of this complex function possess proper symmetry and characterize the reciprocal and non-reciprocal interactions, respectively.
The secondary aim of this study is then to apply our theory to biological flagellar swimmers. By examining the odd-elastic modulus based on simple mathematical models and biological experimental data, we show the wide applicability of a non-local and non-reciprocal description of internal interactions within living materials.
The contents of this paper are summarized as follows. In Section II, we provide a setup for the theoretical formulation of odd elastohydrodynamics to describe an active elastic material in a viscous fluid. We also discuss the connection between Hopf bifurcation and nonlinear odd elasticity and express the dynamics of a microswimmer undergoing periodic deformation. In Section III, we introduce the concept of the odd-elastic modulus.
In Sections IV and V, we apply our theory to understand the inner mechanical interactions that biological flagellar motion exhibit. To gain physical intuition regarding non-local, non-reciprocal interactions encoded by nonlinear odd elasticity, we start with simple and solvable models in Section IV. We also discuss how the odd-elastic modulus captures the inner interactions of these example models. In Section V, we numerically investigate the extended bending modulus in model flagellar waveforms for _Chlamydomonas_ and sperm cells, together with experimental data. With these, we propose a new continuum description of living soft matter in a viscous fluid by means of nonlinear odd elasticity. The discussion and conclusions are provided in Section VI.
One of the advantages of the odd-elastic description of activity is the application of the autonomous equations of motion. These allow us to analyze some general features of microswimming with periodic deformation, including theoretical formulae for the average swimming velocity. In Appendix A, to complete our general theory of odd elastohydrodynamics, we further extend our framework to encompass fluctuations in shape gaits by internal actuation, following biological observations of a noisy limit cycle in shape space. Exploiting the autonomous structure of the odd-elastic formulation and the gauge-field formulation for microswimming, we investigate the effects of internal active noise on swimming velocity. The role of odd elasticity is further discussed in terms of non-equilibrium thermodynamics.
## II Odd elastohydrodynamics of microswimmers
### Shape and deformation of a swimmer
To describe the motion of a deforming microswimmer in a fluid, we need to specify the position and orientation together with the instantaneous shape of the swimmer. In Fig. 1, we present a schematic of a general elastic microswimmer. The rigid body motion is defined by the translation and rotation between the laboratory frame \(\{\mathbf{e}_{1},\cdots,\mathbf{e}_{d}\}\) and swimmer-fixed frame \(\{\mathbf{e}_{1}^{(s)},\cdots,\mathbf{e}_{d}^{(s)}\}\). We assume that the swimmer moves in a \(d\)-dimensional space, where \(d=1,2\), or \(3\). \(d=1\) indicates linear motion, \(d=2\) indicates planar motion, and \(d=3\) corresponds to general three-dimensional motion in space. The origin of the swimmer-fixed frame is set to be the swimmer's position and is denoted by \(\mathbf{x}=(x_{1},\cdots,x_{d})^{\mathrm{T}}\). The number of angular degrees of freedom to specify the orientation in \(d\)-dimensional space is \(d^{\prime}=d(d-1)/2\). We let \(n\) be the number of degrees of freedom for rigid motion, that is, \(n=d+d^{\prime}\), and introduce a \(n\)-dimensional vector to represent the position and orientation as \(\mathbf{z}_{0}=(x_{1},\cdots,x_{d},\theta_{1},\cdots,\theta_{d^{\prime}})^{ \mathrm{T}}\in\mathbb{R}^{n}\).
We assume that the shape of the swimmer is parameterized by \(N\) shape coordinates as \(\mathbf{\sigma}=(\sigma_{1},\dots,\sigma_{N})^{\mathrm{T}}\in\mathbb{R}^{N}\). For the shape coordinates, we employ, for example, displacements from the equilibria of the material units or relative angles between neighboring material units. We will later introduce generalized elastic forces and torques associated with the shape coordinates (see also Fig. 1).
Let us denote representations in the swimmer-fixed coordinates by superscript \((s)\) and introduce the extended coordinates vector \(\mathbf{z}=(\mathbf{z}_{0}^{(s)};\mathbf{\sigma})\in\mathbb{R}^{n+N}\) and its associated velocity vector, \(\dot{\mathbf{z}}=(\dot{\mathbf{z}}_{0}^{(s)};\dot{\mathbf{\sigma}})\in\mathbb{R}^{n+N}\), where the semicolon indicates vertical concatenation and the dot symbol indicates a time derivative. The velocity in the swimmer-fixed coordinates \(\dot{\mathbf{z}}_{0}^{(s)}\) is a physical quantity that is obtained from the force and torque balance equations as explained below. The vector \(\mathbf{z}_{0}^{(s)}\), computed by integrating \(\dot{\mathbf{z}}_{0}^{(s)}\), is introduced for later use but does not represent a physical position or orientation when \(d=3\) due to the non-commutative nature of the dynamics. We set the origin of the shape coordinates, \(\mathbf{\sigma}=\mathbf{0}\), to be the equilibrium configuration without any
internal or external forces.
This description includes, in particular, several minimal mathematical models of swimmers. For example, Najafi-Golestanian's three-sphere model Najafi and Golestanian (1998); Najafi and Golestanian (2000); Najafi and Golestanian (2001) is a swimmer consisting of three spheres connected in a straight line by two rods and moving in one direction by changing the lengths of the rods; thus, the degrees of freedom are \((n,N)=(1,2)\). Purcell's three-link swimmer Farrell (1982); Najafi and Golestanian (1998); Najafi and Golestanian (2001) is another minimal model, which consists of three rods connected by two hinges to form a snake-like robot, and can swim in a plane by changing the angles of the hinges. The degrees of freedom are therefore \((n,N)=(3,2)\) for this model. The shape parameters are the lengths of two arms for the three-sphere model and the two relative angles for the three-link swimmer.
### Odd-elastohydrodynamic equations
The dynamics of a three-dimensional self-deforming elastic object in a viscous fluid are well represented by the Stokes equation:
\[\eta\nabla^{2}\mathbf{u}=\nabla p, \tag{1}\]
where the velocity field \(\mathbf{u}\) satisfies the incompressibility condition \(\nabla\cdot\mathbf{u}=0\). Here, \(p\) is the pressure field, and the viscosity \(\eta\) is assumed to be constant. Due to the linearity of the Stokes equations, the hydrodynamic forces and torques conjugate to the extended coordinates, denoted symbolically by \(\mathbf{f}^{\rm hyd}\), and are proportional to the time derivative of the extended coordinates. This linear relation is represented by a positive-definite matrix, called a generalized grand resistance matrix \(\mathbf{M}\)Landau and Lifshitz (1980); hence, \(\mathbf{f}^{\rm hyd}=-\mathbf{M}\dot{\mathbf{z}}\). Due to the negligible inertia, these forces and torques are balanced by internal or external forces and torques, which we denote by \(\mathbf{f}\) and introduce below.
We now define an "elasticity" matrix (or equivalently an elastic matrix) through a general stress-strain constitutive relation as a function of the shape parameters, \(\mathbf{K}(\mathbf{\sigma})\in\mathbb{R}^{N\times N}\), to represent all the internal forces and torques, including the internal activity force as well as the ordinary passive elastic response. To be more precise, this generalized elastic matrix is defined by mapping from shape coordinates to internal forces and torques, given by \(\mathbf{f}=-\mathbf{K}(\mathbf{\sigma})\mathbf{\sigma}\). This generalized elasticity is reduced to that of an elastic spring when we take the displacement of the material point for the shape coordinates and to that of a torque spring (torsion spring) when we employ the relative angle along a filament as the shape coordinates. At the equilibrium configuration (\(\mathbf{\sigma}=\mathbf{0}\)), the generalized elastic force vanishes (\(\mathbf{f}=\mathbf{0}\).) The non-symmetric part may have non-zero values, that is, \(\mathbf{K}\neq\mathbf{K}^{\rm T}\); this corresponds to odd elasticity and effectively represents the non-conservative, internal activity of the self-deforming material. If it is linearly odd-elastic, the elastic matrix \(\mathbf{K}\) is a constant matrix, although it is, in general, determined by the instantaneous shape of the object. The balance equations for the forces and torque, \(\mathbf{f}^{\rm hyd}+\mathbf{f}=\mathbf{0}\), are therefore summarized in the following form (Bergman, 2001):
\[-\mathbf{M}(\mathbf{\sigma})\dot{\mathbf{z}}=\mathbf{L}(\mathbf{\sigma})\mathbf{z}. \tag{2}\]
Note that the matrix \(\mathbf{M}\) only depends on the instantaneous shape of the swimmer. The right-hand side of Eq. (2) represents a general elastic force, including both the passive elastic response and internal actuation, and \(\mathbf{L}\) is an \((n+N)\times(n+N)\) matrix containing the elastic matrix \(\mathbf{K}\) as \(L_{n+\alpha,n+\beta}=K_{\alpha\beta}\) with the other components being zero, namely, \(L_{ij}=L_{i,n+\alpha}=L_{n+\alpha,j}=0\). Throughout this paper, we use Roman indices such as \(i,j=\{1,\dots,n\}\) for the translation and orientation of the object, Greek indices such as \(\alpha,\beta=\{1,\dots,N\}\) for the shape coordinates, and the Einstein summation convention for repeated indices.
By inverting the resistance matrix, we can decompose the shape dynamics from the rigid body motion in the form
\[\dot{\mathbf{z}_{0}}=-\mathbf{P}\mathbf{K}\mathbf{\sigma}\,\ \ \dot{\mathbf{\sigma}}=- \mathbf{Q}\mathbf{K}\mathbf{\sigma}. \tag{3}\]
The matrices \(\mathbf{P}\) and \(\mathbf{Q}\) are respectively given by \(P_{i\alpha}=N_{i,n+\alpha}\) and \(Q_{\alpha\beta}=N_{n+\alpha,n+\beta}\), with \(\mathbf{N}=\mathbf{M}^{-1}\). Note that the second equation (3) provides an
Figure 1: Schematic of general odd-elastic microswimmer. This example swimmer moves in a three-dimensional space (\(d=3\)). The position and rotation of the swimmer are represented by the relative motions between the laboratory frame \(\{\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z}\}\) and the swimmer-fixed frame \(\{\mathbf{e}^{(s)}_{x},\mathbf{e}^{(s)}_{y},\mathbf{e}^{(s)}_{z}\}\). The shape of the swimmer is parameterized by \(N\) shape variables \(\mathbf{\sigma}=(\sigma_{1},\cdots,\sigma_{N})\) In this schematic, we use displacements of material units from equilibrium positions as the shape variables. In typical linear elastic theory, a recovery force is applied that is proportional to the displacement using a spring constant, as indicated by \(k\) in this schematic. In the current study, however, we generalize this elastic force to include internal actuation, which is represented by odd elasticity.
autonomous dynamical system in shape space, and the non-symmetric part of the elastic matrix plays the role of an internal actuation to drive the deformation. The first equation determines the translation and rotation of the swimmer and coincides with the equation of the kinematic swimming problem, in which the shape gait is a given function.
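A minimal numerical sketch of Eq. (3) is given below, with small arbitrary placeholder matrices for \(\mathbf{M}\) and \(\mathbf{K}\); it also illustrates that, with a purely linear elastic matrix, the shape relaxes to equilibrium, so that linear odd elasticity alone cannot sustain a beat in a viscous fluid.

```python
# Minimal sketch of Eq. (3): given a generalized resistance matrix M and an
# (odd-)elastic matrix K, evolve the shape coordinates sigma and accumulate the
# body-frame rigid velocities. Matrices and parameters below are arbitrary
# placeholders with n = 3 rigid degrees of freedom and N = 2 shape coordinates.
import numpy as np

n, N = 3, 2
rng = np.random.default_rng(0)
A = rng.standard_normal((n + N, n + N))
M = A @ A.T + (n + N) * np.eye(n + N)      # symmetric positive-definite resistance
K = np.array([[1.0, 0.5],                  # elastic matrix; the antisymmetric part
              [-0.5, 1.0]])                # is the (linear) odd elasticity

Ninv = np.linalg.inv(M)
P = Ninv[:n, n:]                           # P_{i alpha}    = N_{i, n+alpha}
Q = Ninv[n:, n:]                           # Q_{alpha beta} = N_{n+alpha, n+beta}

dt = 1e-3
sigma = np.array([0.1, 0.0])               # initial shape perturbation
z0 = np.zeros(n)                           # accumulated body-frame rigid motion
for _ in range(20000):                     # explicit Euler integration
    z0 = z0 - P @ K @ sigma * dt           # rigid part of Eq. (3)
    sigma = sigma - Q @ K @ sigma * dt     # autonomous shape dynamics (decays to 0)
```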
### Periodic swimming by nonlinear odd elasticity
We now consider a microswimmer undergoing a periodic deformation with a particular focus on flagellar-like filament dynamics. For eukaryotic flagella, internal molecular motors synchronously actuate the elastic filament to generate a periodic waveform. While the emergent waveform is obtained by elastohydrodynamic mechanical coupling, the onset of wave generation from a straight equilibrium configuration is well formulated by a Hopf bifurcation [17; 19; 20; 22].
To illustrate the limit cycle behavior in the shape space, we reproduce in Fig. 2 the figures on human sperm swimming from Ishimoto et al. [26], in which principal component analysis (PCA) was performed to reduce the dimensionality of the flagellar waveform obtained from experimental observations. The authors found that the flagellar waveforms are well represented by noisy limit cycle orbits in the two-dimensional shape space spanned by the lowest PCA modes [Fig. 2(a, b)]. The embedded limit cycle orbit was then extracted and used to analyze the time-periodic swimming dynamics of human sperm [Fig. 2(c)].
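For reference, the dimensionality reduction used in such analyses can be sketched as follows, with a placeholder array of waveform profiles in place of the tracked experimental data:

```python
# Minimal sketch of the PCA step: given waveform profiles theta(s, t) (placeholder
# random data here), the shape modes are the leading principal components and the
# projections (q1, q2) trace a (noisy) limit cycle during the beat.
import numpy as np

theta = np.random.randn(500, 40)              # (time frames) x (arclength points)
X = theta - theta.mean(axis=0)                # remove the mean shape
U, S, Vt = np.linalg.svd(X, full_matrices=False)
modes = Vt[:2]                                # two dominant PCA shape modes
q = X @ modes.T                               # shape-space trajectory (q1(t), q2(t))
explained = (S[:2]**2).sum() / (S**2).sum()   # variance captured by the two modes
```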
To derive a generic description for such time-periodic swimming, we employ the normal form of the Hopf bifurcation, which may be written as [35]
\[\frac{dz}{dt}=cz+b|z|^{2}z, \tag{4}\]
where \(z\in\mathbb{C}\). The parameters \(c\) and \(b\) are both complex numbers: \(c=\lambda+i\omega\in\mathbb{C}\) and \(b=\mu+i\xi\in\mathbb{C}\) with real-valued parameters \(\lambda,\omega,\mu\), and \(\xi\). Let us introduce the _apparent_ shape space in which the shape dynamics are described by the normal form (4) and denote the shape coordinates in this shape space by \(\mathbf{q}\). The apparent shape coordinates do not always coincide with the shape coordinates \(\mathbf{\sigma}\) used in the stress-strain relation. To distinguish \(\mathbf{\sigma}\) from \(\mathbf{q}\), we now refer to \(\mathbf{\sigma}\) as _intrinsic_ shape coordinates. We then assume the existence of a transformation from the apparent shape coordinates to the intrinsic shape coordinates given by a full-rank matrix \(\mathbf{W}\in\mathbb{R}^{N\times N}\), that is, \(\mathbf{\sigma}=\mathbf{W}\mathbf{q}\). The matrix \(\mathbf{W}\) may be obtained by PCA and we will later examine detailed construction of the matrix with some examples (Sections IV and V).
From the normal form of Eq. (4), the dynamics in the apparent shape space are separated into the limit cycle in the \(q_{1}-q_{2}\) space and the damping dynamics in the remaining \((N-2)\) dimensions. Let us introduce the _apparent_ elastic matrix \(\hat{\mathbf{K}}\in\mathbb{R}^{N\times N}\) to distinguish the apparent elasticity from the _intrinsic_ elasticity \(\mathbf{K}\) in Eq. (3), and write the dynamics in the form
\[\dot{\mathbf{q}}=-\hat{\mathbf{K}}\mathbf{q}\ \ \text{with}\ \ \hat{\mathbf{K}}(\mathbf{q})= \begin{pmatrix}\hat{\mathbf{K}}^{\text{LC}}&\mathbf{O}\\ \mathbf{O}&\hat{\mathbf{K}}^{\text{d}}\end{pmatrix}, \tag{5}\]
where the two-dimensional nonlinear elastic matrix \(\hat{\mathbf{K}}^{\text{LC}}\in\mathbb{R}^{2\times 2}\) represents the limit cycle in Eq. (4), and the \((N-2)\)-dimensional matrix \(\hat{\mathbf{K}}^{\text{d}}\in\mathbb{R}^{(N-2)\times(N-2)}\) in the bottom-right block expresses the stable modes around the Hopf bifurcation. All the eigenvalues of \(\hat{\mathbf{K}}^{\text{d}}\) therefore have non-negative real parts.
After relabeling the parameters \(\lambda,\omega,\mu\), and \(\xi\) in Eq. (4) as \(k^{\text{e}},k^{\text{o}},k^{\text{ne}}\), and \(k^{\text{no}}\) with an additional minus sign, we may write the components of \(\hat{\mathbf{K}}^{\text{LC}}\) as
\[\hat{K}^{\text{LC}}_{\alpha\beta}=(k^{\text{e}}+k^{\text{ne}}r^{2})\delta_{ \alpha\beta}+(k^{\text{o}}+k^{\text{no}}r^{2})\epsilon_{\alpha\beta} \tag{6}\]
for \(\alpha,\beta\in\{1,2\}\). Here, \(\delta_{\alpha\beta}\) is the Kronecker delta, \(\epsilon_{\alpha\beta}\) is the two-dimensional Levi-Civita permutation symbol, and \(r=(q_{1}^{2}+q_{2}^{2})^{1/2}\). With these terms, the dynamics in the apparent shape coordinates are translated from the normal form of the Hopf bifurcation into dynamics described by odd-elastic interactions. The four parameters \(k^{\text{e}},k^{\text{o}},k^{\text{ne}}\), and \(k^{\text{no}}\) are then interpreted as even linear elasticity, odd linear elasticity,
Figure 2: Human sperm swimming as example of microswimmer with noisy limit cycle. (a) The lowest two PCA modes were obtained from experimental data. The horizontal axis indicates the normalized arc length along the flagellum from the head-tail junction. (b) Projections of the shape onto the two-dimensional PCA shape space. Each circle indicates the shape at a different time. (c) Superposed snapshots of a swimming human sperm with a time-periodic beat, obtained by a direct numerical simulation of the Stokes equations. The waveform was extracted from the experimental data for swimming human sperm as a limit cycle in the two-dimensional PCA shape space. Figures reproduced from [26] with permission under the creative commons license ([http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/)).
even nonlinear elasticity, and odd nonlinear elasticity, respectively.
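A minimal numerical sketch of Eqs. (5)-(6), restricted to the two limit-cycle coordinates and with arbitrary illustrative coefficients (\(k^{\mathrm{e}}<0\), \(k^{\mathrm{ne}}>0\)), is given below.

```python
# Minimal sketch of the apparent shape dynamics, Eqs. (5)-(6): integrating
# qdot = -K_LC(q) q. For k_e < 0 and k_ne > 0 the trajectory relaxes onto a limit
# cycle of radius r* = sqrt(|k_e|/k_ne) rotating at rate k_o + k_no * r*^2.
# Coefficient values are arbitrary placeholders.
import numpy as np

k_e, k_o, k_ne, k_no = -1.0, 2.0, 1.0, 0.5
eps = np.array([[0.0, 1.0], [-1.0, 0.0]])          # two-dimensional Levi-Civita symbol

def qdot(q):
    r2 = q @ q
    K_LC = (k_e + k_ne * r2) * np.eye(2) + (k_o + k_no * r2) * eps
    return -K_LC @ q

q, dt = np.array([0.01, 0.0]), 1e-3
for _ in range(200000):                            # explicit Euler integration
    q = q + qdot(q) * dt

print(np.linalg.norm(q), np.sqrt(abs(k_e) / k_ne)) # both close to the limit-cycle radius
```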
In this Section II.3, we introduced the normal form for the limit cycle in shape space, which then couples with hydrodynamics to generate the net displacement, i.e., locomotion or swimming. In this theoretical framework, the swimming dynamics are fully described by an autonomous system. Hence, by integrating this system over the cycle of shape deformation, we may obtain a general formula for the average swimming velocity for a small-amplitude swimmer [15, 36].
The position and the orientation in \(d\) dimensions are represented by an element of the \(d\)-dimensional Euclidean group \(\mathrm{SE}(d)\). We represent these by \(\mathcal{R}\in\mathbb{R}^{n\times n}\) and the time evolution is provided by its generator, \(\mathcal{A}\), via \(\dot{\mathcal{R}}=\mathcal{R}\mathcal{A}\). With the linearity of the Stokes equation, we may rewrite this generator as \(\mathcal{A}=\mathcal{A}_{\alpha}\dot{q}_{\alpha}\), and the third-rank tensor \([\mathcal{A}_{\alpha}]_{ij}=A_{ij\alpha}\) is the connection of the gauge group \(\mathrm{SE}(d)\). If the swimmer exhibits a periodic motion with period \(T_{c}\), the displacement and rotation after one beat cycle are obtained by a loop integral in shape space [36, 37, 38] as
\[\mathcal{R}(T)=\mathcal{R}_{0}\bar{\mathrm{P}}\exp\left[\int_{0}^{T_{c}} \mathcal{A}(t)\,dt\right]=\mathcal{R}_{0}\overline{\mathrm{P}}\exp\left[\oint \mathcal{A}_{\alpha}\,dq_{\alpha}\right], \tag{7}\]
where we write \(\mathcal{R}(t=0)=\mathcal{R}_{0}\) and introduce a path-ordering operator \(\overline{\mathrm{P}}\). The integral in the last term is performed over a closed loop in shape space. After expanding for a small \(\mathbf{q}\) up to its quadratic term, the swimming velocity over one beat cycle is obtained for a small-amplitude deformation by using a fourth-rank tensor \(\mathcal{F}\), called the curvature of the gauge field, as
\[\overline{A_{ij}}=\frac{1}{2}F_{ij\alpha\beta}\overline{q_{\alpha}\dot{q}_{ \beta}}, \tag{8}\]
where the overline indicates the average over one deformation cycle. Hence, the average swimming velocity is proportional to the areal velocity enclosed by the limit cycle in shape space. Using our odd-elastic representation, Eqs. (5)-(6), the limit cycle exists only when \(k^{\mathrm{e}}<0\). Then, the swimming formula can be computed as
\[\overline{A_{ij}}=F_{ij12}\left[\frac{k^{\mathrm{o}}}{2}\frac{|k^{\mathrm{e}} |}{k^{\mathrm{ne}}}+\frac{k^{\mathrm{no}}}{2}\left(\frac{|k^{\mathrm{e}}|}{k^ {\mathrm{ne}}}\right)^{2}\right]. \tag{9}\]
The terms on the right-hand side are proportional to the odd-elastic coefficients. This equation generalizes the swimming formula for an odd-elastic swimmer from the linear to the non-linear regime [15].
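For completeness, Eq. (9) can be checked directly from the limit cycle of Eqs. (5)-(6), assuming \(k^{\mathrm{ne}}>0\) (a supercritical bifurcation): on the cycle, the isotropic part of Eq. (6) vanishes, which fixes the radius, and the remaining antisymmetric part generates a uniform rotation,
\[r_{*}^{2}=\frac{|k^{\mathrm{e}}|}{k^{\mathrm{ne}}},\qquad\dot{q}_{\alpha}=-\left(k^{\mathrm{o}}+k^{\mathrm{no}}r_{*}^{2}\right)\epsilon_{\alpha\beta}q_{\beta},\]
so that \(\overline{q_{1}\dot{q}_{2}-q_{2}\dot{q}_{1}}=\left(k^{\mathrm{o}}+k^{\mathrm{no}}r_{*}^{2}\right)r_{*}^{2}\); inserting this into Eq. (8) and using the antisymmetry of \(F_{ij\alpha\beta}\) in its last two indices reproduces Eq. (9).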
In addition, as observed in many biological systems, the shape gait, or the coefficients for the odd elasticity in our framework, temporally fluctuates. Further, these active fluctuations provide an important link to non-equilibrium statistical physics. Therefore, internal noise should be taken into consideration to precisely evaluate locomotion. Thus, to complete our theory, we thoroughly investigated the impact of noise on the swimming velocity, extending the swimming formula (9) to noisy limit cycles. The detailed calculations can be found in Appendix A.
## III Non-local, non-reciprocal interactions and odd-elastic modulus
In this section, we focus on the interactions between the units of active material undergoing periodic deformation in a viscous fluid. To characterize its non-local, non-reciprocal interactions, we introduce the concept of the odd-elastic modulus.
By changing the variables from \(\mathbf{q}\) to \(\mathbf{\sigma}\), the intrinsic elastic matrix can be derived from the apparent elastic matrix as
\[\mathbf{K}=\mathbf{Q}^{-1}\mathbf{W}\hat{\mathbf{K}}\mathbf{W}^{-1}. \tag{10}\]
As already described, the elastic force or torque including the passive and active elastic response is symbolically given by
\[f_{\alpha}=-K_{\alpha\beta}\sigma_{\beta}. \tag{11}\]
When considering the intrinsic shape coordinates, we usually choose the displacement from the equilibrium for a material unit and the relative distance or angle between neighboring material units.
In the example of an odd three-sphere swimmer [14], the intrinsic elastic matrix is given by
\[K_{\alpha\beta}=k^{\mathrm{e}}\delta_{\alpha\beta}+k^{\mathrm{o}}\epsilon_{ \alpha\beta}, \tag{12}\]
for \(\alpha,\beta\in\{1,2\}\). For a Purcell swimmer with odd-elastic hinges [15], the same intrinsic elastic matrix is considered, where \(f_{i}\) indicates the torque at the \(i\)-th hinge and \(\sigma_{i}\) is the \(i\)-th relative angle between neighboring rods.
We further extend this form into an \(N\times N\) matrix representation. A schematic is shown in Fig. 3(a), where the size of the matrix is set to \(N=10\), with the colors indicating the values of the matrix components, which are chosen arbitrarily for illustration purposes. The off-diagonal components express non-local interactions. The matrix is then decomposed into its symmetric and antisymmetric parts, i.e., \(\mathbf{K}=\mathbf{K}_{\mathrm{e}}+\mathbf{K}_{\mathrm{o}}\). These correspond to the even and odd elastic matrices, respectively, and the anti-symmetric matrix represents the non-reciprocal interactions between the units of material.
Let us introduce the Lagrangian coordinate of the material for a point with shape index \(\alpha\), which is denoted by position \(\mathbf{s}_{\alpha}\in\mathbb{R}^{d}\). For simplicity and later use, here we focus on a one-dimensional elastic object such as a filament or rod, and assume its length at rest to be \(\ell\). For the Lagrangian coordinates, we take an equally-spaced arclength along the material at rest and represent it by \(s_{\alpha}\in[0,\ell]\) with its separation being \(\Delta\ell=\ell/N\). We then
rewrite Eq. (11), representing the force or torque acting on a material point with a Lagrange label \(s_{\alpha}\), by a non-local interaction represented by a kernel, \(\kappa(s_{\alpha},s_{\beta})\), as
\[f(s_{\alpha})=-\sum_{\beta}\kappa(s_{\alpha},s_{\beta})\sigma(s_{\beta}). \tag{13}\]
For large \(N\), it is also useful to consider a continuum representation. By dividing both terms in (13) by the spatial discretization, we obtain
\[\frac{f(s_{\alpha})}{\Delta\ell}=-\sum_{\beta}\frac{\kappa(s_{\alpha},s_{\beta})}{\Delta\ell}\frac{\sigma(s_{\beta})}{\Delta\ell}\,\Delta\ell. \tag{14}\]
Each term in Eq.(14) represents the density (quantity per unit length): \(\bar{f}=f/\Delta\ell\) and \(\bar{\sigma}=\sigma/\Delta\ell\) are the force or torque per unit length and the displacement per unit length, respectively, whereas \(\bar{\kappa}=\kappa/\Delta\ell\) is interpreted as an extension of the elastic modulus.
In the continuum representation, as is usually considered, we assume that these densities are well-behaved functions. Hence, by replacing the summation with an integral, the continuous form in the large-\(N\) limit is obtained as
\[\bar{f}(s)=-\int_{0}^{\ell}\bar{\kappa}(s,s^{\prime})\bar{\sigma}(s^{\prime}) \,ds^{\prime}, \tag{15}\]
where the integral is performed over the material.
A non-energy-conserving active material may possess non-symmetric components in the elastic matrix (\(K_{\alpha\beta}\neq K_{\beta\alpha}\)), and this corresponds to non-reciprocity of non-local elastic interactions:
\[\bar{\kappa}(s,s^{\prime})\neq\bar{\kappa}(s^{\prime},s). \tag{16}\]
As illustrated in Fig. 3(b), we then decompose the kernel function into reciprocal (even) and non-reciprocal (odd) components as \(\bar{\kappa}(s,s^{\prime})=\bar{\kappa}_{\rm e}(s,s^{\prime})+\bar{\kappa}_{ \rm o}(s,s^{\prime})\). Note that the non-reciprocity expressed by Eq. (16) refers to the breakdown of the symmetry under the change of two material points, \((s,s^{\prime})\mapsto(s^{\prime},s)\), and differs from the reciprocity with respect to the physical coordinates, \((i,j)\mapsto(j,i)\). The macroscopic odd elasticity often refers to non-reciprocity in the physical coordinates, while the odd-elastic modulus here refers to microscopic odd-elastic interactions [6; 8].
To extract the non-reciprocal interactions encoded in the kernel in Eqs. (13) and (15), we introduce a new physical quantity. Let us first consider the two-dimensional Fourier transform of the non-local elastic modulus in a space spanned by \(s\) and \(s^{\prime}\). The transformation of \(\bar{\kappa}(s,s^{\prime})\) with a wave vector \((\nu_{s},\nu_{s^{\prime}})\) is then given by
\[\tilde{\kappa}(\nu_{s},\nu_{s^{\prime}})=\int\int\bar{\kappa}(s,s^{\prime})e ^{-i(\nu_{s}s+\nu_{s^{\prime}}s^{\prime})}\,dsds^{\prime}, \tag{17}\]
where the integral is performed over the two Lagrangian coordinates, i.e., \((s,s^{\prime})\in[0,\ell]\times[0,\ell]\).
To characterize the interactions between the units of material, it is useful to decompose the wave vector into diagonal and perpendicular components (See Fig. 3), rather than horizontal and vertical components. The diagonal part of the elastic matrix, along the \(\nu\)-direction in Fig. 3, represents spatial variations of the ordinary elastic response, while the off-diagonal parts represent non-local interactions. To characterize this non-local behavior, we consider the wave vector along the \(\hat{\nu}\)-direction in Fig. 3, which is perpendicular to the diagonal direction, given by \(\nu_{s}+\nu_{s^{\prime}}=0\).
By plugging this relation into Eq. (17) and introducing \(\hat{\nu}=(\nu_{s}-\nu_{s^{\prime}})/2\), we can obtain a complex function,
\[\tilde{\kappa}(\hat{\nu})=\int\int\bar{\kappa}(s,s^{\prime})e^{-i\hat{\nu}(s- s^{\prime})}\,dsds^{\prime}, \tag{18}\]
which we hereafter call the _odd-elastic modulus_. The physical meaning of this quantity becomes clearer when the kernel function is decomposed into reciprocal and non-reciprocal components, which are symmetric and anti-symmetric, respectively, with respect to the exchange of two material points \((s,s^{\prime})\mapsto(s^{\prime},s)\) as
\[\bar{\kappa}_{\rm e}(s,s^{\prime})=\bar{\kappa}_{\rm e}(s^{\prime},s)\ \mbox{and}\ \bar{\kappa}_{\rm o}(s,s^{\prime})=-\bar{\kappa}_{\rm o}(s^{\prime},s). \tag{19}\]
Substituting this decomposition into Eq.(18), we have
Figure 3: Schematics of elastic matrix \(K_{\alpha\beta}\) and its continuum representation \(\bar{\kappa}(s,s^{\prime})\) with decomposition into symmetric even elasticity (reciprocal interaction) and anti-symmetric odd elasticity (non-reciprocal interaction), (a) \({\bf K}={\bf K}_{\rm e}+{\bf K}_{\rm o}\) and (b) \(\bar{\kappa}=\bar{\kappa}_{\rm e}+\bar{\kappa}_{\rm o}\). The two-dimensional Fourier transform is associated with a two-dimensional wave vector \((\nu_{s},\nu_{s^{\prime}})\). To characterize the non-reciprocal nature of the interactions encoded in the intrinsic elasticity, we consider the Fourier modes along the diagonal components and in the perpendicular direction, respectively indicated by \(\nu\) and \(\hat{\nu}\) in the schematics. The odd-elastic modulus, Eq. (18), is defined by the Fourier modes in the \(\hat{\nu}\) direction.
\(\tilde{\kappa}(\hat{\nu})=\tilde{\kappa}_{\rm e}(\hat{\nu})+\tilde{\kappa}_{\rm o}(\hat{\nu})\) with
\[\tilde{\kappa}_{\rm e} = \int\int\bar{\kappa}_{\rm e}(s,s^{\prime})e^{-i\hat{\nu}(s-s^{\prime})}\,dsds^{\prime}, \tag{20}\] \[\tilde{\kappa}_{\rm o} = \int\int\bar{\kappa}_{\rm o}(s,s^{\prime})e^{-i\hat{\nu}(s-s^{\prime})}\,dsds^{\prime}. \tag{21}\]
From the relations in Eq. (19), we find that \(\tilde{\kappa}_{\rm e}\) is a real function, whereas \(\tilde{\kappa}_{\rm o}\) is a pure imaginary function. Hence, the newly introduced odd-elastic modulus allows us to characterize reciprocal and non-reciprocal interactions using their real and imaginary components.
The odd-elastic modulus \(\tilde{\kappa}\) is equivalent to the Fourier spectrum if the elastic interactions only depend on \(s-s^{\prime}\). By definition, it is readily found that the real part of the odd-elastic modulus is an even function of \(\hat{\nu}\), and this represents the even elasticity, characterizing the non-local, reciprocal elastic interactions. The imaginary part, in contrast, encodes the odd elasticity and non-local, non-reciprocal elastic interactions, and is an odd function of \(\hat{\nu}\).
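As a numerical sanity check of this statement (an illustrative sketch with assumed discretization and kernels, not part of the original text), one can discretize Eq. (18) and verify that a kernel depending on \(s-s^{\prime}\) through a cosine (even) contributes only to the real part, while a sine (odd) kernel contributes only to the imaginary part.

```python
import numpy as np

# Discrete version of Eq. (18) for kernels on a periodic interval [0, L).
L, N = 1.0, 200
nu = 2 * np.pi * 3 / L                       # three wavelengths per period (assumed)
s = np.arange(N) * (L / N)
S, Sp = np.meshgrid(s, s, indexing="ij")
ds = L / N

kappa_e = np.cos(nu * (S - Sp))              # reciprocal (even) kernel
kappa_o = np.sin(nu * (S - Sp))              # non-reciprocal (odd) kernel

def odd_elastic_modulus(kern, nu_hat):
    """sum_{a,b} kern(s_a, s_b) * exp(-1j*nu_hat*(s_a - s_b)) * ds^2"""
    return np.sum(kern * np.exp(-1j * nu_hat * (S - Sp))) * ds**2

nu_hats = np.linspace(-2 * nu, 2 * nu, 241)
mod_e = np.array([odd_elastic_modulus(kappa_e, nh) for nh in nu_hats])
mod_o = np.array([odd_elastic_modulus(kappa_o, nh) for nh in nu_hats])

assert np.max(np.abs(mod_e.imag)) < 1e-10    # even kernel -> purely real modulus
assert np.max(np.abs(mod_o.real)) < 1e-10    # odd kernel  -> purely imaginary modulus
print(nu_hats[np.argmax(np.abs(mod_o.imag))] / nu)   # peak at nu_hat/nu = +/-1
```

The imaginary part peaks at \(\hat{\nu}=\pm\nu\), the wavenumber encoded in the odd kernel.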
In later sections, we will examine an active elastic filament, where \(f_{\alpha}\) in Eq. (11) indicates the torque and \(\sigma_{\beta}\) encodes the relative angle as in the case of the Purcell swimmer. Then, the density \(\bar{\kappa}(s,s^{\prime})\) represents an extension of the bending modulus, while \(\bar{\sigma}(s^{\prime})\) corresponds to the local curvature. In this case, we will call the odd-elastic modulus \(\tilde{\kappa}\) an _odd-bending modulus_, because it generalizes the linear relation between the torque and local curvature of the filament.
## IV Odd-elastic modulus for active filaments
In the previous sections, we examined the properties of a general odd-elastic material around the Hopf bifurcation. In the following sections, focusing on one-dimensional active filaments such as the flagella of _Chlamydomonas_ and sperm cells, we will exploit this framework to analyze the intrinsic elastic interactions that result in a limit cycle oscillation in the apparent shape space. We therefore assume that the swimmer shape gait is a known function obtained, for instance, from experimental observations.
### One-dimensional elastic sphere-spring system
To gain insights into the non-local, non-reciprocal interactions inside an elastic material, we start with a solvable one-dimensional model. Our first example is a one-dimensional sphere-spring system, where \(N\) spheres of radius \(a\) are connected by elastic springs and form a one-dimensional array as schematically shown in Fig. 4(a). The system is immersed in a viscous medium with viscosity \(\eta\), and each sphere experiences viscous drag with a drag coefficient \(\gamma=6\pi a\eta\), whereas we neglect the viscous drag on the elastic springs. We use the Lagrangian label for the sphere, \(s_{\alpha}\), which corresponds to the position of a sphere at rest. We assume a periodic boundary condition, \(s_{N+1}=s_{1}\), and the material points are equally spaced with discretization, \(\Delta\ell=\ell/N\), where \(\ell\) is the total length of the spatial period of the problem.
Let \(\zeta_{\alpha}\), the displacement of the sphere labeled by \(\alpha\), serve as the shape variable \(\sigma_{\alpha}\). We first consider a local elastic interaction using elastic springs with a spring constant \(k\) between neighboring spheres. The equation of motion is then given by
\[m\ddot{\zeta}_{\alpha}=-\gamma\dot{\zeta}_{\alpha}+k\left(\zeta_{\alpha+1}-2\zeta_{\alpha}+\zeta_{\alpha-1}\right). \tag{22}\]
In the large-\(N\) limit, its continuous representation can be obtained, where we take the \(\Delta\ell\to 0\) limit with the mass density \(\bar{m}=m/\Delta\ell\), drag per unit length \(\bar{\gamma}=\gamma/\Delta\ell\) and \(\bar{k}=k\Delta\ell\) kept constant. This argument leads to the well-known continuum equation for the displacement field \(\zeta(s,t)\) via
\[\bar{m}\frac{\partial^{2}\zeta}{\partial t^{2}}=-\bar{\gamma}\frac{\partial \zeta}{\partial t}+\bar{k}\frac{\partial^{2}\zeta}{\partial s^{2}}. \tag{23}\]
The odd-elastic-modulus approach we examine here is different: we specify the interactions that sustain a given wave pattern, rather than deriving differential equations from local microscopic interactions. Hence, instead of Eq. (22), we consider a general non-local elastic force given by Eq. (11) and write our non-local sphere-spring system as
\[m\ddot{\zeta}_{\alpha}=-\gamma\dot{\zeta}_{\alpha}-K_{\alpha\beta}\zeta_{ \beta}. \tag{24}\]
We then solve for \(K_{\alpha\beta}\) given a wave pattern \(\zeta_{\alpha}(s_{\alpha},t)\).
We first consider the frictionless case, where \(\gamma=0\). Our goal here is to calculate the kernel function \(K_{\alpha\beta}\) that sustains a given traveling wave. To do so, we consider a
Figure 4: Schematics of example models for intrinsic elasticity. (a) One-dimensional sphere-spring system. Spheres of radius \(a\) are connected by linear springs. (b) Elastic filament immersed in a viscous fluid. Assuming a small-amplitude deformation from a straight line, we parameterize the filament shape by the height \(h\).
traveling wave pattern with wavenumber \(\nu\) and angular frequency \(\omega\) as
\[\zeta_{\alpha}=A\sin(\nu s_{\alpha}-\omega t) \tag{25}\]
with an arbitrary amplitude \(A\). Then, we set an orthogonal basis \(\mathbf{W}=(\mathbf{w}^{(1)},\mathbf{w}^{(2)},\cdots)\), such that \(w_{\alpha}^{(1)}=\sin(\nu s_{\alpha})\) and \(w_{\alpha}^{(2)}=\cos(\nu s_{\alpha})\). Through direct calculations of \(\zeta_{\alpha}\) and \(\ddot{\zeta}_{\alpha}\) and comparison of both sides of Eq. (24), we readily obtain
\[\mathbf{K}=m\omega^{2}\mathbf{W}\begin{pmatrix}1&&\\ &1&\\ &&\mathbf{O}\end{pmatrix}\mathbf{W}^{\mathrm{T}}, \tag{26}\]
where the symbol in the bottom right block \(\mathbf{O}\) indicates that non-designated components are all zero. This leads to a matrix representation of the non-local elasticity,
\[K_{\alpha\beta}=m\omega^{2}\cos[\nu(s_{\alpha}-s_{\beta})]. \tag{27}\]
The continuum representation is obtained by taking the large-\(N\) limit, with the mass density \(\bar{m}\) (mass per unit length) kept constant, as
\[\bar{\kappa}(s,s^{\prime})=\bar{m}\omega^{2}\cos[\nu(s-s^{\prime})]. \tag{28}\]
Here, the scaling \(\bar{\kappa}=\kappa/\Delta\ell\) is different from that used for Eq. (23), where \(\bar{k}=k\Delta\ell\) is assumed to be constant.
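The rank-2 structure of Eq. (26) can be checked numerically (a small sketch of our own; parameter values are arbitrary, and the discrete basis vectors are normalized to unit length, so the prefactor of the cosine kernel differs from Eq. (27) by a discretization-dependent constant):

```python
import numpy as np

# Check that K = m*omega^2 (w1 w1^T + w2 w2^T) balances m*ddot(zeta) = -K zeta
# for the traveling wave zeta_a = A sin(nu s_a - omega t) of Eq. (25).
m, omega, A = 1.0, 2.0, 0.7
ell, N = 1.0, 100
nu = 2 * np.pi * 3 / ell                     # integer number of wavelengths per period
s = np.arange(N) * (ell / N)

w1 = np.sin(nu * s); w1 /= np.linalg.norm(w1)
w2 = np.cos(nu * s); w2 /= np.linalg.norm(w2)
K = m * omega**2 * (np.outer(w1, w1) + np.outer(w2, w2))

for t in np.linspace(0.0, 2 * np.pi / omega, 7):
    zeta = A * np.sin(nu * s - omega * t)
    zeta_tt = -omega**2 * zeta               # exact second time derivative
    assert np.allclose(m * zeta_tt, -K @ zeta)

# K depends on s_a - s_b only, i.e., it is the cosine kernel of Eq. (27),
# up to the normalization of the discrete basis.
assert np.allclose(K, (2 * m * omega**2 / N) * np.cos(nu * (s[:, None] - s[None, :])))
```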
The wavenumber of the propagating wave is selective in the non-local sphere-spring system, which is qualitatively different from the wave equation (23), where the interactions are local and waves with an arbitrary frequency may propagate. These differences become clearer when we solve the non-local sphere-spring system using the kernel representation of Eq. (27).
To do so, we first consider a time-frequency Fourier transform, \(\zeta_{\alpha}(t)=\int\xi_{\alpha}(p)e^{-ipt}dp\), where \(\xi_{\alpha}(p)\) is a Fourier component with frequency \(p\). We then have
\[\omega^{2}\mathbf{W}\begin{pmatrix}1&&\\ &1&\\ &&\mathbf{O}\end{pmatrix}\mathbf{W}^{\mathrm{T}}\mathbf{\xi}(p)=p^{2}\mathbf{\xi}(p), \tag{29}\]
which forms an eigenvalue problem. This may be exactly solved and the inverse Fourier transform leads to a general solution, given by
\[\zeta_{\alpha} = c_{1}\sin(\nu s_{\alpha}-\omega t+\varphi_{1})+c_{2}\cos(\nu s_{\alpha}-\omega t+\varphi_{2})+\sum_{k=3}^{N}c_{k}w_{\alpha}^{(k)}, \tag{30}\]
with constants \(\varphi_{1}\), \(\varphi_{2}\) and \(c_{\alpha}\) (\(\alpha=1,\ldots,N\)) determined by initial conditions. As the final term in Eq. (30) is a time-independent constant, only a sinusoidal wave with wavenumber \(\nu\) can propagate, and thus the wave pattern is robust against disturbance.
We then consider the same oscillatory behavior for the overdamped limit of the dynamics in Eq. (24), that is, \(m=0\). Due to viscous drag, to sustain the traveling wave, we need to inject some energy into the system. We therefore expect that the elastic representation of \(K_{\alpha\beta}\) should contain non-reciprocal, odd components.
Using an analysis similar to that for the frictionless dynamics above, we then have
\[\mathbf{K}=\gamma\omega\mathbf{W}\begin{pmatrix}&1&\\ -1&&\\ &&\mathbf{O}\end{pmatrix}\mathbf{W}^{\mathrm{T}}, \tag{31}\]
yielding the matrix representation,
\[K_{\alpha\beta}=\gamma\omega\sin[\nu(s_{\alpha}-s_{\beta})]. \tag{32}\]
The continuum representation is also obtained using similar arguments as
\[\bar{\kappa}(s,s^{\prime})=\bar{\gamma}\omega\sin[\nu(s-s^{\prime})], \tag{33}\]
where the drag per unit length \(\bar{\gamma}=\gamma/\Delta l\) is assumed to be constant in the large-\(N\) limit.
For overdamped dynamics (\(m=0\)), the elastic interactions are non-local, as in the previous case, but no longer reciprocal. The matrix form, Eq. (32), not only selects a specific wavenumber but also sustains the associated traveling wave by the non-reciprocal components. Moreover, the elasticity is purely odd in the sense that the matrix is anti-symmetric, that is, \(\mathbf{K}=-\mathbf{K}^{\mathrm{T}}\) and \(\bar{\kappa}(s,s^{\prime})=-\bar{\kappa}(s^{\prime},s)\).
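The energetic difference between the two cases can also be illustrated numerically (an illustrative sketch with assumed parameters and prefactors dropped; note that the sign of the work done by the antisymmetric kernel depends on the sign convention of the odd kernel relative to the propagation direction of the wave):

```python
import numpy as np

# Net work per beat cycle done by the elastic force f = -K zeta along the
# traveling wave of Eq. (25): zero for the symmetric (even) cosine kernel,
# non-zero for the antisymmetric (odd) sine kernel.
omega, A, ell, N = 2.0, 0.5, 1.0, 200
nu = 2 * np.pi * 3 / ell
s = np.arange(N) * (ell / N)
D = s[:, None] - s[None, :]
K_even = np.cos(nu * D)                      # cf. Eq. (27), prefactors dropped
K_odd = np.sin(nu * D)                       # cf. Eq. (32), prefactors dropped

def work_per_cycle(K, n_t=1000):
    ts = np.linspace(0.0, 2 * np.pi / omega, n_t, endpoint=False)
    dt = ts[1] - ts[0]
    W = 0.0
    for t in ts:
        zeta = A * np.sin(nu * s - omega * t)
        zeta_dot = -A * omega * np.cos(nu * s - omega * t)
        W += np.dot(-K @ zeta, zeta_dot) * dt     # instantaneous power, integrated
    return W

print("even kernel:", work_per_cycle(K_even))     # ~ 0 (conservative interactions)
print("odd kernel :", work_per_cycle(K_odd))      # non-zero (energy exchanged with the medium)
```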
In the underdamped case with non-zero \(m\) and \(\gamma\), the results are the sum of the even and odd terms. The necessity for energy injection is consistent with the existence of the odd elasticity in the interactions, and the injected energy is dissipated by the viscosity of the medium.
The odd-elastic moduli for the frictionless system and the overdamped system are simply calculated from the Fourier transform of Eqs. (28) and (33) as
\[\tilde{\kappa}(\hat{\nu})=\frac{\bar{m}\omega^{2}}{2}\left[\delta(\hat{\nu}- \nu)+\delta(\hat{\nu}+\nu)\right] \tag{34}\]
and
\[\tilde{\kappa}(\hat{\nu})=\frac{i\bar{\gamma}\omega}{2}\left[\delta(\hat{\nu}- \nu)-\delta(\hat{\nu}+\nu)\right], \tag{35}\]
respectively, where \(\nu\) is the wavenumber for a given wave pattern in Eq. (25) and \(\delta(x)\) is again the Dirac delta function. \(\tilde{\kappa}\) is a real, even function in the frictionless system, while it is a purely imaginary, odd function in the overdamped system. The wavenumber of the traveling wave is clearly captured by the singular peaks.
In summary, the sphere-spring example shows that the odd-elastic modulus captures the internal mechanisms that robustly and selectively sustain a given wave pattern. Moreover, by considering its real and imaginary parts, we can distinguish the conservative, reciprocal elastic force from the non-reciprocal force, the latter of which is associated with the energy input required in a dissipative environment.
### Small-amplitude elastohydrodynamic filament
As a second example of the use of the intrinsic non-local elastic matrix, we examine the small-amplitude dynamics of an active filament at low Reynolds number. This is one of the simplest cases in elastohydrodynamics and has been studied with regard to various aspects of filament dynamics in viscous fluid for more than half a century [39; 40; 41; 21]. In particular, the model in this section is known to be generic near the critical point of the instability [17].
As shown schematically in Fig. 4(b), we consider an infinitely long filament and let \(h(x,t)\) be the height of the elastic filament above the \(x\)-axis, parameterized by its projection onto the \(x\)-axis. We impose a periodic boundary condition with a spatial period of \(\ell\) and discretize it by \(N\) equally spaced points as in the previous example. The Lagrange label for the filament is taken by a projection onto the \(x\)-axis so that we have \(s_{\alpha}=x_{\alpha}\). The shape variables are therefore the local height, \(\sigma_{\alpha}=h(x_{\alpha})\). Under the assumption of a small-amplitude oscillation, the continuous limit of the elastohydrodynamics of the filament follows a partial differential equation given by [42; 17]
\[\xi_{\perp}\frac{\partial h}{\partial t}=-\kappa\frac{\partial^{4}h}{\partial s ^{4}}+\frac{\partial f}{\partial s}, \tag{36}\]
where \(\xi_{\perp}\) is the perpendicular drag coefficient per unit length for a slender filament, \(\kappa\) is the bending modulus, and \(f(s,t)\) is the pairwise force acting on the filament per unit length.
In elastohydrodynamics studies, the driving force \(f(s,t)\) is often a given function; otherwise, finer models for molecular activity are required to describe the dynamics of \(f(s,t)\). In contrast, here, we remove the driving force term from Eq. (36) and consider instead a non-local, non-reciprocal bending modulus, via
\[\xi_{\perp}\frac{\partial h}{\partial t}=-\int_{-\infty}^{\infty}\bar{\kappa}( s,s^{\prime})\frac{\partial^{4}h}{\partial s^{4}}(s^{\prime})\,ds^{\prime}, \tag{37}\]
that will effectively play the role of driving actuation. We then again examine the kernel function \(\bar{\kappa}(s,s^{\prime})\) that produces a sinusoidal traveling wave \(h(s,t)=A\sin(\nu s-\omega t)\). To sustain this wave, through calculations similar to those given above, we obtain the non-local elastic kernel as
\[\bar{\kappa}(s,s^{\prime})=\frac{\xi_{\perp}\omega}{\nu^{4}}\sin[\nu(s-s^{ \prime})]. \tag{38}\]
As in the overdamped case of the sphere-spring system, this is non-reciprocal and purely odd in terms of the exchange of the positions, while no even components emerge.
Although we use a continuum equation in this example, we can start from its discrete version, where the relative angles between neighboring segments are used as shape parameters. This formulation is used in the next section to deal with more general elastohydrodynamic interactions, such as finite-amplitude flagellar waveforms, where we perform numerical estimations.
## V Odd-elastic modulus for biological swimmers
In this section, to further clarify intrinsic non-local interactions in flagellar swimming, we investigate finite-amplitude biological models of flagellar waveforms in a representative sperm cell and _Chlamydomonas_ beat, in addition to experimental data for human sperm flagella provided by Ishimoto et al. [26].
The elastic properties of a flagellum are often modeled as an Euler-Bernoulli beam, in which the local elastic moments are proportional to the local curvature of the filament. To generate the flagellum wave, as seen in the small-amplitude example, we require internal actuation in the flagellar model. Our theory of odd elastohydrodynamics developed in Section II, however, incorporates the internal actuation as non-local generalized elasticity, together with the passive elastic response. In this section, we numerically examine the intrinsic elasticity of a swimming filament by analyzing the given limit cycle dynamics in shape space (see the inset of Fig. 5).
### Numerical methods
To implement the flagellar elastohydrodynamics, we model the swimming flagellum as a slender filament moving in a plane and represent its dynamics by the linkages model [43], where \(N+1\) rods of length \(\Delta\ell\) are
Figure 5: Schematic of coarse-grained representation of elastic filament and typical dynamics in \(q_{1}-q_{2}\) shape space. We represent the flagellum by \(N+1\) rods of length \(\Delta\ell\), which are connected at each end by elastic hinges. The shape configuration is then specified by the relative angles between neighboring rods, denoted by \(\sigma_{\alpha}\) (\(\alpha=1,2,\cdots,N\)). The rotation of the filament is described by the angle between the horizontal axis and the first rod denoted by \(\theta\). [Inset] The shape gait is given by the autonomous system and we show its typical trajectory in the \(q_{1}-q_{2}\) shape space. The flagellar waveform approaches a periodic pattern described by the stable limit cycle, after starting from the initial point near the origin.
connected at each end by elastic hinges to form a single filament. The shape configuration is then specified by the relative angles between neighboring rods, denoted by \(\sigma_{\alpha}\) (\(\alpha=1,2,\cdots,N\); Fig. 5). These relative angles are discretized representations of the local curvature, and for a passive elastic filament, a linear elastic torque is applied at each hinge. Here, however, we generalize this torque to non-local, nonlinear interactions, analogous to the dynamics of Eq. (5).
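For concreteness, the geometry of this linkage representation can be reconstructed as follows (a minimal sketch under assumed conventions: the proximal end is placed at the origin, \(\theta\) orients the first rod, and the \(\sigma_{\alpha}\) accumulate along the filament):

```python
import numpy as np

# Reconstruct joint positions of the linkage model from the first-rod angle
# theta and the relative angles sigma between neighbouring rods.
def linkage_positions(theta, sigma, d_ell):
    phi = theta + np.concatenate(([0.0], np.cumsum(sigma)))   # rod orientations
    steps = d_ell * np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return np.vstack([[0.0, 0.0], np.cumsum(steps, axis=0)])  # joint coordinates

# Example: N = 80 hinges, a gentle uniform bend
N, d_ell = 80, 1.0 / 81
xy = linkage_positions(theta=0.0, sigma=np.full(N, 0.02), d_ell=d_ell)
print(xy.shape)   # (82, 2): the N+1 rods have N+2 end points
```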
The shape gait of the filament is described by a stable limit cycle in the \(q_{1}-q_{2}\) apparent shape space (inset of Fig. 5). To transform between the two representations \(\mathbf{\sigma}\) and \(\mathbf{q}\), let us introduce \(\mathbf{w}^{(\alpha)}\) as the lowest \(N\) PCA modes obtained either from data generated through a mathematical model or from experimental data. The number of intrinsic shape coordinates should be smaller than the number of PCA modes, \(N_{\text{flag}}\), in the original data. We then expand the intrinsic shape coordinates in the PCA modes \(\mathbf{w}^{(\alpha)}\) as \(\mathbf{\sigma}=q_{\alpha}\mathbf{w}^{(\alpha)}\), or \(\mathbf{\sigma}=\mathbf{W}\mathbf{q}\) in matrix form, where the columns of \(\mathbf{W}=(\mathbf{w}^{(1)},\mathbf{w}^{(2)},\cdots,\mathbf{w}^{(N)})\in\mathrm{O}(N)\) form an orthonormal basis of the shape space. To apply our theory, we consider the flagellar waveforms described by Eqs. (5)-(6), and assume the form of the damping modes appearing in the right-bottom block in Eq. (5), given by \(\hat{\mathbf{K}}^{\mathrm{d}}=k^{\mathrm{d}}\mathbf{I}_{N-2}\), where \(k^{\mathrm{d}}\) is a non-negative constant and \(\mathbf{I}_{N-2}\) is the \((N-2)\)-dimensional identity matrix.
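As an illustration of this change of basis (our own sketch on synthetic data; the PCA here is a plain SVD of mean-centered curvature profiles, not the authors' pipeline), the apparent shape coordinates can be obtained as:

```python
import numpy as np

# Synthetic stack of curvature (relative-angle) profiles: one row per frame.
rng = np.random.default_rng(1)
n_frames, N = 400, 80
t = np.linspace(0.0, 4 * np.pi, n_frames)[:, None]
s = np.linspace(0.0, 1.0, N)[None, :]
curv = np.sin(2 * np.pi * 1.5 * s - t) + 0.05 * rng.normal(size=(n_frames, N))

sigma0 = curv.mean(axis=0)                    # time-averaged curvature
X = curv - sigma0
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:2].T                                  # two leading PCA modes, shape (N, 2)
q = X @ W                                     # apparent shape coordinates (q1, q2)

# A periodic waveform traces a closed loop (the limit cycle) in the (q1, q2) plane.
print(q.shape)   # (400, 2)
```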
When the flagellar waveform possesses non-zero time-averaged curvature, \(\mathbf{\sigma}_{0}\), this part cannot be captured by the PCA modes. The odd-elastic representation of Eq. (3) is therefore extended to the form
\[\dot{\mathbf{\sigma}}=-\mathbf{Q}\mathbf{W}\hat{\mathbf{K}}\mathbf{W}^{\mathrm{T} }(\mathbf{\sigma}-\mathbf{\sigma}_{0}). \tag{39}\]
Also, this change affects the factor \(r^{2}\) in Eq. (6) as
\[r^{2}=\left(\mathbf{w}^{(1)}\cdot(\mathbf{\sigma}-\mathbf{\sigma}_{0})\right)^{2}+\left( \mathbf{w}^{(2)}\cdot(\mathbf{\sigma}-\mathbf{\sigma}_{0})\right)^{2}. \tag{40}\]
In the following examples, we consider a freely swimming sperm flagellum in Secs. V.2 and V.4, and a clamped _Chlamydomonas_ flagellum in Sec. V.3. We neglect the sperm head and _Chlamydomonas_ cell body in these examples in order to showcase the odd-bending modulus for different wave patterns. The elastohydrodynamic coupling, represented by the matrix \(\mathbf{Q}\), is numerically computed by the coarse-grained method based on resistive force theory [43], for which we used \(N=80\) links.
### Sinusoidal flagellum model
We start with a representation used as a simple but canonical model of a sperm flagellum [44], where the local curvature, or relative angle in the discretized model, at the arc length \(s_{\alpha}\in[0,\ell]\) is given by a sinusoidal function in the form
\[\sigma_{\alpha}=C_{1}\sin(\nu s_{\alpha}-\omega t). \tag{41}\]
Here, the constants \(C_{1},\nu\), and \(\omega\) are the curvature amplitude, wavenumber, and beat angular frequency, respectively. This simple sinusoidal function is not only theoretically useful, but is also representative of many sperm flagella of marine species [45; 27]. Here, as a reasonable choice to match the flagellar waveforms, we set \(\nu=3\pi\). The corresponding swimming dynamics are shown in Fig. 6(a).
We then non-dimensionalize the system. We employ the flagellar length as the unit of the length scale (\(\ell=1\)). The linear odd elasticity \(k^{\mathrm{o}}\) is identical to the phase velocity and hence characterizes the timescale of the beat cycle, and we set \(k^{\mathrm{o}}=1\). After we fix the length scale and timescale, the only remaining physical unit is the force scale. We use the elastic force as the unit for the force scale by setting \(k^{\mathrm{e}}=1\). There is, therefore, one dimensionless parameter remaining in the system that characterizes the ratio between the timescales for the elastic and viscous responses, known as the sperm number, \(\mathrm{Sp}=\ell(\xi_{\perp}k^{\mathrm{o}}/|k^{\mathrm{e}}|)^{1/4}\) [43], once we set \(k^{\mathrm{ne}}=1\) to fix the value of \(C_{1}\) and the radius of the limit cycle. We also find from Eq. (41) that \(k^{\mathrm{no}}=k^{\mathrm{d}}=0\).
Figure 6: Flagellar waveforms and odd-bending modulus along circle with radius \(r=\sqrt{q_{1}^{2}+q_{2}^{2}}\) in two-dimensional shape space for sinusoidal flagellum with wavenumber \(\nu/2\pi=1.5\): (a) superposed waveforms for a swimming flagellum during one beat cycle with its left end initially located at \((x,y)=(0,0)\), (b) \(r=0\) at the origin, (c) \(r=1\) on the limit cycle orbit, and (d) \(r=1.5\) outside the limit cycle. In each panel, the real and imaginary parts are shown by the blue broken curve and the red dotted curve, respectively. The stable limit cycle corresponds to a circle of radius \(r=1\).
As a typical parameter for swimming sperm flagella, we set \(\mathrm{Sp}=3\) and compute the odd-bending modulus obtained from the intrinsic elastic matrix. Due to the nonlinear nature of the elasticity, the odd-bending modulus generally depends on the instantaneous configuration. Nonetheless, due to the rotational symmetry of the dynamics in shape space, the odd-bending modulus is almost constant along a circle with a constant radius \(r=\sqrt{q_{1}^{2}+q_{2}^{2}}\).
In Fig. 6(b-d), we plot the odd-bending modulus at \(r=0\), and its average over circles of radius \(r=1\) and \(1.5\), where \(r=1\) corresponds to the stable limit cycle orbit. The odd-bending modulus for the straight configuration (\(r=0\)) possesses a negative bending modulus in its reciprocal part around \(\hat{\nu}/2\pi=\pm 1.5\) [Fig. 6(b)], indicating that the wave pattern emerges as an instability for the straight rod. The non-reciprocal part also has a peak around \(\hat{\nu}/2\pi=\pm 1.5\), and this corresponds to the wave traveling along the flagellum.
On the limit cycle orbit [Fig. 6(c)], remarkably, the real part of the odd-bending modulus almost vanishes and the non-reciprocal interactions are dominant, as observed in the overdamped sphere-spring system described in Section IV. When the orbit moves outside the limit cycle, as in Fig. 6(d), due to the nonlinearity, the even elasticity is enhanced for the wavenumber \(\hat{\nu}/2\pi\approx\pm 1.5\), while the non-reciprocal interactions that cause the flagellar wave to propagate are also strengthened.
### Chlamydomonas flagellum model
We now proceed to another type of simple model, reproducing a _Chlamydomonas_ flagellum, which exhibits one of the most studied flagellar beats, characterized by its asymmetric ciliary beat pattern. According to Geyer et al. [46], the local curvature of the _C. reinhardtii_ flagellum is well represented by a simple function,
\[\sigma_{\alpha}=C_{0}+C_{1}\sin(\nu s_{\alpha}-\omega t), \tag{42}\]
as in the previous model, but with a non-zero constant \(C_{0}\) for the mean curvature. We set \(C_{0}=-1/40\), \(\nu=2\pi\), \(k^{\mathrm{ne}}=25/4\), and the sperm number \(\mathrm{Sp}=1\) for a biologically reasonable waveform, as plotted in Fig. 7(a). Other parameters are the same as in Section V.2. Here, we clamped the proximal end of the _Chlamydomonas_ flagellum at \((x,y)=(0,0)\) and neglected the cell body and the other flagellum for simplicity. In computing the elastohydrodynamics, we further imposed the clamped boundary condition by removing the rows for the rigid body motion in Eq. (2).
The resulting odd-bending modulus averaged over the limit cycle is plotted in Fig. 7(b). In this asymmetric beat pattern, the even part of the non-local elastic interactions remains non-zero and takes both positive and negative signs with peaks at \(\hat{\nu}/2\pi\approx 0.5\) and \(\hat{\nu}/2\pi\approx 1.5\), respectively. In contrast, the peak for the non-reciprocal interactions coincides with the wavenumber of the flagellar waveform, indicating that the odd elasticity drives the wave as non-conventional internal actuation.
### Human sperm flagellum model
We now proceed to analyze human sperm data for our active elastic filament. Here, we approximate the limit cycle orbit as a unit circle in the two-dimensional flagellar PCA space using Eq. (5) by expanding the shape variable using the flagellar PCA modes. The parameters used in this section are the same as those used in Section V.2, except that we use a non-zero value of \(k^{\mathrm{d}}=0.1\) to ensure the existence of a stable limit cycle in the \(N\)-dimensional shape space. The free swimming behavior is shown in Fig. 8(a) as superposed snapshots of the flagellum.
As in the previous examples, we plot the odd-bending modulus averaged over the limit cycle in Fig. 8(b). The reciprocal interactions represented as the real part of the odd-bending modulus can take both positive and negative values depending on the wavenumber \(\hat{\nu}\). The peaks around \(\hat{\nu}\approx 0\) indicate the local passive elastic response of the flagellum. The negative reciprocal
Figure 8: Flagellar waveforms and odd-bending modulus along stable limit cycle in data-driven human sperm model. (a) Superposed waveforms for a simulated sperm flagellum during one beat cycle with its left end initially located at \((x,y)=(0,0)\). (b) Real and imaginary parts of the odd-bending modulus.
Figure 7: Flagellar waveforms and odd-bending modulus along stable limit cycle for _Chlamydomonas_ model with proximal end clamped at \((x,y)=(0,0)\). (a) Superposed waveforms during one beat cycle. (b) Real and imaginary parts of the odd-bending modulus.
elasticity has a peak around a wavenumber \(\hat{\nu}/2\pi\approx 2\), where the peaks for the non-reciprocal elastic interactions are also located. This is similar to the generation of the wave as a Hopf instability, as shown in Fig. 6(b). These observations, therefore, imply the following mechanical balance. The internal activity, shown by the peak in the odd elasticity and negative even elasticity, generates a flagellar wave, which is relaxed by the passive elasticity characterized by a local even elastic response.
## VI Discussion and Conclusions
In this study, to formulate the dynamics of a living material in a viscous fluid, we investigated a general description of swimming under a periodic limit cycle oscillation by extending the concept of odd elasticity to a nonlinear regime. By means of a change of basis from intrinsic to apparent shape coordinates, we reduced the shape dynamics to the normal form of a Hopf bifurcation, which is in turn mapped to nonlinear odd elasticity. This formulation, which we refer to as _odd elastohydrodynamics_, then enables us to access the internal non-local, non-reciprocal interactions in the intrinsic shape space. Further, to characterize the internal activity as well as the passive elastic response, we introduced a new concept, the _odd-elastic modulus_, defined by a spatial Fourier transform in an extended space. Of note, this odd-elastic modulus is distinct from the widely used complex modulus or dynamic modulus defined in frequency Fourier space [34].
With the help of the autonomous odd-elastic dynamics of an active system, we were able to examine the general aspects of microswimmer dynamics. Furthermore, in the Appendices, we examined in detail the effects of noise from the swimming gait on the swimming performance by extending the well-known swimming formula that provides the average swimming velocity to a general noisy limit cycle. By calculating the probabilistic areal velocity in shape space, we found that the effect of noise on the odd elasticity is negligibly small in a small-deformation regime. Further analysis of the noisy limit cycle in the shape space allowed us to bridge the entropy production and work done by the odd elasticity in the nonlinear regime, making it consistent with the physical interpretation that the internal actuation inside the elastic material is described by odd elasticity.
Then, we applied our theory to the analysis of the internal interactions of living organisms, focusing on flagellar swimming. From solvable simple models to biological flagellar waveforms for _Chlamydomonas_ and sperm cells, we studied the odd-bending modulus to decipher the non-local, non-reciprocal inner interactions within the material. In particular, we found that the swimmers can possess negative reciprocal even elasticity at some spatial frequencies, indicating mechanical instability by internal actuation. The imaginary part of the odd-bending modulus is the material non-reciprocal response and corresponds to the odd elasticity, which represents the speed of the generated flagellar wave.
For the limit cycle, we found that the even elasticity ceases for some simple models, suggesting a simple nonlinear description of the material. To illustrate its usefulness in a biological context, we further analyzed the intrinsic odd-elastic response by using _Chlamydomonas_ and human sperm flagella models, deciphering non-local elastic interactions in biological flagella.
It is useful to point out that the current description of active elastic material includes elastohydrodynamic coupling with the outer viscous environment. We have in turn expanded the notion of odd elasticity as a stress-strain linear relation to an effective material constitutive relation that deals with the activity, elasticity, and fluid dynamics. In this paper, we have assumed a circular trajectory in the apparent shape space as a limit cycle. However, by definition, the odd-elastic modulus can be calculated for any closed loop, indicating potential applicability to a wide range of biological data.
Not limited to a one-dimensional filament, our methodology is applicable to higher-dimensional materials such as active elastic membranes and bulk dynamics. The odd response of materials has also been examined in terms of odd viscosity and odd viscoelasticity [48; 47; 8], and natural extensions of the current methods to these odd materials may be expressed as a viscoelastic force representation, \(\mathbf{f}=-\mathbf{K}\mathbf{\sigma}-\mathbf{J}\mathbf{\dot{\sigma}}\), where odd viscosity is encoded by a non-symmetric matrix \(\mathbf{J}\) coupled to the rate of deformation. Extension to an active elastic matrix in a viscoelastic medium is also an interesting future direction, where it is necessary to numerically calculate the hydrodynamic force \(\mathbf{Q}\) from the viscoelastic fluid equation.
The current methodology is also applicable to wet active matter systems, as these extended descriptions of active materials could be useful for simplifying the modeling of elastohydrodynamic interactions between cells. These modeling methods will therefore contribute to a better understanding of the underlying principles of collective behavior, in particular when elastohydrodynamics play an essential role, as reported for sperm population dynamics [49; 50]. Furthermore, microswimmers change their swimming pattern in response to the external environment, such as mammalian spermatozoa before and after capacitation [51; 52], marine sperm cells in chemoattractant gradients [45; 53], phototactic algae such as _Chlamydomonas_ and _Volvox_ under a light source[54; 55], and ciliates in response to mechanical stimuli [56; 57; 58]. Our description of odd elasticity will therefore enable unified comparisons for such diverse waveform morphologies of microswimmers to characterize the differences among species and individual cells.
###### Acknowledgements.
K.I. acknowledges the Japan Society for the Promotion of Science (JSPS) KAKENHI for Transformative Research Areas A (Grant No. 21H05309), and the Japan Science and Technology Agency (JST), FOREST (Grant No. JPMJFR212N). C.M. is a JSPS International Research Fellow (PE22023) and acknowledges funding support by JSPS (Grant No. 22KF0197). K.Y. acknowledges support by a JSPS Grant-in-Aid for JSPS Fellows (Grant No. 22KJ1640). K.I., C.M., and K.Y. were supported in part by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located at Kyoto University.
## Appendix A Swimming with noisy limit cycle
In this Appendix, to complete the general theory of odd elastohydrodynamics in the presence of noise caused by internal actuation, we first extend the swimming formula for the average swimming velocity to a temporally fluctuating swimming gait, following biological observations of a noisy limit cycle in shape space. Using the gauge-field formulation for microswimming, we investigate the effects of internal active noise on swimming velocity. The role of odd elasticity is then further discussed in terms of non-equilibrium thermodynamics.
### Swimming with probability current
We now consider the swimming formula in Eq. (8) in a statistical sense [15]. With a bracket indicating an ensemble average, the average swimming formula becomes
\[\langle\mathcal{A}\rangle=\frac{1}{2}F_{ij\alpha\beta}\langle q_{\alpha} \dot{q}_{\beta}\rangle=\mathrm{Tr}(\mathcal{F}\mathbf{J}), \tag{10}\]
where the trace is taken over the shape components. The anti-symmetric matrix \(\mathbf{J}\) is the probabilistic areal velocity matrix, given by
\[J_{\alpha\beta}=\bigg{\langle}\oint q_{\alpha}\,dq_{\beta}\bigg{\rangle}. \tag{11}\]
In the two-dimensional shape space, this statistical swimming formula (10) simply reads as
\[\langle A_{ij}\rangle=2F_{ij12}J \tag{12}\]
if we write \(J_{\alpha\beta}=J\epsilon_{\alpha\beta}\). The swimming speed in the form of a gauge potential is proportional to the probabilistic areal velocity \(J\).
To examine the effects of a noisy shape gait on the swimming velocity, we therefore need to evaluate the value of \(\mathbf{J}\) by introducing stochastic dynamics in shape space.
We consider an \(N\)-dimensional autonomous system in the apparent shape space (5) with Gaussian white noise, given by stochastic differential equations (SDE) in the sense of Stratonovich in the form
\[\frac{dq_{\alpha}}{dt}=f_{\alpha}(\mathbf{q})+G_{\alpha\beta}(\mathbf{q})\zeta_{ \beta}(t), \tag{13}\]
where the noise has a zero-mean normal Gaussian form, that is, \(\langle\zeta_{\alpha}(t)\rangle=0\) and \(\langle\zeta_{\alpha}(t)\zeta_{\beta}(0)\rangle=\delta_{\alpha\beta}\delta(t)\) with \(\delta(t)\) denoting the Dirac delta function. The function \(f_{\alpha}\) corresponds to the generalized force and torque in the apparent shape coordinates and is provided by Eq. (5) as \(f_{\alpha}=-\hat{K}_{\alpha\beta}q_{\beta}\). The diffusion tensor is introduced as \(\mathbf{D}=(1/2)\mathbf{G}\mathbf{G}^{\mathrm{T}}\) and is symmetric and positive-definite by definition. The corresponding Fokker-Planck equation for the probability distribution function \(P(\mathbf{q},t)\) and the probability current \(\mathbf{j}(\mathbf{q},t)\) is
\[\frac{\partial P}{\partial t}+\nabla\cdot\mathbf{j}=0, \tag{14}\]
which is explicitly given in the sense of Stratonovich by
\[\frac{\partial P}{\partial t}=-\frac{\partial}{\partial q_{\alpha}}\left[f_{ \alpha}P-\frac{1}{2}\left(G_{\alpha\gamma}\frac{\partial}{\partial q_{\beta}} \left(G_{\beta\gamma}P\right)\right)\right], \tag{15}\]
where \(\mathbf{j}\) is provided by the terms in the bracket on the right-hand side of Eq. (15).
For brevity, we here only consider the dynamics in the two-dimensional \(q_{1}-q_{2}\) shape space on which the stable limit cycle is located. This simplification is equivalent to the assumption of \(\mathbf{K}^{\mathrm{d}}=\mathbf{0}\) in Eq. (5). When we relax this assumption to include \(N-2\) stable modes, the swimming formula in Eq. (12) is calculated as the sum of contributions from other dimensions as discussed in detail in Appendix C of Ishimoto et al. [15]. We may then write the noise tensor \(\mathbf{G}\) as
\[\mathbf{G}=g_{r}(r)\sqrt{2D_{r}}\,\mathbf{e}_{r}\otimes\mathbf{e}_{r}+rg_{\theta}(r) \sqrt{2D_{\theta}}\,\mathbf{e}_{\theta}\otimes\mathbf{e}_{\theta}, \tag{16}\]
with the unit bases denoted by \(\mathbf{e}_{r}\) and \(\mathbf{e}_{\theta}\) for the radial and angle directions, respectively. The dynamics of Eq. (13), represented in Cartesian apparent shape coordinates, is then reduced to a set of equations in polar coordinates, in the sense of Stratonovich, as
\[\frac{dr}{dt}=f_{r}(r)+g_{r}(r)\sqrt{2D_{r}}\zeta_{r},\ \frac{d\theta}{dt}=f_{ \theta}(r)+g_{\theta}(r)\sqrt{2D_{\theta}}\,\zeta_{\theta}. \tag{17}\]
Here, the suffixes \(r\) and \(\theta\) indicate the radial and angle coordinates, respectively, and the zero-mean noise satisfies the relation \(\langle\zeta_{a}(t)\zeta_{b}(0)\rangle=\delta_{ab}\delta(t)\) for \(a,b\in\{r,\theta\}\). We then rewrite the probability current \(\mathbf{j}\) in polar coordinates as
\[\mathbf{j} = \left[\left(f_{r}(r)+D_{r}g_{r}(r)g_{r}^{\prime}(r)\right)P-D_{r} \frac{\partial}{\partial r}\left([g_{r}(r)]^{2}P\right)\right]\mathbf{e}_{r} \tag{18}\] \[+\left[rf_{\theta}(r)P-D_{\theta}\frac{\partial}{\partial\theta} \left(rg_{\theta}(r)P\right)\right]\mathbf{e}_{\theta},\]
yielding the Fokker-Planck equation (101) in the form
\[\frac{\partial P}{\partial t} = -\frac{\partial}{\partial r}\left[\left(f_{r}+D_{r}g_{r}g_{r}^{ \prime}\right)P-D_{r}\frac{\partial}{\partial r}\left(g_{r}^{2}P\right)\right] \tag{102}\] \[-f_{\theta}\frac{\partial P}{\partial\theta}+D_{\theta}g_{\theta }\frac{\partial^{2}P}{\partial\theta^{2}},\]
where the prime symbol denotes the derivative with respect to \(r\). From the rotational symmetry of the system (100), the steady distribution \(P_{\rm st}\) is independent of the angle coordinate. By using the boundary condition for the steady distribution satisfying a zero distribution at infinity, we obtain the following relation for the steady distribution, after once integrating over \(r\):
\[\left(f_{r}(r)+D_{r}g_{r}(r)g_{r}^{\prime}(r)\right)P_{\rm st}-D_{r}\frac{d}{ dr}\left(\{g_{r}(r)\}^{2}P_{\rm st}\right)=0. \tag{103}\]
The steady distribution is then formally solved as
\[P_{\rm st}(r)=\frac{N_{\rm st}}{g_{r}(r)}\exp\left[\int^{r}\frac{f_{r}(x)}{D_{ r}\{g_{r}(x)\}^{2}}dx\right], \tag{104}\]
where \(N_{\rm st}\) is the normalization factor. Note that the steady distribution is known to be unique [59] and here it is independent of both the angle coordinates and odd elasticity because of the form \(f_{r}=-k^{\rm e}r-k^{\rm ne}r^{3}\). For a steady distribution, the probability current \(\mathbf{j}\) (101) possesses only angular components due to the condition (103), leading to the form
\[\mathbf{j}_{\rm st}=rf_{\theta}(r)P_{\rm st}(r)\mathbf{e}_{\theta}, \tag{105}\]
with \(f_{\theta}(r)=k^{\rm o}+k^{\rm no}r^{2}\) in our dynamics. The non-vanishing probability current characterizes the violation of the detailed balance or the non-reciprocity of the non-equilibrium system, and has been studied from several perspectives, such as irreversible circulation [60], curl flux [61; 62], and non-reciprocity [63; 64].
To calculate the value of \(J\), we use the internal noise model proposed by Ma et al. [22] based on experimental observation of bull sperm. Their study found that the fluctuating sperm flagellar waveforms were well described by the SDE model with multiplicative white noise given by
\[g_{r}=r\ \ {\rm and}\ \ g_{\theta}=1, \tag{106}\]
where \(D_{r}\) and \(D_{\theta}\) correspond to the diffusion constant in the amplitude and angle coordinates, respectively. This noise term in Eq. (106) is obtained from additive noise in the (linear) even and odd elastic constants \(k^{\rm e}\) and \(k^{\rm o}\) by the transformations \(k^{\rm e}\mapsto k^{\rm e}+\sqrt{2D_{r}}\,\zeta_{r}(t)\) and \(k^{\rm o}\mapsto k^{\rm o}+\sqrt{2D_{\theta}}\,\zeta_{\theta}(t)\) in the deterministic dynamics of Eq. (5) [65]. Because \(k^{\rm e}\) and \(k^{\rm o}\) represent the amplitude and phase speed of the limit cycle, respectively, the noise strengths \(D_{r}\) and \(D_{\theta}\) therefore correspond to the amplitude and angle diffusion.
The detailed form of the steady solution depends on the sign of \(k^{\rm e}\)[65]. Indeed, when \(k^{\rm e}\geq 0\), the origin is stable and the possible steady solution is simply the Dirac delta distribution at the origin,
\[P_{\rm st}(r)=\frac{1}{\pi}\delta(r). \tag{107}\]
In the case with \(k^{\rm e}<0\), however, the origin is unstable and a steady limit cycle emerges, leading to a steady distribution in the form
\[P_{\rm st}=N_{\rm st}r^{\frac{|k^{\rm e}|}{D_{r}}-1}\exp\left[-\frac{k^{\rm ne }}{2D_{r}}r^{2}\right] \tag{108}\]
with a normalization prefactor \(N_{\rm st}\). This probability function vanishes at the origin when the noise is sufficiently small, \(D_{r}<|k^{\rm e}|\), whereas the distribution becomes singular at the origin once the magnitude of the noise is increased to \(D_{r}>|k^{\rm e}|\). From the steady distribution with \(|k^{\rm e}|>D_{r}\), as plotted in Fig. 9(a), it can be seen that for the volcano-like function, the maximum values are located on a circle of radius \(r^{*}=\sqrt{(|k^{\rm e}|-D_{r})/k^{\rm ne}}\), which monotonically decreases as the diffusion \(D_{r}\) increases.
Note that the distribution in Eq. (107) is always possible because the origin is a stationary solution of the stochastic system with our choice of noise (106). The resulting steady distribution is therefore obtained by the summation of (107) and (108), with their weights depending on the initial distribution.
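These features can be reproduced by a direct simulation of the radial dynamics (an illustrative Euler–Maruyama sketch of our own; the Stratonovich equation (17) with the multiplicative noise \(g_{r}=r\) of Eq. (106) is integrated in Itô form, which adds the drift correction \(D_{r}r\); parameter values follow Fig. 9):

```python
import numpy as np

# Radial SDE with multiplicative noise, parameters as in Fig. 9:
# k_e = -1, k_ne = 1, D_r = 0.3, so the predicted peak radius is
# r* = sqrt((|k_e| - D_r)/k_ne) ~ 0.84 and <r^2> = |k_e|/k_ne = 1.
rng = np.random.default_rng(2)
k_e, k_ne, D_r = -1.0, 1.0, 0.3
dt, n_steps, burn = 2e-3, 500_000, 50_000
r, samples = 1.0, np.empty(n_steps)
for i in range(n_steps):
    drift = -k_e * r - k_ne * r**3 + D_r * r          # f_r(r) + Ito correction
    r += drift * dt + np.sqrt(2.0 * D_r * dt) * r * rng.normal()
    samples[i] = r

r_star = np.sqrt((abs(k_e) - D_r) / k_ne)
hist, edges = np.histogram(samples[burn:], bins=200, density=True)
peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
print("histogram peak:", round(peak, 2), " predicted r*:", round(r_star, 2))
print("<r^2>:", round(np.mean(samples[burn:] ** 2), 2), " predicted:", abs(k_e) / k_ne)
```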
Let us assume \(k^{\rm e}<0\) and that the initial distribution does not contain the \(r=0\) state. Then, the steady distribution can be further expressed with an explicit form of the normalization factor \(N_{\rm st}\), given by
\[N_{\rm st}=\frac{\sqrt{2}}{\pi}\left(\sqrt{\frac{k^{\rm ne}}{2D_{r}}}\right)^ {\frac{|k^{\rm e}|}{D_{r}}-1}\Bigg{/}\ \Gamma\left(\frac{|k^{\rm e}|}{2D_{r}}\right), \tag{109}\]
Figure 9: Steady probability distribution and probabilistic current for system with noisy limit cycle with biologically relevant multiplicative noise. Parameters are set as \(k^{\rm e}=-1,k^{\rm ne}=1,k^{\rm o}=1,k^{\rm no}=1\), and \(D_{r}=0.3\). (a) Steady distribution function with a volcano shape. The distribution function has a degenerate peak and reaches zero at the origin and infinity. The function is normalized so that the integrated probability becomes unity. (b) Distribution function projected onto the \(q_{1}-q_{2}\) plane and superposed with probabilistic current vectors. The circular current has a maximum strength around the peak of the distribution.
where \(\Gamma(x)\) in the denominator indicates the gamma function. Then, the steady-state probability current \(\mathbf{j}_{\rm st}\) in Eq. (105) possesses a non-zero value. The plots in Fig. 9(b) show the rotational probability current superposed on the two-dimensional projection of the probability distribution. The magnitude of \(\mathbf{j}\) has a maximum value on a circle, which roughly overlaps with the ridge of the distribution function. Also, we can calculate \(J\) as
\[J = \left\langle\oint q_{1}\,dq_{2}\right\rangle=\frac{k^{\rm o}}{2} \langle r^{2}\rangle+\frac{k^{\rm no}}{2}\langle r^{4}\rangle \tag{18}\] \[= \frac{k^{\rm o}}{2}\frac{|k^{\rm e}|}{k^{\rm ne}}+\frac{k^{\rm no }}{2}\frac{|k^{\rm e}|}{k^{\rm ne}}\left(\frac{|k^{\rm e}|}{k^{\rm ne}}+\frac{ 2D_{r}}{k^{\rm ne}}\right).\]
The first term is proportional to the linear odd elasticity \(k^{\rm o}\) and is equivalent to the deterministic case. The noise effects appear only in the nonlinear odd-elastic term, which increases (decreases) with increasing noise magnitude for \(k^{\rm no}>0\) (\(k^{\rm no}<0\)), although the nonlinear effects are sub-dominant compared with the linear odd-elastic term. Note that this dependence is a characteristic feature of the noise form in Eq. (106); the stochastic model with simple additive noise provides qualitatively similar results for small noise but different results in general (see Appendix B for details).
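For reference, the two moments entering this expression follow from Gamma-function integrals of the radial weight in Eq. (108) (a brief sketch; we write the weight as \(r^{a-1}e^{-br^{2}}\) with \(a=|k^{\rm e}|/D_{r}\) and \(b=k^{\rm ne}/(2D_{r})\), and take the averages against this radial density):
\[\langle r^{2n}\rangle=\frac{\int_{0}^{\infty}r^{2n}\,r^{a-1}e^{-br^{2}}\,dr}{\int_{0}^{\infty}r^{a-1}e^{-br^{2}}\,dr}=\frac{\Gamma\!\left(\frac{a}{2}+n\right)}{b^{\,n}\,\Gamma\!\left(\frac{a}{2}\right)},\]
so that
\[\langle r^{2}\rangle=\frac{a}{2b}=\frac{|k^{\rm e}|}{k^{\rm ne}},\qquad \langle r^{4}\rangle=\frac{a}{2b}\left(\frac{a}{2b}+\frac{1}{b}\right)=\frac{|k^{\rm e}|}{k^{\rm ne}}\left(\frac{|k^{\rm e}|}{k^{\rm ne}}+\frac{2D_{r}}{k^{\rm ne}}\right),\]
which reproduces the two terms above.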
The non-zero probability current due to the noisy limit cycle is generated only by the odd parts of the elasticity matrix, and from the swimming formula (12), the average swimming velocity is also proportional to the magnitude of the odd elasticity. When \(k^{\rm e}>0\), however, the steady probability distribution shrinks to the origin and the steady-state probability current vanishes, yielding an average swimming velocity of zero. Moreover, the noise effect does not appear in the dominant linear odd-elastic term, indicating that the fluctuating shape gait does not impact swimming under the noisy limit cycle. However, the effects of noise depend on the stochastic model, and some qualitative differences are discussed in Appendix B.
In summary, the autonomous odd-elastic dynamics of an active system and the gauge-field formulation of microswimming enable us to examine the effects of noise due to the swimming gait on the swimming performance. Notably, we have found that the effect of noise on the odd elasticity is negligibly small in a small-deformation regime.
### Non-reciprocity, irreversibility, and entropy production
To conclude the general theory of odd elastohydrodynamics, we now examine the irreversible stochastic dynamics driven by the odd elasticity from the point of view of thermodynamics. The non-zero probability current is due to the violation of the detailed balance, and this can be characterized by a non-zero (positive) entropy production rate in non-equilibrium statistical physics [66]. The averaged entropy production rate, \(\dot{e}_{p}\), is defined by using the non-equilibrium entropy \(S=-\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P_{\rm st}\log P_{\rm st}\,dq_{1}\,dq_{2}\), the average rate of heat \(\dot{Q}\), and the system temperature \(T\) as
\[\dot{e}_{p}:=\dot{S}-\frac{\dot{Q}}{T}, \tag{19}\]
which is, in general, non-negative by the second law of thermodynamics. For the Langevin system given by Eq. (13), the entropy production rate is given by [67]
\[\dot{e}_{p}=\langle v_{\alpha}D_{\alpha\beta}^{-1}v_{\beta}\rangle, \tag{20}\]
where \(\mathbf{v}\) is the probabilistic velocity for the steady state defined as \(\mathbf{v}=\mathbf{j}_{\rm st}/P_{\rm st}\); hence, in our problem, simply \(\mathbf{v}=rf_{\theta}(r)\mathbf{e}_{\theta}\) from Eq. (105). By directly calculating the diffusion tensor via Eq. (16), we obtain
\[\dot{e}_{p}=\frac{1}{D_{\theta}}\langle f_{\theta}^{2}\rangle, \tag{21}\]
which can be calculated, with \(f_{\theta}(r)=k^{\rm o}+k^{\rm no}r^{2}\), as
\[\dot{e}_{p}=\frac{1}{D_{\theta}}\left[(k^{\rm o})^{2}+2k^{\rm o}k^{\rm no}\frac{|k^{\rm e}|}{k^{\rm ne}}+(k^{\rm no})^{2}\frac{|k^{\rm e}|}{k^{\rm ne}}\left(\frac{|k^{\rm e}|}{k^{\rm ne}}+\frac{2D_{r}}{k^{\rm ne}}\right)\right] \tag{22}\]
for \(k^{\rm e}<0\) by using the steady-state distribution and Eqs. (16) and (17). When \(k^{\rm e}\geq 0\), we obtain \(\dot{e}_{p}=(k^{\rm o})^{2}/D_{\theta}\).
These results confirm that the average entropy production rate is generated only by the odd elasticity, which agrees with the physical interpretation of the odd elasticity as an internal non-conservative activity. The average work done by the elastic force is also calculated using
\[\dot{W}=-\bigg{\langle}\oint\hat{K}_{\alpha\beta}q_{\beta}\,dq_{\alpha}\bigg{\rangle} =\langle f_{\theta}^{2}\rangle. \tag{23}\]
We note that the even elasticity is represented by a potential force and does not contribute to the average work. Comparing Eqs. (21) and (23) shows that the second law of thermodynamics reads as \(\dot{e}_{p}=\dot{W}/k_{\rm B}T\), with the Boltzmann constant \(k_{\rm B}\). We thus have
\[D_{\theta}=k_{\rm B}T, \tag{24}\]
which is consistent with the generalized fluctuation-dissipation theorem for non-equilibrium systems [68].
Of note, the non-equilibrium thermodynamic relations discussed in this subsection are based on the dynamics in the shape space (14), or more precisely, the dynamics of Eqs. (5) and (6) with additional noise. The entropy production rate and the associated heat production for the full system should account for the non-reciprocal swimming motion in the physical space and the energy dissipation in the fluid.
Nonetheless, our analysis of the noisy limit cycle in shape space bridges the entropy production and work done by the odd elasticity in the nonlinear regime, being consistent with the physical interpretation that actuation inside the elastic material is described by odd elasticity.
## Appendix B Probability current with simple additive noise
In this appendix, to complement the results of the biologically relevant multiplicative noise posited in Section A.1, we discuss the probabilistic areal velocity for the additive Gaussian noise in Eq. (101). The form of the noise (101) is given by setting \(G_{\alpha\beta}=\sqrt{2D}\,\delta_{\alpha\beta}\).
The formal solution (104) then provides the steady probability distribution [69]
\[P_{\rm st}(r)=N_{\rm st}\exp\left[-\frac{k^{\rm e}}{2D}r^{2}-\frac{k^{\rm ne} }{4D}r^{4}\right], \tag{102}\]
where the normalization factor \(N_{\rm st}\) is obtained by a direct integral of \(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}P_{\rm st}\ dq_{1}dq_{2}=1\) as
\[N_{\rm st}=\frac{1}{\pi}\exp\left(-\frac{(k^{\rm e})^{2}}{4Dk^{\rm ne}}\right) \bigg{/}\sqrt{\frac{\pi D}{k^{\rm ne}}}\,{\rm erfc}\left(\frac{k^{\rm e}}{2 \sqrt{Dk^{\rm ne}}}\right), \tag{103}\]
where \({\rm erfc}(x)\) is the complementary error function.
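This normalization can be verified numerically (a quick quadrature check of our own; the parameter values are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# Check that N_st of Eq. (103) normalizes the planar density of Eq. (102):
# 2*pi * int_0^inf P_st(r) r dr = 1.
k_e, k_ne, D = -0.7, 1.3, 0.4

N_st = (np.exp(-k_e**2 / (4.0 * D * k_ne)) / np.pi
        / (np.sqrt(np.pi * D / k_ne) * erfc(k_e / (2.0 * np.sqrt(D * k_ne)))))

P_st = lambda r: N_st * np.exp(-k_e / (2.0 * D) * r**2 - k_ne / (4.0 * D) * r**4)
total, _ = quad(lambda r: 2.0 * np.pi * r * P_st(r), 0.0, np.inf)
print(total)   # ~ 1.0
```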
The exponential form in Eq. (102) represents the fourth-order potential associated with the nonlinear even elasticity, and the steady distribution function is unimodal when \(k^{\rm e}>0\), whereas it becomes crater-shaped with a degenerate maximum on a circle with radius \(r^{*}=(|k^{\rm e}|/k^{\rm ne})^{1/2}\) when \(k^{\rm e}<0\).
As in Eq. (103), we can calculate the probabilistic areal velocity via
\[J=\frac{k^{\rm o}}{2}\langle r^{2}\rangle+\frac{k^{\rm no}}{2}\langle r^{4} \rangle=:J_{\rm o}+J_{\rm no}, \tag{104}\]
which we separated into two terms for subsequent analyses.
After some calculations, we can express the linear odd elastic contribution, \(J_{\rm o}=(k^{\rm o}/2)\langle r^{2}\rangle\), as
\[J_{\rm o}=\frac{k^{\rm o}}{2}\Bigg{(}\frac{2\pi D}{k^{\rm ne}}N_{\rm st}- \frac{k^{\rm e}}{k^{\rm ne}}\Bigg{)}\,. \tag{105}\]
Note that \(N_{\rm st}\) is a non-negative function of \(D\). For example, when \(k^{\rm e}=0\), it becomes \(N_{\rm st}=\sqrt{k^{\rm ne}/[\pi^{3}D]}\). To understand the effects of noise, we need to examine the behavior of \(DN_{\rm st}\), and we found that this quantity monotonically increases with \(D\). Indeed, by the asymptotic expression for a small \(x=4Dk^{\rm ne}/(k^{\rm e})^{2}>0\),
\[\frac{\sqrt{x}e^{-1/x}}{{\rm erfc}(1/\sqrt{x})}=\sqrt{\pi}\left(1+\frac{x}{2 }-\frac{x^{2}}{2}\right)+O(x^{3}), \tag{106}\]
\[\frac{2\pi D}{k^{\rm ne}}N_{\rm st}-\frac{k^{\rm e}}{k^{\rm ne}}=\frac{k^{ \rm e}}{2k^{\rm ne}}\left(x-x^{2}\right)+O(x^{3}), \tag{107}\]
and we obtain an asymptotic behavior for small noise as
\[J_{\rm o}=\frac{k^{\rm o}}{k^{\rm e}}D+O(D^{2})\ {\rm when}\ k^{\rm e}>0. \tag{108}\]
When \(k^{\rm e}<0\), using the asymptotic behavior for small \(x>0\),
\[\frac{\sqrt{x}e^{-1/x}}{{\rm erfc}(-1/\sqrt{x})}=\frac{1}{2}e^{-1/x}\left( \sqrt{x}+O(x^{7/2})\right), \tag{109}\]
and we can obtain
\[J_{\rm o}\simeq\frac{k^{\rm o}}{2}\frac{|k^{\rm e}|}{k^{\rm ne}}\ {\rm when}\ k^{\rm e}<0 \tag{110}\]
with an exponentially small error.
We found that the average \(\langle r^{2}\rangle\) monotonically increases with \(D\), irrespective of the sign of \(k^{\rm e}\). The magnitude of the probabilistic areal velocity is therefore proportional to the odd elasticity \(k^{\rm o}\) and increases as the size of the noise increases, although the overall sign is determined by the sign of \(k^{\rm o}\). This noise dependence is different from that for the multiplicative noise case, although the noise effect is exponentially small for small noise.
By similar calculations, we can obtain the probabilistic areal velocity from the nonlinear odd elasticity, \(J_{\rm no}=(k^{\rm no}/2)\langle r^{4}\rangle\), as
\[J_{\rm no}=\frac{k^{\rm no}}{2}\left[\frac{2D}{k^{\rm ne}}-\left(\frac{k^{\rm e }}{k^{\rm ne}}\right)\left(\frac{2\pi D}{k^{\rm ne}}N_{\rm st}-\frac{k^{\rm e }}{k^{\rm ne}}\right)\right]. \tag{111}\]
By a similar asymptotic analysis, by using the expression (106), we obtain \(J_{\rm no}\) for small \(D\) when \(k^{\rm e}>0\) as
\[J_{\rm no}=\frac{2k^{\rm no}}{(k^{\rm e})^{2}}D^{2}+O(D^{3})\ {\rm when}\ k^{\rm e}>0, \tag{112}\]
which is positive for small \(D\) and monotonically increases (if \(k^{\rm no}>0\)) as diffusion is enhanced. When \(k^{\rm e}<0\), again using the asymptotic (109), the probabilistic areal velocity is obtained for small \(D\) as
\[J_{\rm no}\simeq\frac{k^{\rm no}}{2}\left[\frac{2D}{k^{\rm ne}}+\left(\frac{k ^{\rm e}}{k^{\rm ne}}\right)^{2}\right]\ {\rm when}\ k^{\rm e}<0, \tag{113}\]
with an exponentially small error.
|
2304.09736 | Khuri-Treiman analysis of $J/ψ\toπ^{+}π^{-}π^{0}$ | We study the decay $J/\psi\to\pi^{+}\pi^{-}\pi^{0}$ within the framework of
the Khuri-Treiman equations. We find that the BESIII experimental di-pion mass
distribution in the $\rho(770)$-region is well reproduced with a
once-subtracted $P$-wave amplitude. Furthermore, we show that $F$-wave
contributions to the amplitude improve the description of the data in the
$\pi\pi$ mass region around 1.5 GeV. We also present predictions for the
$J/\psi\to\pi^{0}\gamma^{*}$ transition form factor. | JPAC Collaboration, M. Albaladejo, S. Gonzàlez-Solís, Ł. Bibrzycki, C. Fernández-Ramírez, N. Hammoud, V. Mathieu, M. Mikhasenko, G. Montaña, R. J. Perry, A. Pilloni, A. Rodas, W. A. Smith, A. Szczepaniak, D. Winney | 2023-04-19T15:22:17Z | http://arxiv.org/abs/2304.09736v1 | # Khuri-Treiman analysis of \(J/\psi\rightarrow\pi^{+}\pi^{-}\pi^{0}\)
###### Abstract
Abstract: We study the decay \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) within the framework of the Khuri-Treiman equations. We find that the BESIII experimental di-pion mass distribution in the \(\rho(770)\)-region is well reproduced with a once-subtracted \(P\)-wave amplitude. Furthermore, we show that \(F\)-wave contributions to the amplitude improve the description of the data in the \(\pi\pi\) mass region around 1.5 GeV. We also present predictions for the \(J/\psi\to\pi^{0}\gamma^{*}\) transition form factor.
###### Contents
* 1 Introduction
* 2 Formalism
* 2.1 Decay amplitude and kinematics
* 2.2 Khuri-Treiman equations for \(J/\psi\to 3\pi\)
* 3 Results
* 3.1 \(P\)-wave contribution
* 3.2 Inclusion of the \(F\)-wave contribution
* 4 \(J/\psi\to\pi^{0}\gamma^{*}\) transition form factor
* 5 Summary
## 1 Introduction
Decays of the lowest-lying charmonium states provide an excellent environment to study light hadron spectroscopy, search for exotic mesons, test QCD and QCD-based models, as well as testing theoretical techniques in a region where both non-perturbative and perturbative QCD effects play a role.
In this work we analyze the decay \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\), to study the dynamics of the three-pion system at low and intermediate energies under rather clean conditions. Here, the final state invariant mass distribution can contain contributions from the \(P\)-wave (\(J^{PC}=1^{--}\)) and \(F\)-wave (\(J^{PC}=3^{--}\)) states of the \(\pi\pi\) subsystem. Previous experimental studies from BESII [1] and BABAR [2] showed that the \(P\)-wave \(\rho(770)\pi\) intermediate state dominates the process, but limited statistics prevented any detailed study of substructures in the \(3\pi\) system. While the dominance of the \(\rho(770)\) resonance can be clearly seen in the Dalitz plot distribution and projection measurements by the BESIII collaboration obtained with roughly 1.9 million \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) events [3], there are hints of contributions other than the \(\rho(770)\). For example, the absence of events in the center of the Dalitz plot indicates the contribution from additional states and/or partial waves which may interfere destructively with the \(\rho(770)\). Exactly the opposite situation is found for the partner reaction \(\psi(2S)\to\)
\(\pi^{+}\pi^{-}\pi^{0}\). There, the 7872 events from BESIII [3] show a completely different shape of the \(\pi\pi\) invariant mass distribution and the Dalitz plot -- the \(\rho\pi\) contribution is subleading and almost all events are found in the center of the Dalitz plot, with data indicating that the main contribution comes from a higher mass resonance, _i.e._ the \(\rho(2150)\) resonance with \(J^{PC}=1^{--}\). The different picture between the \(J/\psi\) and \(\psi(2S)\) decays into \(\pi^{+}\pi^{-}\pi^{0}\) and the lack of reasonable explanations within the quark model is known as the \(\rho\pi\) puzzle and still remains largely unresolved (see _e.g._ Refs. [4; 5; 6; 7; 8], and references therein). New high-statistics BESIII data on \(J/\psi\) decays will soon be available [9; 10], which could be used to greatly improve the theoretical uncertainties associated to vector charmonium decays. In particular, they might help clarify the \(\rho\pi\) puzzle, as well as provide access to high-precision \(\rho\)-\(\omega\) mixing effect analyses and motivate coupled channel studies with the decays \(J/\psi\to K^{+}K^{-}\pi^{0}\) and \(J/\psi\to K_{S}K^{\pm}\pi^{\mp}\).
The decay \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) has previously been studied within the context of the Veneziano model [11], and using aspects of unitarity and analyticity constraints [12; 13]. Here, we adapt the Khuri-Treiman (KT) framework [14], applied extensively in the isospin-violating decay \(\eta\to 3\pi\)[15; 16; 17; 18; 19; 20; 21] and in the decay of light vector isoscalar resonances \(\omega,\phi\to 3\pi\)[22; 23; 24], to the analysis of the vector charmonium decay \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\). We show that one subtraction in the KT equations satisfactorily describes the BESIII experimental di-pion mass distribution at the peak of the \(\rho(770)\). In addition, we find that \(F\)-wave effects are needed to describe the intermediate energy region around 1.5 GeV. We also apply our analysis techniques to predict the \(J/\psi\to\pi^{0}\gamma^{*}\) transition form factor. Our study lays the groundwork for a detailed analysis of \(J/\psi\) decays using the large data sample currently being collected at BESIII.
This paper is organized as follows. In Section 2 we review the KT formalism for the \(J/\psi\to 3\pi\) decay. In Section 3 we apply the formalism to the BESIII data and discuss the results. In Section 4, we present predictions for the \(J/\psi\to\pi^{0}\gamma^{*}\) transition form factor, and we summarize our findings in Section 5.
## 2 Formalism
### Decay amplitude and kinematics
The amplitude for the decay \(J/\psi(p_{V})\to\pi^{0}(p_{0})\)\(\pi^{+}(p_{+})\)\(\pi^{-}(p_{-})\) can be expressed in terms of a kinematic prefactor and a single invariant scalar function \(F(s,t,u)\) containing the dynamical information,
\[\mathcal{M}(s,t,u)=i\,\epsilon_{\mu\nu\alpha\beta}\;\epsilon^{\mu}(p_{V})\,p_{ +}^{\nu}\,p_{-}^{\alpha}\,p_{0}^{\beta}\;F(s,t,u)\,, \tag{1}\]
where \(\epsilon_{\mu\nu\alpha\beta}\) is the Levi-Civita tensor and \(\epsilon^{\mu}(p_{V})\) is the polarization vector of the \(J/\psi\) meson. The particle momenta are related to the Mandelstam variables through:
\[s=(p_{+}+p_{-})^{2}\,,\quad t=(p_{0}+p_{+})^{2}\,,\quad u=(p_{0}+p_{-})^{2}\,, \tag{2}\]
with \(s+t+u=m_{J/\psi}^{2}+3m_{\pi}^{2}\). In this paper, we work in the isospin limit with \(m_{\pi}\doteq m_{\pi^{\pm}}=m_{\pi^{0}}\) and \(m_{\pi}=(2m_{\pi^{\pm}}+m_{\pi^{0}})/3\). The scattering angle in the \(s\)-channel, defined by the
center of mass of the \(\pi^{+}\pi^{-}\) pair, is denoted by \(\theta_{s}\) and is given by:
\[\cos\theta_{s}(s,t,u)=\frac{t-u}{4\,p(s)\,q(s)}\,,\quad\sin\theta_{s}(s,t,u)= \frac{\sqrt{\phi(s,t,u)}}{2\sqrt{s}\,p(s)\,q(s)}\,, \tag{3}\]
where the momenta \(p(s)\) and \(q(s)\),
\[p(s)=\frac{\lambda^{\frac{1}{2}}(s,m_{\pi}^{2},m_{\pi}^{2})}{2\sqrt{s}}\,, \quad q(s)=\frac{\lambda^{\frac{1}{2}}(s,m_{J/\psi}^{2},m_{\pi}^{2})}{2\sqrt{s }}\,, \tag{4}\]
are, respectively, the momenta of the \(\pi^{\pm}\) and \(\pi^{0}\) in the \(s\)-channel. \(\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2ab-2bc-2ca\) is the Kallen, or triangle, function [25]. The zeroes of the well-known Kibble function [26],
\[\phi(s,t,u)=\left(2\sqrt{s}\ \sin\theta_{s}\ p(s)\,q(s)\right)^{2}=s\,t\,u-m_{ \pi}^{2}(m_{J/\psi}^{2}-m_{\pi}^{2})^{2}\, \tag{5}\]
define the boundaries of the physical regions of the process. The Dalitz-plot boundaries in \(t\) for a given value of \(s\) for \(J/\psi\to 3\pi\) lie within the interval \([t_{\rm min}(s),\ t_{\rm max}(s)]\), with
\[t_{\rm max,min}(s)=\frac{m_{J/\psi}^{2}+3m_{\pi}^{2}-s}{2}\pm 2\,p(s)\,q(s)\, \tag{6}\]
while the allowed range for \(s\) is given by \(s_{\rm min}=4m_{\pi}^{2}\) to \(s_{\rm max}=(m_{J/\psi}-m_{\pi})^{2}\).
Finally, the measured differential decay width can be written in terms of the invariant amplitude \(F(s,t,u)\) as
\[\frac{d^{2}\Gamma}{ds\,dt}=\frac{1}{(2\pi)^{3}}\,\frac{1}{32\,m_{J/\psi}^{3}} \,\frac{1}{3}\frac{\phi(s,t,u)}{4}\ |F(s,t,u)|^{2}\,. \tag{7}\]
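To make the kinematics of Eqs. (2)-(7) concrete, the following Python sketch implements \(p(s)\), \(q(s)\), the Dalitz boundaries of Eq. (6), and checks that the Kibble function of Eq. (5) vanishes on the boundary. The mass values are approximate PDG numbers assumed here for illustration.

```python
# Sketch of the J/psi -> 3pi kinematics of Eqs. (2)-(7).
# Approximate PDG masses (GeV) are assumed; the isospin-averaged pion mass follows the text.
import numpy as np

M_JPSI = 3.0969
M_PI = (2 * 0.13957 + 0.13498) / 3.0   # isospin-limit pion mass

def kallen(a, b, c):
    return a**2 + b**2 + c**2 - 2*a*b - 2*b*c - 2*c*a

def p(s):   # pi+- momentum in the dipion rest frame, Eq. (4)
    return np.sqrt(kallen(s, M_PI**2, M_PI**2)) / (2 * np.sqrt(s))

def q(s):   # pi0 momentum in the dipion rest frame, Eq. (4)
    return np.sqrt(kallen(s, M_JPSI**2, M_PI**2)) / (2 * np.sqrt(s))

def t_bounds(s):  # Dalitz-plot boundaries, Eq. (6)
    mid = 0.5 * (M_JPSI**2 + 3 * M_PI**2 - s)
    return mid - 2 * p(s) * q(s), mid + 2 * p(s) * q(s)

def kibble(s, t):  # Eq. (5); u is fixed by the Mandelstam sum rule
    u = M_JPSI**2 + 3 * M_PI**2 - s - t
    return s * t * u - M_PI**2 * (M_JPSI**2 - M_PI**2)**2

s = 1.0  # GeV^2, a point inside the decay region
t_min, t_max = t_bounds(s)
print(t_min, t_max)
print(kibble(s, t_max))                      # ~0: the boundary is a zero of the Kibble function
print(kibble(s, 0.5 * (t_min + t_max)) > 0)  # interior of the physical region
```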
### Khuri-Treiman equations for \(J/\psi\to 3\pi\)
The KT formalism for the \(J/\psi\to 3\pi\) amplitude \(F(s,t,u)\) is formally identical to the well-established one for the \(\omega\to 3\pi\) decay amplitude [22; 23; 24; 27], and has been discussed in Ref. [28] (see also Ref. [29]). As shown in these references, the \(s\)-channel partial-wave expansion for \(F(s,t,u)\) is given by
\[F(s,t,u)=\sum_{J\,\rm odd}^{\infty}(p(s)\,q(s))^{J-1}\ P_{J}^{\prime}(z_{s}) \;f_{J}(s)\,, \tag{8}\]
where \(z_{s}=\cos\theta_{s}\) and \(P_{J}^{\prime}(z_{s})\) is the derivative of the Legendre polynomial. The KT representation of the scalar function \(F(s,t,u)\) in Eq. (8) may be obtained by replacing the infinite sum of partial waves in the \(s\)-channel with the sum of three so-called isobar amplitudes, one for each of the \(s\)-, \(t\)- and \(u\)-channels. By truncating the partial wave expansion of each isobar amplitude at \(J_{\rm max}=1\) we obtain the following crossing-symmetric isobar decomposition [22; 23; 30]:
\[F(s,t,u)=F_{1}(s)+F_{1}(t)+F_{1}(u)\,, \tag{9}\]
where each isobar amplitude, \(F_{1}(x)\), has only a right-hand or unitary cut in its respective Mandelstam variable. The relation between \(F_{1}(s)\) and \(f_{1}(s)\) is obtained by projecting Eq. (9) onto the \(s\)-channel partial wave,
\[f_{1}(s) =F_{1}(s)+\hat{F}_{1}(s)\,, \tag{10}\] \[\hat{F}_{1}(s) \equiv 3\int_{-1}^{1}\frac{dz_{s}}{2}\;(1-z_{s}^{2})\;F_{1}(t(s,z_{s }))\,, \tag{11}\]
where the inhomogeneity \(\hat{F}_{1}(s)\) contains the \(s\)-channel projection of the left-hand cut contributions due to the \(t\)- and \(u\)-channels, and its evaluation in the decay region requires a proper analytical continuation [31]. Assuming elastic unitarity with only two-pion intermediate states, we arrive at the KT equation for the \(J/\psi\to 3\pi\) decay, _i.e._ the unitarity relation for the isobar amplitude \(F_{1}(s)\):
\[\text{disc}\,F_{1}(s)=2i\left(F_{1}(s)+\hat{F}_{1}(s)\right)\;\sin\delta_{1}(s )\;e^{-i\delta_{1}(s)}\;\theta(s-4m_{\pi}^{2})\,, \tag{12}\]
where \(\delta_{1}(s)\) is the \(P\)-wave \(\pi\pi\) phase shift, which is real.
Given the discontinuity relation in Eq. (12), one can write an unsubtracted dispersion relation for \(F_{1}(s)\) as
\[F_{1}(s)=\frac{1}{2\pi i}\int_{4m_{\pi}^{2}}^{\infty}ds^{\prime}\ \frac{\text{disc}\,F_{1}(s^{\prime})}{s^{\prime}-s}\,, \tag{13}\]
the solution of which can be written as:
\[F_{1}(s)=\Omega_{1}(s)\left(a+\frac{s}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ ds^{\prime}}{s^{\prime}}\frac{\sin\delta_{1}(s^{\prime})\,\hat{F}_{1}(s^{\prime})}{ \left|\Omega_{1}(s^{\prime})\right|(s^{\prime}-s)}\right)\,, \tag{14}\]
where \(\Omega_{1}(s)\) is the usual Omnes function [32],
\[\Omega_{1}(s)=\exp\left[\frac{s}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{ \prime}}{s^{\prime}}\frac{\delta_{1}(s^{\prime})}{s^{\prime}-s}\right]\,. \tag{15}\]
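As an illustrative aside, the Omnès integral of Eq. (15) can be evaluated numerically in a few lines; the phase shift used below is a crude \(\rho(770)\)-like stand-in (an assumption made here), not the parametrizations of Ref. [35] employed in the analysis, and the evaluation is restricted to \(s\) below the two-pion threshold where no principal value is needed.

```python
# Sketch: numerical evaluation of the Omnes function of Eq. (15) for s below the
# two-pion threshold (no principal value needed there). The phase shift used here is a
# crude rho(770)-like stand-in, NOT the parametrization of Ref. [35] used in the paper.
import numpy as np
from scipy.integrate import quad

M_PI = 0.13804               # GeV, isospin-averaged pion mass (assumption)
S_TH = 4 * M_PI**2           # two-pion threshold
M_RHO, G_RHO = 0.775, 0.149  # toy resonance parameters (GeV)

def delta1(sp):
    # toy P-wave phase: rises through pi/2 at the rho mass and tends to pi
    return np.pi / 2 + np.arctan((sp - M_RHO**2) / (M_RHO * G_RHO))

def omnes(s):
    # valid for real s < S_TH (regular integrand); physical s needs a principal value
    integrand = lambda sp: delta1(sp) / (sp * (sp - s))
    integral, _ = quad(integrand, S_TH, np.inf, limit=200)
    return np.exp(s / np.pi * integral)

print(omnes(0.0))     # = 1 by construction
print(omnes(-1.0))    # suppression for spacelike s
print(omnes(0.05))    # mild enhancement approaching threshold from below
```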
The subtraction constant \(a\) in Eq. (14) is the only free parameter in the model. It is in general complex, \(a=\left|a\right|e^{i\phi_{a}}\). While its modulus \(\left|a\right|\) can be fixed from the experimental \(J/\psi\to 3\pi\) decay width, no observable of the decay is sensitive to the overall phase \(\phi_{a}\), so we can set \(\phi_{a}=0\). Since it determines the overall normalization of the amplitude, the constant \(a\) can be factored out.
We note that due to the asymptotic behavior of \(F_{1}(s)\) in Eq. (14), the amplitude \(F(s,t,u)\) satisfies the Froissart-Martin bound [33; 22; 34]. Also note that, even though \(F_{1}(s)/\Omega_{1}(s)\) in Eq. (14) looks like a once-subtracted dispersion relation, \(F_{1}(s)\) actually satisfies the unsubtracted dispersion relation given in Eq. (13). Therefore, the energy dependence of \(F_{1}(s)\) is a pure prediction given solely by the phase shift \(\delta_{1}(s)\). Here, we take \(\delta_{1}(s)\) from the phase shift parametrizations of Ref. [35] that are valid roughly up to \(\sqrt{s}=2\) GeV. These phase shifts contain information about inelastic channels, but given that the inelasticity is found to be rather small until about 1.4 GeV we refrain to consider them.
Therefore, the phase shifts that we employ have the physics of the \(\rho(770)\) and also the effects of the higher \(\rho(1450)\) and \(\rho(1700)\) resonances. For our analysis, beyond \(\sqrt{s}=\Lambda\equiv 1.85\) GeV we smoothly guide \(\delta_{1}(s)\) to \(\pi\) through [27; 36]
\[\delta_{\infty}(s)\equiv\lim_{s\to\infty}\delta_{1}(s)=\pi-\frac{\alpha}{\beta +\left(s/\Lambda^{2}\right)^{3/2}}\,, \tag{16}\]
where \(\alpha\) and \(\beta\) are parameters introduced so that the phase \(\delta_{1}(s)\) and its first derivative \(\delta^{\prime}(s)\) are continuous at \(s=\Lambda^{2}\). Their explicit expressions read
\[\alpha=\frac{3\left(\pi-\delta_{1}(\Lambda^{2})\right)^{2}}{2\Lambda^{2} \delta_{1}^{\prime}(\Lambda^{2})}\,,\quad\beta=-1+\frac{3\left(\pi-\delta_{1} (\Lambda^{2})\right)}{2\Lambda^{2}\delta_{1}^{\prime}(\Lambda^{2})}\,. \tag{17}\]
This ensures the expected asymptotic \(1/s\) behavior of \(\Omega_{1}(s)\). The three phase shifts \(\delta_{1}(s)\) from Ref. [35] that we use as an input are shown in Fig. 1 up to 2.5 GeV. The different solutions come from using different \(\pi\pi\) scattering data sources. As seen, the behavior of the phase shift solution I suggests a large interference between the \(\rho^{\prime}\) and \(\rho^{\prime\prime}\), with a sizable change in the phase in the region between 1.5 and 1.8 GeV, while solutions II and III looks smoother in this region. For our analysis, we use solution I as our central input for the phase and solutions II and III to quantify the systematic uncertainties in our calculations.
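The matching conditions encoded in Eq. (17) can be verified directly; in the sketch below the values assumed for \(\delta_{1}(\Lambda^{2})\) and \(\delta_{1}^{\prime}(\Lambda^{2})\) are arbitrary placeholders, and the check confirms that the guided phase of Eq. (16) is continuous with a continuous first derivative at \(s=\Lambda^{2}\).

```python
# Sketch: verify that alpha and beta of Eq. (17) make the asymptotic phase of Eq. (16)
# match delta_1 and its first derivative at s = Lambda^2. The numerical values of
# delta_1(Lambda^2) and delta_1'(Lambda^2) below are illustrative placeholders only.
import numpy as np

LAMBDA2 = 1.85**2          # GeV^2
delta_L = 2.6              # assumed delta_1(Lambda^2), radians
ddelta_L = 0.4             # assumed delta_1'(Lambda^2), radians/GeV^2

alpha = 3 * (np.pi - delta_L)**2 / (2 * LAMBDA2 * ddelta_L)
beta = -1 + 3 * (np.pi - delta_L) / (2 * LAMBDA2 * ddelta_L)

def delta_inf(s):  # Eq. (16)
    return np.pi - alpha / (beta + (s / LAMBDA2)**1.5)

# value and (numerical) derivative at the matching point
eps = 1e-6
print(delta_inf(LAMBDA2), delta_L)                                                   # equal
print((delta_inf(LAMBDA2 + eps) - delta_inf(LAMBDA2 - eps)) / (2 * eps), ddelta_L)   # equal
```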
We solve Eq. (14) following a numerical iterative procedure similar to Refs. [20; 21; 22; 16; 22; 23; 17; 24; 25; 26; 27; 28; 29; 38]. We use \(F_{1}(s)=\Omega_{1}(s)\) as an efficient initial input to calculate \(\hat{F}_{1}(s)\) from Eq. (11), which subsequently is inserted as an input in Eq. (14) for the computation of an updated \(F_{1}(s)\). This cyclic calculation is repeated until the solution converges. In Fig. 2, we show the solutions for \(F_{1}(s)\) (normalized to \(a=1\)) after each iteration step along with the initial input (dashed blue line). As can be seen, convergence is achieved after three iterations. The
Figure 1: Solutions I, II and III for the \(P\)-wave phase shift \(\delta_{1}(s)\) from Ref. [35] valid roughly up to \(\sqrt{s}=2\) GeV. The solution of Ref. [37] (dotted red line) is valid only up to about \(\sqrt{s}=1.3\) GeV, and is shown for completeness.
difference between the final solution (solid black) and the starting point, _i.e._\(F_{1}(s)=\Omega_{1}(s)\) (dashed blue), is rather small, hinting at moderate crossed-channel effects.
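For orientation only, the iterative strategy described above can be summarized by the following purely structural Python skeleton. All numerical ingredients (the Omnès function, the dispersive integral, and in particular the angular average of Eq. (11) with its analytic continuation) are replaced by crude placeholders; the sketch only illustrates the fixed-point loop and its convergence monitoring, not the actual computation.

```python
# Purely structural sketch of the iterative solution of Eq. (14): start from
# F1 = Omega1, recompute the inhomogeneity, update F1, repeat until convergence.
# The helper steps below are placeholders -- the real calculation needs the Omnes
# function, a principal-value dispersive integral, and the analytic continuation of
# the angular average discussed around Eq. (11); none of that is implemented here.
import numpy as np

s_grid = np.linspace(0.1, 9.0, 200)          # GeV^2, illustrative decay-region grid
omega = 1.0 / (1.0 + s_grid)                 # placeholder for Omega_1(s)

def inhomogeneity(F):
    # placeholder for F_hat_1: the t/u-channel projection of Eq. (11)
    return 0.3 * np.full_like(F, np.mean(F))

def dispersive_update(F_hat, a=1.0):
    # placeholder for the dispersively integrated correction in Eq. (14)
    return omega * (a + 0.2 * F_hat)

F = omega.copy()                             # initial input F_1 = Omega_1
for it in range(10):
    F_new = dispersive_update(inhomogeneity(F))
    change = np.max(np.abs(F_new - F))
    print(f"iteration {it + 1}: max change = {change:.3e}")
    if change < 1e-8:
        break
    F = F_new
```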
Note that when the crossed-channel rescattering effects are removed from the isobar \(F_{1}(s)\), _i.e._ when \(\hat{F}_{1}(s)=0\) in Eq. (14), \(F_{1}(s)\) is simply the pure Omnes function multiplied by a constant,
\[F_{1}(s)=a^{\prime}\Omega_{1}(s)\,, \tag{18}\]
which implies the following isobar decomposition of the full amplitude (_cf._ Eq. (9)):
\[F(s,t,u)=a^{\prime}\left(\Omega_{1}(s)+\Omega_{1}(t)+\Omega_{1}(u)\right)\,. \tag{19}\]
In this case, a new normalization constant \(a^{\prime}\) has to be chosen to reproduce the \(J/\psi\to 3\pi\) decay width. Also note that Eq. (14) can be written in the form
\[F_{1}(s)=\Omega_{1}(s)\left(a+b^{\prime}\,s+\frac{s^{2}}{\pi}\int_{4m_{\pi}^{2 }}^{\infty}\frac{ds^{\prime}}{(s^{\prime})^{2}}\frac{\sin\delta_{1}(s^{\prime })\,\hat{F}_{1}(s^{\prime})}{|\Omega_{1}(s^{\prime})|\,(s^{\prime}-s)}\right)\,, \tag{20}\]
where \(b^{\prime}\) satisfies the following sum rule [22]:
\[b\equiv b^{\prime}/a=\frac{1}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{ \prime}}{(s^{\prime})^{2}}\frac{\sin\delta_{1}(s^{\prime})\,\hat{F}_{1}(s^{ \prime})/a}{|\Omega_{1}(s^{\prime})|}\,. \tag{21}\]
The subtraction constant, \(b\), is complex due to the presence of the three-particle cut in the physical region of the decay amplitude. This value is found to be:
\[b_{\rm sum}\simeq\,0.141\,e^{2.321\,i}\ {\rm GeV}^{-2}\,\,. \tag{22}\]
Figure 2: Convergence behavior of the iterative procedure for the real (left plot) and imaginary (right plot) parts of the amplitude \(F_{1}(s)\) given in Eq. (14) using solution I of the phase shift \(\delta_{1}(s)\) as input. The vertical line denotes the two-pion threshold.
Had we used solution II or III of the phase shift \(\delta_{1}(s)\) (_cf._ Fig. 1), we would have obtained \(b_{\rm sum}\simeq\,0.129\,e^{2.640\,i}\) GeV\({}^{-2}\) and \(b_{\rm sum}\simeq\,0.124\,e^{2.811\,i}\) GeV\({}^{-2}\), respectively.
Performing one subtraction on Eq. (13) leads to the solution [20; 22; 30]:
\[F_{1}(s)=a\left[F_{a}(s)+b\,F_{b}(s)\right]\,, \tag{23a}\]
where now \(b\) is not constrained to satisfy Eq. (21), and the functions \(F_{a}(s)\) and \(F_{b}(s)\) are given by
\[F_{a}(s)=\Omega_{1}(s)\left[1+\frac{s^{2}}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{\prime}}{s^{\prime 2}}\frac{\sin\delta_{1}(s^{\prime})\,\hat{F}_{a}(s^{\prime})}{|\Omega_{1}(s^{\prime})|(s^{\prime}-s)}\right]\,, \tag{23b}\]
\[F_{b}(s)=\Omega_{1}(s)\left[s+\frac{s^{2}}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{\prime}}{s^{\prime 2}}\frac{\sin\delta_{1}(s^{\prime})\,\hat{F}_{b}(s^{\prime})}{|\Omega_{1}(s^{\prime})|(s^{\prime}-s)}\right]\,. \tag{23c}\]
These functions only need to be calculated once since they are independent of the numerical values of \(a\) and \(b\) and, as we will discuss in Sec. 3, \(a\) and \(b\) will become fit parameters. In Fig. 3, we show the solutions for \(F_{a}(s)\) and \(F_{b}(s)\) using a numerical iterative procedure similar to the one described previously. In this case, nine iterations are needed to obtain convergent solutions. Strictly speaking, the amplitude \(F(s,t,u)\) built from \(F_{1}(s)\) in Eq. (23a) does not satisfy the asymptotic Froissart-Martin bound for an arbitrary value of the parameter \(b\neq b_{\rm sum}\) [_cf._ Eq. (22)]. The main advantage of introducing one subtraction is that, due to the additional \(1/s^{\prime}\) factor introduced, we reduce the importance of the high energy region of the dispersion integrals where the phase shift is not well known. By letting the subtraction constant \(b\) be a free parameter, we can partially absorb our ignorance of the higher energy part of the integral. This allows us to parametrize some unknown energy dependence of the \(J/\psi\to 3\pi\) interaction not directly related to \(\pi\pi\) rescattering. As we will show in the following section, the once-subtracted parametrization provides a good representation of the data from BESIII in the \(\rho(770)\) resonance region.
## 3 Results
### \(P\)-wave contribution
We now compare our KT amplitudes defined in the previous section to the experimental data from the BESIII collaboration [3]. Given that the Dalitz plot distribution is not publicly available, we are only able to analyze the di-pion mass projection of Eq. (7), computed on the \(\sqrt{s}\equiv m_{\pi\pi}\) invariant mass, shown in Fig. 2 of Ref. [3]. A Poisson distribution is assumed to assign uncertainty for every bin. High statistics of the data sample make it challenging to achieve an accurate description of the data with reasonably simple models. Nevertheless, we will be able to obtain a qualitative description of the data in the whole energy range. We start by using the unsubtracted KT amplitude Eq. (14). The single free parameter \(a\) only affects the overall normalization of the amplitude and can be fixed from the \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) decay width. Using the PDG values \(\Gamma_{J/\psi}=92.6\) keV and \({\rm BR}(J/\psi\to\pi^{+}\pi^{-}\pi^{0})=2.10(8)\%\)[39] one finds \(|a|\simeq 0.051\) GeV\({}^{-3}\). In Fig. 4, we compare our prediction to the \(m_{\pi\pi}\) distribution by BESIII with proper normalization [_cf._ Eq. (11)].
In the figure, we also show the result obtained when the crossed-channel rescattering is neglected [_cf._ Eq. (19)], in which case the global normalization is found to be \(|a^{\prime}|\simeq 0.046\) GeV\({}^{-3}\). As can be observed, the result of the latter solution (dotted brown line) lies below that of the unsubtracted KT \(F_{1}(s)\) solution at the peak of the \(\rho\)-meson, and neither reproduce the experimental data in this region. In addition, both appear to fail at describing the intermediate energy region. In order to achieve a better description of the data, we next use the more flexible, once-subtracted amplitude Eqs. (23b) and (23c), with the
Figure 3: Convergence behavior of the iterative procedure for the real (left plots) and imaginary (right plots) parts of the amplitudes \(F_{a}(s)\) (Eq. (23b), upper plots) and \(F_{b}(s)\) (Eq. (23c), lower plots) using solution I of the phase shift as \(\delta_{1}(s)\) input. The vertical line denotes the two-pion threshold.
additional subtraction constant \(b\) fitted to BESIII data. For our analysis, we define
\[\chi^{2}_{\rm data}=\sum_{i}\left(\frac{N_{{\rm ev},i}-{\cal N}\,d\Gamma^{\rm th}_{i}/dm_{\pi\pi}}{\sigma_{N_{{\rm ev},i}}}\right)^{2}\,, \tag{3.1}\]
where \(N_{{\rm ev},i}\) and \(\sigma_{N_{{\rm ev},i}}\) are, respectively, the experimental number of events and the corresponding error in the \(i\)-th bin, and \(d\Gamma^{\rm th}_{i}/dm_{\pi\pi}\) is the theoretical expression for the decay distribution [_cf._ Eq. (7)]. For \(\sigma_{N_{{\rm ev},i}}\) we take \(\sqrt{N_{{\rm ev},i}}\). The constant \({\cal N}\) is at this stage an arbitrary normalization. Since we are not determining the branching ratio, we reabsorb the global normalization of the amplitude \(a\) into \({\cal N}\) and fix this overall constant alone from the fit to the BESIII data. The sum in Eq. (3.1) runs over the 80 data points and we take into account an efficiency of about 0.3 for the number of events and the errors in our fits [3].
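Schematically, the objective of Eq. (3.1) can be coded as follows; the event numbers and model shape used in the usage example are made-up placeholders, not BESIII data, and the theoretical distribution per bin would in practice come from integrating Eq. (7) over \(t\).

```python
# Sketch of the chi-square objective of Eq. (3.1) for binned event data.
import numpy as np

def chi2_data(n_events, dgamma_dm, norm):
    """n_events: observed events per bin; dgamma_dm: model d(Gamma)/d(m_pipi) per bin;
    norm: overall normalization N absorbing the amplitude normalization a."""
    sigma = np.sqrt(n_events)                       # Poisson uncertainty per bin
    residuals = (n_events - norm * dgamma_dm) / sigma
    return np.sum(residuals**2)

# toy usage with made-up numbers (not BESIII data):
n_events = np.array([120., 340., 980., 640., 210.])
model = np.array([0.9, 2.6, 7.8, 5.1, 1.6])         # arbitrary model shape
best_norm = np.sum(model) / np.sum(model**2 / n_events)  # analytic minimizer over the normalization
print(best_norm, chi2_data(n_events, model, best_norm))
```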
The \(\chi^{2}_{\rm data}\) minimization yields
\[|b|=0.198(35)\ {\rm GeV}^{-2}\,,\quad\phi_{b}=2.675(300)\,, \tag{3.2}\]
which implies \(|a|=0.0565(22)\ {\rm GeV}^{-3}\) for the normalization of the amplitude upon using the \({\rm BR}(J/\psi\to\pi^{+}\pi^{-}\pi^{0})\) from the PDG. The statistical error is negligible and the quoted error is the theoretical systematic uncertainty attached to our calculations. This is obtained from the absolute value of the difference between the fits performed with solutions I (central
Figure 4: BESIII (red circles) [3] measurement of the \(m_{\pi\pi}\) invariant mass distribution for the decay \(J/\psi\to 3\pi\) as compared to our prediction without crossed-channel effects (dotted brown line), with the unsubtracted KT amplitude (dashed green line) and our fit in Eq. (11) including one subtraction (black solid line). The gray band accounts for the systematic uncertainties attached to our calculations. See main text for details.
solution) and III of the phase shift \(\delta_{1}^{1}(s)\) (_cf._ Fig. 1), which gives the largest variation. We observe that the systematic errors attached are sizable, of about 18% and 11% for \(|b|\) and \(\phi_{b}\), respectively. We also note that this value stays close to its sum-rule prediction given in Eq. (22). Therefore, we conclude that the pion-pion \(P\)-wave phase shift saturates the sum rule for the \(J/\psi\to 3\pi\) partial wave to about 75%. This result is to be compared to similar sum rules for \(\omega\to 3\pi\) in Ref. [38], where the fitted value of \(b\) was found to be quite different than its sum-rule \(b_{\text{sum}}\), and for \(\phi\to 3\pi\) in Ref. [22], where it was observed that the difference between the fitted \(b\) and \(b_{\text{sum}}\) was small. The result of the fit is shown in Fig. 4 with the normalization of the events distribution resulting from the fits, \(\mathcal{N}=7.64(33)\times 10^{8}\) in units of \((2.4\,\text{MeV})^{-1}\). The gray error band in the figure accounts for the systematic uncertainties associated to our fits and is defined as the (symmetrized) difference between the fit results obtained with solution I of the phase shift with respect to the ones from solution III, which give the largest difference. It can be seen that this fit provides a satisfactory description of experimental data up to \(m_{\pi\pi}\sim 1\) GeV (the elastic region). However, we obtain high values of the \(\chi^{2}/\text{dof}\) of about 200 but this problem is not critical. We shall come back to discuss this point below. Here we stress that the once-subtracted KT amplitude is able to reproduce the \(\rho(770)\) function shape and note that contributions of partial waves other than the elastic \(P\)-wave, which is the main one, seem to be required to describe the intermediate energy region around \(m_{\pi\pi}\sim 1.5\) GeV. The next allowed partial wave is the \(F\)-wave, which we will include in the following subsection. As we will see, the inclusion of an explicit \(F\)-wave improves the quality of the fit.
In Fig. 5, we show the Dalitz plot distribution resulting from our fit, which exhibits unambiguous contributions from \(\rho(770)\) resonances which appear as bands along the Dalitz plot boundaries, with almost no events in the center of the Dalitz plot. The visual comparison with the corresponding BESIII Dalitz-plot data shows a good agreement (see Fig. 2 in Ref. [3]).
### Inclusion of the \(F\)-wave contribution
The isobar decomposition of the amplitude including \(F\)-waves follows from Eq. (8) and reads [22; 38]:
\[\begin{split} F(s,t,u)&=F_{1}(s)+F_{1}(t)+F_{1}(u)\\ &+(p(s)q(s))^{2}P_{3}^{\prime}(z_{s})F_{3}(s)+(p(t)q(t))^{2}P_{3}^{\prime}(z_{t})F_{3}(t)+(p(u)q(u))^{2}P_{3}^{\prime}(z_{u})F_{3}(u)\,,\end{split} \tag{3.3}\]
where \(F_{1}(s)\) is the \(P\)-wave isobar [_cf._ Eq. (23a)], \(F_{3}(s)\) is the \(F\)-wave isobar amplitude, which as \(F_{1}(s)\) only has a right-hand cut, and:
\[z_{t}=\frac{s-u}{4p(t)q(t)}\,,\qquad z_{u}=\frac{s-t}{4p(u)q(u)}\,. \tag{3.4}\]
The discontinuity of the \(F\)-wave is expressed by:
\[\text{disc}\,F_{3}(s)=2i\left(F_{3}(s)+\hat{F}_{3}(s)\right)\,\sin\delta_{3}(s)\;e^{-i\delta_{3}(s)}\;\theta(s-4m_{\pi}^{2})\,, \tag{3.5}\]
where \(\delta_{3}(s)\) and \(\hat{F}_{3}(s)\) are the \(F\)-wave phase shift and inhomogeneity, respectively. Here, we will simplify Eq. (3.5) by neglecting \(\hat{F}_{3}(s)\), as done for instance in Ref. [20]. The solution is then given by:
\[F_{3}(s)=p_{3}(s)\Omega_{3}(s)\,, \tag{3.6}\]
where \(\Omega_{3}(s)\) is the \(F\)-wave Omnes function (_cf._ Eq. (2.15))
\[\Omega_{3}(s)=\exp\left[\frac{s}{\pi}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{ \prime}}{s^{\prime}}\frac{\delta_{3}(s^{\prime})}{s^{\prime}-s}\right]\,. \tag{3.7}\]
In order to obtain the required input phase \(\delta_{3}(s)\), we model the \(F\)-wave contribution by a \(\rho_{3}(1690)\) resonance (\(J^{PC}=3^{--}\)). While the dominant decay mode of the \(\rho_{3}(1690)\) is to \(4\pi\), we only consider here its decay to \(\pi\pi\) and neglect inelastic channels effects. We use the following Breit-Wigner representation for \(F_{3}(s)\):
\[F_{3}(s)|_{\rm BW}=\frac{m_{\rho_{3}}^{2}}{m_{\rho_{3}}^{2}-s-im_{\rho_{3}} \Gamma_{\rho_{3}}^{\ell=3}(s)}\,, \tag{3.8}\]
with the energy-dependent width given by
\[\Gamma_{R}^{\ell}(s) = \frac{\Gamma_{R}m_{R}}{\sqrt{s}}\left(\frac{p(s)}{p(m_{R}^{2})} \right)^{2\ell+1}\left(F_{R}^{\ell}(s)\right)^{2}\,. \tag{3.9}\]
The \(F_{R}^{\ell}(s)\) denotes the Blatt-Weisskopf factor that limits the growth of the isobar [40]. For \(\ell=3\) it is given by:
\[F_{R}^{\ell=3}(s) = \sqrt{\frac{z_{0}(z_{0}-15)^{2}+9(2z_{0}-5)^{2}}{z(z-15)^{2}+9(2 z-5)^{2}}}\,,\quad z=r_{R}^{2}p^{2}(s)\,,\quad z_{0}=r_{R}^{2}p^{2}(m_{\rho_{3}}^{2})\,, \tag{3.10}\]
Figure 5: Dalitz plot distribution \(d^{2}\Gamma/ds\,dt\) (in arbitrary units) resulting from our fit in Eq. (3.2).
with the hadronic scale \(r_{R}=2\,\text{GeV}^{-1}\). The phase can then be computed from the relation
\[\tan\delta_{3}(s)=\frac{\text{Im}F_{3}(s)|_{\text{BW}}}{\text{Re}F_{3}(s)|_{\text {BW}}}\,, \tag{3.11}\]
which completes our representation of the \(F\)-wave isobar \(F_{3}(s)\). Using \(m_{\rho_{3}}=1688\) MeV and \(\Gamma_{\rho_{3}}=161\) MeV from the PDG, in Fig. 6 we display the model for the phase \(\delta_{3}(s)\) Eq. (3.11) and the output for the corresponding Omnes function \(\Omega_{3}(s)\) Eq. (3.7) that we use for our analysis.
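For reference, the \(F\)-wave input of Eqs. (3.8)-(3.11) can be assembled as in the sketch below, with the resonance parameters and the scale \(r_{R}\) taken from the text; the phase is extracted as the complex phase of the Breit-Wigner amplitude, which is equivalent to Eq. (3.11) up to the choice of branch.

```python
# Sketch of the F-wave input of Eqs. (3.8)-(3.11): Breit-Wigner with an l = 3
# energy-dependent width and Blatt-Weisskopf barrier factor.
import numpy as np

M_PI = 0.13804                        # GeV (isospin limit)
M_R, G_R, R_R = 1.688, 0.161, 2.0     # rho_3(1690) mass, width (GeV) and scale r_R (GeV^-1)

def p(s):                             # pion momentum in the dipion rest frame
    return np.sqrt(s / 4.0 - M_PI**2)

def blatt_weisskopf_l3(s):            # Eq. (3.10)
    def h(z):
        return z * (z - 15.0)**2 + 9.0 * (2.0 * z - 5.0)**2
    z = R_R**2 * p(s)**2
    z0 = R_R**2 * p(M_R**2)**2
    return np.sqrt(h(z0) / h(z))

def width_l3(s):                      # Eq. (3.9) with l = 3
    return G_R * M_R / np.sqrt(s) * (p(s) / p(M_R**2))**7 * blatt_weisskopf_l3(s)**2

def f3_bw(s):                         # Eq. (3.8)
    return M_R**2 / (M_R**2 - s - 1j * M_R * width_l3(s))

def delta3(s):                        # phase of the Breit-Wigner, cf. Eq. (3.11)
    return np.angle(f3_bw(s))

for sqrt_s in (1.0, 1.688, 2.2):
    print(sqrt_s, delta3(sqrt_s**2))  # the phase passes through pi/2 at the resonance mass
```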
Finally, the function \(p_{3}(s)\) in Eq. (3.6) is a polynomial that parametrizes the energy dependence not directly related to the propagation of the \(\rho_{3}(1690)\) resonance and fixes the strength of the \(F\)-wave amplitude. In order to achieve a satisfactory description of the data, we take \(p_{3}(s)\) linear in \(s\) with parameters relative to the \(P\)-wave amplitude, _i.e._\(p_{3}(s)=a(|c|e^{i\phi_{c}}+|d|e^{i\phi_{d}}\,s)\), such that the overall normalization of the amplitude \(a\) can be factored out in Eq. (3.3) and absorbed in \(\mathcal{N}\) (_cf._ Eq. (3.1)) as in the previous subsection. By minimizing Eq. (3.1), we obtain the following values for the fit parameters:
\[|b|=0.205(34)\ \text{GeV}^{-2}\,,\quad\phi_{b}=2.784(298)\,, \tag{3.12}\]
for the \(P\)-wave subtraction constant, and
\[\begin{split}|c|\times 10^{2}&=4.38(1.46)\ \text{GeV}^{-4}\,, \qquad\phi_{c}=3.80(5)\,,\\ |d|\times 10^{2}&=1.58(46)\ \text{GeV}^{-6}\,, \qquad\phi_{d}=0.65(8)\,,\end{split} \tag{3.13}\]
for the parameters of the \(F\)-wave subtracted polynomial \(p_{3}(s)\). Again, the quoted error in the previous equations is the systematic uncertainty obtained from using the different \(P\)-wave phase shifts \(\delta_{1}(s)\) as input. The result of this fit implies \(|a|=0.0581(60)\ \text{GeV}^{-3}\) for the overall normalization of the amplitude and it is plotted in Fig. 7 as the dash-dotted blue
Figure 6: \(F\)-wave phase shift \(\delta_{3}(s)\) Eq. (3.11) (left plot) and output for the Omnès function \(\Omega_{3}(s)\) Eq. (3.7) (right plot).
line using the event distribution normalization from the fits, \({\cal N}=8.09(41)\times 10^{8}\) in units of \((2.4\,\text{MeV})^{-1}\). In the figure, the result of the standalone \(P\)-wave fit [_cf._ Eq. (3.2)] is also shown for comparison. As seen, the \(\rho_{3}(1690)\)-induced \(F\)-wave contribution improves the description of the data around 1.5 GeV. Numerically, we find that the individual \(F\)-wave contribution is rather small, while the interference between the \(P\)- and \(F\)-waves gives a correction of a few percent in the region \(m_{\pi\pi}\sim 1.5\) GeV. The \(\chi^{2}/\text{dof}\) remains high (about 100). However, with the systematic uncertainties associated to our fits (blue error band in Fig. 7), we conclude that our representation of the amplitude is capable of describing the two most prominent features shown by the data: the line shape of the BESIII measurements in the vicinity of the \(\rho(770)\) resonance as well as the movement of the function at \(m_{\pi\pi}\sim 1.5\) GeV due to the \(F\)-wave effects.1 As for the Dalitz-plot distribution, the \(F\)-wave effects provide no significant change with respect to Fig. 5, and we thus refrain from showing it here.
Footnote 1: We shall wait for the arrival of new Dalitz distribution experimental data from BESIII to ascribe a strict statistical meaning to our \(\chi^{2}\) fits.
## 4 \(J/\psi\to\pi^{0}\gamma^{*}\) transition form factor
The \(J/\psi\pi^{0}\) transition form factor (TFF), \(f_{J/\psi\pi^{0}}(s)\), governs the \(J/\psi\to\pi^{0}\gamma^{*}\) amplitude and its energy dependence is experimentally accessible from the decays \(J/\psi\to\pi^{0}e^{+}e^{-}\) and
Figure 7: BESIII (red circles) [3] measurement of the \(m_{\pi\pi}\) invariant mass distribution for the decay \(J/\psi\to 3\pi\) as compared to our fits in Eqs. (3.2) (solid black line), (3.12) and (3.13) (dot-dashed blue line). The blue error band accounts for the systematic uncertainties attached to our calculations. See main text for details.
\(J/\psi\to\pi^{0}\mu^{+}\mu^{-}\). At present, there is no measurement of the shape of the form factor and the only experimental information on these decays is the measurement of the branching ratio by the BESIII collaboration, \(BR(J/\psi\to\pi^{0}e^{+}e^{-})=(7.56\pm 1.32\pm 0.50)\times 10^{-7}\)[41]. This measurement was obtained subtracting the \(\rho\) resonance contribution and assuming that excited \(c\bar{c}\) exchanges, _e.g._ coming from off-shell \(\psi^{\prime}\) contributions, dominate the energy-dependence of the form factor. Refs. [28; 42] showed that subtracting this contribution is not well motivated, as the light vector meson contributions to the form factor actually dominate the decay. Using the formalism previously employed for the decays of light vector mesons \(\omega/\phi\to\pi^{0}\gamma^{*}\)[24; 43], we present a dispersive description of \(f_{J/\psi\pi^{0}}(s)\) comparable to Ref. [28], but with the difference that our analysis is driven by the \(J/\psi\to 3\pi\) experimental data analysis presented in Sec. 3.
A dispersive representation of \(f_{J/\psi\pi^{0}}(s)\) is fully determined, up to possible subtractions, by the discontinuity across the right hand cut. Here, we focus on the light-quark resonance contributions to the discontinuity, which dominate the form factor at low and intermediate energies. Additional \(c\bar{c}\) contributions can arise close to the upper limit of the accessible phase space, \(\sqrt{s}=m_{J/\psi}-m_{\pi^{0}}\), and in fact can dominate the transition form factor there [28; 42], but these contributions appear in a region of the Dalitz decays which are strongly suppressed by phase space [28; 42], rendering the task of experimentally observing them nearly impossible. Bearing this in mind, and because of the absence of experimental data for the form factor, we do not consider them in our analysis.
In order to be consistent with the elastic approximation in the \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) study, we include only the two-pion intermediate state contribution to the discontinuity (see Fig. 8 for a diagrammatic interpretation):
\[\mathrm{disc}f_{J/\psi\pi^{0}}(s)=i\ \frac{p^{3}(s)}{6\pi\sqrt{s}}\ F_{\pi}^{V \ast}(s)\ f_{1}(s)\ \theta(s-4m_{\pi}^{2})\,, \tag{4.1}\]
which requires as input the full \(s\)-channel \(P\)-wave \(J/\psi\to 3\pi\) amplitude \(f_{1}(s)\) given in Eq. (2.10) and the pion vector form factor complex-conjugate \(F_{\pi}^{V\ast}(s)\), which we approximate by the Omnes function (complex-conjugate) given in Eq. (2.15). Given that we are
Figure 8: Diagrammatic representation of the two-pion contribution to the discontinuity of the \(J/\psi\pi^{0}\) transition form factor [_cf._ Eq. (4.1)]. The blue and red circles represent, respectively, the full \(s\)-channel \(P\)-wave \(J/\psi\to 3\pi\) amplitude \(f_{1}(s)\) and the pion vector form factor \(F_{\pi}^{V}(s)\).
using a once-subtracted dispersion relation for the \(J/\psi\to 3\pi\) KT equations, an unsubtracted dispersion relation for the TFF, as used for instance in Ref. [28], would result in a divergent integral if no cutoff is used. Therefore, we use a once-subtracted dispersion relation for the TFF itself,
\[f_{J/\psi\pi^{0}}(s)=\left|f_{J/\psi\pi^{0}}(0)\right|e^{i\phi_{J/\psi\pi^{0}}(0 )}+\frac{s}{12\pi^{2}}\int_{4m_{\pi}^{2}}^{\infty}\frac{ds^{\prime}}{(s^{ \prime})^{3/2}}\frac{p^{3}(s^{\prime})\;F_{\pi}^{V*}(s^{\prime})\;f_{1}(s^{ \prime})}{(s^{\prime}-s)}\,, \tag{4.2}\]
where we indicate explicitly the existence of a non-vanishing phase of \(f_{J/\psi\pi^{0}}(s)\) at \(s=0\). This is implied by the cross-channel effects, _i.e._ the functions \(F_{\pi}^{V*}(s)\) and \(f_{1}(s)\) do not have the same phase, and the discontinuity of \(f_{J/\psi\pi^{0}}(s)\) is in general complex [24; 43]. The modulus of the subtraction constant \(\left|f_{J/\psi\pi^{0}}(0)\right|\) can be fixed from the \(J/\psi\to\pi^{0}\gamma\) partial decay width
\[\Gamma(J/\psi\to\pi^{0}\gamma)=\frac{e^{2}(m_{J/\psi}^{2}-m_{\pi^{0}}^{2})^{3 }}{96\pi m_{J/\psi}^{3}}\;\left|f_{J/\psi\pi^{0}}(0)\right|^{2}. \tag{4.3}\]
Using the value of the partial decay width of \(J/\psi\to\pi^{0}\gamma\)[39] in combination with the above equation, one obtains:
\[\left|f_{J/\psi\pi^{0}}(0)\right|=6.0(3)\times 10^{-4}\quad\text{GeV}^{-1}\,. \tag{4.4}\]
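The normalization in Eq. (4.4) follows from Eq. (4.3) by simple arithmetic; the sketch below uses approximate PDG inputs (total \(J/\psi\) width and \(\mathrm{BR}(J/\psi\to\pi^{0}\gamma)\)), which are assumptions made here, and takes \(e^{2}=4\pi\alpha\).

```python
# Sketch: extract |f_{J/psi pi0}(0)| from Eq. (4.3) using approximate PDG inputs
# (Gamma_J/psi ~ 92.6 keV, BR(J/psi -> pi0 gamma) ~ 3.6e-5); e^2 = 4*pi*alpha.
import numpy as np

ALPHA = 1.0 / 137.036
E2 = 4 * np.pi * ALPHA
M_JPSI, M_PI0 = 3.0969, 0.13498           # GeV
GAMMA_JPSI = 92.6e-6                      # GeV
BR_PI0_GAMMA = 3.6e-5                     # assumed PDG branching ratio

gamma = BR_PI0_GAMMA * GAMMA_JPSI
f0_sq = gamma * 96 * np.pi * M_JPSI**3 / (E2 * (M_JPSI**2 - M_PI0**2)**3)
print(np.sqrt(f0_sq))   # ~6e-4 GeV^-1, consistent with Eq. (4.4)
```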
The phase \(\phi_{J/\psi\pi^{0}}(0)\) is a free parameter that can only be accessed from the transition form factor experimental data (see _e.g._ Ref. [24]). Due to the absence of data for \(J/\psi\to\pi^{0}\gamma^{*}\), we set \(\phi_{J/\psi\pi^{0}}(0)=0\) in our study.
In Fig. 9, we show up to \(\sqrt{s}=2\) GeV our prediction for the absolute value of the transition form factor resulting from Eq. (4.2) and using the results from Eq. (3.2) (solid black line). This is our central result for the form factor. In this figure, however, we also show the result of using the unsubtracted KT solution for \(J/\psi\to 3\pi\) (dashed blue line). It is worth noting that both curves are similar and only a slight difference is observed at the \(\rho\) peak. Additionally, the calculations when an unsubtracted dispersion relation for the form factor is used are also shown in the figure, both with an unsubtracted (dotted red line) and once-subtracted (dot-dashed green line) \(J/\psi\to 3\pi\) amplitude. In the latter case, we have cut the dispersive integral at 4 GeV\({}^{2}\) to avoid the dispersion relation to diverge. Again, both curves are similar. In this case, the value at the real photon energy can be calculated from the sum rule [28; 43]:
\[f_{J/\psi\pi^{0}}(0)=\frac{1}{12\pi^{2}}\int_{4m_{\pi}^{2}}^{\infty}ds^{\prime }\frac{p^{3}(s^{\prime})F_{\pi}^{V*}(s^{\prime})f_{1}(s^{\prime})}{(s^{\prime} )^{3/2}}\,. \tag{4.5}\]
This value is found to be \(\left|f_{J/\psi\pi^{0}}(0)\right|=5.0(2)\times 10^{-4}\) GeV\({}^{-1}\) for both versions of the unsubtracted dispersion relation. The quoted uncertainty is the systematic error from using the different phase shifts as input. This value is in qualitative agreement with the value extracted from the measured \(J/\psi\to\pi^{0}\gamma\) in Eq. (4.4), indicating that the normalization is saturated by the two-pion intermediate state contribution by roughly 85%. The difference between the various lines provides an estimate of the theoretical uncertainty associated to our description. We expect our study to strengthen the case for new experimental measurements of the shape of this form factor, which would allow improving the understanding of radiative \(J/\psi\) decays.
## 5 Summary
We have analyzed the decay \(J/\psi\to\pi^{+}\pi^{-}\pi^{0}\) within the framework of the Khuri-Treiman equations, which satisfy the constraints imposed by unitarity, analyticity and crossing symmetry. We have included the \(P\)-wave effects of the \(\pi\pi\) subsystem up to around 2 GeV, which are controlled by the \(\pi\pi\)\(P\)-wave scattering-phase shift. We have seen that one subtraction in the \(P\)-wave amplitude is necessary to achieve a good description of the experimental data in the \(\rho(770)\)-region. The corresponding subtraction constant was fixed from fits to the di-pion invariant mass distribution from BESIII. We have also seen that the \(P\)-wave alone is not capable of reproducing the data in the mass region around \(m_{\pi\pi}\sim 1.5\) GeV, and that the inclusion of an \(F\)-wave contribution arising from the \(\rho_{3}(1690)\) brings theory closer to data in this region. In addition, we have provided predictions for the transition form factor \(J/\psi\to\pi^{0}\gamma^{*}\) up to 2 GeV. Our study lays the groundwork for an event-by-event likelihood fit of high-precision data from \(J/\psi\) decays, which are expected to be available from BESIII in a near future.
The authors would like to thank Joshua Jackson and Ryan Mitchell (Indiana University) for fruitful discussions. MA is supported by Generalitat Valenciana under Grant No. CIDEGENT/2020/002, and by the Spanish Ministerio de Ciencia e Innovacion (MICINN) under contracts No. PID2020-112777GBI00. The work of SGS is supported by the Laboratory
Figure 9: Prediction for the absolute value of the transition form factor \(J/\psi\to\pi^{0}\gamma^{*}\) using Eq. (4.2) (solid black line) and variants of it. See main text for details.
Directed Research and Development program of Los Alamos National Laboratory under project number 20210944PRD2, and by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). This work was supported by the U.S. Department of Energy contract DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab, U.S. Department of Energy Grants No. DE-FG02-87ER40365 and No. DE-FG02-92ER40735, CONACYT (Mexico) Grant No. A1-S-21389, and Spanish national Grants PID2020-118758GB-I00 and PID2019-106080 GB-C21. CFR is supported by Spanish Ministerio de Educacion y Formacion Profesional under Grant No. BG20/00133. VM is a Serra Hunter fellow. The work of MM is funded by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy-EXC-2094-390783311. DW is supported by National Natural Science Foundation of China Grant No. 12035007 and the NSFC and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the funds provided to the Sino-German Collaborative Research Center TRR110 "Symmetries and the Emergence of Structure in QCD" (NSFC Grant No. 12070131001, DFG ProjectID 196253076-TRR 110). This work contributes to the aims of the U.S. Department of Energy ExoHad Topical Collaboration, contract DE-SC0023598.
|
2310.13829 | Universal Representation of Permutation-Invariant Functions on Vectors
and Tensors | A main object of our study is multiset functions -- that is,
permutation-invariant functions over inputs of varying sizes. Deep Sets,
proposed by \cite{zaheer2017deep}, provides a \emph{universal representation}
for continuous multiset functions on scalars via a sum-decomposable model.
Restricting the domain of the functions to finite multisets of $D$-dimensional
vectors, Deep Sets also provides a \emph{universal approximation} that requires
a latent space dimension of $O(N^D)$ -- where $N$ is an upper bound on the size
of input multisets. In this paper, we strengthen this result by proving that
universal representation is guaranteed for continuous and discontinuous
multiset functions though a latent space dimension of $O(N^D)$. We then
introduce \emph{identifiable} multisets for which we can uniquely label their
elements using an identifier function, namely, finite-precision vectors are
identifiable. Using our analysis on identifiable multisets, we prove that a
sum-decomposable model for general continuous multiset functions only requires
a latent dimension of $2DN$. We further show that both encoder and decoder
functions of the model are continuous -- our main contribution to the existing
work which lack such a guarantee. Also this provides a significant improvement
over the aforementioned $O(N^D)$ bound which was derived for universal
representation of continuous and discontinuous multiset functions. We then
extend our results and provide special sum-decomposition structures to
universally represent permutation-invariant tensor functions on identifiable
tensors. These families of sum-decomposition models enables us to design deep
network architectures and deploy them on a variety of learning tasks on
sequences, images, and graphs. | Puoya Tabaghi, Yusu Wang | 2023-10-20T22:00:59Z | http://arxiv.org/abs/2310.13829v1 | # Universal Representation of Permutation-Invariant Functions on Vectors and Tensors
###### Abstract
A main object of our study is multiset functions -- that is, permutation-invariant functions over inputs of varying sizes. Deep Sets, proposed by Zaheer et al. (2017), provides a _universal representation_ for continuous multiset functions on scalars via a sum-decomposable model. Restricting the domain of the functions to finite multisets of \(D\)-dimensional vectors, Deep Sets also provides a _universal approximation_ that requires a latent space dimension of \(O(N^{D})\) -- where \(N\) is an upper bound on the size of input multisets. In this paper, we strengthen this result by proving that universal representation is guaranteed for continuous and discontinuous multiset functions through a latent space dimension of \(O(N^{D})\). We then introduce _identifiable_ multisets for which we can uniquely label their elements using an identifier function, namely, finite-precision vectors are identifiable. Using our analysis on identifiable multisets, we prove that a sum-decomposable model for general continuous multiset functions only requires a latent dimension of \(2DN\). We further show that both encoder and decoder functions of the model are continuous -- our main contribution to the existing work which lacks such a guarantee. This also provides a significant improvement over the aforementioned \(O(N^{D})\) bound which was derived for universal representation of continuous and discontinuous multiset functions. We then extend our results and provide special sum-decomposition structures to universally represent permutation-invariant tensor functions on identifiable tensors. These families of sum-decomposition models enable us to design deep network architectures and deploy them on a variety of learning tasks on sequences, images, and graphs.
## 1 Introduction
There is a wide gamut of machine learning problems aiming at identifying an optimal function on unordered collection of entities, namely, sets and multisets. Set or audience expansion
tasks in image tagging, computational advertisement, and astrophysics (Ntampaka et al., 2016; Ravanbakhsh et al., 2016a), parsing objects in a scene (Eslami et al., 2016; Kosiorek et al., 2018), population statistics (Poczos et al., 2013), inference on point clouds (Qi et al., 2017a, b), min-cut and routing on a graph, reinforcement learning (Sunehag et al., 2017), and modelling interactions between objects in a set (Lee et al., 2019) are examples of such problems. Popular machine learning models are designed for ordered algebraic objects, namely, vectors, matrices, and tensors. To adapt these standard models to operate on multisets, we must enforce various permutation invariance properties (Oliva et al., 2013; Szabo et al., 2016; Muandet et al., 2013, 2012; Shawe-Taylor, 1993). To characterize a general class of multiset (or permutation-invariant) functions, several authors have proposed sum-decomposition models (Ravanbakhsh et al., 2016b; Zaheer et al., 2017). Notably, Deep Sets provides a universal representation for continuous multiset functions on _scalars_. This model is a form of Janossy pooling which is easy to implement and parallelize (Murphy et al., 2018). At its core, it maps elements of the input multiset \(X\) individually via \(\phi\) and then aggregates them to _uniquely_ encode the input multiset, that is, \(\Phi(X)=\sum_{x\in X}\phi(x)\in\mathbb{R}^{M}\) is the unique encoding for \(X\) or is an _injective_ map. Injectivity is the most important property of the encoder \(\Phi\) as it performs an intermediate feature extraction step by uniquely mapping multisets to vectors. Then, to represent a multiset function \(f(X)\), we map the resulting feature \(\Phi(X)\) to \(f(X)\), that is, \(f(X)=\rho\circ\Phi(X)\) where \(\rho\) is a decoder that belongs to a rich class of unconstrained functions. The existence of a continuous sum-decomposable model -- continuous encoder \(\Phi\) and decoder \(\rho\) -- is guaranteed only if the dimension of the model's intermediate features (\(M\)) is sufficiently large. If we lower this dimension, Wagstaff et al. (2022) prove that there exists no continuous decoder \(\rho\) such that \(\rho\circ\Phi\) can even approximate some multiset functions better than a naive constant baseline. Regarding multiset functions on _vectors_, the best available result is given by Zaheer et al. (2017), which only provides a _universal approximation_ for continuous multiset functions through analyzing their finite-order Taylor approximation. As our first contribution, we provide a universal representation, through the sum-decomposable model, for continuous and discontinuous multiset functions on vectors which is a generalization of the existing universal approximation results. It is important to note that all universal representation results are stronger than their universal approximation counterparts as the former results imply the latter ones.
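A minimal sketch of such a sum-decomposable architecture is given below; the layer sizes, latent dimension, and random inputs are arbitrary illustrative choices and are not taken from any of the cited works.

```python
# Minimal Deep Sets-style sum-decomposable model: f(X) = rho(sum_x phi(x)).
# Dimensions and layer sizes below are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    params = [(rng.standard_normal((m, n)) / np.sqrt(n), np.zeros(m))
              for n, m in zip(sizes[:-1], sizes[1:])]
    def apply(x):
        for i, (W, b) in enumerate(params):
            x = W @ x + b
            if i < len(params) - 1:
                x = np.tanh(x)
        return x
    return apply

D, M = 3, 16                   # element dimension and latent dimension
phi = mlp([D, 32, M])          # element encoder phi: R^D -> R^M
rho = mlp([M, 32, 1])          # decoder rho: R^M -> R

def f(X):                      # X: multiset given as a list of D-dimensional vectors
    Phi = sum(phi(x) for x in X)         # permutation-invariant multiset encoding
    return rho(Phi)

X = [rng.standard_normal(D) for _ in range(5)]
print(f(X), f(X[::-1]))        # equal up to floating-point roundoff: element order does not matter
```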
Beyond permutation-invariant functions on scalars and vectors, SignNet and BasisNet (Lim et al., 2022) are neural network architectures, among other works (Dwivedi and Bresson, 2020; Dwivedi et al., 2020, 2021; Beaini et al., 2021; Kreuzer et al., 2021; Mialon et al., 2021; Kim et al., 2022), that provide sign and orthonormal basis invariances as they are displayed by eigenspaces (Eastment and Krzanowski, 1982; Rustamov et al., 2007; Bro et al., 2008; Ovsjanikov et al., 2008). Laplacian eigenvectors capture connectivity, clusters, subgraph frequencies, help derive graph positional encodings to generalize Transformers to graphs and improve performance of Graph Neural Networks (GNNs) (Dwivedi et al., 2020, 2021), and other useful properties of a graph (Von Luxburg, 2007; Cvetkovic et al., 1997). Under certain conditions, these network structures can universally approximate any continuous function with the desired invariances. Both networks utilize Invariant Graph Networks (IGNs) (Maron et al., 2018) to build permutation invariance or equivariance property for functions on matrices. IGN treats graphs (with nodes and edges) as _tensors_. Its architecture involves
permutation-invariant and equivariant _linear layers_ for tensor input and output data. As the tensor order goes to \(O(N^{4})\), it achieves the universality for graphs of size \(N\)(Azizian and Lelarge, 2020; Maron et al., 2019; Keriven and Peyre, 2019).
The type of injective multiset functions introduced earlier is useful in studying the separation power of Message-Passing Neural Networks and its relation to the Weisfeiler-Leman (WL) graph isomorphism test (Xu et al., 2018). They are also used in showing the equivalence of high-order GNNs to high-order WL tests (Morris et al., 2019; Maron et al., 2019), and in results related to geometric GNNs and WL tests (Hordan et al., 2023; Joshi et al., 2023; Pozdnyakov and Ceriotti, 2022). Amir et al. (2023) give a theoretical analysis on the required latent dimension for nonpolynomial encoders -- namely, sigmoid, hyperbolic tangent, sinusoid -- to arrive at an injective multiset function.
**Contributions.** In this paper we mainly focus on the study of multivariate multiset functions, that is, functions on multisets that contain at most \(N\) vectors of dimension \(D\). In the case of \(D=1\), this reduces to multiset functions on scalars. Our main contributions are as follows.
1. We propose extended versions of the sum-decomposition models of multiset functions on vectors (Zaheer et al., 2017). Multiset functions encompass permutation-invariant functions since they are invariant to the specific ordering of the input elements. We adopt the term multiset function to emphasize the fact that the number of input elements can vary -- which is not the case for permutation-invariant functions. As our first contribution, in Section 3, we present the universal representation for continuous and discontinuous multiset functions -- over \(D\)-dimensional vectors -- via a sum-decomposable model; see Theorem 8. The latent dimension of this model is \(\binom{N+D}{D}-1\) where \(N\) is the upper bound on the size of input multisets. For universal representation of _continuous_ multiset functions, we show that both encoder and decoder functions (of the sum-decomposable model) are also continuous; see Theorem 3. In the case of scalar domain \(D=1\), this latent dimension coincides with the one in (Wagstaff et al., 2019, 2022), that is, \(\binom{N+1}{1}-1=N\). Theorems 3 and 8 are novel contributions to the existing universal approximation results for continuous multiset functions (Zaheer et al., 2017; Maron et al., 2019; Segol and Lipman, 2019). Universal approximation results rely on finite-order Taylor approximation of continuous multiset functions. This technique does not work for (1) universal representation and (2) discontinuous multiset functions. As discussed next, we significantly lower this bound for representation continuous multiset functions.
2. In Section 4, we put forward the notion of _identifiable_ multisets. These are multisets whose distinct elements can be uniquely labeled via a continuous functional, for example, multisets containing finite-precision vectors are identifiable via a linear functional. We then show that on identifiable multisets of \(D\)-dimensional vectors, the latent dimension of the sum-decomposable representations can be lowered to \(2DN\) -- from the original \(\binom{N+D}{D}-1\). More importantly, through subsequent analysis on identifiable multisets, we show that universal representation of continuous multiset functions, where both encoder and decoder functions are continuous, is possible via latent dimension
of \(2DN\); see Theorem 6. The techniques used to derive this result are centered on the notion of an identifier function. This is different from the previous lines of work using polynomial and nonpolynomial-based encoders (Zaheer et al., 2017; Dym and Gortler, 2022). While our result in Theorem 3 is suboptimal compared to this new result (Theorem 6), we still include Section 3 as it obtains a better result compared to the existing work based on polynomial-based encoders (common in approximation approaches), which is of independent interest. In summary, the main contributions of our results to the existing literature are (1) the lowest latent dimension bound, and (2) the continuity guarantee for the decoder function.
3. We finally provide universal representation for continuous and discontinuous permutation-invariant tensor functions of an arbitrary order. We obtain a nested sum-decomposable representation _only_ for what we call _identifiable tensors_ -- similar to identifiable multisets. Depending on the particular choice of the identifier function, we then provide different bounds on the latent dimensions for the representation. This is similar to an existing decomposition result on permutation-equivariant functions on matrices (tensors of order two) (Fereydounian et al., 2022). In contrast, we propose a modified encoder function that (1) provides a reduced latent dimension -- \(2DN\) compared to \(\binom{D}{2}N\), (2) allows for generalization of the sum-decomposition representation to tensors of arbitrary order, (3) is guaranteed to be injective.
**More on related work.** The most notable work on universal representation of nonlinear multiset functions concerns scalar-valued domains (Wagstaff et al., 2019, 2022). Much of the existing result in the literature concerns universal approximations for permutation-invariant and -equivariant functions. Sum-decomposition of multiset functions on multidimensional entities has been solely approached through the universal approximation power of polynomial functions (Zaheer et al., 2017; Segol and Lipman, 2019). Wagstaff et al. (2022) thoroughly investigate the theoretical distinction between universal representation and approximation of multiset functions on _scalars_; but this has remained an open question for multiset functions on multivariate elements. Invariant and equivariant _linear_ functions have been thoroughly studied in the literature (Maron et al., 2018; Ravanbakhsh, 2020). In comparison, our nonlinear model generalizes the permutation-invariant linear layers utilized in IGNs (Maron et al., 2018), which, for universal approximation on \(N\) points, require \(O(N^{N})\)-sized intermediate tensors (Ravanbakhsh, 2020). An important class of permutation-compatible (invariant or equivariant) _nonlinear_ functions is GNN -- the primary iterative-based models for learning information over graphs. There has been a large body of work aimed at understanding the expressive power of GNNs Maron et al. (2019, 2019); Keriven and Peyre (2019); Garg et al. (2020); Azizian and Lelarge (2020); Bevilacqua et al. (2021). To provide insight into the capability of GNNs in representing graph functions, Fereydounian et al. (2022) introduce an algebraic formulation -- akin to the sum-decomposition model for multiset functions -- to represent permutation-equivariant _nonlinear_ functions on matrices in terms of composition of simple encoder and decoder functions. One can connect the notion of permutation-compatible functions (on 2-tensors) to our proposed algebraic form of permutation-invariant functions on \(k\)-tensors. Though, by focusing on identifiable tensors, we lowered the latent dimension required for representing 2-tensors to \(O(DN)\) -- from \(O(D^{2}N)\) in (Fereydounian et al.,
2022) -- and guarantee the _injectivity_ of the encoding function.
**Organization.** In Section 2, we review the existing sum-decomposition results for multiset functions on scalars. Then, in Section 3, we provide our universal representation results for multivariate multiset functions. In Section 4, we introduce identifiable multisets and show how they can be used to derive a lowered latent dimension bound for continuous sum-decomposition of continuous multiset functions. Finally, focusing on permutation invariance, we put forth a nested sum-decomposition model to represent invariant functions over \(k\)-tensors in Section 5. Identifiability for tensors is the main concept necessary to establish the aforementioned decomposition models. We delegate all proofs, supplementary results and discussions to the Appendix.
**Notations.** We denote the nonnegative reals by \(\mathbb{R}_{+}=\{x\in\mathbb{R}:x\geq 0\}\). For any \(N\in\mathbb{N}\), we let \([N]=\{1,\ldots,N\}\). The function \(f\) maps elements from its domain to elements in its codomain, that is to say, \(f:\mathrm{dom}(f)\to\mathrm{codom}(f)\) where \(\mathrm{codom}(f)=\{f(x):x\in\mathrm{dom}(f)\}\). Example of domains are \(\mathbb{R}\), \(\mathbb{R}^{D}\), \(\mathbb{N}\), and \(\mathbb{Q}\). We denote the collection of subsets of a domain \(\mathbb{D}\) as \(2^{\mathbb{D}}\). Let \(\mathbb{D}\) be a domain and \(f:\mathbb{D}\to\mathrm{codom}(f)\). We then let \(f(\mathbb{D}_{1})\stackrel{{\mathrm{def}}}{{=}}\{f(x):x\in \mathbb{D}_{1}\}\subseteq\mathrm{codom}(f)\) where \(\mathbb{D}_{1}\subseteq\mathbb{D}\). A multiset is a pair \((X,m)\) where \(X\) is a set of objects and \(m\) is a map from \(X\) to cardinals (representing the multiplicity of each element in \(X\)). We simplify this notation by identifying multisets by "multiset \(X\)" or using double curly brackets, namely, \(X=\{\{1,1,2\}\}\) has three elements but \(X=\{1,1,2\}=\{1,2\}\) has two elements. For any domain \(\mathbb{D}\) and multiset \(X\), \(X\subseteq\mathbb{D}\) means that the underlying set for \(X\) (repetitive elements removed) is a subset of \(\mathbb{D}\), and \(|X|\) is the size of the multiset (repetitive elements included). We denote multisets (and sets) with \(X\) and tensors (and matrices) with \(T\). For \(N\in\mathbb{N}\) and domain \(\mathbb{D}\), we let \(\mathbb{X}_{\mathbb{D},N}=\{\text{multiset }X\subseteq\mathbb{D}:|X|=N\}\), \(\mathbb{X}_{\mathbb{D},S}=\{\text{multiset }X\subseteq\mathbb{D}:|X|\in S\}\) where \(|\cdot|\) returns the cardinality of its input set (or multiset) and \(S\subseteq\mathbb{N}\), namely, \(\mathbb{X}_{\mathbb{D},[N]}=\{\text{multiset }X\subseteq\mathbb{D}:1\leq|X|\leq N\}\).
## 2 Review of the Sum-decomposable Model for Multiset Functions on Scalars
Standard machine learning algorithms operate on data arranged in canonical ways, namely vectors, matrices, and tensors. However, in statistical estimation, set expansion, outlier detection (Zaheer et al., 2017), and problems involving point clouds or groups of atoms forming a molecule (Wagstaff et al., 2022), we often want to learn maps defined on an unordered collection of entities, that is, a set or a multiset. Throughout this paper, we treat functions defined on sets and multisets differently, and we use "(multi)set function over \(\mathbb{D}\)" to refer to a function whose domain consists of sub(multi)sets of a domain \(\mathbb{D}\). That is, a multiset function assigns a value to every possible submultiset of the domain \(\mathbb{D}\). A multiset function \(f\) must be: (1) invariant to the ordering of its input elements (permutation invariance), and (2) well-defined on multisets of different sizes.
In general, if one wishes to model a multiset function, it is not immediately clear how to enforce that the given function satisfies condition (1), namely, permutation invariance to the ordering of elements in input multisets. A powerful approach to tackle this problem is to first find a complete representation of multiset functions by a specific composition of _unconstrained functions_ -- which we refer to as _encoder and decoder functions_. Besides providing a characterization of multiset functions, such a decomposition is crucial in the learning setting because, for example, these unconstrained functions can then be modeled (and learned) by neural networks; see for example the popular Deep Sets architecture (Zaheer et al., 2017). A specific form of this composition is called the _sum-decomposable_ representation. The following provides such a result for set functions defined on a countable domain.
**Theorem 1** (Zaheer et al. 2017).: _Let \(f:2^{\mathbb{D}}\to\mathrm{codom}(f)\) where \(\mathbb{D}\) is a countable domain. Then,_
\[\forall X\subseteq\mathbb{D}:f(X)=\rho\circ\Phi(X),\ \ \Phi(X)=\sum_{x\in X}\phi(x), \tag{1}\]
_where \(\phi:\mathbb{D}\to\mathrm{codom}(\phi)\subset\mathbb{R}\), \(\rho:\mathrm{codom}(\Phi)\to\mathrm{codom}(f)\), and \(\mathrm{codom}(\Phi)=[0,1]\subset\mathbb{R}\)._
Theorem 1 provides an algebraic construct for universal representation for set functions on countable sets. We use the term universal representation to distinguish it from the weaker universal approximation results in the literature. This universal representation is obtained via the so-called sum-decomposable representation formally defined as follows:
**Definition 1**.: _A (multi)set function \(f\) over \(\mathbb{D}\) is sum-decomposable, or it has a sum-decomposable representation, if it can be written as \(f(X)=\rho\circ\Phi(X)\) for any (multi)set \(X\subseteq\mathbb{D}\), where \(\Phi(X)=\sum_{x\in X}\phi(x)\). We refer to \(\phi\), \(\Phi\) and \(\rho\) as the element-encoder, (multi)set-encoder, and decoder functions, respectively. We may also sometimes refer to \(\phi\) and \(\Phi\) simply as encoder functions. Furthermore, suppose \(\phi:\mathbb{D}\to\mathrm{codom}(\phi)\subseteq\mathbb{R}^{M}\); then we refer to \(\mathbb{R}^{M}\) (the ambient space of \(\mathrm{codom}(\phi)\)) as the decomposition model's latent space, and say that \(f\) is sum-decomposable via \(\mathbb{R}^{M}\). The latent dimension of this sum-decomposition is \(M\). A continuous multiset function \(f\) is continuously sum-decomposable if it has a sum-decomposable representation where both the encoder and decoder functions, that is, \(\phi\) (and thus \(\Phi\)) and \(\rho\), are continuous in the entire ambient space of their respective domains._
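To make Definition 1 concrete, the following minimal NumPy sketch instantiates a sum-decomposable model with an arbitrary (randomly initialized) element-encoder \(\phi\) and decoder \(\rho\), and checks permutation invariance numerically; the particular two-layer maps below are illustrative choices on our part and are not constructions from the theory.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, N = 3, 8, 5                      # element dim, latent dim, multiset size
W1, W2 = rng.normal(size=(M, D)), rng.normal(size=(1, M))

def phi(x):
    """Element-encoder: an arbitrary continuous map R^D -> R^M."""
    return np.tanh(W1 @ x)

def Phi(X):
    """Multiset-encoder: sum of element encodings (order-independent)."""
    return sum(phi(x) for x in X)

def rho(z):
    """Decoder: an arbitrary continuous map on the latent space R^M."""
    return (W2 @ np.tanh(z)).item()

X = [rng.normal(size=D) for _ in range(N)]
f_X = rho(Phi(X))
f_perm = rho(Phi([X[i] for i in rng.permutation(N)]))
assert np.isclose(f_X, f_perm)          # permutation invariance by construction
```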
Theorem 1 states that a set function over a countable set is essentially sum-decomposable via \(\mathbb{R}\) and the latent dimension is one. Interestingly, it is shown in (Wagstaff et al., 2022) that set functions on an _uncountable_ domain \(\mathbb{D}\) do not admit this sum-decomposable representation. Nevertheless, there is an extension of Theorem 1 to finite-sized multisets (Wagstaff et al., 2022).
**Theorem 2** (Wagstaff et al. 2019).: _Let \(N\in\mathbb{N}\), and \(f:\mathbb{X}_{\mathbb{D},[N]}\to\mathbb{R}\) be a continuous multiset function where \(\mathbb{D}=[0,1]\). Then, it is continuously sum-decomposable (see Definition 1) via \(\mathbb{R}^{N}\) -- that is, the latent space is a subset of \(\mathbb{R}^{N}\) -- and vice versa._
Recall that \(\mathbb{X}_{\mathbb{D},[N]}\) is the collection of all multisets over \(\mathbb{D}\) of cardinality at most \(N\). Since \(\mathbb{D}\) in the above theorem is \([0,1]\subset\mathbb{R}\), the result states that a continuous multiset function over scalars is continuously sum-decomposable via \(\mathbb{R}^{N}\) where \(N\) is the maximum cardinality of input multisets. The continuity of the decoder \(\rho\) comes at the cost of increased latent space dimension; compare universal representation Theorems 1 and 2. This latent dimension is tight in the worst case, that is, there does not exist a sum-decomposition via a latent space with dimension less than \(N\)(Wagstaff et al., 2022). Of course, in practice, for a specific multiset function at hand, there might exist a sum-decomposition with a much lower
latent dimension. One might expect that the latent dimension can be reduced in the case of universal approximation. Interestingly, at least in the case of multiset functions over scalars, despite the reasonable intuition, universal approximation is not possible (for all multiset functions) if we lower the latent dimension from \(N\)(Wagstaff et al., 2022).
## 3 Warmup: Sum-decomposable Model for Multiset Functions on Vectors
Theorem 2 concerns multiset functions operating on scalar-valued elements (that is, the input is a multiset with elements from \(\mathbb{R}\)). In practice we are often faced with applications on vector-valued multisets. For example, a collection of at most \(N\) points in \(\mathbb{R}^{D}\) can be represented as a multiset of cardinality \(\leq N\) over \(\mathbb{R}^{D}\); similarly, in the graph learning setting, we may have a set of \(N\) nodes in a graph with \(D\)-dimensional node features. In what follows, we consider multiset functions over vectors in \(\mathbb{R}^{D}\), that is, functions of the form \(f:\mathbb{X}_{\mathbb{D},[N]}\to\mathbb{R}\) where \(\mathbb{D}\subset\mathbb{R}^{D}\). For simplicity, we first consider functions over multisets of cardinality exactly \(N\), that is, \(f:\mathbb{X}_{\mathbb{D},N}\to\mathbb{R}\). Our main result in this section is the following theorem:
**Theorem 3**.: _A continuous multivariate multiset function \(f:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codom}(f)(\subseteq\mathbb{R}^{n})\), over multisets of \(N\) elements in a compact set \(\mathbb{D}\subseteq\mathbb{R}^{D}\), is continuously sum-decomposable via \(\mathbb{R}^{\binom{N+D}{D}-1}\). That is, the encoder \(\phi\) is continuous over \(\mathbb{D}\), and the decoder \(\rho\) is continuous over \(\mathbb{R}^{\binom{N+D}{D}-1}\)._
The above theorem states that a continuous multiset function over multisets of \(N\) vectors from \(\mathbb{D}\subset\mathbb{R}^{D}\) is continuously sum-decomposable via a latent dimension of \(\binom{N+D}{D}-1\). In the special case of \(D=1\), this recovers the previous result for multiset functions over scalars in Theorem 2. In Section 4, we give a stronger result with a much lower latent dimension. We nevertheless include this result because (1) it is obtained via a similar proof technique to Theorem 2 by using polynomial-based encoders; and (2) it is a novel result that arrives at the same latent dimension as the one reported in (Zaheer et al., 2017) for the universal approximation of continuous multiset functions. The detailed proofs are given in Appendices A and B. We provide a high-level description here.
In the remainder of this section, we fix \(\mathbb{D}\subset\mathbb{R}^{D}\) to be a compact subset of \(\mathbb{R}^{D}\). Following the proof technique in (Zaheer et al., 2017), to show the existence of a sum-decomposition of \(f=\rho\circ\Phi\), we want to construct a multiset encoder \(\Phi\) that is _injective_ over \(\mathbb{X}_{\mathbb{D},N}\). Once we have an injective encoder \(\Phi\), we can then define \(\rho=f\circ\Phi^{-1}\) over all admissible inputs, that is, \(\mathrm{codom}(\Phi)\). By construction, the encoder \(\Phi\) is continuous. The key challenge is to show that \(\rho=f\circ\Phi^{-1}\) is not only well-defined but also continuous over the latent space \(\mathrm{codom}(\Phi)\).
To construct an injective multiset function \(\Phi(X)=\sum_{x\in X}\phi(x)\), we use permutation-invariant polynomials as in (Maron et al., 2019; Segol and Lipman, 2019). We express these polynomials as follows:
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}:\ p(X)=\mathrm{poly}(e_{1}(X),\cdots,e_{K}(X)), \tag{2}\]
where \(e_{k}(X)=\sum_{x\in X}\prod_{d=1}^{D}x_{d}^{k_{d}}\) is a power-sum multi-symmetric polynomial, \(k_{1}\ldots k_{D}\) is the \(D\)-digit representation of \(k\in[K]\) in base \(N+1\), \(K=\binom{N+D}{D}-1\), and poly is a polynomial function (Rydh, 2007).
**Remark 1**.: _It is known that one can universally approximate continuous multivariate multiset functions over a compact set with a multiset polynomial as in equation (2). Since there are \(K=\binom{N+D}{D}-1\) power-sum multi-symmetric polynomial basis functions \(\big{(}e_{k}(X)\big{)}_{k\in[K]}\), we can design an encoder \(\phi\) that yields a sum-decomposable model which universally approximates multivariate continuous multiset functions via \(\mathbb{R}^{\binom{N+D}{D}-1}\); see Theorem 9 in (Zaheer et al., 2017)._
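The encoder behind equation (2) can be spelled out directly. The sketch below enumerates all multi-indices \(k\) with \(1\leq k_{1}+\cdots+k_{D}\leq N\), builds \(\phi(x)=\big(\prod_{d}x_{d}^{k_{d}}\big)_{k}\) and \(\Phi(X)=\sum_{x\in X}\phi(x)\), and checks both the count \(\binom{N+D}{D}-1\) and permutation invariance; it is a didactic sketch of the power-sum multi-symmetric encoding, not code from the paper.

```python
import itertools
import math
import numpy as np

def multi_indices(N, D):
    """All k = (k_1, ..., k_D) with nonnegative entries and 1 <= sum(k) <= N."""
    return [k for k in itertools.product(range(N + 1), repeat=D) if 1 <= sum(k) <= N]

def phi(x, ks):
    """phi(x) = (prod_d x_d^{k_d})_k : the power-sum multi-symmetric basis at x."""
    return np.array([np.prod(x ** np.array(k)) for k in ks])

def Phi(X, ks):
    """Multiset encoder: e_k(X) = sum_{x in X} prod_d x_d^{k_d}, stacked over k."""
    return sum(phi(x, ks) for x in X)

N, D = 4, 2
ks = multi_indices(N, D)
assert len(ks) == math.comb(N + D, D) - 1   # latent dimension of Theorem 3

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(N, D))
perm = rng.permutation(N)
assert np.allclose(Phi(X, ks), Phi(X[perm], ks))   # order does not matter
```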
In Appendix A, we first state Theorem 8, which guarantees a universal representation (not universal approximation as in the above Remark) of _any_ multivariate (\(D>1\)) multiset function -- continuous or discontinuous -- via the sum-decomposable model through \(\mathbb{R}^{\binom{N+D}{D}-1}\). The decoder \(\rho\) constructed this way may not be continuous. Nevertheless, this is already a novel contribution to the existing literature. From a technical standpoint, Theorem 8 is valuable as it does not rely on approximating the multiset function \(f\) using a finite-order polynomial; rather, it aims at showing \(\Phi\) is injective through analyzing the parameterized roots of a class of multivariate polynomials. Based on Theorem 8, in Appendix B, we show that _if \(f\) is a continuous multiset function_, then its decoder \(\rho=f\circ\Phi^{-1}\) is continuous in the ambient space of \(\mathrm{codom}(\Phi)\), that is, \(\mathbb{R}^{\binom{N+D}{D}-1}\). The key idea is to prove that (1) \(\Phi^{-1}\) is a continuous function on \(\mathrm{codom}(\Phi)\) and (2) \(\mathrm{codom}(\Phi)\) is a compact subset of \(\mathbb{R}^{\binom{N+D}{D}-1}\). This completes the proof of Theorem 3.
We can further generalize the results in Theorems 3 and 8 to multisets of varying sizes.
**Theorem 4**.: _Theorems 3 and 8 remain valid for multivariate multiset functions over multisets of at most \(N\) elements from a compact subset \(\mathbb{D}\subset\mathbb{R}^{D}\), that is, over \(\mathbb{X}_{\mathbb{D},[N]}\)._
As a direct consequence of the proof technique of Theorem 4 (especially the fact that the construction of the injective multiset encoder \(\Phi\) is independent of the multiset function \(f\) we try to represent), in Proposition 1 we show that for functions on products of _different_ multisets of \(D\)-dimensional vectors, we may use _the same_ encoder in their sum-decomposable model.
**Proposition 1**.: _A (continuous) multiset function \(f:\mathbb{X}_{\mathbb{D},[N_{1}]}\times\mathbb{X}_{\mathbb{D},[N_{2}]}\to\mathrm{codom}(f)\), where \(\mathbb{D}\) is a compact subset of \(\mathbb{R}^{D}\), is (continuously) sum-decomposable via \(\mathbb{R}^{\binom{N+D}{D}-1}\times\mathbb{R}^{\binom{N+D}{D}-1}\), that is,_
\[\forall X\in\mathbb{X}_{\mathbb{D},[N_{1}]},X^{\prime}\in\mathbb{X}_{\mathbb{ D},[N_{2}]}:\ f(X,X^{\prime})=\rho\big{(}\sum_{x\in X}\phi(x),\sum_{x^{\prime}\in X ^{\prime}}\phi(x^{\prime})\big{)},\]
_where continuous \(\phi:\mathbb{R}^{D}\to\mathbb{R}^{\binom{N+D}{D}-1}\), \(N=\max\{N_{1},N_{2}\}\), and (continuous) \(\rho:\mathbb{R}^{\binom{N+D}{D}-1}\times\mathbb{R}^{\binom{N+D}{D}-1}\to\mathrm{codom}(\rho)\), and \(\mathrm{codom}(f)\subset\mathrm{codom}(\rho)\)._
**Relation to the results of Fereydounian et al. (2022).** We note that Fereydounian et al. (2022) propose an encoder \(\Phi\) that is injective over particular (multi)sets \(\mathbb{X}_{N,D}^{s}\) (not all multisets) of \(D\)-dimensional vectors. The function \(\Phi\) provides unique encodings for these (multi)sets in \(\mathrm{codom}(\Phi)\subset\mathbb{R}^{\binom{D}{2}N}\) -- where \(\mathrm{codom}(\Phi)=\{\Phi(X):X\in\mathbb{X}_{N,D}^{s}\}\) and \(N\) is the size of the input (multi)sets. This leads to a sum-decomposition for functions over (multi)sets in \(\mathbb{X}_{N,D}^{s}\). More importantly, for a continuous multiset function, a continuous sum-decomposition \(f=\rho\circ\Phi\) is not guaranteed over all multisets; in particular, the continuity of \(\rho=f\circ\Phi^{-1}\) is only guaranteed over \(\mathrm{codom}(\Phi)\) -- an _open_ subset of \(\mathbb{R}^{\binom{D}{2}N}\). Therefore, it
does not guarantee the existence of a continuous extension for \(\rho\) to \(\mathbb{R}^{\binom{D}{2}N}\); see Appendix L for a detailed discussion.
## 4 Sum-decomposable Models on Identifiable Multisets
Inspired by the theoretical difference between the latent space dimensions for sum-decomposition representations of set and multiset functions -- refer to the result in (Fereydounian et al., 2022) -- we aim to reduce the dimension of the latent space. In this section, we achieve this by first restricting the domain to what we call _identifiable multisets_, introduced below. We then show that results over this restricted domain can be extended to the case with the restriction removed.
**Definition 2**.: _Let \(l:\mathbb{D}\to\mathbb{R}\) be a continuous function and \(\mathbb{D}\) be a domain. We denote \(\mathbb{X}_{\mathbb{D},N}^{l}=\{X\in\mathbb{X}_{\mathbb{D},N}:\forall x,x^{ \prime}\in X,l(x)=l(x^{\prime})\to x=x^{\prime}\}\), as the set of multisets of size \(N\) that are identifiable via \(l\), that is, \(l\)-identifiable._
According to Definition 2, the continuous identifier function \(l\) uniquely labels distinct elements of multisets in \(\mathbb{X}_{\mathbb{D},N}^{l}\). In Theorem 5 and Proposition 2 we provide improved bounds on latent dimensions given in Theorems 4 and 8 -- by restricting the domain of multiset functions to \(l\)-identifiable multisets.
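As a small illustration of Definition 2, the sketch below uses a fixed linear functional as the identifier \(l\) (an arbitrary choice for this example) and checks whether a given multiset is \(l\)-identifiable, that is, whether \(l\) assigns distinct labels to distinct elements.

```python
import numpy as np

def is_identifiable(X, l, tol=1e-12):
    """Check Definition 2: distinct elements of X must receive distinct labels l(x)."""
    X = np.asarray(X)
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            same_label = abs(l(X[i]) - l(X[j])) <= tol
            same_point = np.allclose(X[i], X[j])
            if same_label and not same_point:
                return False
    return True

w = np.array([1.0, np.pi])          # a generic direction; l(x) = <w, x>
l = lambda x: float(w @ x)

X_good = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 0.0]])   # repeats are allowed
X_bad = np.array([[0.0, 1.0], [np.pi, 0.0]])              # distinct points, same label
print(is_identifiable(X_good, l))   # True
print(is_identifiable(X_bad, l))    # False
```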
**Theorem 5**.: _Let \(f:\mathbb{X}_{\mathbb{R}^{D},N}\to\mathrm{codom}(f)\) be a multiset function and \(l:\mathbb{R}^{D}\to\mathrm{codom}(l)\subseteq\mathbb{R}\) be continuous. Then, there is a continuous function \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subset\mathbb{C}^{D\times N}\) such that_
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}:f(X)=\rho\big{(}\sum_{x\in X} \phi(x)\big{)}=\rho\circ\Phi(X),\]
_where \(\rho:\Phi(\mathbb{X}_{\mathbb{R}^{D},N}^{l})\to\mathrm{codom}(f)\) and \(\Phi(\mathbb{X}_{\mathbb{R}^{D},N}^{l})\stackrel{{\text{\rm def}}}{{=}}\{\Phi(X):X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}\}\)._
**Proposition 2**.: _Theorem 5 remains valid for multivariate multiset functions over multisets of at most \(N\) elements from a compact subset of \(\mathbb{R}^{D}\)._
**Remark 2**.: _Theorem 5 asserts that sum-decomposition of arbitrary (continuous or discontinuous) multiset functions is possible via latent dimension \(O(ND)\) on inputs that are identifiable via a continuous identifier \(l:\mathbb{R}^{D}\to\mathrm{codom}(l)\subseteq\mathbb{R}\). In comparison, the universal representation results in Theorems 3 and 4 require a latent space dimension of \(O(N^{D})\), which becomes prohibitive in practice even for a small number of features. Furthermore, the bound in Theorem 5 is an improvement over the \(O(ND^{2})\) proposed in (Fereydounian et al., 2022). Additionally, we propose a concrete characterization of the input domain in Definition 2, which works for any continuous function \(l\) and can be tailored to the specific application. Since the set of identifiable multisets \(\mathbb{X}_{\mathbb{D},N}^{l}\) (where \(\mathbb{D}\) is a compact subset of \(\mathbb{R}^{D}\)) does not form a compact set, there is no guarantee that \(\rho:\Phi(\mathbb{X}_{\mathbb{D},N}^{l})\to\mathrm{codom}(f)\) has a continuous extension to \(\mathbb{C}^{D\times N}\) -- that is, if we use the multiset encoding function \(\Phi\) (introduced in the proofs), for some multiset functions \(f\) there may not exist a continuous \(\rho:\mathbb{C}^{D\times N}\to\mathrm{codom}(\rho)\) that enables the sum-decomposition. However, we address this issue in Section 4.1. We finally note that our specific multiset encoder \(\Phi\) maps multisets to complex-valued matrices in \(\mathbb{C}^{D\times N}\). Without causing any technical issues, this latent space can be viewed as \(\mathbb{R}^{2D\times N}\)._
**Remark 3**.: _The multiset encoding function \(\Phi\) in Proposition 2 is akin to the separating invariants introduced in (Dym and Gortler, 2022), that is, the quantity \(\Phi(X)\) is invariant with respect to permutations -- as group actions. The subtle difference is that multiset functions are permutation-invariant but the converse is not true, since multiset functions may be allowed to have varying-sized inputs. Using separating invariants, Dym and Gortler (2022) claim that, for randomized invariants of dimension \(2DN+1\) (compare to ours, which is \(2DN\)), almost all matrices in \(\mathbb{R}^{D\times N}\) are identified up to the permutation of their columns. This result is based on applying linear projections to multidimensional elements to obtain scalars and then using a continuous separating (injective) map on them. They then prove that the measure of matrices that cannot be identified via the permutation-invariant encoding is zero. As a result, the sum-decomposition does not hold for all matrices (akin to multisets in our paper) and there is no guarantee for the existence of a continuous decoder \(\rho\) (over the ambient space) for representing a continuous permutation-invariant function. On the other hand, Amir et al. (2023) propose using a nonpolynomial element-encoder, that is, \(\phi\) in our notation, to construct an injective multiset function \(\Phi\). They arrive at a latent dimension of \(2N(D+1)+1\). However, their construction of \(\phi\) requires random selection of parameters and the injectivity only holds in the almost-sure sense. Therefore, it may not work for some parameters._
### Towards a Continuous Decoder
In Theorem 5, we prove how our notion of \(\ell\)-identifiable multisets admits a reduced latent dimension for the sum-decomposition representation of multiset functions. The state-of-the-art approaches that allow such reduced-dimensional representations rely on probabilistic arguments, that is, excluding multisets of measure zero from all valid multisets; see Remark 3. These approaches do not yet lead to a continuous sum-decomposition (in particular, a continuous decoder function \(\rho\)). In what follows, we use the \(\ell\)-identifiable multisets, focus on allowing the representation on a _dense subset_ of multisets as Proposition 3 and Lemma 1 below suggest, and ultimately find a continuous sum-decomposition as in Theorem 6. Proofs of all these results can be found in Appendices G to I.
**Proposition 3**.: _Let \(\mathbb{X}_{\mathbb{Q}^{D},N}\) be the set of all multisets of \(N\) vectors from \(\mathbb{Q}^{D}\) where \(\mathbb{Q}\) denotes the set of rational numbers. Then, \(\mathbb{X}_{\mathbb{Q}^{D},N}\) is an \(l\)-identifiable subset of \(\mathbb{X}_{\mathbb{R}^{D},N}\)._
**Lemma 1**.: _Let \(\mathbb{D}\subseteq\mathbb{R}^{D}\) be a compact set with nonempty interior, and let \(Q(\mathbb{D})=\mathbb{D}\cap\mathbb{Q}^{D}\) be the set of all vectors with rational elements in \(\mathbb{D}\). Then, \(\mathbb{X}_{Q(\mathbb{D}),N}\) is a dense subset of \(\mathbb{X}_{\mathbb{D},N}\). Similarly, \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) is a dense subset of \(\Phi(\mathbb{X}_{\mathbb{D},N})\), where \(\Phi\) is the multiset encoder in Theorem 5._
**Remark 4**.: _To give an example of the utility of Theorem 5, consider circuit design applications where we want to learn a variety of electronic design tasks, for example, routed wire length prediction (Xie et al., 2021), circuit partitioning (Lu et al., 2020), logic synthesis (Zhu et al., 2020), and placement optimization (Li et al., 2020). We can represent a circuit as a geometric graph whose nodes are placed at integer-valued coordinates and carry multidimensional features, that is, the properties of each circuit element. As a result of Proposition 3, we can uniquely identify each node with a continuous identifier. Since they are an important
class of \(\ell\)-identifiable multisets, in Corollary 1, we specialize Theorem 5 to rational-valued multisets._
**Corollary 1**.: _Let \(f:\mathbb{X}_{\mathbb{R}^{D},N}\to\mathrm{codom}(f)\) be a multiset function. Then, there is a continuous function \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subset\mathbb{C}^{D\times N}\) such that_
\[\forall X\in\mathbb{X}_{\mathbb{Q}^{D},N}:\ f(X)=\rho\big{(}\sum_{x\in X}\phi( x)\big{)}=\rho\circ\Phi(X),\]
_and \(\rho:\Phi(\mathbb{X}_{\mathbb{Q}^{D},N})\to\mathrm{codom}(f)\)._
Corollary 1 states that the sum-decomposable model is valid -- via a latent dimension of \(2DN\) -- on a dense subset of multisets in \(\mathbb{X}_{\mathbb{R}^{D},N}\); see Lemma 1. The main drawbacks of this representation are as follows: (1) the measure of the valid multisets \(\mathbb{X}_{\mathbb{Q}^{D},N}\) is zero, and (2) there is no guarantee on the existence of a continuous extension of \(\rho\) to \(\mathbb{C}^{D\times N}\). It is important to note that we choose to focus on \(\mathbb{X}_{\mathbb{Q}^{D},N}\) despite the fact that it has measure zero. We argue that one should not focus on the measure of the valid multisets \(\mathbb{X}_{\mathbb{Q}^{D},N}\), but rather take advantage of the fact that valid multisets form a dense subset of all multisets, that is, \(\mathbb{X}_{\mathbb{R}^{D},N}\). In Theorem 6, we leverage this fact and resolve both aforementioned issues by focusing on the sum-decomposable representation of _continuous_ multiset functions.
**Theorem 6**.: _Consider a compact set \(\mathbb{D}\subset\mathbb{R}^{D}\) with nonempty interior. Let \(f:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codom}(f)\) be a continuous multiset function and \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codom}(\Phi)\) be the function in Theorem 5. Then, there exists a continuous function \(\rho:\mathbb{C}^{D\times N}\to\mathrm{codom}(\rho)\subseteq f(\mathbb{X}_{ \mathbb{D},N})\) such that_
\[\forall X\in\mathbb{X}_{\mathbb{D},N}:f(X)=\rho\circ\Phi(X).\]
The major contribution of Theorem 6 is the continuity of \(\rho\) over the whole latent space. The detailed proof of this key theorem is in Appendix I. At a high level, we begin with the result in Corollary 1. There, we claim that there exists a decoding function \(\rho:\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\to\mathrm{codom}(\rho)\) such that the stated decomposition remains valid on multisets of rational-valued vectors in \(\mathbb{D}\). This result does not guarantee the continuity of \(\rho\) in \(\mathbb{C}^{D\times N}\). However, we leverage the facts that (1) \(f\) is a continuous multiset function and (2) \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) is a _dense_ (noncompact) subset of \(\Phi(\mathbb{X}_{\mathbb{D},N})\), and prove that \(\rho\) has a _continuous extension_ to \(\Phi(\mathbb{X}_{\mathbb{D},N})\) -- a compact subset of \(\mathbb{C}^{D\times N}\) -- and therefore has a continuous extension to \(\mathbb{C}^{D\times N}\). The continuity guarantee of the decoder function \(\rho\) is the major contribution of Theorem 6 over the existing results in (Dym and Gortler, 2022) and (Fereydounian et al., 2022).
## 5 Permutation-Invariant Tensor Functions
Data with an underlying hypergraph structure -- that is, nodes connected with weighted (hyper)edges -- are ubiquitous in many applications (Chen et al., 2019; Ma et al., 2018; Wang et al., 2019; Yang et al., 2019). Inspired by such data, we study functions defined on _tensors_ and adopt graph-theoretic notions to describe the relevant concepts. The tensor setting is also used for higher-order graph neural networks called IGNs (invariant graph networks) (Maron et al., 2018).
**Definition 3**.: _Let \(N,K\in\mathbb{N}\). We denote \(\mathbb{T}_{N,K}\) as the set of \(K\)-th order \(D\)-dimensional tensors on \(N\) entities, that is, \(\mathbb{T}_{N,K}=\mathbb{R}^{N^{K}\times D}\)._
We can use tensors to represent (1) node features, (2) graph adjacency matrix (second order tensor), and (3) hypergraph hyperedges with multidimensional features. In Definition 4, we introduce a tensor notation for permuting node entities.
**Definition 4**.: _Let \(N\in\mathbb{N}\), \(\Pi(N)\) be the set of permutations over \([N]\), and \(\pi\in\Pi(N)\). Then, we let_
\[T,T^{\prime}\in\mathbb{T}_{N,K}:T^{\prime}=\pi(T)\Longleftrightarrow T^{ \prime}_{n_{1}\ldots n_{K}}=T_{\pi(n_{1})\ldots\pi(n_{K})}\quad\text{for all}\ \ n_{1},\ldots,n_{K}\in[N].\]
_Tensors \(T,T^{\prime}\in\mathbb{T}_{N,K}\) are congruent, denoted by \(T\equiv T^{\prime}\), if there is \(\pi\in\Pi(N)\) such that \(T^{\prime}=\pi(T)\)._
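The action in Definition 4 permutes every entity axis of the tensor simultaneously. The NumPy sketch below applies a permutation to a third-order tensor with feature dimension \(D\) and verifies that a simple permutation-invariant readout (summing over all entity indices, an illustrative choice) is unchanged.

```python
import numpy as np

def apply_perm(T, perm):
    """T has shape (N, ..., N, D) with K entity axes; permute every entity axis by perm.

    With Definition 4's convention, (pi(T))_{n_1...n_K} = T_{pi(n_1)...pi(n_K)}.
    """
    K = T.ndim - 1
    for axis in range(K):
        T = np.take(T, perm, axis=axis)
    return T

rng = np.random.default_rng(2)
N, K, D = 4, 3, 2
T = rng.normal(size=(N,) * K + (D,))
perm = rng.permutation(N)

T_perm = apply_perm(T, perm)
# A trivially permutation-invariant function: sum the tensor over all entity axes.
f = lambda T: T.sum(axis=tuple(range(T.ndim - 1)))
assert np.allclose(f(T), f(T_perm))
```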
Akin to multiset functions, tensor functions must exhibit the same permutation invariance property. Adopting the notation in Definition 4, a permutation-invariant tensor function \(f:\mathbb{T}_{N,K}\to\mathrm{codom}(f)\) is such that \(f(T)=f(\pi(T))\) for all \(T\in\mathbb{T}_{N,K}\) and permutation operators \(\pi:[N]\to[N]\). This is a specific form of \(G\)-invariant functions (Maron et al., 2019b) where \(G\) is the permutation group. It is also an extension of permutation-compatible functions formalized for 2-tensors, that is, input graphs with node features and an adjacency second-order tensor (matrix) (Fereydounian et al., 2022).
In what follows, we propose a sum-decomposable model to universally represent permutation-invariant functions of tensors of arbitrary order. Our algebraic approach relies on identifying each node with a unique label. This is applicable when the tensors are accompanied by distinct node features or hypergraph structures that admit such a unique labelling. Given _any_ identifier, in Definition 5, we formalize the set of all tensors that admit the required unique labelling.
**Definition 5**.: _Let \(l:\mathbb{T}_{N,K}\to\mathbb{R}^{N\times M}\) be an identifier -- with \(M\)-dimensional labels -- such that_
\[\forall T\in\mathbb{T}_{N,K},\ \pi\in\Pi(N):\ l(\pi(T))=\pi\big{(}l(T)\big{)}.\]
_We denote the set of tensors that are identifiable via \(l\), that is, \(l\)-identifiable, as \(\mathbb{T}_{N,K}^{l}\subset\mathbb{T}_{N,K}\) such that \(\forall T\in\mathbb{T}_{N,K}^{l}\) the multiset \(\{\{e_{n}^{\top}l(T)\in\mathbb{R}^{M}:n\in[N]\}\}\) consists of distinct elements where \(e_{n}\) is the \(n\)-th standard basis of \(\mathbb{R}^{N}\) and \(n\in[N]\)._
In the first step of our approach, given a tensor and an identifier, we construct a _set_ that remains invariant with respect to permutations of the node entities.
**Definition 6**.: _Let \(K,N\in\mathbb{N}\). For any \(l\)-identifiable tensor \(T\in\mathbb{T}_{N,K}^{l}\), let \(\alpha_{n_{1}\ldots n_{K}}^{K}(T)=T_{n_{1}\ldots n_{K}}\in\mathbb{R}^{D}\) for all \(n_{1},\ldots,n_{K}\in[N]\). Then, we define recursively that:_
\[\text{for }k=K\text{ down to }1\text{ and }n_{1},\ldots,n_{k-1}\in[N]:\ \alpha_{n_{1}\ldots n_{k-1}}^{k-1}(T)=\{\big{(}e_{n_{k}}^{\top}l(T),\alpha_{n_{1}\ldots n_{k}}^{k}(T)\big{)}:n_{k}\in[N]\}.\]
_We define the set \(S(T)=\{\big{(}e_{n_{1}}^{\top}l(T),\alpha_{n_{1}}^{1}(T)\big{)}:n_{1}\in[N]\}\)._
**Proposition 4**.: _Let \(K,N\in\mathbb{N}\) and \(T,T^{\prime}\in\mathbb{T}_{N,K}^{l}\). Then, we have \(S(T)=S(T^{\prime})\) if and only if \(T^{\prime}=\pi(T)\) for a permutation \(\pi\in\Pi(N)\), that is, \(T\equiv T^{\prime}\)._
Proposition 4 establishes a bijection between identifiable tensors \(\mathbb{T}^{l}_{N,K}\) -- up to a permutation factor -- and sets in \(S(\mathbb{T}^{l}_{N,K})\). In Theorem 7, we give an algebraic characterization of (nonlinear) permutation-invariant tensor functions with distinct node features, that is, the sum-decomposable model is valid _only_ on identifiable tensors.
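For a second-order tensor, the nested construction of Definition 6 can be carried out with hashable Python sets. The sketch below uses row sums as a simple equivariant identifier, which is only a valid identifier when those sums happen to be distinct (hence the integer-valued construction), and then checks the "if" direction of Proposition 4, namely that \(S(T)=S(\pi(T))\) for congruent tensors.

```python
import numpy as np

def identifier(T):
    """l(T) in R^N: sum of each entity's slice. Equivariant: l(pi(T))_n = l(T)_{pi(n)}."""
    return T.sum(axis=(1, 2))

def S(T):
    """Definition 6 for K = 2: a nested set that forgets the ordering of entities."""
    labels = identifier(T)
    N = T.shape[0]
    inner = [
        frozenset((labels[n2], tuple(T[n1, n2])) for n2 in range(N))
        for n1 in range(N)
    ]
    return frozenset((labels[n1], inner[n1]) for n1 in range(N))

rng = np.random.default_rng(3)
N, D = 4, 2
T = rng.integers(-5, 6, size=(N, N, D)).astype(float)
T[:, 0, 0] += 100.0 * np.arange(N)       # force distinct row sums (l-identifiability)
perm = rng.permutation(N)
T_perm = T[np.ix_(perm, perm)]           # (pi(T))_{n1 n2} = T_{pi(n1) pi(n2)}

assert S(T) == S(T_perm)                  # Proposition 4, "if" direction
```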
**Theorem 7**.: _Let \(K,N\in\mathbb{N}\). Let \(f:\mathbb{T}_{N,K}\to\mathrm{codom}(f)\) be a permutation-invariant tensor function. Then we have_
\[\forall T\in\mathbb{T}^{l}_{N,K}:f(T)=\rho\Big{(}\sum_{n_{1}\in[N]}\phi_{1}(e^ {\top}_{n_{1}}l(T),\beta^{1}_{n_{1}}(T))\Big{)}\]
_where \(l:\mathbb{T}_{N,K}\to\mathrm{codom}(l)\) is an identifier function, \(\beta^{K}_{n_{1}\ldots n_{K}}(T)=T_{n_{1}\ldots n_{K}}\in\mathbb{R}^{D}\) for all \(n_{1},\ldots,n_{K}\in[N]\), and_
\[\forall k\in[K],n_{1},\ldots,n_{k-1}\in[N]:\ \beta^{k-1}_{n_{1}\ldots n_{k-1}}(T )=\sum_{n_{k}\in[N]}\phi_{k}(e^{\top}_{n_{k}}l(T),\beta^{k}_{n_{1}\ldots n_{k} }(T)),\]
_where \(\phi_{k}\) is continuous over its compact domain and its codomain resides in \(\mathbb{R}^{D_{k}}\) (\(k\in[K]\)), and_
1. \(D_{k}=2(M+D_{k+1})N\) _if_ \(\mathrm{codom}(l)\subset\mathbb{Q}^{N\times M}\)__
2. \(D_{k}=\binom{N+D_{k+1}}{N}-1\) _if_ \(\mathrm{codom}(l)\subset\mathbb{R}^{N\times M}\)__
_for all \(k\in[K-1]\) and \(D_{K}=D\). The function \(\rho\) is defined on \(\mathbb{D}\subset\mathbb{R}^{D_{1}}\) where_
\[\mathbb{D}=\{\sum_{n_{1}\in[N]}\phi_{1}(e^{\top}_{n_{1}}l(T),\beta^{1}_{n_{1} }(T)):T\in\mathbb{T}^{l}_{N,K}\},\]
_and it is not guaranteed to have a continuous extension to \(\mathbb{R}^{D_{1}}\)._
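The nested model of Theorem 7 can be mimicked numerically for \(K=2\): pick arbitrary continuous maps \(\phi_{1},\phi_{2}\) (random feature maps below), aggregate over the inner index to obtain \(\beta^{1}\), then aggregate over the outer index, and check that the resulting latent code is invariant under a joint permutation of the entities. The specific maps and identifier are illustrative stand-ins, not the constructions used in the proof.

```python
import numpy as np

rng = np.random.default_rng(4)
N, D, H = 4, 2, 8                              # entities, feature dim, latent width
A2 = rng.normal(size=(H, 1 + D))               # phi_2 acts on (label, T[n1, n2])
A1 = rng.normal(size=(H, 1 + H))               # phi_1 acts on (label, beta^1_{n1})

phi2 = lambda label, feat: np.tanh(A2 @ np.concatenate(([label], feat)))
phi1 = lambda label, beta: np.tanh(A1 @ np.concatenate(([label], beta)))
identifier = lambda T: T.sum(axis=(1, 2))      # an equivariant node label

def encode(T):
    """Nested encoder of Theorem 7 for K = 2: sum over n2, then sum over n1."""
    labels = identifier(T)
    N = T.shape[0]
    beta1 = [sum(phi2(labels[n2], T[n1, n2]) for n2 in range(N)) for n1 in range(N)]
    return sum(phi1(labels[n1], beta1[n1]) for n1 in range(N))

T = rng.normal(size=(N, N, D))
perm = rng.permutation(N)
T_perm = T[np.ix_(perm, perm)]
assert np.allclose(encode(T), encode(T_perm))  # any rho applied on top is invariant
```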
## 6 Conclusion
In this work, we provide several contributions regarding the universal representation theory of multiset functions and permutation-invariant tensor functions. We show that there exists a universal sum-decomposition model for multivariate multiset functions and provide the best available bound on the dimension of encoded multiset features. Our extensive analyses rely on the novel notion of \(\ell\)-identifiable multisets -- which allows us to uniquely label distinct elements of multisets. Our proposed decomposable model for permutation-invariant tensor functions generalizes the existing models for linear permutation invariant tensor functions used as the layers of IGNs. It is important to note that our universal representation (via sum-decomposables) is stronger than the concept of universal approximation. All these results lead to universal approximation results of multiset (or tensor) functions by sum-decomposables -- which suggest natural architectures for neural networks, similar to DeepSets.
## References
* Amir et al. (2023) Tal Amir, Steven J Gortler, Ilai Avni, Ravina Ravina, and Nadav Dym. Neural injective functions for multisets, measures and graphs via a finite witness theorem. _arXiv preprint arXiv:2306.06529_, 2023.
* Attenborough (2003) Mary P Attenborough. _Mathematics for electrical engineering and computing_. Elsevier, 2003.
* Azizian and Lelarge (2020) Waiss Azizian and Marc Lelarge. Expressive power of invariant and equivariant graph neural networks. _arXiv preprint arXiv:2006.15646_, 2020.
* Beaini et al. (2021) Dominique Beaini, Saro Passaro, Vincent Letourneau, Will Hamilton, Gabriele Corso, and Pietro Lio. Directional graph networks. In _International Conference on Machine Learning_, pages 748-758. PMLR, 2021.
* Bevilacqua et al. (2021) Beatrice Bevilacqua, Fabrizio Frasca, Derek Lim, Balasubramaniam Srinivasan, Chen Cai, Gopinath Balamurugan, Michael M Bronstein, and Haggai Maron. Equivariant subgraph aggregation networks. _arXiv preprint arXiv:2110.02910_, 2021.
* Bro et al. (2008) Rasmus Bro, Evrim Acar, and Tamara G Kolda. Resolving the sign ambiguity in the singular value decomposition. _Journal of Chemometrics: A Journal of the Chemometrics Society_, 22(2):135-140, 2008.
* Chen et al. (2019) Yu Chen, Lingfei Wu, and Mohammed J Zaki. Reinforcement learning based graph-to-sequence model for natural question generation. _arXiv preprint arXiv:1908.04942_, 2019.
* Curgus and Mascioni (2006) Branko Curgus and Vania Mascioni. Roots and polynomials as homeomorphic spaces. _Expositiones Mathematicae_, 24(1):81-95, 2006.
* Cvetkovic et al. (1997) Dragos Cvetkovic, Dragos M Cvetkovic, Peter Rowlinson, and Slobodan Simic. _Eigenspaces of graphs_. Number 66. Cambridge University Press, 1997.
* Deimling (2010) Klaus Deimling. _Nonlinear functional analysis_. Courier Corporation, 2010.
* Dwivedi and Bresson (2020) Vijay Prakash Dwivedi and Xavier Bresson. A generalization of transformer networks to graphs. _arXiv preprint arXiv:2012.09699_, 2020.
* Dwivedi et al. (2020) Vijay Prakash Dwivedi, Chaitanya K Joshi, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Benchmarking graph neural networks. 2020.
* Dwivedi et al. (2021) Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, and Xavier Bresson. Graph neural networks with learnable structural and positional representations. _arXiv preprint arXiv:2110.07875_, 2021.
* Dym and Gortler (2022) Nadav Dym and Steven J Gortler. Low dimensional invariant embeddings for universal geometric learning. _arXiv preprint arXiv:2205.02956_, 2022.
* Eastment and Krzanowski (1982) HT Eastment and WJ Krzanowski. Cross-validatory choice of the number of components from a principal component analysis. _Technometrics_, 24(1):73-77, 1982.
* Engelking (1989) Ryszard Engelking. General topology. _Sigma series in pure mathematics_, 6, 1989.
* Eslami et al. (2016) SM Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Geoffrey E Hinton, et al. Attend, infer, repeat: Fast scene understanding with generative models. _Advances in neural information processing systems_, 29, 2016.
* Feller (1967) William Feller. _An introduction to probability theory and its applications_, 3rd edition. Wiley, 1967.
* Fereydounian et al. (2022) Mohammad Fereydounian, Hamed Hassani, Javid Dadashkarimi, and Amin Karbasi. The exact class of graph functions generated by graph neural networks. _arXiv preprint arXiv:2202.08833_, 2022.
* Garg et al. (2020) Vikas Garg, Stefanie Jegelka, and Tommi Jaakkola. Generalization and representational limits of graph neural networks. In _International Conference on Machine Learning_, pages 3419-3430. PMLR, 2020.
* Hordan et al. (2023) Snir Hordan, Tal Amir, Steven J Gortler, and Nadav Dym. Complete neural networks for euclidean graphs. _arXiv preprint arXiv:2301.13821_, 2023.
* Joshi et al. (2023) Chaitanya K Joshi, Cristian Bodnar, Simon V Mathis, Taco Cohen, and Pietro Lio. On the expressive power of geometric graph neural networks. _arXiv preprint arXiv:2301.09308_, 2023.
* Keriven and Peyre (2019) Nicolas Keriven and Gabriel Peyre. Universal invariant and equivariant graph neural networks. _Advances in Neural Information Processing Systems_, 32, 2019.
* Kim et al. (2022) Jinwoo Kim, Dat Nguyen, Seonwoo Min, Sungjun Cho, Moontae Lee, Honglak Lee, and Seunghoon Hong. Pure transformers are powerful graph learners. _Advances in Neural Information Processing Systems_, 35:14582-14595, 2022.
* Kosiorek et al. (2018) Adam Kosiorek, Hyunjik Kim, Yee Whye Teh, and Ingmar Posner. Sequential attend, infer, repeat: Generative modelling of moving objects. _Advances in Neural Information Processing Systems_, 31, 2018.
* Kreuzer et al. (2021) Devin Kreuzer, Dominique Beaini, Will Hamilton, Vincent Letourneau, and Prudencio Tossou. Rethinking graph transformers with spectral attention. _Advances in Neural Information Processing Systems_, 34:21618-21629, 2021.
* Lee et al. (2019) Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In _International conference on machine learning_, pages 3744-3753. PMLR, 2019.
* Li et al. (2020) Yaguang Li, Yishuang Lin, Meghna Madhusudan, Arvind Sharma, Wenbin Xu, Sachin S Sapatnekar, Ramesh Harjani, and Jiang Hu. A customized graph neural network model for guiding analog ic placement. In _Proceedings of the 39th International Conference on Computer-Aided Design_, pages 1-9, 2020.
* Lim et al. (2022) Derek Lim, Joshua Robinson, Lingxiao Zhao, Tess Smidt, Suvrit Sra, Haggai Maron, and Stefanie Jegelka. Sign and basis invariant networks for spectral graph representation learning. _arXiv preprint arXiv:2202.13013_, 2022.
* Lu et al. (2020) Yi-Chen Lu, Sai Surya Kiran Pentapati, Lingjun Zhu, Kambiz Samadi, and Sung Kyu Lim. Tp-gnn: A graph neural network framework for tier partitioning in monolithic 3d ics. In _2020 57th ACM/IEEE Design Automation Conference (DAC)_, pages 1-6. IEEE, 2020.
* Ma et al. (2018) Tengfei Ma, Jie Chen, and Cao Xiao. Constrained generation of semantically valid graphs via regularizing variational autoencoders. _Advances in Neural Information Processing Systems_, 31, 2018.
* Maron et al. (2018) Haggai Maron, Heli Ben-Hamu, Nadav Shamir, and Yaron Lipman. Invariant and equivariant graph networks. _arXiv preprint arXiv:1812.09902_, 2018.
* Maron et al. (2019a) Haggai Maron, Heli Ben-Hamu, Hadar Serviansky, and Yaron Lipman. Provably powerful graph networks. _Advances in neural information processing systems_, 32, 2019a.
* Maron et al. (2019b) Haggai Maron, Ethan Fetaya, Nimrod Segol, and Yaron Lipman. On the universality of invariant networks. In _International conference on machine learning_, pages 4363-4371. PMLR, 2019b.
* Mialon et al. (2021) Gregoire Mialon, Dexiong Chen, Margot Selosse, and Julien Mairal. Graphit: Encoding graph structure in transformers. _arXiv preprint arXiv:2106.05667_, 2021.
* Morris et al. (2019) Christopher Morris, Martin Ritzert, Matthias Fey, William L Hamilton, Jan Eric Lenssen, Gaurav Rattan, and Martin Grohe. Weisfeiler and leman go neural: Higher-order graph neural networks. In _Proceedings of the AAAI conference on artificial intelligence_, volume 33, pages 4602-4609, 2019.
* Muandet et al. (2012) Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Scholkopf. Learning from distributions via support measure machines. _Advances in neural information processing systems_, 25, 2012.
* Muandet et al. (2013) Krikamol Muandet, David Balduzzi, and Bernhard Scholkopf. Domain generalization via invariant feature representation. In _International conference on machine learning_, pages 10-18. PMLR, 2013.
* Murphy et al. (2018) Ryan L Murphy, Balasubramaniam Srinivasan, Vinayak Rao, and Bruno Ribeiro. Janossy pooling: Learning deep permutation-invariant functions for variable-size inputs. _arXiv preprint arXiv:1811.01900_, 2018.
* Ntampaka et al. (2016) Michelle Ntampaka, Hy Trac, Dougal J Sutherland, Sebastian Fromenteau, Barnabas Poczos, and Jeff Schneider. Dynamical mass measurements of contaminated galaxy clusters using machine learning. _The Astrophysical Journal_, 831(2):135, 2016.
* Oliva et al. (2013) Junier Oliva, Barnabas Poczos, and Jeff Schneider. Distribution to distribution regression. In _International Conference on Machine Learning_, pages 1049-1057. PMLR, 2013.
* Ovsjanikov et al. (2008) Maks Ovsjanikov, Jian Sun, and Leonidas Guibas. Global intrinsic symmetries of shapes. In _Computer graphics forum_, volume 27, pages 1341-1348. Wiley Online Library, 2008.
* Poczos et al. (2013) Barnabas Poczos, Aarti Singh, Alessandro Rinaldo, and Larry Wasserman. Distribution-free distribution regression. In _Artificial Intelligence and Statistics_, pages 507-515. PMLR, 2013.
* Pozdnyakov and Ceriotti (2022) Sergey N Pozdnyakov and Michele Ceriotti. Incompleteness of graph neural networks for points clouds in three dimensions. _Machine Learning: Science and Technology_, 3(4):045020, 2022.
* Pugh and Pugh (2002) Charles Chapman Pugh and CC Pugh. _Real mathematical analysis_, volume 2011. Springer, 2002.
* Qi et al. (2017a) Charles R Qi, Hao Su, Kaichun Mo, and Leonidas J Guibas. Pointnet: Deep learning on point sets for 3d classification and segmentation. In _Proceedings of the IEEE conference on computer vision and pattern recognition_, pages 652-660, 2017a.
* Qi et al. (2017b) Charles Ruizhongtai Qi, Li Yi, Hao Su, and Leonidas J Guibas. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. _Advances in neural information processing systems_, 30, 2017b.
* Ravanbakhsh (2020) Siamak Ravanbakhsh. Universal equivariant multilayer perceptrons. In _International Conference on Machine Learning_, pages 7996-8006. PMLR, 2020.
* Ravanbakhsh et al. (2016a) Siamak Ravanbakhsh, Junier Oliva, Sebastian Fromenteau, Layne Price, Shirley Ho, Jeff Schneider, and Barnabas Poczos. Estimating cosmological parameters from the dark matter distribution. In _International Conference on Machine Learning_, pages 2407-2416. PMLR, 2016a.
* Ravanbakhsh et al. (2016b) Siamak Ravanbakhsh, Jeff Schneider, and Barnabas Poczos. Deep learning with sets and point clouds. _arXiv preprint arXiv:1611.04500_, 2016b.
* Rustamov et al. (2007) Raif M Rustamov et al. Laplace-beltrami eigenfunctions for deformation invariant shape representation. In _Symposium on geometry processing_, volume 257, pages 225-233, 2007.
* Rydh (2007) David Rydh. A minimal set of generators for the ring of multisymmetric functions. In _Annales de l'institut Fourier_, volume 57, pages 1741-1769, 2007.
* Segol and Lipman (2019) Nimrod Segol and Yaron Lipman. On universal equivariant set networks. _arXiv preprint arXiv:1910.02421_, 2019.
* Seroul (2012) Raymond Seroul. _Programming for mathematicians_. Springer Science & Business Media, 2012.
* Shawe-Taylor (1993) John Shawe-Taylor. Symmetries and discriminability in feedforward network architectures. _IEEE Transactions on Neural Networks_, 4(5):816-826, 1993.
* Stein and Shakarchi (2010) Elias M Stein and Rami Shakarchi. _Complex analysis_, volume 2. Princeton University Press, 2010.
* Sunehag et al. (2017) Peter Sunehag, Guy Lever, Audrunas Gruslys, Wojciech Marian Czarnecki, Vinicius Zambaldi, Max Jaderberg, Marc Lanctot, Nicolas Sonnerat, Joel Z Leibo, Karl Tuyls, et al. Value-decomposition networks for cooperative multi-agent learning. _arXiv preprint arXiv:1706.05296_, 2017.
* Sutherland [2009] Wilson A Sutherland. _Introduction to metric and topological spaces_. Oxford University Press, 2009.
* Szabo et al. [2016] Zoltan Szabo, Bharath K Sriperumbudur, Barnabas Poczos, and Arthur Gretton. Learning theory for distribution regression. _The Journal of Machine Learning Research_, 17(1):5272-5311, 2016.
* Luxburg [2007] Ulrike Von Luxburg. A tutorial on spectral clustering. _Statistics and computing_, 17:395-416, 2007.
* Wagstaff et al. [2019] Edward Wagstaff, Fabian Fuchs, Martin Engelcke, Ingmar Posner, and Michael A Osborne. On the limitations of representing functions on sets. In _International Conference on Machine Learning_, pages 6487-6494. PMLR, 2019.
* Wagstaff et al. [2022] Edward Wagstaff, Fabian B Fuchs, Martin Engelcke, Michael A Osborne, and Ingmar Posner. Universal approximation of functions on sets. _Journal of Machine Learning Research_, 23(151):1-56, 2022.
* Wang et al. [2019] Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. Kgat: Knowledge graph attention network for recommendation. In _Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining_, pages 950-958, 2019.
* Xie et al. [2021] Zhiyao Xie, Rongjian Liang, Xiaoqing Xu, Jiang Hu, Yixiao Duan, and Yiran Chen. Net2: A graph attention network method customized for pre-placement net length estimation. In _Proceedings of the 26th Asia and South Pacific Design Automation Conference_, pages 671-677, 2021.
* Xu et al. [2018] Keyulu Xu, Weihua Hu, Jure Leskovec, and Stefanie Jegelka. How powerful are graph neural networks? _arXiv preprint arXiv:1810.00826_, 2018.
* Yang et al. [2019] Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. Auto-encoding scene graphs for image captioning. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, pages 10685-10694, 2019.
* Zaheer et al. [2017] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabas Poczos, Russ R Salakhutdinov, and Alexander J Smola. Deep sets. _Advances in neural information processing systems_, 30, 2017.
* Zhu et al. [2020] Keren Zhu, Mingjie Liu, Hao Chen, Zheng Zhao, and David Z Pan. Exploring logic optimizations with reinforcement learning and graph convolutional network. In _Proceedings of the 2020 ACM/IEEE Workshop on Machine Learning for CAD_, pages 145-150, 2020.
## Appendix A. Theorem 8 and Its Proof
**Theorem 8**.: _Any multivariate multiset function \(f:\mathbb{X}_{\mathbb{R}^{D},N}\to\mathrm{codom}(f)\) -- over multisets of \(N\) elements in \(\mathbb{R}^{D}\) -- is sum-decomposable via \(\mathbb{R}^{\binom{N+D}{D}-1}\), that is,_
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}:\ f(X)=\rho\circ\Phi(X),\ \text{where}\ \Phi(X)\stackrel{{\mathrm{def}}}{{=}}\sum_{x\in X}\phi(x),\]
_where \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subseteq\mathbb{R}^{\binom{N+D}{D}-1}\) is a continuous function and \(\rho:\mathrm{codom}(\Phi)\to\mathrm{codom}(f)\)._
Note that, compared to Theorem 3, the function \(f\) in Theorem 8 is not necessarily continuous, and the decoder function \(\rho\) is not necessarily continuous either.
### Proof
Let \(N,D\in\mathbb{N}\). We want to prove that for any multivariate multiset function \(f:\mathbb{X}_{\mathbb{R}^{D},N}\to\operatorname{codim}(f)\), there exists a sum-decomposition via \(\mathbb{R}^{\binom{N+D}{D}-1}\).
**Trivial case of \(\mathbf{N=1}\).** We define functions \(\phi\) and \(\rho\) as follows:
\[\forall x\in\mathbb{R}^{D}:\phi(x)=x,\ \text{and}\ \rho(x)=f(\{\{x\}\}),\]
where \(\mathrm{dom}(\phi)=\mathbb{R}^{D}\). Since \(\Phi(\{\{x\}\})=\phi(x)\), \(\mathrm{dom}(\rho)=\mathrm{codom}(\Phi)=\mathrm{codom}(\phi)=\mathbb{R}^{D}=\mathbb{R}^{\binom{1+D}{D}-1}\), \(\mathrm{codom}(\rho)=\mathrm{codom}(f)\), and \(f(\{\{x\}\})=\rho\circ\Phi(\{\{x\}\})\), we arrive at the theorem's statement for \(N=1\).
**Remark 5**.: _In our notation, depending on the context, \(x_{n}\) can mean either (1) the \(n\)-th coordinate (element) of vector \(x\) (say in \(\mathbb{R}^{D}\)) or (2) a vector indexed by \(n\), for example, \(x_{1},\ldots,x_{N}\in\mathbb{R}^{D}\). In the latter case, we do emphasize the domain of the vector a priori, that is, \(x_{n}\in\mathbb{R}^{D}\)._
**General case of \(\mathbf{N\geq 2}\).** We break down our approach into two steps:
1. We show that there exists a function \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subseteq\mathbb{R}^{\binom{N+D}{D}-1}\) such that \(\Phi(X)=\sum_{x\in X}\phi(x)\) is an injective multiset function, that is, \(\Phi^{-1}\) is well-defined on \(\mathrm{codom}(\Phi)\).
2. Let \(\rho=f\circ\Phi^{-1}\). This immediately proves \(f=\rho\circ\Phi(X)=\rho\big{(}\sum_{x\in X}\phi(x)\big{)}\).
This is an extension of the existing univariate result (that is, \(D=1\)); refer to Theorem 2 in (Zaheer et al., 2017). In the one-dimensional case, Zaheer et al. (2017) prove that \(\Phi\) is an invertible function by showing that, given \(\Phi(X)\), one can construct a univariate polynomial \(p(t;\Phi(X))\) whose roots are \(X\), that is, \(\Phi^{-1}\circ\Phi(X)=\operatorname{roots}\circ p(t;\Phi(X))=X\) where roots returns the multiset of roots of a polynomial equation. Moreover, the appropriate choice for the basis function \(\phi\) -- which makes this analysis tractable -- gives a bound for the latent dimension, that is, the dimension of the ambient vector space containing \(\mathrm{codom}(\Phi)\).
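The univariate argument sketched above can be reproduced numerically: encode a multiset of scalars by its power sums \(\Phi(X)=(\sum_{x\in X}x,\ldots,\sum_{x\in X}x^{N})\), convert the power sums to elementary symmetric polynomials with the Newton-Girard recursion (an equivalent form of the determinant formula used later in equation (7)), and read \(X\) back off as the roots of the resulting monic polynomial. This is an illustrative sketch of the inversion, not code accompanying the paper.

```python
import numpy as np

def power_sums(X, N):
    """Phi(X) for scalars: (E_1, ..., E_N) with E_n = sum_x x^n."""
    return np.array([np.sum(X ** n) for n in range(1, N + 1)])

def elementary_from_power_sums(E):
    """Newton-Girard: a_n = (1/n) * sum_{i=1}^{n} (-1)^(i-1) a_{n-i} E_i, with a_0 = 1."""
    N = len(E)
    a = np.zeros(N + 1)
    a[0] = 1.0
    for n in range(1, N + 1):
        a[n] = sum((-1) ** (i - 1) * a[n - i] * E[i - 1] for i in range(1, n + 1)) / n
    return a[1:]

X = np.array([0.3, -1.2, 0.3, 2.0])          # a multiset of scalars (repeats allowed)
N = len(X)
E = power_sums(X, N)                          # the sum-decomposition latent code
a = elementary_from_power_sums(E)
# prod_x (t - x) = t^N - a_1 t^{N-1} + a_2 t^{N-2} - ... + (-1)^N a_N
coeffs = np.concatenate(([1.0], [(-1) ** n * a[n - 1] for n in range(1, N + 1)]))
recovered = np.sort(np.roots(coeffs).real)
assert np.allclose(recovered, np.sort(X), atol=1e-6)
```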
In our approach, we arrive at the appropriate choice for \(\phi\) by constructing a **multivariate** polynomial whose parameterized roots are **related** to \(X\). In what follows, we (1) introduce the basis function \(\phi\), (2) construct an appropriate multivariate polynomial \(p(t;z,\Phi(X))\) -- parameterized by both \(t\in\mathbb{R}\) and \(z\in\mathbb{R}^{D}\) -- and (3) extract \(X\) from its **parameterized** roots. In step (3), we introduce novel techniques for analyzing parameterized multisets -- akin to computing directional derivatives for multivariate functions. We summarize these steps in Figure 1.
The following definition introduces several frequently used functions in this proof.
**Definition 7**.: _For any multiset of real scalars \(X=\{\{x_{n}\in\mathbb{R}:n\in[N]\}\}\) where \(N\geq 2\), we let_
\[\operatorname{gap}(X)=\min_{\begin{subarray}{c}n,n^{\prime}\in[N]\\ x_{n}\neq x_{n^{\prime}}\end{subarray}}|x_{n}-x_{n^{\prime}}|,\qquad\operatorname{diam}(X)=\max_{\begin{subarray}{c}n,n^{\prime}\in[N]\\ n\neq n^{\prime}\end{subarray}}|x_{n}-x_{n^{\prime}}|,\] \[\operatorname{unique}(X)=\{x_{n}:n\in[N]\},\qquad\operatorname{sort}(X)=\big{(}x_{\pi(n)}\big{)}_{n\in[N]}\in\mathbb{R}^{N},\]
_where \(\pi:[N]\to[N]\) is a permutation operator such that \(x_{\pi(1)}\geq x_{\pi(2)}\geq\cdots\geq x_{\pi(N)}\)._
**Remark 6**.: _If \(x_{n},x_{n^{\prime}}\in X\) where \(x_{n}=x_{n^{\prime}}\) for distinct \(n,n^{\prime}\in[N]\), then the permutation operator \(\pi\) in Definition 7 is not unique; but any such permutation \(\pi\) results in the same sorted vector \((x_{\pi(n)})_{n\in[N]}\). Hence, \(\operatorname{sort}(X)\) is well-defined for any multiset of real-valued scalars \(X\)._
**Remark 7**.: _Let \(X\) be a multiset of real scalars. Then, \(\operatorname{gap}(X)\) is well-defined only if the cardinality of \(\operatorname{unique}(X)\) is strictly greater than one, that is, \(|\operatorname{unique}(X)|>1\)._
We consider a class of multivariate polynomials parameterized with \(t\in\mathbb{R}\) and \(z\in\mathbb{R}^{D}\). In Proposition 5, we introduce a function \(\phi\) that enables us to construct each polynomial -- in the aforementioned class -- using only \(\Phi(X)=\sum_{x\in X}\phi(x)\). In other words, knowing \(t\), \(z\) and \(\Phi(X)\), we can represent the polynomial \(\prod_{x\in X}(t-z^{\top}x)\). This allows us to write the polynomial \(\prod_{x\in X}(t-z^{\top}x)\) as a function depending on the variables \(t,z\) and \(\Phi(X)\), which we call \(p(t;z,\Phi(X))\).
**Proposition 5**.: _Let \(N,D\in\mathbb{N}\) and \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subseteq\mathbb{R}^{\binom{N+D}{D}-1}\) be the following continuous function:_
\[\forall x\in\mathbb{R}^{D}:\ \phi(x)=\big{(}\prod_{d=1}^{D}x_{d}^{k_{d}} \big{)}_{k\in\mathcal{K}^{D}_{N}}\in\mathbb{R}^{\binom{N+D}{D}-1},\]
_where \(k=(k_{d})_{d\in[D]}\) is a \(D\)-tuple and \(\mathcal{K}^{D}_{N}=\{(k_{d})_{d\in[D]}:k_{1}+\ldots+k_{D}\in[N],k_{1},\ldots,k _{D}\geq 0\}\). Then, for all \(X\in\mathbb{X}_{\mathbb{R}^{D},N}\), \(\Phi(X)=\sum_{x\in X}\phi(x)\) suffices to construct the following multivariate polynomial:_
\[\forall t\in\mathbb{R},z\in\mathbb{R}^{D}:\ \prod_{x\in X}(t-z^{\top}x)=p\big{(}t;z, \Phi(X)\big{)}. \tag{3}\]
Figure 1: Proof sketch for the injectivity of \(\Phi\).
To show that \(\Phi\) is an invertible function, we want to argue that the multiset \(X\) can be uniquely recovered from the multivariate polynomial in equation (3), that is, \(p\big{(}t;z,\Phi(X)\big{)}\). Let us proceed with the following definitions.
**Definition 8**.: _For any \(z\in\mathbb{R}^{D}\), multiset \(X\) of at least two \(D\)-dimensional vectors, and multivariate polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in the equation (3), we formalize the following functions:_
* \(\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}=\{\{t:p\big{(}t;z,\Phi(X)\big{)} =0\}\}=\{\{z^{\top}x:x\in X\}\}\stackrel{{\mathrm{def}}}{{=}}z^{ \top}X\)__
* \(\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\mathrm{argmax}_{z\in\mathbb{R}^{D}}|\mathrm{unique}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}|\),
_where \(|\cdot|\) returns the cardinality of its input set._
**Definition 9**.: _Let \(X\) be a multiset of at least two \(D\)-dimensional vectors. If exists, the directional derivative of \(\mathrm{sort}\big{(}z^{\top}X\big{)}\) -- where \(z^{\top}X=\{\{z^{\top}x:x\in X\}\}\) -- at \(z\in\mathbb{R}^{D}\) in the direction of unit norm \(v\in\mathbb{R}^{D}\) is given as follows:_
\[\nabla_{v}\mathrm{sort}\big{(}z^{\top}X\big{)}=\lim_{\delta\to 0}\frac{1}{ \delta}\Big{(}\mathrm{sort}\big{(}(z+\delta v)^{\top}X\big{)}-\mathrm{sort} \big{(}z^{\top}X\big{)}\Big{)}. \tag{4}\]
In Proposition 6, we show how to retrieve \(X\) from \(\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\), that is, the parameterized multiset \(z^{\top}X\).
**Proposition 6**.: _For any \(z\in\mathbb{R}^{D}\), multiset \(X=\{\{x_{n}\in\mathbb{R}^{D}:n\in[N]\}\}\) where \(N\geq 2\), and the multivariate polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in the equation (3), we have_
\[\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\neq\emptyset.\]
_Moreover, for any \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), the directional derivative of \(\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\) is well-defined and we have:_
\[\forall d\in[D]:\nabla_{e_{d}}\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\big{|}_{z=z^{*}}=(e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[N]},\]
_where \(e_{d}\in\mathbb{R}^{D}\) is the \(d\)-th standard basis vector for \(\mathbb{R}^{D}\) (\(d\in[D]\)), and \(\pi_{z^{*}}:[N]\to[N]\) is a permutation operator that sorts the elements \({z^{*}}^{\top}X\) -- see Definition 7._
In summary, given \(\Phi(X)\in\mathbb{R}^{\binom{N+D}{D}-1}\), we can construct a multivariate polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) with parameterized roots \(z^{\top}X\); see Proposition 5. Then, we can pick a fixed vector \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\neq\emptyset\); see Definition 8 and Proposition 6. We then prove the following result:
\[\forall d\in[D]:\nabla_{e_{d}}\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\big{|}_{z=z^{*}}=(e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[N]},\]
where \(\nabla_{e_{d}}\) computes the directional derivative (see Definition 9) in the direction of \(e_{d}\) -- the \(d\)-th standard basis of \(\mathbb{R}^{D}\) -- for \(d\in[D]\), and \(x_{n}\in\mathbb{R}^{D}\) is an element of \(X\) indexed by \(n\). We retrieve \(X\) as follows:
\[\{\{(e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{d\in[D]}\in\mathbb{R}^{D}:n \in[N]\}\} =\{\{(e_{d}^{\top}x_{n})_{d\in[D]}\in\mathbb{R}^{D}:n\in[N]\}\}\] \[=X.\]
This result does not depend on the specific choices for the permutation operator \(\pi_{z^{*}}\) and \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\). Therefore, \(\Phi\) is an invertible multiset function, that is,
\[\Phi^{-1}\circ\Phi(X)=\{\{\begin{pmatrix}\Big{(}\nabla_{e_{1}}\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\big{|}_{z=z^{*}}\Big{)}_{n}\\ \vdots\\ \Big{(}\nabla_{e_{D}}\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\big{|}_{z=z^{*}}\Big{)}_{n}\end{pmatrix}\in\mathbb{R}^{D}:n\in[N]\}\}, \tag{5}\]
where \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), and the subscript \(n\) denotes the \(n\)-th element of the \(N\)-dimensional vectors. The function \(\Phi^{-1}\) is only well-defined on \(\mathrm{codom}(\Phi)\); see equation (5).
Now we let \(\rho:\mathrm{codom}(\Phi)\to\mathrm{codom}(f)\) where
\[\forall y\in\mathrm{codom}(\Phi)\subseteq\mathbb{R}^{\binom{N+D}{D}-1}:\rho(y )=f\circ\Phi^{-1}(y).\]
This proves the sum-decomposition representation claim of the theorem, that is, \(f=\rho\circ\Phi\). In Appendices A.2 and A.3 we provide the proofs of Propositions 5 and 6. In Appendix A.4, we provide two illustrative examples of computing \(\Phi^{-1}\).
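The recovery step above (pick a separating direction \(z^{*}\), sort the parameterized roots \(z^{\top}X\), and read coordinates off directional derivatives) can also be checked numerically. For brevity, the sketch below evaluates \(z^{\top}X\) directly from \(X\) rather than extracting it from the polynomial \(p(t;z,\Phi(X))\), and approximates the directional derivatives by finite differences; it illustrates only the recovery step, under those simplifications.

```python
import numpy as np

rng = np.random.default_rng(5)
N, D = 5, 3
X = rng.normal(size=(N, D))               # the multiset to be recovered (rows)

z_star = rng.normal(size=D)               # a random direction separates X almost surely
assert len(np.unique(np.round(X @ z_star, 12))) == N

def sorted_roots(z):
    """sort(z^T X): the sorted parameterized roots of p(t; z, Phi(X))."""
    return np.sort(X @ z)[::-1]

delta = 1e-6
recovered = np.zeros((N, D))
base = sorted_roots(z_star)
for d in range(D):
    e_d = np.zeros(D)
    e_d[d] = 1.0
    # Directional derivative of the sorted roots recovers the d-th coordinates,
    # listed in the order induced by sorting z_star^T X (Proposition 6).
    recovered[:, d] = (sorted_roots(z_star + delta * e_d) - base) / delta

# The rows of `recovered` equal the rows of X up to a common permutation.
order_rec = np.lexsort(recovered.T)
order_X = np.lexsort(X.T)
assert np.allclose(recovered[order_rec], X[order_X], atol=1e-4)
```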
### Proof of Proposition 5
We expand the expression in equation (3) as follows:
\[\forall t\in\mathbb{R},z\in\mathbb{R}^{D}:\prod_{x\in X}(t-z^{\top}x)=t^{N}+ \sum_{n\in[N]}(-1)^{n}a_{n}(z;X)t^{N-n} \tag{6}\]
where each coefficient \(a_{n}(z;X)\) is determined using the Newton-Girard formulae (Seroul, 2012), that is,
\[a_{n}(z;X)=\frac{1}{n}\det\begin{pmatrix}E_{1}(z;X)&1&0&\cdots&0\\ E_{2}(z;X)&E_{1}(z;X)&1&\cdots&0\\ \vdots&\vdots&\vdots&\cdots&\vdots\\ E_{n}(z;X)&E_{n-1}(z;X)&E_{n-2}(z;X)&\cdots&E_{1}(z;X)\end{pmatrix} \tag{7}\]
for all \(n\in[N]\) and \(z\in\mathbb{R}^{D}\), and \(E_{n}(z;X)=\sum_{x\in X}(z^{\top}x)^{n}\). Therefore, each coefficient \(a_{n}(z;X)\) is a polynomial function of \(\{E_{n}(z;X)\}_{n=1}^{N}\) -- moments of the parameterized multiset \(\{\{z^{\top}x:x\in X\}\}\stackrel{{\mathrm{def}}}{{=}}z^{\top}X\). Lemma 2 lets us relate each moment to the elementary symmetric polynomials.
**Lemma 2**.: _For any \(k_{1},\cdots,k_{D}\in\mathbb{N}\cup\{0\}\) and \(n\in\mathbb{N}\), let_
\[\binom{n}{k_{1},\ldots,k_{D}}^{\mathrm{ind}}=\begin{cases}\frac{n!}{k_{1}! \cdots k_{D}!}&\text{ if }k_{1}+\cdots+k_{D}=n\\ 0&\text{ otherwise.}\end{cases}\]
_Let \(x,z\in\mathbb{R}^{D}\) and \(n\in[N]\). Then, we have \((z^{\top}x)^{n}=\langle\psi(z,n),\phi(x)\rangle\) such that_
\[\psi(z,n)=\Big{(}\binom{n}{k_{1},\ldots,k_{D}}^{\mathrm{ind}}\prod_{d=1}^{D}z _{d}^{k_{d}}\Big{)}_{k\in\mathcal{K}_{N}^{D}},\quad\phi(x)=\big{(}\prod_{d=1}^ {D}x_{d}^{k_{d}}\big{)}_{k\in\mathcal{K}_{N}^{D}}\in\mathbb{R}^{\binom{N+D}{ D}-1}, \tag{8}\]
_where \(k=(k_{d})_{d\in[D]}\) and \(\mathcal{K}_{N}^{D}=\{(k_{d})_{d\in[D]}:k_{1}+\ldots+k_{D}\in[N],k_{1},\ldots,k _{D}\geq 0\}\)._
Proof.: Let \(x,z\in\mathbb{R}^{D}\) and \(n\in[N]\). Then, we have
\[(z^{\top}x)^{n}=(\sum_{d\in[D]}z_{d}x_{d})^{n}=\sum_{k_{1}+\ldots+k_{D}=n}{n \choose k_{1},\ldots,k_{D}}\prod_{d=1}^{D}z_{d}^{k_{d}}\prod_{d=1}^{D}x_{d}^{k_{ d}}=\langle\psi(z,n),\phi(x)\rangle\]
where \(\phi(x)\) and \(\psi(z,n)\) are given in equation (8). The dimension of \(\phi(x)\) -- the size of \(\mathcal{K}_{N}^{D}\) -- equals the number of solutions to the following problem:
\[k_{1},\ldots,k_{D}\in\mathbb{N}\cup\{0\}:1\leq\sum_{d=1}^{D}k_{d}\leq N. \tag{9}\]
We can transform the problem in equation (9) to the following form:
\[k_{1},\ldots,k_{D},k_{\circ}\in\mathbb{N}\cup\{0\},k_{\circ}\neq N:\sum_{d=1}^ {D}k_{d}+k_{\circ}=N. \tag{10}\]
In the occupancy problem, we ask: _how many ways can one distribute \(N\) indistinguishable objects into \(D+1\) distinguishable bins?_ The number of nonnegative solutions is \({N+D\choose D}\); refer to (Feller, 1967), section 5. However, if \(k_{\circ}=N\), then \(k_{1}=k_{2}=\cdots=k_{D}=0\), which is not allowed. If we exclude this case, we arrive at \({N+D\choose D}-1\) integer solutions for the problems in equations (9) and (10).
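Both the feature map of equation (8) and the counting argument above can be checked mechanically; a small sketch (ours, assuming NumPy) enumerates \(\mathcal{K}_{N}^{D}\), verifies that its size is \(\binom{N+D}{D}-1\), and tests the identity \((z^{\top}x)^{n}=\langle\psi(z,n),\phi(x)\rangle\) on random inputs:

```python
import numpy as np
from itertools import product
from math import comb, factorial

D, N = 2, 3
# K_N^D: all exponent vectors k with 1 <= k_1 + ... + k_D <= N
K = [k for k in product(range(N + 1), repeat=D) if 1 <= sum(k) <= N]
assert len(K) == comb(N + D, D) - 1            # the dimension of phi(x)

def phi(x):                                    # monomial features of equation (8)
    return np.array([np.prod(x ** np.array(k)) for k in K])

def psi(z, n):                                 # paired weights of equation (8)
    out = []
    for k in K:
        if sum(k) == n:
            coeff = factorial(n) / np.prod([factorial(kd) for kd in k])
            out.append(coeff * np.prod(z ** np.array(k)))
        else:
            out.append(0.0)
    return np.array(out)

rng = np.random.default_rng(0)
x, z = rng.normal(size=D), rng.normal(size=D)
for n in range(1, N + 1):                      # check (z^T x)^n = <psi(z,n), phi(x)>
    assert np.isclose((z @ x) ** n, psi(z, n) @ phi(x))
```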
Let us now prove the proposition's statement. Given \(\Phi(X)=\sum_{x\in X}\phi(x)\), we compute
\[\forall z\in\mathbb{R}^{D},n\in[N]:E_{n}(z;X)=\sum_{x\in X}\langle z,x\rangle^ {n}=\sum_{x\in X}\langle\psi(z,n),\phi(x)\rangle=\langle\psi(z,n),\Phi(X)\rangle,\]
that is, all parameterized moments required to construct \(\prod_{x\in X}(t-z^{\top}x)\); refer to Lemma 2 and equation (7). Therefore, we can uniquely identify the polynomial in equation (3) from \(\Phi(X)\) alone.
### Proof of Proposition 6
**Proposition 7**.: _For any \(z\in\mathbb{R}^{D}\), multiset \(X=\{\{x_{n}\in\mathbb{R}^{D}:n\in[N]\}\}\) where \(N\geq 2\), and the multivariate polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in the equation (3), we have_
\[\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\neq\emptyset.\]
_Moreover, for any \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), the directional derivative of \(\mathrm{sort}\circ\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}\) is well-defined and we have:_
\[\forall d\in[D]:\nabla_{e_{d}}\mathrm{sort}\circ\mathrm{root}\circ p\big{(}t;z,\Phi(X)\big{)}|_{z=z^{*}}=(e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[N]},\]
_where \(e_{d}\in\mathbb{R}^{D}\) is the \(d\)-th standard basis vector for \(\mathbb{R}^{D}\) (\(d\in[D]\)), and \(\pi_{z^{*}}:[N]\to[N]\) is a permutation operator that sorts the elements \({z^{*}}^{\top}X\) -- see Definition 7._
For any \(z\in\mathbb{R}^{D}\) and multiset \(X=\{\{x_{n}\in\mathbb{R}^{D}:n\in[N]\}\}\) where \(N\geq 2\), we have
\[\text{sort}\circ\text{root}\circ p(t;z,X)=(z^{\top}x_{\pi_{z}(n)})_{n\in[N]} \in\mathbb{R}^{N}\]
where \(\pi_{z}:[N]\to[N]\) is a permutation operator such that \(z^{\top}x_{\pi_{z}(1)}\geq z^{\top}x_{\pi_{z}(2)}\geq\cdots\geq z^{\top}x_{\pi_ {z}(N)}\); see Definition 7. Given such an ordered list, we want to retrieve the multiset \(X\). If the order of the elements of \(X\) after sorting remains unchanged for a perturbed parameter \(z+\delta e_{d}\) -- where \(e_{d}\in\mathbb{R}^{D}\) is the \(d\)-th standard basis for \(\mathbb{R}^{D}\) and small enough \(\delta\in\mathbb{R}\), that is, \(x_{\pi_{z+\delta e_{d}}(n)}=x_{\pi_{z}(n)}\) for all \(n\in[N]\) and \(d\in[D]\) -- then we have the following equality:
\[\frac{1}{\delta}\Big{(}\text{sort}((z+\delta e_{d})^{\top}X)-\text{sort}(z^{\top}X)\Big{)} =\frac{1}{\delta}\Big{(}(z+\delta e_{d})^{\top}x_{\pi_{z+\delta e_{d}}(n)}-z^{\top}x_{\pi_{z}(n)}\Big{)}_{n\in[N]}\] \[\overset{(a)}{=}\frac{1}{\delta}\Big{(}z^{\top}x_{\pi_{z}(n)}+\delta e_{d}^{\top}x_{\pi_{z}(n)}-z^{\top}x_{\pi_{z}(n)}\Big{)}_{n\in[N]}\] \[=\frac{1}{\delta}(\delta e_{d}^{\top}x_{\pi_{z}(n)})_{n\in[N]}=(e_{d}^{\top}x_{\pi_{z}(n)})_{n\in[N]},\]
where (a) is due to our assumption \(x_{\pi_{z+\delta e_{d}}(n)}=x_{\pi_{z}(n)}\) for all \(n\in[N]\) and \(d\in[D]\). If this property holds true, we can compute the following limit:
\[\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}\text{sort}\circ\text{root}\circ p(t;z+\delta e_{d},X)-\text{sort}\circ\text{root}\circ p(t;z,X)\Big{)} \tag{11}\] \[=\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}\text{sort}((z+\delta e_{d})^{\top}X)-\text{sort}(z^{\top}X)\Big{)},\]
to retrieve the \(d\)-th component of the elements in \(X\) up to a fixed but unknown permutation \(\pi_{z}\) that does not depend on \(e_{d}\) -- that is, \((e_{d}^{\top}x_{\pi_{z}(n)})_{n\in[N]}\) -- for all \(d\in[D]\). The limit in equation (11) is well-defined and returns \((e_{d}^{\top}x_{\pi_{z}(n)})_{n\in[N]}\) if there exists a vector \(z\in\mathbb{R}^{D}\) such that it admits a solution for the following feasibility problem:
\[\text{find }\delta^{*}>0\text{ such that }x_{\pi_{z+\delta e_{d}}(n)}=x_{\pi_{z}(n)}, \text{ for all }n\in[N],d\in[D],\delta\leq\delta^{*}.\]
As we shall see, any vector \(z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\) admits a solution to the aforementioned problem. To prove this result, we first need to derive the following property for the separators.
**Lemma 3**.: _For any \(z\in\mathbb{R}^{D}\), multiset \(X\) of at least two \(D\)-dimensional vectors, and the multivariate polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in the equation (3), we have \(\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\) is a nonempty subset of \(\mathbb{R}^{D}\) and for all \(z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), we have_
\[|\text{unique}\circ\text{roots}\circ p\big{(}t;z^{*},\Phi(X)\big{)}| =\max_{z\in\mathbb{R}^{D}}|\text{unique}\circ\text{roots}\circ p\big{(}t;z,\Phi(X)\big{)}|\] \[=|\text{unique}(X)|.\]
Proof.: If \(|\text{unique}(X)|=1\), then \(\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\mathbb{R}^{D}\) and the statement is trivial. Therefore, in what follows, we assume \(|\text{unique}(X)|>1\).
Let \(X\) be a multiset of (at least two distinct) \(D\)-dimensional vectors and \(\text{roots}\circ p\big{(}t;z,\Phi(X)\big{)}=z^{\top}X\), for all \(z\in\mathbb{R}^{D}\). If \(x,x^{\prime}\in X\) where \(x\neq x^{\prime}\), then we have \(z^{\top}x=z^{\top}x^{\prime}\) for \(z\in(x-x^{\prime})^{\perp}\subset\mathbb{R}^{D}\). Therefore, we have
\[\forall z\in\mathbb{R}^{D}:|\text{unique}(z^{\top}X)|\leq|\text{unique}(X)|.\]
We can prove the claim if we show \(|\text{unique}(z^{\top}X)|\) achieves its upper bound \(|\text{unique}(X)|\) over a subset of \(\mathbb{R}^{D}\) -- namely, separators \(\circ p\big{(}t;z,\Phi(X)\big{)}\).
Let \(P_{x,x^{\prime}}=(x-x^{\prime})^{\perp}\) for distinct \(x,x^{\prime}\in\text{unique}(X)\) -- that is, \(x\neq x^{\prime}\). By construction, \(P_{x,x^{\prime}}\) is a \((D-1)\)-dimensional subspace since \(x\neq x^{\prime}\). Since \(\text{unique}(X)\) contains only distinct elements, we have
\[z\in P_{x,x^{\prime}}\Longleftrightarrow\langle z,x-x^{\prime}\rangle=0,\]
for all distinct \(x,x^{\prime}\in\text{unique}(X)\). We now construct the following set:
\[P_{X}=\bigcup_{\begin{subarray}{c}x,x^{\prime}\in\text{unique}(X)\\ x\neq x^{\prime}\end{subarray}}P_{x,x^{\prime}},\]
which is a finite union of \((D-1)\)-dimensional subspaces. Therefore, \(P_{X}\) can not be equal to \(\mathbb{R}^{D}\), that is, \(\mathbb{R}^{D}\setminus P_{X}\) is a nonempty set. For any \(z^{*}\in\mathbb{R}^{D}\setminus P_{X}\), we have
\[\forall x,x^{\prime}\in X,x\neq x^{\prime}:\langle z^{*},x-x^{ \prime}\rangle ={z^{*}}^{\top}x-{z^{*}}^{\top}x^{\prime}\neq 0\] \[\forall x,x^{\prime}\in X,x=x^{\prime}:\langle z^{*},x-x^{ \prime}\rangle ={z^{*}}^{\top}x-{z^{*}}^{\top}x^{\prime}=0.\]
Hence, we have \(|\text{unique}({z^{*}}^{\top}X)|=|\text{unique}(X)|\) for all \(z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), where \(\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\mathbb{R}^{D}\setminus P_{X}\) -- a nonempty subset of \(\mathbb{R}^{D}\).
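In particular, a direction drawn at random is a separator with probability one, since it avoids the finitely many hyperplanes in \(P_{X}\); a quick numerical illustration (ours, assuming NumPy):

```python
import numpy as np

def is_separator(z, X, tol=1e-9):
    """Check that distinct rows of X remain distinct after projecting onto z."""
    proj = X @ z
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if np.linalg.norm(X[i] - X[j]) > tol and abs(proj[i] - proj[j]) <= tol:
                return False
    return True

rng = np.random.default_rng(0)
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])   # a multiset with one repeated element
print(is_separator(rng.normal(size=2), X))           # True with probability one
print(is_separator(np.array([1.0, 1.0]), X))         # False: (1,1) lies on P_{x,x'} for x=(1,0), x'=(0,1)
```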
As a result of Lemma 3, we have
\[\forall z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}:|\text{unique }({z^{*}}^{\top}X)|=|\text{unique}(X)|,\]
that is, repeated (or distinct) elements in \({z^{*}}^{\top}X\) correspond to identical (or distinct) elements in \(X\). We now want to show that the following directional derivative is well-defined:
\[\nabla_{v}\text{sort}\big{(}{z^{\top}}X\big{)}|_{z=z^{*}}=\lim_{\delta\to 0 }\frac{1}{\delta}\Big{(}\text{sort}\big{(}({z^{*}}+\delta v)^{\top}X\big{)}- \text{sort}\big{(}{z^{*}}^{\top}X\big{)}\Big{)}\]
for all \(z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\) and unit norm vector \(v\in\mathbb{R}^{D}\).
We break down the rest of the proof in two cases.
**Case 1: \(|\text{unique}(\mathbf{X})|>\mathbf{1}\).**
**Limiting behavior of \((z^{*}+\delta v)^{\top}X\) as \(\delta\to 0\).**
Let \(x,x^{\prime}\) be two distinct elements in \(\text{unique}(X)\), that is, \(\|x-x^{\prime}\|_{2}>0\). If \(z^{*}\in\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\), then we have \(|{z^{*}}^{\top}x-{z^{*}}^{\top}x^{\prime}|\geq\varepsilon>0\), where \(\varepsilon=\text{gap}({z^{*}}^{\top}X)>0\) is well-defined since \(|\text{unique}(X)|>1\); see Lemma 3. Let \(z^{*}_{v}(\delta)=z^{*}+\delta v\), where \(v\in\mathbb{R}^{D}\) is a unit norm vector. Then, we have
\[\forall\text{ distinct }x,x^{\prime}\in\text{unique}(X), \delta<\frac{\varepsilon}{2\text{diam}(X)}: \|z^{*}_{v}(\delta)^{\top}(x-x^{\prime})\|_{2}=\|(z^{*}+\delta v )^{\top}(x-x^{\prime})\|_{2}\] \[\overset{\text{(a)}}{\geq}\|{z^{*}}^{\top}(x-x^{\prime})\|_{2}- \delta\|v^{\top}(x-x^{\prime})\|_{2}\] \[\overset{\text{(b)}}{>}\varepsilon-\frac{\varepsilon}{2\text{ diam}(X)}\|x-x^{\prime}\|_{2}\] \[\overset{\text{(c)}}{\geq}\varepsilon-\frac{\varepsilon}{2}= \frac{\varepsilon}{2}>0,\]
where (a) is due to the reverse triangle inequality, (b) is due to \(|{z^{*}}^{\top}x-{z^{*}}^{\top}x^{\prime}|\geq\varepsilon\) and \(\delta<\frac{\varepsilon}{2\mathrm{diam}(X)}\), and (c) is due to \(\|x-x^{\prime}\|_{2}\leq\mathrm{diam}(X)\). Therefore, the vector \(z_{v}^{*}(\delta)\) separates distinct elements of \(X\) in \(z_{v}^{*}(\delta)^{\top}X\) -- for all unit norm vectors \(v\in\mathbb{R}^{D}\) and \(\delta<\frac{\varepsilon}{2\mathrm{diam}(X)}\). On the other hand, if \(x,x^{\prime}\) are two identical elements in \(X\), then we have \(z_{v}^{*}(\delta)^{\top}x=z_{v}^{*}(\delta)^{\top}x^{\prime}\) -- that is, the repeated elements in \(X\) correspond to the repeated elements in \(z_{v}^{*}(\delta)^{\top}X\). Therefore, we have \(|\mathrm{unique}(z_{v}^{*}(\delta)^{\top}X)|=|\mathrm{unique}(X)|\), or equivalently \(z_{v}^{*}(\delta)\in\mathrm{separators}\circ p(t;z,X)\).
**Directional derivative of \(\mathrm{sort}\circ\mathrm{root}\circ p(t;z,X)\) at \(z=z^{*}\).**
Let \(z^{*}\in\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}\). Then, we have
\[\mathrm{sort}\circ\mathrm{root}\circ p(t;z^{*},X)=\big{(}{z^{*}}^{\top}x_{ \pi_{z^{*}}(n)}\big{)}_{n\in[N]}\in\mathbb{R}^{N},\]
where \(\pi_{z^{*}}:[N]\to[N]\) is a permutation operator such that \({z^{*}}^{\top}x_{\pi_{z^{*}}(1)}\geq{z^{*}}^{\top}x_{\pi_{z^{*}}(2)}\geq\cdots \geq{z^{*}}^{\top}x_{\pi_{z^{*}}(N)}\). The repeated elements in \(X\) do not change the value of the output of the sort function as they correspond to the repeated elements in \({z^{*}}^{\top}X\). In other words, \(\pi_{z^{*}}\) is not necessarily unique; but our results do not depend on the specific choice of the permutation operator. The minimum distance between distinct elements of \({z^{*}}^{\top}X\) is \(\varepsilon=\mathrm{gap}({z^{*}}^{\top}X)>0\). If \(z_{v}^{*}(\delta)\) is the perturbed version of \(z^{*}\) in direction of \(v\) such that \(\|z^{*}-z_{v}^{*}(\delta)\|_{2}=\delta<\frac{\varepsilon}{2\mathrm{diam}(X)}\), then \(z_{v}^{*}(\delta)\in\mathrm{separators}\circ p(t;z,X)\) -- see our discussion in the previous paragraph.
**Claim 1**.: _The following equality holds true:_
\[\forall n\in[N]:x_{\pi_{z_{v}^{*}(\delta)}(n)}=x_{\pi_{z^{*}}(n)},\]
_for any unit norm vector \(v\in\mathbb{R}^{D}\) and \(\delta<\frac{\varepsilon}{2\mathrm{diam}(X)}\), and any permutation operator \(\pi_{z_{v}^{*}(\delta)}:[N]\to[N]\) such that \(z_{v}^{*}(\delta)^{\top}x_{\pi_{z_{v}^{*}(\delta)}(1)}\geq z_{v}^{*}(\delta) ^{\top}x_{\pi_{z_{v}^{*}(\delta)}(2)}\geq\cdots\geq z_{v}^{*}(\delta)^{\top} x_{\pi_{z_{v}^{*}(\delta)}(N)}\)._
Proof.: Consider \(i,j\in[N]\) where \(i>j\). If \(x_{\pi_{z^{*}}(j)}=x_{\pi_{z^{*}}(i)}\), then we have \(z_{v}^{*}(\delta)^{\top}x_{\pi_{z^{*}}(j)}\geq z_{v}^{*}(\delta)^{\top}x_{\pi_ {z^{*}}(i)}\) -- as both terms are equal to each other. On the other hand, if \(x_{\pi_{z^{*}}(j)}\neq x_{\pi_{z^{*}}(i)}\), then we have
\[z_{v}^{*}(\delta)^{\top}x_{\pi_{z^{*}}(j)}-z_{v}^{*}(\delta)^{ \top}x_{\pi_{z^{*}}(i)} =(z^{*}+\delta v)^{\top}(x_{\pi_{z^{*}}(j)}-x_{\pi_{z^{*}}(i)})\] \[\overset{\mathrm{(a)}}{\geq}{z^{*}}^{\top}(x_{\pi_{z^{*}}(j)}-x_{ \pi_{z^{*}}(i)})-\delta\|x_{\pi_{z^{*}}(j)}-x_{\pi_{z^{*}}(i)}\|_{2}\] \[\overset{\mathrm{(b)}}{\geq}\varepsilon-\delta\mathrm{diam}(X)> \varepsilon-\frac{\varepsilon}{2}=\frac{\varepsilon}{2}>0,\]
where (a) is due to Cauchy-Schwarz inequality, and (b) is due to \(\|x_{\pi_{z^{*}}(j)}-x_{\pi_{z^{*}}(i)}\|_{2}\leq\mathrm{diam}(X)\) and \(\delta<\frac{\varepsilon}{2\mathrm{diam}(X)}\). Therefore, the permutation \(\pi_{z^{*}}\) also sorts the elements of \(z_{v}^{*}(\delta)^{\top}X\), that is, \(x_{\pi_{z_{v}^{*}(\delta)}(n)}=x_{\pi_{z^{*}}(n)}\), for all \(n\in[N]\).
Finally, for all \(d\in[D]\), we have
\[\nabla_{e_{d}}\text{sort}\circ\text{root}\circ p\big{(}t;z,\Phi(X)\big{)}|_{z=z^{*}} \stackrel{{\text{(a)}}}{{=}}\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}\text{sort}((z^{*}+\delta e_{d})^{\top}X)-\text{sort}({z^{*}}^{\top}X)\Big{)}\] \[\stackrel{{\text{(b)}}}{{=}}\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}(z^{*}+\delta e_{d})^{\top}x_{\pi_{z^{*}+\delta e_{d}}(n)}-{z^{*}}^{\top}x_{\pi_{z^{*}}(n)}\Big{)}_{n\in[N]}\] \[\stackrel{{\text{(c)}}}{{=}}\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}(z^{*}+\delta e_{d})^{\top}x_{\pi_{z^{*}}(n)}-{z^{*}}^{\top}x_{\pi_{z^{*}}(n)}\Big{)}_{n\in[N]}\] \[=\lim_{\delta\to 0}\frac{1}{\delta}(\delta e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[N]}=(e_{d}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[N]}.\]
where (a) is due to the definition of the directional derivation, (b) follows from the definition of permutation operator in sort function, and (c) follows from Claim 1.
**Case 2: \(|\text{unique}(\mathbf{X})|=\mathbf{1}\).**
This directional derivative is well-defined if \(|\text{unique}(X)|=1\), that is,
\[\nabla_{v}\text{sort}\big{(}z^{\top}X\big{)}|_{z=z^{*}} =\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}\text{sort}\big{(}(z^{*}+\delta v)^{\top}X\big{)}-\text{sort}\big{(}{z^{*}}^{\top}X\big{)}\Big{)}\] \[=\lim_{\delta\to 0}\frac{1}{\delta}\Big{(}\big{(}(z^{*}+\delta v)^{\top}x_{\pi_{1}(n)}\big{)}_{n\in[N]}-\big{(}{z^{*}}^{\top}x_{\pi_{2}(n)}\big{)}_{n\in[N]}\Big{)}=(v^{\top}x)\mathbf{1},\]
where \(\pi_{1},\pi_{2}:[N]\to[N]\) are two permutation operators, \(X=\{\{x_{n}:n\in[N]\}\}\) with \(x_{n}=x\) for all \(n\in[N]\), and \(\mathbf{1}\in\mathbb{R}^{N}\) is the vector of all ones. In this case, \(\text{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\mathbb{R}^{D}\), and for all \(z^{*}\in\mathbb{R}^{D}\) we have
\[\forall d\in[D]:\nabla_{e_{d}}\text{sort}\circ\text{root}\circ p\big{(}t;z,\Phi(X)\big{)}|_{z=z^{*}}=(e_{d}^{\top}x)\mathbf{1}.\]
This readily proves the proposition's statement.
### Two Illustrative Examples
**Example 1** (Repeated Roots).: _Let \(N=D=2\), and \(\Phi(X)=\begin{pmatrix}2&0&2&0&0\end{pmatrix}^{\top}\in\mathbb{R}^{\binom{N+D }{D}-1}=\mathbb{R}^{5}\) for a multiset \(X\). The goal is to recover \(X\). In the proof of Proposition 5, Lemma 2 relates parameterized moments of the multivariate polynomial \(p(t;z,\Phi(X))\) to \(\Phi(X)\) using the following functions:_
\[\forall z=(z_{1},z_{2})^{\top}\in\mathbb{R}^{2}:\ \psi(z,1)=\begin{pmatrix}z_{1}&z_{2 }&0&0&0\end{pmatrix}^{\top},\ \psi(z,2)=\begin{pmatrix}0&0&z_{1}^{2}&2z_{1}z_{2}&z_{2}^{2} \end{pmatrix}^{\top}.\]
_Since \(E_{n}(z,X)=\langle\psi(z,n),\Phi(X)\rangle\) for \(n\in[2]\), we have \(E_{1}(z,X)=2z_{1}\) and \(E_{2}(z,X)=2z_{1}^{2}\). We can now use Girard's formula (see equation (7)):_
\[a_{1}(z;X)=E_{1}(z,X),\quad a_{2}(z;X)=\frac{1}{2}\text{det}\begin{pmatrix}E_{1}(z,X)&1\\ E_{2}(z,X)&E_{1}(z,X)\end{pmatrix}\]
_to compute the coefficients of the multivariate polynomial as \(a_{1}(z;X)=2z_{1}\) and \(a_{2}(z;X)=z_{1}^{2}\). We arrive at the following multivariate polynomial:_
\[p\big{(}t;z,\Phi(X))=t^{2}-a_{1}(z;X)t+a_{2}(z;X)=t^{2}-2z_{1}t+z_{1}^{2}=(t-z _{1})^{2},\]
_and \(\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}=z^{\top}X=\{\{z_{1},z_{1}\}\}\), for all \(z\in\mathbb{R}^{2}\). Since \(|\mathrm{unique}(z^{\top}X)|=1\) -- \(\forall z\in\mathbb{R}^{2}\) -- we have \(\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\mathbb{R}^{2}\). Let \(z^{*}=(1,1)^{\top}\in\mathbb{R}^{2}\) be a separator vector. Therefore, we have \(\mathrm{sort}(z^{\top}X)|_{z=z^{*}}=(1,1)^{\top}\). We also have_
\[\mathrm{sort}\big{(}(z+\delta e_{1})^{\top}X\big{)}|_{z=z^{*}}=(1+\delta,1+\delta)^{\top},\ \ \mathrm{sort}\big{(}(z+\delta e_{2})^{\top}X\big{)}|_{z=z^{*}}=(1,1)^{\top}.\]
_for all \(\delta>0\). These quantities let us compute the directional derivatives in Proposition 6 as follows:_
\[(e_{1}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[2]}=(1,1)^{\top},\ (e_{2}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[2]}=(0,0)^{\top},\]
_see equation (4). Finally, we arrive at \(X=\Phi^{-1}\circ\Phi(X)=\{\{(1,0),(1,0)\}\}\)._
**Example 2** (Unique Roots).: _Let \(N=D=2\), and \(\Phi(X)=\begin{pmatrix}-2&1&10&-7&5\end{pmatrix}^{\top}\in\mathbb{R}^{\binom{N+D}{D}-1}=\mathbb{R}^{5}\) for a multiset \(X\). The goal is to recover \(X\). Since \(E_{n}(z,X)=\langle\psi(z,n),\Phi(X)\rangle\) for \(n\in[2]\), we have \(E_{1}(z,X)=-2z_{1}+z_{2}\) and \(E_{2}(z,X)=10z_{1}^{2}-14z_{1}z_{2}+5z_{2}^{2}\). We can now use Girard's formula (see Lemma 2 and equation (7)):_
\[a_{1}(z;X)=E_{1}(z,X),\quad a_{2}(z;X)=\frac{1}{2}\mathrm{det}\begin{pmatrix}E_{1}(z,X)&1\\ E_{2}(z,X)&E_{1}(z,X)\end{pmatrix}\]
_to compute the coefficients of the multivariate polynomial as \(a_{1}(z;X)=-2z_{1}+z_{2}\) and \(a_{2}(z;X)=-3z_{1}^{2}-2z_{2}^{2}+5z_{1}z_{2}\). We then have the following multivariate polynomial:_
\[p\big{(}t;z,\Phi(X)\big{)}=t^{2}+(2z_{1}-z_{2})t-3z_{1}^{2}-2z_{2}^{2}+5z_{1}z_{2}.\]
_To compute the roots of \(p\big{(}t;z,\Phi(X)\big{)}\), we use the quadratic formula. The discriminant is given as follows:_
\[\Delta(z,X)=a_{1}(z;X)^{2}-4a_{2}(z;X)=16z_{1}^{2}+9z_{2}^{2}-24z_{1}z_{2}=(4 z_{1}-3z_{2})^{2}.\]
_The parametric roots are \(r_{1}(z,X)=\frac{1}{2}(a_{1}(z;X)+\sqrt{\Delta(z,X)})=z_{1}-z_{2}\) and \(r_{2}(z,X)=\frac{1}{2}(a_{1}(z;X)-\sqrt{\Delta(z,X)})=-3z_{1}+2z_{2}\), that is, \(\mathrm{roots}\circ p\big{(}t;z,\Phi(X)\big{)}=z^{\top}X=\{\{z_{1}-z_{2},-3z_{1}+2z_{2}\}\}\), for all \(z\in\mathbb{R}^{2}\). Since \(|\mathrm{unique}(z^{\top}X)|=2\) -- \(\forall z\in\mathbb{R}^{2}\setminus\{z\in\mathbb{R}^{2}:z_{1}-z_{2}=-3z_{1}+2z_{2}\}\) -- we have \(\mathrm{separators}\circ p\big{(}t;z,\Phi(X)\big{)}=\{z\in\mathbb{R}^{2}:z_{1}\neq\frac{3}{4}z_{2}\}\). Let \(z^{*}=(1,1)^{\top}\in\mathbb{R}^{2}\) be a separator vector. Therefore, we have \(\mathrm{sort}(z^{\top}X)|_{z=z^{*}}=(0,-1)^{\top}\). We also have_
\[\mathrm{sort}\big{(}(z+\delta e_{1})^{\top}X\big{)}|_{z=z^{*}}=(\delta,-1-3\delta)^{\top},\ \ \mathrm{sort}\big{(}(z+\delta e_{2})^{\top}X\big{)}|_{z=z^{*}}=(-\delta,-1+2\delta)^{\top}.\]
_for all \(0<\delta<\frac{1}{3}\). Now we can compute the directional derivatives in Proposition 6 as follows:_
\[(e_{1}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[2]}=(1,-3)^{\top},\ (e_{2}^{\top}x_{\pi_{z^{*}}(n)})_{n\in[2]}=(-1,2)^{\top},\]
_see equation (4). Finally, we arrive at \(X=\Phi^{-1}\circ\Phi(X)=\{\{(1,-1),(-3,2)\}\}\)._
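The recovery in Example 2 can be reproduced numerically. The following sketch (ours, assuming NumPy) builds \(p\big{(}t;z,\Phi(X)\big{)}\) from \(\Phi(X)\), extracts the sorted roots, and approximates the directional derivatives in equation (4) by finite differences:

```python
import numpy as np

# Phi(X) from Example 2, with coordinates ordered as the monomials
# x_1, x_2, x_1^2, x_1 x_2, x_2^2 (see Lemma 2 with D = N = 2).
Phi = np.array([-2.0, 1.0, 10.0, -7.0, 5.0])

def sorted_roots(z):
    """sort o roots o p(t; z, Phi(X)) for this two-element example."""
    E1 = z[0] * Phi[0] + z[1] * Phi[1]                      # <psi(z,1), Phi(X)>
    E2 = z[0] ** 2 * Phi[2] + 2 * z[0] * z[1] * Phi[3] + z[1] ** 2 * Phi[4]
    a1, a2 = E1, 0.5 * (E1 ** 2 - E2)                       # Newton-Girard
    r = np.roots([1.0, -a1, a2])                            # t^2 - a1 t + a2
    return np.sort(r.real)[::-1]                            # descending order

z_star, delta = np.array([1.0, 1.0]), 1e-6                  # a separator and a small step
cols = []
for d in range(2):                                          # finite-difference directional derivatives
    e = np.zeros(2); e[d] = 1.0
    cols.append((sorted_roots(z_star + delta * e) - sorted_roots(z_star)) / delta)
X_rec = np.stack(cols, axis=1)                              # recovered elements as rows
print(np.round(X_rec))                                      # [[ 1. -1.] [-3.  2.]]
```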
## Appendix B Proof of Theorem 3
The function \(f:\mathbb{X}_{\mathbb{D},N}\to\operatorname{codom}(f)\) is continuous over its domain, that is, \(\rho\circ\Phi\) is continuous over \(\mathbb{X}_{\mathbb{D},N}\), and we have \(\rho=f\circ\Phi^{-1}\); see Theorem 8 and its proof for the definition of \(\Phi\) and its inverse. Before proceeding with the proof, let us introduce the following set.
**Definition 10**.: _For any multiset function \(\Phi\), we let \(\Phi(\mathbb{X}_{\mathbb{D},N})\stackrel{{\mathrm{def}}}{{=}}\{\Phi(X):X\in\mathbb{X}_{\mathbb{D},N}\}\)._
With the notation in Definition 10, \(\rho=f\circ\Phi^{-1}\) is a map from \(\Phi(\mathbb{X}_{\mathbb{D},N})\) to \(\operatorname{codom}(f)\). Using Lemmas 4 and 5 and Fact 1, we first show that \(\Phi(\mathbb{X}_{\mathbb{D},N})\) is a compact set.
**Lemma 4**.: \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\Phi(\mathbb{X}_{\mathbb{D},N})\) _is a continuous and injective function._
**Lemma 5**.: \(\mathbb{X}_{\mathbb{D},N}\) _is a compact set._
**Fact 1**.: _(Pugh and Pugh 2002) The image of a compact set under continuous map is a compact set._
In Proposition 8, we prove that \(\Phi^{-1}\) is a continuous function over the compact set \(\Phi(\mathbb{X}_{\mathbb{D},N})\).
**Proposition 8**.: _The function \(\Phi^{-1}\) is continuous on the compact set \(\Phi(\mathbb{X}_{\mathbb{D},N})\)._
As a direct result of Proposition 8, \(\rho=f\circ\Phi^{-1}\) is a continuous function on the compact subset \(\Phi(\mathbb{X}_{\mathbb{D},N})\subset\mathbb{R}^{\binom{N+D}{D}-1}\).
**Fact 2**.: _Since \(\Phi(\mathbb{X}_{\mathbb{D},N})\) is a compact subset of \(\mathbb{R}^{\binom{N+D}{D}-1}\), the continuous function \(\rho:\Phi(\mathbb{X}_{\mathbb{D},N})\to\operatorname{codom}(f)\) has a continuous extension to \(\mathbb{R}^{\binom{N+D}{D}-1}\), that is, there exists a continuous function \(\rho_{\mathrm{e}}:\mathbb{R}^{\binom{N+D}{D}-1}\to\operatorname{codom}(\rho_{\mathrm{e}})\) where_
\[\forall u\in\Phi(\mathbb{X}_{\mathbb{D},N}):\rho_{\mathrm{e}}(u)=\rho(u),\]
_and \(\operatorname{codom}(f)\subseteq\operatorname{codom}(\rho_{\mathrm{e}})\). For the continuous extension theorem, refer to (Deimling, 2010)._
From Fact 2, there exists a continuous function \(\rho_{\mathrm{e}}:\mathbb{R}^{\binom{N+D}{D}-1}\to\operatorname{codom}(\rho_{\mathrm{e}})\) where \(f(X)=\rho_{\mathrm{e}}\circ\Phi(X)\) for all \(X\in\mathbb{X}_{\mathbb{D},N}\). Finally, if we rename \(\rho_{\mathrm{e}}\) to \(\rho\), we arrive at the theorem's statement.
### Proof of Lemma 4
As a direct result of Theorem 8, \(\Phi\) is an injective function as it is invertible over its domain. The continuity of \(\Phi\) follows from the continuity of \(\phi\) -- see Lemma 6.
**Lemma 6**.: _Let \(\phi:\mathbb{D}\to\operatorname{codom}(\phi)\subset\mathbb{R}^{K}\) be a continuous function on metric space \((\mathbb{D},d)\) and \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\operatorname{codom}(\Phi)\subset\mathbb{R}^{K}\), \(\Phi(X)=\sum_{x\in X}\phi(x)\) for \(K,N\in\mathbb{N}\). Then, \(\Phi\) is a continuous multiset function on \(\mathbb{X}_{\mathbb{D},N}\). The same result is also valid on domain \(\mathbb{X}_{\mathbb{D},[N]}\)._
Proof.: We use the following notion of distance between multisets with elements in \(\mathbb{D}\):
\[d_{M}(X,X^{\prime})=\begin{cases}\min_{\pi\in\Pi(N_{\circ})}\sqrt{\sum_{n\in[N_{ \circ}]}d(x_{n},x^{\prime}_{\pi(n)})^{2}}&\text{ if }|X|=|X^{\prime}|=N_{\circ}\\ \infty&\text{ otherwise,}\end{cases} \tag{12}\]
where \(N_{\circ}\in[N]\), \(|\cdot|\) returns the cardinality of its input multiset, \(\Pi(N_{\circ})\) is the set of permutation operators on \([N_{\circ}]\), \(X=\{\{x_{n}:n\in[|X|]\}\}\), and \(X^{\prime}=\{\{x^{\prime}_{n}:n\in[|X^{\prime}|]\}\}\).
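For concreteness, a brute-force sketch of this matching distance (ours, assuming NumPy, with \(d\) taken to be the Euclidean metric):

```python
import numpy as np
from itertools import permutations

def d_M(X, Xp):
    """Matching distance of equation (12) between two multisets given as
    arrays of shape (|X|, D); brute force over all permutations."""
    if len(X) != len(Xp):
        return np.inf
    return min(
        np.sqrt(sum(np.linalg.norm(X[n] - Xp[p[n]]) ** 2 for n in range(len(X))))
        for p in permutations(range(len(X)))
    )

X  = np.array([[0.0, 0.0], [1.0, 1.0]])
Xp = np.array([[1.1, 1.0], [0.0, 0.1]])   # same elements, permuted and slightly perturbed
print(d_M(X, Xp))                          # ~0.1414
```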
Following the definition of continuity, for any \(\varepsilon>0\), we want to find a \(\delta(\varepsilon)\) such that if \(d_{M}(X,X^{\prime})<\delta(\varepsilon)\), then \(\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\varepsilon\).
For any \(\delta>0\) and \(X\in\mathbb{X}_{\mathbb{D},[N]}\), let \(X^{\prime}\in\mathbb{X}_{\mathbb{D},[N]}\) be such that \(d_{M}(X,X^{\prime})<\delta\), that is, both multisets have the same size of \(|X|=|X^{\prime}|=N_{\circ}\in[N]\) and there is a permutation operator \(\pi:[N_{\circ}]\to[N_{\circ}]\) such that \(d_{M}(X,X^{\prime})=\sqrt{\sum_{n\in[N_{\circ}]}d(x_{n},x^{\prime}_{\pi(n)})^{ 2}}<\delta\). It suffices to show the following:
\[\|\Phi(X)-\Phi(X^{\prime})\|_{2} =\|\sum_{x\in X}\phi(x)-\sum_{x^{\prime}\in X^{\prime}}\phi(x^{ \prime})\|_{2}\stackrel{{\text{(a)}}}{{\leq}}\sum_{n\in[N_{ \circ}]}\|\phi(x_{n})-\phi(x^{\prime}_{\pi(n)})\|_{2}\] \[\stackrel{{\text{(b)}}}{{\leq}}\sum_{n\in[N_{\circ }]}\max_{v\in\mathbb{D}:\|v\|_{2}<\delta}\|\phi(x_{n})-\phi(x_{n}+v)\|_{2}<\varepsilon,\]
where (a) is due to the triangle inequality, (b) is due to \(\|x_{n}-x^{\prime}_{\pi(n)}\|_{2}\leq\delta\), for all \(n\in[N_{\circ}]\). It suffices to show that for any \(\varepsilon>0\), there exists a \(\delta(\varepsilon)\) such that
\[\forall n\in[N],v\in\mathbb{R}^{D},\|v\|_{2}<\delta(\varepsilon):\|\phi(x_{n} )-\phi(x_{n}+v)\|_{2}<N^{-1}\varepsilon<N_{\circ}^{-1}\varepsilon.\]
Since \(\phi\) is a continuous function, there exists a \(\delta_{\phi}(x_{n},N^{-1}\varepsilon)>0\) such that
\[\forall v\in\mathbb{R}^{D},\|v\|_{2}<\delta_{\phi}(x_{n},N^{-1}\varepsilon):\|\phi(x_{n})-\phi(x_{n}+v)\|_{2}<N^{-1}\varepsilon.\]
If we let \(\delta(\varepsilon)=\min_{n\in[N_{\circ}]}\delta_{\phi}(x_{n},N^{-1} \varepsilon)>0\), then we have \(\|\Phi(X)-\Phi(X^{\prime})\|_{2}\leq\varepsilon\). Therefore, \(\Phi\) is a continuous function. The same result is also valid on domain \(\mathbb{X}_{\mathbb{D},N}\).
### Proof of Lemma 5
Let \(\operatorname{OC}(S)\) be the set of all open covers of a topological space \(S\).
**Fact 3**.: _(Engelking 1989) A topological space \(S\) is compact if any open cover of \(S\) has a finite subcover._
**Definition 11**.: _We define the following maps between subsets of \(\mathbb{X}_{\mathbb{D},N}\) and \(\mathbb{D}\subseteq\mathbb{R}^{D}\)._
* _Let_ \(\mathbb{U}\subseteq\mathbb{D}^{N}\) _and_ \(T=[x_{1},\ldots,x_{N}]\in\mathbb{U}\)_. Then, we let_ \(\operatorname{set}(T)\stackrel{{\text{def}}}{{=}}\{\{x_{n}:n\in[ N]\}\}\in\mathbb{X}_{\mathbb{D},N}\) _and_ \(\operatorname{set}(\mathbb{U})\stackrel{{\text{def}}}{{=}}\{ \operatorname{set}(T):T\in\mathbb{U}\}\subseteq\mathbb{X}_{\mathbb{D},N}\)__
* _Let_ \(\mathbb{V}\subseteq\mathbb{X}_{\mathbb{D},N}\) _and_ \(X=\{\{x_{n}:n\in[N]\}\}\in\mathbb{V}\)_. Then, we let_ \(\operatorname{mat}(X)\stackrel{{\text{def}}}{{=}}\{[x_{\pi(1)},\ldots,x_{\pi(N)}]:\pi\in\Pi(N)\}\subseteq\mathbb{D}^{N}\) _and_ \(\operatorname{mat}(\mathbb{V})\stackrel{{\text{def}}}{{=}}\bigcup_{X\in\mathbb{V}}\operatorname{mat}(X)\subseteq\mathbb{D}^{N}\)_,_
_where \(\Pi(N)\) is the set of permutation operators \(\pi:[N]\to[N]\) for \(N\in\mathbb{N}\)._
Given a matrix, the function set maps it to a multiset. In contrast, the function mat creates all possible matrices by rearranging elements of its input multiset.
**Claim 2**.: _If \(\{\mathbb{V}_{\lambda}:\lambda\in\Lambda\}\in\mathrm{OC}(\mathbb{X}_{\mathbb{D},N})\), then \(\{\mathrm{mat}(\mathbb{V}_{\lambda}):\lambda\in\Lambda\}\in\mathrm{OC}( \mathbb{D}^{N})\)._
**Claim 3**.: _If \(\{\mathbb{U}_{\lambda}:\lambda\in\Lambda\}\in\mathrm{OC}(\mathbb{D}^{N})\), then \(\{\mathrm{set}(\mathbb{U}_{\lambda}):\lambda\in\Lambda\}\in\mathrm{OC}( \mathbb{X}_{\mathbb{D},N})\)_
Let \(\{\mathbb{V}_{\lambda}:\lambda\in\Lambda\}\) be an open cover for \(\mathbb{X}_{\mathbb{D},N}\). From Claim 2, \(\{\mathrm{mat}(\mathbb{V}_{\lambda}):\lambda\in\Lambda\}\) is an open cover for \(\mathbb{D}^{N}\) -- a closed and bounded subset of \(\mathbb{R}^{N\times D}\). Therefore, there is a finite subfamily \(\{\mathrm{mat}(\mathbb{V}_{\lambda_{k}}):k\in[K]\}\) that forms an open cover for \(\mathbb{D}^{N}\). From Claim 3, \(\{\mathrm{set}\circ\mathrm{mat}(\mathbb{V}_{\lambda_{k}}):k\in[K]\}=\{\mathbb{V}_{\lambda_{k}}:k\in[K]\}\) is a finite open cover for \(\mathbb{X}_{\mathbb{D},N}\). Therefore, \(\mathbb{X}_{\mathbb{D},N}\) is a compact set.
**Proof of Claim 2** To prove \(\{\mathrm{mat}(\mathbb{V}_{\lambda}):\lambda\in\Lambda\}\) is an open cover for \(\mathbb{D}^{N}\), we first show that for all \(T\in\mathbb{D}^{N}\subseteq\mathbb{R}^{N\times D}\), we have \(T\in\mathrm{mat}(\mathbb{V}_{\lambda})\) for a \(\lambda\in\Lambda\).
Let \(T=[x_{1},\ldots,x_{N}]\in\mathbb{D}^{N}\). Then, we have \(\mathrm{set}(T)=\{\{x_{n}:n\in[N]\}\}\in\mathbb{V}_{\lambda}\subseteq\mathbb{X} _{\mathbb{D},N}\) for a \(\lambda\in\Lambda\). Since the following holds true:
\[\forall\pi\in\Pi(N):[x_{\pi(1)},\ldots,x_{\pi(N)}]\in\mathrm{mat}(\mathbb{V}_ {\lambda}),\]
then, we have \(T\in\mathrm{mat}(\mathbb{V}_{\lambda})\). Therefore, \(\{\mathrm{mat}(\mathbb{V}_{\lambda}):\lambda\in\Lambda\}\) forms a cover for \(\mathbb{D}^{N}\).
Next, we prove that \(\mathrm{mat}(\mathbb{V}_{\lambda})\) is an open set. Let \(T=[x_{1},\ldots,x_{N}]\in\mathrm{mat}(\mathbb{V}_{\lambda})\), \(\varepsilon>0\), and \(\mathcal{N}(T,\varepsilon)=\{T^{\prime}\in\mathbb{R}^{N\times D}:\|T-T^{\prime} \|_{F}\leq\varepsilon\}\). We want to show that for small enough \(\varepsilon>0\), \(\mathcal{N}(T,\varepsilon)\subseteq\mathrm{mat}(\mathbb{V}_{\lambda})\).
For all \(T^{\prime}=[x^{\prime}_{1},\ldots,x^{\prime}_{N}]\in\mathcal{N}(T,\varepsilon)\), we have
\[d_{M}(X,X^{\prime})=\min_{\pi\in\Pi(N)}\sqrt{\sum_{n\in[N]}\|x_{n}-x^{\prime}_ {\pi(n)}\|_{2}^{2}}\leq\|T-T^{\prime}\|_{F}\leq\varepsilon,\text{ where }X^{\prime}=\{\{x^{\prime}_{n}:n\in[N]\}\}.\]
Since \(\mathbb{V}_{\lambda}\) is an open set, for any \(X\in\mathbb{V}_{\lambda}\) there exists \(\varepsilon>0\) such that every \(X^{\prime}\) with \(d_{M}(X,X^{\prime})\leq\varepsilon\) belongs to \(\mathbb{V}_{\lambda}\). Therefore, we have \(T^{\prime}\in\mathrm{mat}(\mathbb{V}_{\lambda})\). Since this is the case for all \(T^{\prime}\in\mathcal{N}(T,\varepsilon)\), we have \(\mathcal{N}(T,\varepsilon)\subseteq\mathrm{mat}(\mathbb{V}_{\lambda})\), that is, \(\mathrm{mat}(\mathbb{V}_{\lambda})\) is an open set.
**Proof of Claim 3** To prove \(\{\mathrm{set}(\mathbb{U}_{\lambda}):\lambda\in\Lambda\}\) is an open cover for \(\mathbb{X}_{\mathbb{D},N}\), we first show that for all \(X\in\mathbb{X}_{\mathbb{D},N}\), we have \(X\in\mathrm{set}(\mathbb{U}_{\lambda})\) for a \(\lambda\in\Lambda\).
Let \(X=\{\{x_{n}:n\in[N]\}\}\in\mathbb{X}_{\mathbb{D},N}\). Since \(T_{\pi}=[x_{\pi(1)},\ldots,x_{\pi(N)}]\in\mathbb{D}^{N}\) -- for all \(\pi\in\Pi(N)\) -- we have \(T_{\pi}\in\mathbb{U}_{\lambda}\) for a \(\lambda\in\Lambda\). Therefore, we have \(\mathrm{set}(T_{\pi})=\{\{x_{\pi(n)}:n\in[N]\}\}=X\in\mathrm{set}(\mathbb{U}_ {\lambda})\). This proves that \(\{\mathrm{set}(\mathbb{U}_{\lambda}):\lambda\in\Lambda\}\) is a cover for \(\mathbb{X}_{\mathbb{D},N}\).
We now prove that \(\mathrm{set}(\mathbb{U}_{\lambda})\) is an open set. Let \(X=\{\{x_{n}:n\in[N]\}\}\in\mathrm{set}(\mathbb{U}_{\lambda})\), \(\varepsilon>0\), \(\mathcal{N}(X,\varepsilon)=\{X^{\prime}\in\mathbb{X}_{\mathbb{R}^{D},N}:d_{M}(X,X ^{\prime})\leq\varepsilon\}\), and \(T=[x_{1},\ldots,x_{N}]\). We want to show that for small enough \(\varepsilon>0\), \(\mathcal{N}(X,\varepsilon)\subseteq\mathrm{set}(\mathbb{U}_{\lambda})\).
For all \(X^{\prime}=\{\{x^{\prime}_{n}:n\in[N]\}\}\in\mathcal{N}(X,\varepsilon)\), we have
\[\|T-T^{\prime}_{\pi}\|_{F}=d_{M}(X,X^{\prime})\leq\varepsilon\text{ where }T^{\prime}_{\pi}=[x^{\prime}_{\pi(1)},\ldots,x^{\prime}_{\pi(N)}],\]
for a permutation operator \(\pi:[N]\to[N]\) that best matches the elements of \(X\) and \(X^{\prime}\). Since \(\mathbb{U}_{\lambda}\) is an open subset, there exists \(\varepsilon>0\) such that \(T^{\prime}_{\pi}\in\mathbb{U}_{\lambda}\). Therefore, we have \(X^{\prime}=\mathrm{set}(T^{\prime}_{\pi})\in\mathrm{set}(\mathbb{U}_{\lambda})\). Since this is the case for all \(X^{\prime}\in\mathcal{N}(X,\varepsilon)\), we have \(\mathcal{N}(X,\varepsilon)\subseteq\mathrm{set}(\mathbb{U}_{\lambda})\), that is, \(\mathrm{set}(\mathbb{U}_{\lambda})\) is an open set.
### Proof of Proposition 8
By definition of continuity, we want to show that, for any \(\varepsilon>0\) and \(X\in\mathbb{X}_{\mathbb{D},N}\), there exists \(\delta_{f}(\varepsilon)>0\) such that
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\delta_{f}(\varepsilon) :d_{M}(\Phi^{-1}\circ\Phi(X),\Phi^{-1}\circ\Phi(X^{\prime}))<\varepsilon\] \[:d_{M}(X,X^{\prime})<\varepsilon,\]
where \(d_{M}\) is given in equation (12). We use the result in Lemma 7 to establish the continuity of \(\Phi^{-1}\) over \(\Phi(\mathbb{X}_{\mathbb{D},N})\).
**Lemma 7**.: _Let \(X\in\mathbb{X}_{\mathbb{D},N}\). The parameterized multiset that consists of the roots of the polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in equation (3) (that is, \(z^{\top}X\)) varies continuously with \(\Phi(X)\). More precisely, for all \(\varepsilon>0\), there exists \(\delta(\varepsilon)>0\) such that_
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},\|\Phi(X)-\Phi(X^{\prime})\|_{2 }<\delta(\varepsilon):\max_{z\in\mathbb{R}^{D}:\|z\|_{2}=1}d_{M}(z^{\top}X,z^{ \top}X^{\prime})<\varepsilon.\]
Let \(\varepsilon>0\), \(X=\{\{x_{n}:n\in[N]\}\}\) and \(X^{\prime}=\{\{x_{n}^{\prime}:n\in[N]\}\}\in\mathbb{X}_{\mathbb{D},N}\). From Lemma 7, if \(\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\delta(\varepsilon)\), then we have
\[\forall z\in\mathbb{R}^{D},\|z\|_{2}=1:d_{M}(z^{\top}X,z^{\top}X^{\prime})= \sqrt{\sum_{n\in[N]}|z^{\top}x_{n}-z^{\top}x_{\pi^{*}(n)}^{\prime}|^{2}}<\varepsilon\]
for a permutation operator \(\pi^{*}:[N]\to[N]\). Then, we have
\[\forall z\in\mathbb{R}^{D},\|z\|_{2}=1,n\in[N]:|z^{\top}x_{n}-z^{\top}x_{\pi^{*}(n)}^{\prime}|<\varepsilon.\]
If \(x_{n}-x_{\pi^{*}(n)}^{\prime}\neq 0\) and \(z=\|x_{n}-x_{\pi^{*}(n)}^{\prime}\|_{2}^{-1}(x_{n}-x_{\pi^{*}(n)}^{\prime})\), then we arrive at \(\|x_{n}-x_{\pi^{*}(n)}^{\prime}\|_{2}<\varepsilon\), where \(n\in[N]\). If \(x_{n}-x_{\pi^{*}(n)}^{\prime}=0\), then \(\|x_{n}-x_{\pi^{*}(n)}^{\prime}\|_{2}<\varepsilon\) is trivially the case. Therefore, we have
\[d_{M}(X,X^{\prime})=\min_{\pi\in\Pi(N)}\sqrt{\sum_{n\in[N]}\|x_{n}-x_{\pi(n)}^ {\prime}\|_{2}^{2}}\leq\sqrt{N}\varepsilon,\]
where \(\Pi(N)\) is the set of permutation operators on \([N]\). Finally, we establish the continuity of \(\Phi^{-1}\) on \(\Phi(\mathbb{X}_{\mathbb{D},N})\) by letting \(\delta_{f}(\varepsilon)=\delta(\frac{\varepsilon}{\sqrt{N}})\), that is,
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},\|\Phi(X)-\Phi(X^{ \prime})\|_{2}<\delta_{f}(\varepsilon) :\max_{z\in\mathbb{R}^{D}:\|z\|_{2}=1}d_{M}(z^{\top}X,z^{\top}X^{ \prime})<\frac{\varepsilon}{\sqrt{N}}\] \[:d_{M}(X,X^{\prime})<\varepsilon.\]
**Proof of Lemma 7.** We construct the polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in equation (3), that is,
\[\forall t\in\mathbb{R},z\in\mathbb{R}^{D}:p\big{(}t;z,\Phi(X))=t^{N}+\sum_{n \in[N]}(-1)^{n}a_{n}(z;X)t^{N-n}\]
by first computing the following parameterized moments:
\[\forall n\in[N],z\in\mathbb{R}^{D}:E_{n}(z,X)=\langle\psi(z,n),\Phi(X)\rangle.\]
**Fact 4**.: _For a fixed \(z\in\mathbb{R}^{D}\) and \(n\in[N]\), \(E_{n}(z,X)\) is a linear function of \(\Phi(X)\). Furthermore, \(E_{n}(z,X)\) is a continuous function of \((z,\Phi(X))\)._
The coefficients of \(p\big{(}t;z,\Phi(X)\big{)}\) are polynomial functions of the moments \(\big{(}E_{n}(z,X)\big{)}_{n\in[N]}\); see the Newton-Girard equation (7).
**Fact 5**.: _The coefficients of the polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in equation (3) vary continuously with the moments \(\big{(}E_{n}(z,X)\big{)}_{n\in[N]}\)._
Therefore, the coefficients of the polynomial \(p\big{(}t;z,\Phi(X)\big{)}\) in equation (3) vary continuously with \((z,\Phi(X))\); see Facts 4 and 5.
**Theorem 9**.: _(Curgus and Mascioni, 2006) The function \(f:\mathbb{C}^{N}\to\mathbb{C}^{N}\), which associates every \(a=(a_{n})_{n\in[N]}\in\mathbb{C}^{N}\) to the multiset of roots, \(f(a)\in\mathbb{C}^{N}\), of the monic polynomial formed using \(a\) as the coefficients, i.e., \(t^{N}-a_{1}t^{N-1}+\cdots+(-1)^{N-1}a_{N-1}t+(-1)^{N}a_{N}\), is a homeomorphism._
From Theorem 9 and Facts 4 and 5, the parameterized root multiset of \(p\big{(}t;z,\Phi(X)\big{)}\) (that is, \(z^{\top}X\)) varies continuously with \((z,\Phi(X))\). Therefore, for all \(X\in\mathbb{X}_{\mathbb{D},N}\), \(z\in\mathbb{R}^{D}\) and \(\varepsilon>0\), there exists \(\delta(\varepsilon,z)>0\) such that
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},\|\Phi(X)-\Phi(X^{\prime})\|_ {2}<\delta(\varepsilon,z):d_{M}(z^{\top}X,z^{\top}X^{\prime})<\varepsilon.\]
We may fix the norm of the vector \(z\) to one, since by definition of \(d_{M}\) in equation (12), we have
\[\forall\alpha\in\mathbb{R}:d_{M}\big{(}(\alpha z)^{\top}X,(\alpha z)^{\top}X^{\prime}\big{)}=|\alpha|d_{M}(z^{\top}X,z^{\top}X^{\prime}).\]
After this normalization, for all \(\varepsilon>0\), we have
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},z\in\mathbb{R}^{D},\|z\|_{2}= 1,\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\delta(\varepsilon,z):d_{M}(z^{\top}X,z^{ \top}X^{\prime})<\varepsilon.\]
Let \(z^{*}\in\operatorname{argmax}_{z\in\mathbb{R}^{D}:\|z\|_{2}=1}d_{M}(z^{\top}X, z^{\top}X^{\prime})\). Then, we have
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N},\|\Phi(X)-\Phi(X^{\prime})\|_ {2}<\delta(\varepsilon,z^{*}):\max_{z\in\mathbb{R}^{D}:\|z\|_{2}=1}d_{M}(z^{ \top}X,z^{\top}X^{\prime})<\varepsilon,\]
which proves the statement if \(z^{*}\) exists. Therefore, we need to prove the existence of \(z^{*}\).
The set \(\{z\in\mathbb{R}^{D}:\|z\|_{2}=1\}\) is compact. If we prove that \(d_{M}(z^{\top}X,z^{\top}X^{\prime})\) is a continuous function of \(z\), then by the extreme-value theorem (Stein and Shakarchi, 2010), \(z^{*}\) does exist. To this end, we show that \(d_{M}^{2}(z^{\top}X,z^{\top}X^{\prime})\) (and hence \(d_{M}(z^{\top}X,z^{\top}X^{\prime})\)) is continuous. We use the following first-order perturbation analysis:
\[d_{M}^{2}((z+\mathrm{dz})^{\top}X,(z+\mathrm{dz})^{\top}X^{\prime})=\sum_{n\in [N]}|(z+\mathrm{dz})^{\top}x_{n}-(z+\mathrm{dz})^{\top}x^{\prime}_{\pi_{z+ \mathrm{dz}}(n)}|^{2}\]
where \(\pi_{z+\mathrm{dz}}:[N]\to[N]\) is a permutation operator that best matches elements of perturbed multisets \((z+\mathrm{dz})^{\top}X\) and \((z+\mathrm{dz})^{\top}X^{\prime}\). Let \(X^{\prime\prime}=\{\{x_{n}-x^{\prime}_{\pi_{z}(n)}:n\in[N]\}\}\). As we discussed in the proof of Proposition 6, if \(\|\mathrm{dz}\|_{2}<\frac{\mathrm{gap}(z^{\top}X^{\prime\prime})}{\mathrm{ diam}(\mathbb{D})}\) -- \(\mathrm{gap}(z^{\top}X^{\prime\prime})\neq 0\) since \(X\neq X^{\prime}\) -- then \(x^{\prime}_{\pi_{z}(n)}=x^{\prime}_{\pi_{z+\mathrm{dz}}(n)}\) for all \(n\in[N]\). Therefore, we have
\[d_{M}^{2}((z+\mathrm{dz})^{\top}X,(z+\mathrm{dz})^{\top}X^{\prime})=d_{M}^{2}(z^{\top}X,z^{\top}X^{\prime})+O(\|\mathrm{dz}\|_{2}),\]
that is, \(d_{M}(z^{\top}X,z^{\top}X^{\prime})\) is a continuous function of \(z\). This concludes the proof.
## Appendix C Proof of Theorem 4
### Extension of Theorem 8
Let \(\mathbb{D}\) be a compact subset of \(\mathbb{R}^{D}\); in particular, \(\mathbb{D}\neq\mathbb{R}^{D}\). The encoding function \(\Phi(X)=\sum_{x\in X}\phi(x)\) -- where \(\phi:\mathbb{D}\to\operatorname{codom}(\phi)\) -- is an injective map over multisets with exactly \(N\) elements, that is, \(\Phi^{-1}\circ\Phi(X)=X\) where \(X\in\mathbb{X}_{\mathbb{D},N}\). To extend the result to multisets of variable sizes, we follow the proof sketch for the one-dimensional case (Wagstaff et al., 2019). Let \(x_{\circ}\in\mathbb{R}^{D}\setminus\mathbb{D}\). Then, we define \(\phi^{\prime}(x)=\phi(x)-\phi(x_{\circ})\). For a multiset \(X\in\mathbb{X}_{\mathbb{D},[N]}\) with \(|X|\leq N\) elements, we have
\[\forall X\in\mathbb{X}_{\mathbb{D},[N]}:\Phi^{\prime}(X) =\sum_{x\in X}\phi^{\prime}(x)=\sum_{x\in X}\phi(x)-|X|\phi(x_{ \circ})\] \[=\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\}) -N\phi(x_{\circ})\] \[=\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\}) +\operatorname{const}\]
where \(\operatorname{const}=-N\phi(x_{\circ})\). Since \(\Phi\) is injective over \(\mathbb{X}_{\mathbb{D},N}\), \(\Phi^{\prime}\) is an injective map. That is to say,
\[\forall X\in\mathbb{X}_{\mathbb{D},[N]}:\Big{(}\Phi^{-1}\circ( \Phi^{\prime}(X)-\operatorname{const})\Big{)}\cap\mathbb{D} =\Big{(}\Phi^{-1}\circ\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{ \circ}}_{N-|X|}\})\Big{)}\cap\mathbb{D}\] \[=(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\})\cap \mathbb{D}\] \[=X.\]
Therefore, we have \({\Phi^{\prime}}^{-1}(U)=\Phi^{-1}\big{(}U-\operatorname{const}\big{)}\cap \mathbb{D}\) for all \(U\in\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})=\{\Phi^{\prime}(X):X\in\mathbb{ X}_{\mathbb{D},[N]}\}\). If we define \(\rho=f\circ(\Phi^{\prime})^{-1}\) where \(\operatorname{dom}(\rho)=\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\), then we have \(f(X)=\rho\circ\Phi^{\prime}(X)\) for all \(X\in\mathbb{X}_{\mathbb{D},[N]}\). We arrive at the theorem's exact statement by renaming \(\Phi^{\prime}\) to \(\Phi\).
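The bookkeeping above can be checked mechanically; a sketch (ours, assuming NumPy) with a placeholder encoder \(\phi\) and a placeholder pad point standing in for \(x_{\circ}\):

```python
import numpy as np

N = 4                                          # maximal multiset size

def phi(x):                                    # placeholder encoder, for illustration only
    return np.concatenate([x, x ** 2])

x_pad = np.array([10.0, 10.0])                 # stand-in for x_o in R^D \ D

def Phi(X):                                    # sum-pooling over the rows of X
    return sum(phi(x) for x in X)

def Phi_prime(X):                              # Phi'(X) = sum_x (phi(x) - phi(x_pad))
    return sum(phi(x) - phi(x_pad) for x in X)

X = np.array([[0.1, 0.2], [0.3, 0.4]])         # |X| = 2 <= N
padded = np.vstack([X, np.tile(x_pad, (N - len(X), 1))])
# Phi'(X) equals Phi of the padded multiset plus the constant -N * phi(x_pad):
assert np.allclose(Phi_prime(X), Phi(padded) - N * phi(x_pad))
```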
### Extension of Theorem 3
Let \(\mathbb{D}\) be a compact subset of \(\mathbb{R}^{D}\); in particular, \(\mathbb{D}\neq\mathbb{R}^{D}\). In Lemma 5, we prove that \(\mathbb{X}_{\mathbb{D},n}\) is a compact set, for all \(n\in\mathbb{N}\). Since \(\mathbb{X}_{\mathbb{D},[N]}\) is a finite union of compact sets, that is, \(\mathbb{X}_{\mathbb{D},[N]}=\bigcup_{n=1}^{N}\mathbb{X}_{\mathbb{D},n}\), it is itself a compact set (Sutherland, 2009). Since \(\Phi^{\prime}\) is a continuous map (see Lemma 6), \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\) is also a compact set (Pugh and Pugh, 2002).
Now let us show that \({\Phi^{\prime}}^{-1}\) is a continuous map over the compact set \(\operatorname{codom}(\Phi^{\prime})=\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\). We have to show that for all \(\varepsilon>0\) and all \(X,X^{\prime}\in\mathbb{X}_{\mathbb{D},[N]}\) such that \(\|\Phi^{\prime}(X)-\Phi^{\prime}(X^{\prime})\|_{2}<\delta(\varepsilon)\) we have \(d_{M}({\Phi^{\prime}}^{-1}\circ\Phi^{\prime}(X),{\Phi^{\prime}}^{-1}\circ\Phi^{\prime}(X^{\prime}))<\varepsilon\) where \(\delta(\varepsilon)>0\) and \(d_{M}\) is the matching distance between multisets, that is,
\[d_{M}(X,X^{\prime})=\begin{cases}\min_{\text{bijection }\pi:[N_{\circ}]\to[N_{ \circ}]}\sqrt{\sum_{n\in[N_{\circ}]}\|x_{n}-x^{\prime}_{\pi(n)}\|_{2}^{2}}& \text{if }\ |X|=|X^{\prime}|=N_{\circ}\\ \infty&\text{if }\ |X|\neq|X^{\prime}|,\end{cases}\]
where \(X=\{\{x_{n}:n\in[N_{\circ}]\}\}\), \(X^{\prime}=\{\{x_{n}^{\prime}:n\in[N_{\circ}]\}\}\), \(N_{\circ}\in[N]\). On the other hand, we have \(\Phi^{\prime-1}(U)=\Phi^{-1}\big{(}U-\text{const}\big{)}\cap\mathbb{D}\) for all \(U\in\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\) where \(\Phi^{-1}\) is a continuous function; see Proposition 8.
Consider the continuous function \(\Psi(U)=\Phi^{-1}\big{(}U-\text{const}\big{)}\) where \(U\in\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\). By definition of continuity, for all \(\varepsilon>0\) and all \(X,X^{\prime}\in\mathbb{X}_{\mathbb{D},[N]}\) such that \(\|\Phi^{\prime}(X)-\Phi^{\prime}(X^{\prime})\|_{2}<\delta(\varepsilon)\) we have \(d_{M}(\Psi\circ\Phi^{\prime}(X),\Psi\circ\Phi^{\prime}(X^{\prime}))<\varepsilon\) where \(\delta(\varepsilon)>0\). Since we have,
\[\Psi\circ\Phi^{\prime}(X) =X\cup\{\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\}\}\] \[\Psi\circ\Phi^{\prime}(X^{\prime}) =X^{\prime}\cup\{\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X ^{\prime}|}\}\},\]
we can simplify \(d_{M}(\Psi\circ\Phi^{\prime}(X),\Psi\circ\Phi^{\prime}(X^{\prime}))<\varepsilon\) as follows:
\[d_{M}(X\cup\{\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\}\},X^{\prime}\cup\{\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X^{\prime}|}\}\})<\varepsilon.\]
If \(X\) and \(X^{\prime}\) have different numbers of elements in \(\mathbb{D}\), then we have \(\varepsilon>\inf_{x\in\mathbb{D}}\|x-x_{\circ}\|_{2}\). Let \(\varepsilon_{\circ}>0\) be such that \(\varepsilon_{\circ}<\inf_{x\in\mathbb{D}}\|x-x_{\circ}\|_{2}\). If we pick \(0<\varepsilon<\varepsilon_{\circ}\), then \(X\) and \(X^{\prime}\) have the same number of elements in \(\mathbb{D}\) and
\[d_{M}((\Phi^{\prime})^{-1}\circ\Phi^{\prime}(X),(\Phi^{\prime}) ^{-1}\circ\Phi^{\prime}(X^{\prime})) =d_{M}(\Psi\circ\Phi^{\prime}(X)\cap\mathbb{D},\Psi\circ\Phi^{ \prime}(X^{\prime})\cap\mathbb{D})\] \[=d_{M}(\Psi\circ\Phi^{\prime}(X),\Psi\circ\Phi^{\prime}(X^{\prime} ))<\varepsilon\]
That is, \((\Phi^{\prime})^{-1}\) is a continuous function over \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\). Therefore, \(\rho=f\circ(\Phi^{\prime})^{-1}\) is a continuous function on compact set \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\subset\mathbb{R}^{\binom{N+D}{D}-1}\), and it has a continuous extension to \(\mathbb{R}^{\binom{N+D}{D}-1}\); refer to Fact 2. We arrive at the theorem's statement by renaming \(\Phi^{\prime}\) to \(\Phi\).
## Appendix D Proof of Proposition 1
Let \(\Phi^{\prime}:\mathbb{X}_{\mathbb{D},[N]}\to\operatorname{codom}(\Phi^{\prime})\) where \(N=\max\{N_{1},N_{2}\}\), \(\Phi^{\prime}(X)=\sum_{x\in X}\phi^{\prime}(x)\), and \(\phi^{\prime}\) is given in the proof of Theorem 4. The function \(\Phi^{\prime}\) is injective on \(\mathbb{X}_{\mathbb{D},[N]}\) and \((\Phi^{\prime})^{-1}\) is continuous on the compact set \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N]})\). Since \(\mathbb{X}_{\mathbb{D},[N_{1}]}\) and \(\mathbb{X}_{\mathbb{D},[N_{2}]}\) are compact subsets of \(\mathbb{X}_{\mathbb{D},[N]}\), the function \(\Phi^{\prime}\) is injective on \(\mathbb{X}_{\mathbb{D},[N_{1}]}\) and \(\mathbb{X}_{\mathbb{D},[N_{2}]}\), and \((\Phi^{\prime})^{-1}\) is continuous on \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N_{1}]})\subseteq\mathbb{R}^{{N+D\choose D}-1}\) and \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N_{2}]})\subseteq\mathbb{R}^{{N+D\choose D}-1}\).
\[\forall U_{1}\in\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N_{1}]}),U_{2}\in\Phi^{ \prime}(\mathbb{X}_{\mathbb{D},[N_{2}]}):\rho(U_{1},U_{2})=f\big{(}(\Phi^{ \prime})^{-1}(U_{1}),(\Phi^{\prime})^{-1}(U_{2})\big{)}.\]
If \(f\) is a continuous multiset function, \(\rho\) (defined above) is a continuous function on its compact domain \(\Phi^{\prime}(\mathbb{X}_{\mathbb{D},[N_{1}]})\times\Phi^{\prime}(\mathbb{X}_ {\mathbb{D},[N_{2}]})\) as it is the composition of continuous functions. Therefore, it has a continuous extension to \(\mathbb{R}^{{N+D\choose D}-1}\times\mathbb{R}^{{N+D\choose D}-1}\); refer to Fact 2.
## Appendix E Proof of Theorem 5
We define \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subset\mathbb{C}^{D\times N}\) as follows:
\[\forall x\in\mathbb{R}^{D}:\phi(x)=\begin{pmatrix}r(x)&r(x)^{\odot 2}&\cdots&r(x)^{ \odot N}\end{pmatrix}\in\mathbb{C}^{D\times N}, \tag{13}\]
where \(r(x)=x+1l(x)j\), \(1\in\mathbb{R}^{D}\) is a vector of all ones, \(l:\mathbb{R}^{D}\to\mathbb{R}\) is a continuous function, \(j=\sqrt{-1}\), and \(\odot\) computes elementwise exponents.
**Fact 6**.: _The function \(\phi\) is continuous._
**Lemma 8**.: _Let \(\phi\) be the function defined in equation (13). Then, the function \(\Phi(X)=\sum_{x\in X}\phi(x)\) is injective on \(\mathbb{X}^{l}_{\mathbb{R}^{D},N}\)._
Let \(\Phi(\mathbb{X}^{l}_{\mathbb{R}^{D},N})\stackrel{{\mathrm{def}}}{{=}}\{\Phi(X):X\in\mathbb{X}^{l}_{\mathbb{R}^{D},N}\}\). From Lemma 8, there exists an inverse function \(\Phi^{-1}:\Phi(\mathbb{X}^{l}_{\mathbb{R}^{D},N})\to\mathbb{X}^{l}_{\mathbb{R}^{D},N}\), that is, \(\Phi^{-1}\circ\Phi(X)=X\) for all \(X\in\mathbb{X}^{l}_{\mathbb{R}^{D},N}\). We construct \(\rho:\Phi(\mathbb{X}^{l}_{\mathbb{R}^{D},N})\to\mathrm{codom}(f)\) as \(\rho=f\circ\Phi^{-1}\). This completes the proof as follows:
\[\forall X\in\mathbb{X}^{l}_{\mathbb{R}^{D},N}:\rho\circ\Phi(X)=f\circ\Phi^{-1} \circ\Phi(X)=f(X).\]
### Proof of Lemma 8
From equation (13), we have
\[\forall X\in\mathbb{X}^{l}_{\mathbb{R}^{D},N}:\ \Phi(X)=\sum_{x\in X}\begin{pmatrix} r(x)&r(x)^{\odot 2}&\cdots&r(x)^{\odot N}\end{pmatrix}\in\mathbb{C}^{D\times N}. \tag{14}\]
**Definition 12**.: _Let \(\Phi^{-1}_{\mathrm{deep}}\) be the continuous function introduced in the Deep Sets paper (Zaheer et al., 2017), viz., \(\Phi^{-1}_{\mathrm{deep}}\circ\Phi_{\mathrm{deep}}(X)=X\) where \(\Phi_{\mathrm{deep}}(X)=(\sum_{x\in X}x,\ldots,\sum_{x\in X}x^{N})\) and \(X\in\mathbb{X}_{\mathbb{C},N}\) is a multiset of \(N\) scalars in \(\mathbb{C}\). With slight abuse of notation, we generalize this definition to the following row-wise function:_
\[\forall X_{1},\ldots,X_{D}\in\mathbb{X}_{\mathbb{C},N}:\ \Phi^{-1}_{ \mathrm{deep}}(\begin{pmatrix}\Phi_{\mathrm{deep}}(X_{1})\\ \vdots\\ \Phi_{\mathrm{deep}}(X_{D})\end{pmatrix})=\begin{pmatrix}\Phi^{-1}_{\mathrm{ deep}}\circ\Phi_{\mathrm{deep}}(X_{1})\\ \vdots\\ \Phi^{-1}_{\mathrm{deep}}\circ\Phi_{\mathrm{deep}}(X_{D})\end{pmatrix}= \begin{pmatrix}X_{1}\\ \vdots\\ X_{D}\end{pmatrix}\]
**Definition 13**.: _Let \(X=\{\{x_{n}\in\mathbb{C}:n\in[N]\}\}\in\mathbb{X}_{\mathbb{C},N}\) be a multiset of \(N\) complex-valued elements. We then define the function \(\mathrm{sort}\) as follows:_
\[\mathrm{sort}(X)=\begin{pmatrix}\mathrm{Re}(x_{\pi(n)})\end{pmatrix}_{n\in[N]} \in\mathbb{R}^{N},\]
_where \(\pi:[N]\to[N]\) is any permutation operator such that \(\mathrm{Im}(x_{\pi(1)})\leq\cdots\leq\mathrm{Im}(x_{\pi(N)})\)._
**Definition 14**.: _Let \(X_{1},\ldots,X_{D}\in\mathbb{X}_{\mathbb{C},N}\) be multisets of \(N\) complex-valued elements. We then define the function \(\mathrm{sortvec}\) as follows:_
\[\mathrm{sortvec}(\begin{pmatrix}X_{1}\\ \vdots\\ X_{D}\end{pmatrix})=\{\{\begin{pmatrix}e_{n}^{\top}\mathrm{sort}(X_{1})\\ \vdots\\ e_{n}^{\top}\mathrm{sort}(X_{D})\end{pmatrix}\in\mathbb{R}^{D}:n\in[N]\}\}\in \mathbb{X}_{\mathbb{R}^{D},N},\]
_where \(e_{n}\in\mathbb{R}^{N}\) is the \(n\)-th standard basis vector for \(\mathbb{R}^{N}\)._
**Remark 8**.: _Permutation operators in Definitions 13 and 14 may not be unique. This happens if the input multiset contains two distinct elements with equal imaginary parts. If this is the case, the functions \(\operatorname{sort}\) and \(\operatorname{sortvec}\) may be ill-defined. In what follows, we show that for the inputs of interest both functions are indeed well-defined._
Let \(\Psi:\Phi(\mathbb{X}_{\mathbb{R}^{D},N}^{l})\to\mathbb{X}_{\mathbb{R}^{D},N}^{l}\) where \(\Psi=\operatorname{sortvec}\circ\Phi_{\operatorname{deep}}^{-1}\). Then, we have
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}: \Psi\circ\Phi(X)=\operatorname{sortvec}\circ\Phi_{\operatorname{ deep}}^{-1}\circ\Phi(X)\] \[\stackrel{{\text{(a)}}}{{=}}\operatorname{sortvec} \circ\Phi_{\operatorname{deep}}^{-1}\big{(}\sum_{x\in X}\big{(}(x+1l(x)j) \;\;(x+1l(x)j)^{\odot 2}\;\;\cdots\;\;\;(x+1l(x)j)^{\odot N}\big{)}\,\big{)}\] \[\stackrel{{\text{(b)}}}{{=}}\operatorname{sortvec} \circ\Phi_{\operatorname{deep}}^{-1}\big{(}\begin{pmatrix}\sum_{x\in X}e_{1}^ {\top}x+l(x)j&\cdots&\sum_{x\in X}(e_{1}^{\top}x+l(x)j)^{N}\\ \vdots&&\\ \sum_{x\in X}e_{D}^{\top}x+l(x)j&\cdots&\sum_{x\in X}(e_{D}^{\top}x+l(x)j)^{N }\end{pmatrix}\big{)}\] \[\stackrel{{\text{(c)}}}{{=}}\operatorname{sortvec} \circ\Phi_{\operatorname{deep}}^{-1}\big{(}\begin{pmatrix}\Phi_{\operatorname{ deep}}(\{\{e_{1}^{\top}x+l(x)j:x\in X\}\})\\ \vdots&\vdots\\ \Phi_{\operatorname{deep}}(\{\{e_{D}^{\top}x+l(x)j:x\in X\}\})\end{pmatrix} \big{)}\] \[\stackrel{{\text{(d)}}}{{=}}\operatorname{sortvec} \big{(}\begin{pmatrix}\{\{e_{1}^{\top}x+l(x)j:x\in X\}\}\\ \vdots\\ \{\{e_{D}^{\top}x+l(x)j:x\in X\}\}\end{pmatrix}\big{)}.\]
where (a) is due to equations (14) and (13), (b) follows from explicitly writing the elements of \(\Phi(X)\), (c) follows from the definition of \(\Phi_{\operatorname{deep}}\) (see Definition 12), and finally (d) is due to the fact that we allow \(\Phi_{\operatorname{deep}}^{-1}\) to operate row-wise.
**Case 1 (Distinct Identifiers).** Let \(X=\{\{x_{n}:n\in[N]\}\}\). If all elements of \(l(X)=\{\{l(x):x\in X\}\}\) are unique, then we have
\[\forall d\in[D]:\operatorname{sort}(\{\{e_{d}^{\top}x+l(x)j:x\in X\}\})= \big{(}e_{d}^{\top}x_{\pi(n)}\big{)}_{n\in[N]}\in\mathbb{R}^{N}\]
where \(\pi:[N]\to[N]\) is the permutation operator such that \(l(x_{\pi(1)})<\cdots<l(x_{\pi(N)})\). Then, we have
\[\Psi\circ\Phi(X)=\{\{\begin{pmatrix}e_{1}^{\top}x_{\pi(n)}\\ \vdots\\ e_{D}^{\top}x_{\pi(n)}\end{pmatrix}:n\in[N]\}\}=\{\{x_{\pi(n)}:n\in[N]\}\}=X.\]
**Case 2 (Repeated Identifiers).** If \(l(X)\) has repeated elements, then there exists at least two distinct permutation operators \(\pi\) and \(\pi^{\prime}\) (\(\pi\neq\pi^{\prime}\)) that sort the elements of \(l(X)\), that is,
\[l(x_{\pi(1)}) \leq l(x_{\pi(2)})\leq\cdots\leq l(x_{\pi(N)})\] \[l(x_{\pi^{\prime}(1)}) \leq l(x_{\pi^{\prime}(2)})\leq\cdots\leq l(x_{\pi^{\prime}(N)}).\]
In this case, we have \(l(x_{\pi(n)})=l(x_{\pi^{\prime}(n)})\) for all \(n\in[N]\) -- even though \(\pi(n)\neq\pi^{\prime}(n)\) for some \(n\in[N]\). From Definition 2, since \(l(x_{\pi(n)})=l(x_{\pi^{\prime}(n)})\), we have \(x_{\pi^{\prime}(n)}=x_{\pi(n)}\) where \(n\in[N]\). Consequently, we have
\[\forall d\in[D]:\operatorname{sort}(\{\{e_{d}^{\top}x+l(x)j:x\in X \}\}) =\big{(}e_{d}^{\top}x_{\pi(n)}\big{)}_{n\in[N]}\] \[=\big{(}e_{d}^{\top}x_{\pi^{\prime}(n)}\big{)}_{n\in[N]}\in\mathbb{ R}^{N}.\]
Therefore, even though there are multiple permutation operators that sorts the elements of \(\{\{e_{d}^{\top}x+l(x)j:x\in X\}\}\), the output of the sort function remains unchanged, that is, sort is a well-defined function for any element of \(X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}\). Consequently, sortvec is well-defined on \(\mathbb{X}_{\mathbb{R}^{D},N}^{l}\) and we have
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}:\Psi\circ\Phi(X) =\{\{\begin{pmatrix}e_{1}^{\top}x_{\pi_{1}(n)}\\ \vdots\\ e_{D}^{\top}x_{\pi_{D}(n)}\end{pmatrix}:n\in[N]\}\}\] \[\stackrel{{\text{(a)}}}{{=}}\{\{\begin{pmatrix}e_{1 }^{\top}x_{\pi_{1}(n)}\\ \vdots\\ e_{D}^{\top}x_{\pi_{1}(n)}\end{pmatrix}:n\in[N]\}\}=X,\]
where \(\pi_{d}\) is a permutation operator that sorts the elements of \(\{\{e_{d}^{\top}x+l(x)j:x\in X\}\}\) -- for all \(d\in[D]\) -- and (a) is due to \(x_{\pi_{i}(n)}=x_{\pi_{j}(n)}\) for all \(i,j\in[D]\) and \(n\in[N]\). Therefore, we have
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}^{l}:\Psi\circ\Phi(X)=\text{sortvec}\circ\Phi_{\text{deep}}^{-1}\circ\Phi(X)=X,\]
that is, \(\Psi=\text{sortvec}\circ\Phi_{\text{deep}}^{-1}\) is well-defined on \(\Phi(\mathbb{X}_{\mathbb{R}^{D},N}^{l})\) and \(\Psi=\Phi^{-1}:\Phi(\mathbb{X}_{\mathbb{R}^{D},N}^{l})\rightarrow\mathbb{X}_{ \mathbb{R}^{D},N}^{l}\). This proves that \(\Phi\) is an injective function on \(\mathbb{X}_{\mathbb{R}^{D},N}^{l}\).
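A round-trip sketch (ours, assuming NumPy) of the construction behind Lemma 8: the identifier \(l\) below is one arbitrary continuous choice, the encoder follows equations (13)--(14), and the decoder mimics \(\mathrm{sortvec}\circ\Phi_{\mathrm{deep}}^{-1}\) by converting each row of power sums back into roots and sorting them by imaginary part:

```python
import numpy as np

def l(x):                                      # an arbitrary continuous identifier
    return float(x[0] + 2.0 * x[1])

def encode(X, N):                              # Phi(X) of equations (13)-(14)
    r = X + 1j * np.array([l(x) for x in X])[:, None]        # r(x) = x + 1 l(x) j, per row
    return sum(np.stack([r[m] ** n for n in range(1, N + 1)], axis=1)
               for m in range(len(X)))                        # shape (D, N)

def roots_from_power_sums(E):                  # one row of Phi_deep^{-1}
    N = len(E)
    a = [1.0 + 0.0j]                           # Newton-Girard recursion, a_0 = 1
    for n in range(1, N + 1):
        a.append(sum((-1) ** (i - 1) * a[n - i] * E[i - 1]
                     for i in range(1, n + 1)) / n)
    coeffs = [(-1) ** n * a[n] for n in range(N + 1)]         # monic coefficients, as in eq. (6)
    return np.roots(coeffs)

def decode(Phi_X):                             # sortvec o Phi_deep^{-1}
    rows = [roots_from_power_sums(Phi_X[d]) for d in range(Phi_X.shape[0])]
    rows = [row[np.argsort(row.imag)] for row in rows]        # sort by identifier (imaginary part)
    return np.stack([row.real for row in rows], axis=1)       # recovered elements as rows

X = np.array([[1.0, -1.0], [0.5, 2.0], [1.0, -1.0]])          # a multiset with a repeated element
print(np.round(decode(encode(X, N=len(X))), 6))
```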
## Appendix F Proof of Proposition 2
The proof is similar to that of Theorem 4. Let \(\mathbb{D}\) be a compact subset of \(\mathbb{R}^{D}\); in particular, \(\mathbb{D}\neq\mathbb{R}^{D}\).
Let \(\phi:\mathbb{D}\to\operatorname{codom}(\phi)\) be the encoding function defined in the proof of Theorem 5, so that \(\Phi(X)=\sum_{x\in X}\phi(x)\) is an injective map over multisets with exactly \(N\) elements, that is, \(\Phi^{-1}\circ\Phi(X)=X\) where \(X\in\mathbb{X}^{l}_{\mathbb{D},N}\) and \(l:\mathbb{D}\to\mathbb{R}\) is the continuous identifier function. Let \(x_{\circ}\in\mathbb{R}^{D}\setminus\mathbb{D}\). Then, we define \(\phi^{\prime}(x)=\phi(x)-\phi(x_{\circ})\). For a multiset \(X\in\mathbb{X}^{l}_{\mathbb{D},[N]}\) with \(|X|\leq N\) elements, we have
\[\forall X\in\mathbb{X}^{l}_{\mathbb{D},[N]}:\Phi^{\prime}(X) =\sum_{x\in X}\phi^{\prime}(x)=\sum_{x\in X}\phi(x)-|X|\phi(x_{ \circ})\] \[=\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\})- N\phi(x_{\circ})\] \[=\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\})+ \operatorname{const}\]
where \(\operatorname{const}=-N\phi(x_{\circ})\). Since \(\Phi\) is injective over \(\mathbb{X}^{l}_{\mathbb{D},N}\), \(\Phi^{\prime}\) is an injective map. That is to say,
\[\forall X\in\mathbb{X}^{l}_{\mathbb{D},[N]}:\Big{(}\Phi^{-1} \circ(\Phi^{\prime}(X)-\operatorname{const})\Big{)}\cap\mathbb{D} =\Big{(}\Phi^{-1}\circ\Phi(X\cup\{\underbrace{x_{\circ},\ldots,x_{ \circ}}_{N-|X|}\})\Big{)}\cap\mathbb{D}\] \[=(X\cup\{\underbrace{x_{\circ},\ldots,x_{\circ}}_{N-|X|}\})\cap \mathbb{D}\] \[=X.\]
Therefore, we have \({\Phi^{\prime}}^{-1}(U)=\Phi^{-1}\big{(}U-\operatorname{const}\big{)}\cap \mathbb{D}\) for all \(U\in\Phi^{\prime}(\mathbb{X}^{l}_{\mathbb{D},[N]})=\{\Phi^{\prime}(X):X\in \mathbb{X}^{l}_{\mathbb{D},[N]}\}\). If we define \(\rho=f\circ(\Phi^{\prime})^{-1}\) where \(\operatorname{dom}(\rho)=\Phi^{\prime}(\mathbb{X}^{l}_{\mathbb{D},[N]})\), then we have \(f(X)=\rho\circ\Phi^{\prime}(X)\) for all \(X\in\mathbb{X}^{l}_{\mathbb{D},[N]}\). We arrive at the exact form of sum-decomposition by renaming \(\Phi^{\prime}\) to \(\Phi\).
Proof of Proposition 3
Let \(X\in\mathbb{X}_{\mathbb{Q}^{D},N}\). For any rational-valued vectors \(x,x^{\prime}\in X\) such that \(l(x)=l(x^{\prime})\), we have
\[\text{const}\sum_{d\in[D]}(x_{d}-x^{\prime}_{d})\log\zeta(d)=0, \tag{15}\]
where \(x_{d}\) and \(x^{\prime}_{d}\) are the \(d\)-th elements of \(x\) and \(x^{\prime}\), and \(\text{const}\in\mathbb{N}\) is such that \(y_{d}\stackrel{{\text{def}}}{{=}}\text{const}(x_{d}-x^{\prime}_{d})\in\mathbb{Z}\) -- for all \(d\in[D]\). From equation (15), we have
\[\sum_{d\in[D]}y_{d}\log\zeta(d)=0\ \to\ \prod_{d\in[D]}\zeta(d)^{y_{d}}=1.\]
Therefore, we have
\[\prod_{\begin{subarray}{c}d\in[D]\\ y_{d}>0\end{subarray}}\zeta(d)^{y_{d}}=\prod_{\begin{subarray}{c}d\in[D]\\ -y_{d}>0\end{subarray}}\zeta(d)^{-y_{d}}=n\in\mathbb{N} \tag{16}\]
Both sides of equation (16) are prime factorizations of an integer \(n\in\mathbb{N}\) over disjoint sets of primes. Therefore, we have \(n=1\), which results in \(y_{d}=\text{const}(x_{d}-x^{\prime}_{d})=0\) for all \(d\in[D]\), that is, \(x=x^{\prime}\). This proves the following result:
\[\forall x,x^{\prime}\in X\big{(}\in\mathbb{X}_{\mathbb{Q}^{D},N}\big{)}:\ l(x)=l(x^{ \prime})\longrightarrow\ x=x^{\prime}.\]
Finally, since \(l\) is a continuous linear function on \(\mathbb{R}^{D}\), \(\mathbb{X}_{\mathbb{Q}^{D},N}\) is an \(l\)-identifiable set.
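To make the identifier concrete, the following minimal sketch (ours, not part of the original text) evaluates \(l(x)=\sum_{d}x_{d}\log\zeta(d)\) on rational vectors, assuming \(\zeta(d)\) enumerates distinct primes as the prime-factorization argument above requires; the floating-point comparison only illustrates the idea, since exact distinctness follows from the argument, not from finite-precision arithmetic.

```python
import math
from fractions import Fraction

PRIMES = [2, 3, 5, 7, 11]  # zeta(1), ..., zeta(5): assumed here to be the first five primes

def identifier(x):
    """l(x) = sum_d x_d * log(zeta(d)) for a rational vector x."""
    return sum(float(x_d) * math.log(p) for x_d, p in zip(x, PRIMES))

# Two distinct rational vectors receive distinct identifier values.
x = [Fraction(1, 2), Fraction(3, 4), Fraction(0)]
x_prime = [Fraction(1, 2), Fraction(0), Fraction(3, 4)]
print(identifier(x), identifier(x_prime))  # different values, so l separates the two vectors
```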
Proof of Lemma 1
We need to show for any \(X\in\mathbb{X}_{\mathbb{D},N}\), there exists a sequence \(\{X_{n}\in\mathbb{X}_{Q(\mathbb{D}),N}:n\in\mathbb{N}\}\) such that \(\lim_{n\to\infty}\Phi(X_{n})=\Phi(X)\). From Lemma 6, \(\Phi\) is a continuous map. Therefore, we simply need to prove the following property:
\[\forall X\in\mathbb{X}_{\mathbb{D},N}:\lim_{n\to\infty}X_{n}=X,\]
where \(X_{n}\in\mathbb{X}_{Q(\mathbb{D}),N}\) for all \(n\in\mathbb{N}\). By definition, we want to show that \(\forall\varepsilon>0,\exists N(\varepsilon)\in\mathbb{N}\) such that
\[\forall n\geq N(\varepsilon):d_{M}(X_{n},X)<\varepsilon.\]
Let \(\mathcal{N}_{n}(x)=\{y\in Q(\mathbb{D}):\|x-y\|_{2}\leq\frac{1}{n}\}\) be a bounded set centered at \(x\in\mathbb{D}\) and \(n\in\mathbb{N}\). It is important to note that \(\mathcal{N}_{n}(x)\) is a nonempty set for all \(n\in\mathbb{N}\), that is, the intersection of \(Q(\mathbb{D})\) with the nonempty interior of \(\mathbb{D}\) is nonempty because \(Q(\mathbb{D})\) is a dense subset of \(\mathbb{D}\). We let \(q_{n}(x)\) be **any** random point in \(\mathcal{N}_{n}(x)\). Then, for any \(X\in\mathbb{X}_{\mathbb{D},N}\), we let \(X_{n}=\{\{q_{n}(x):x\in X\}\}\in\mathbb{X}_{Q(\mathbb{D}),N}\). By construction, we have
\[d_{M}(X_{n},X)\leq\sqrt{N\max_{x\in\mathbb{D}}\|x-q_{n}(x)\|_{2}^{2}}\leq\sqrt {N}n^{-1}.\]
If we let \(N(\varepsilon)=\lfloor\frac{\sqrt{N}}{\varepsilon}\rfloor+1\), then \(d_{M}(X_{n},X)<\varepsilon\) for all \(n\geq N(\varepsilon)\). Therefore, we have \(\lim_{n\to\infty}d_{M}(X_{n},X)=0\), that is, \(\lim_{n\to\infty}X_{n}=X\). Any realization of the random process \(\{X_{n}\}_{n\in\mathbb{N}}\) forms a sequence in \(\mathbb{X}_{Q(\mathbb{D}),N}\) that converges to \(X\), that is, \(\mathbb{X}_{Q(\mathbb{D}),N}\) is a dense subset of \(\mathbb{X}_{\mathbb{D},N}\).
The function \(\Phi\) in Theorem 5 is continuous; see Fact 6 and Lemma 6. Furthermore, we showed that \(\mathbb{X}_{Q(\mathbb{D}),N}\) is a dense subset of \(\mathbb{X}_{\mathbb{D},N}\). Therefore, for any \(U\in\Phi(\mathbb{X}_{\mathbb{D},N})\) there exists (at least) a \(X\in\mathbb{X}_{\mathbb{D},N}\) such that \(U=\Phi(X)\). Let \(\{X_{n}\in\mathbb{X}_{Q(\mathbb{D}),N}:n\in\mathbb{N}\}\) be such that \(\lim_{n\to\infty}X_{n}=X\). Since \(\Phi\) is a continuous map, we have \(\lim_{n\to\infty}\Phi(X_{n})=\Phi(X)\). That is, there exists a sequence \(\{U_{n}=\Phi(X_{n})\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) such that \(\lim_{n\to\infty}U_{n}=U\). This proves that \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) is a dense subset of \(\Phi(\mathbb{X}_{\mathbb{D},N})\). This completes the proof.
Proof of Theorem 6
**Fact 7**.: _Let \(\mathbb{D}\) be a compact subset of \(\mathbb{R}^{D}\) with nonempty interior. The function \(\phi\) in Proposition 3 is continuous. Its associated multiset function \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codim}(\Phi)\) is a continuous function (see Lemma 4)_
From Fact 7 and Corollary 1, there exist a continuous multiset function \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codim}(\Phi)\subset\mathbb{C}^{D\times N}\) and a function \(\rho:\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\to\mathrm{codim}(\rho)\) such that
\[\forall X\in\mathbb{X}_{Q(\mathbb{D}),N}:\ f(X)=\rho\big{(}\sum_{x\in X}\phi(x )\big{)}=\rho\circ\Phi(X).\]
In this proof, we want to define the function \(\rho_{e}:\Phi(\mathbb{X}_{\mathbb{D},N})\to\mathrm{codim}(\rho_{e})\) as follows:
\[\forall Z\in\Phi(\mathbb{X}_{\mathbb{D},N}):\rho_{e}(Z)=\lim_{Z_{n}\to Z}\rho( Z_{n}), \tag{17}\]
where \(Z_{n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) for all \(n\in\mathbb{N}\). The goal is to show that \(\rho_{e}\) is (1) well-defined and (2) continuous over its compact domain \(\Phi(\mathbb{X}_{\mathbb{D},N})\). If these two conditions are valid, we let \(Z=\Phi(X)\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) where \(X\in\mathbb{X}_{Q(\mathbb{D}),N}\) and \(\{Z_{n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):Z_{n}=Z,n\in\mathbb{N}\}\). Then, we have
\[\forall X\in\mathbb{X}_{Q(\mathbb{D}),N}:\ f(X)=\rho_{e}\circ\Phi(X)=\rho\circ \Phi(X).\]
**Proposition 9** (**Well-definedness)**.: _Let \(\mathcal{Z}\stackrel{{\mathrm{def}}}{{=}}\{Z_{n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) be the convergent sequence, that is, \(\lim_{n\to\infty}Z_{n}=Z\). Given a continuous multiset function \(f:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codim}(f)\), let \(\rho:\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\to\mathrm{codim}(\rho)\subset f(\mathbb{X}_{\mathbb{D},N})\) be defined in Corollary 1. Then, the sequence \(\rho(\mathcal{Z})\stackrel{{\mathrm{def}}}{{=}}\{\rho(Z_{n}):n\in\mathbb{N}\}\) is convergent to a unique point in \(f(\mathbb{X}_{\mathbb{D},N})\). The term \(\lim_{n\to\infty}\rho(Z_{n})\) only depends on \(Z\), and not on the specific choice of the sequence \(\mathcal{Z}\)._
As a result of Proposition 9, the function \(\rho_{e}:\Phi(\mathbb{X}_{\mathbb{D},N})\to\mathrm{codim}(\rho_{e})\subseteq f(\mathbb{X}_{\mathbb{D},N})\) is well-defined. That is, \(\lim_{Z_{n}\to Z}\rho(Z_{n})\) does not depend on the specific convergent sequence \(\mathcal{Z}\) so long as its limiting point -- \(\lim_{n\to\infty}Z_{n}=Z\) -- is fixed.
**Proposition 10** (**Continuity)**.: _The function \(\rho_{e}\) is continuous on the compact domain \(\Phi(\mathbb{X}_{\mathbb{D},N})\)._
In summary, we have
\[\forall X\in\mathbb{X}_{Q(\mathbb{D}),N}:f(X)=\rho_{e}\circ\Phi(X).\]
where \(\rho_{e}:\Phi(\mathbb{X}_{\mathbb{D},N})\to\mathrm{codim}(\rho_{e})\) and \(\Phi:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codim}(\Phi)\) are continuous functions. Therefore, \(\rho_{e}\circ\Phi\) is a continuous function on \(\mathbb{X}_{\mathbb{D},N}\). Since \(\mathbb{X}_{Q(\mathbb{D}),N}\) is a dense subset of \(\mathbb{X}_{\mathbb{D},N}\) (see Lemma 1) and \(f:\mathbb{X}_{\mathbb{D},N}\to\mathrm{codim}(f)\) is a continuous multiset function, we have
\[\forall X\in\mathbb{X}_{\mathbb{D},N}:\ f(X)=\lim_{n\to\infty}f(X_{n})\]
for any sequence \(\{X_{n}\in\mathbb{X}_{Q(\mathbb{D}),N}:n\in\mathbb{N}\}\) where \(\lim_{n\to\infty}X_{n}=X\). Therefore, we have
\[\forall X\in\mathbb{X}_{\mathbb{D},N}:\ f(X)=\lim_{n\to\infty}\rho_{e}\circ\Phi (X_{n}).\]
Since \(\rho_{e}\circ\Phi\) is a continuous function on \(\mathbb{X}_{\mathbb{D},N}\), we have
\[\forall X\in\mathbb{X}_{\mathbb{D},N}:\ f(X)=\lim_{n\to\infty}\rho_{e}\circ\Phi( X_{n})=\rho_{e}\circ\Phi(\lim_{n\to\infty}X_{n})=\rho_{e}\circ\Phi(X).\]
We argue that \(\rho_{e}\) has a continuous extension to \(\mathbb{C}^{D\times N}\). The set \(\mathbb{X}_{\mathbb{D},N}\) is a compact set. From Lemma 6, \(\Phi(\mathbb{X}_{\mathbb{D},N})\) is also a compact set. Finally, Fact 1 shows this continuous extension is admitted. After renaming \(\rho_{e}\) to \(\rho\), we arrive at the exact statement of the theorem.
### Proof of Proposition 9
**Lemma 9**.: _Let \(\mathcal{Z}\stackrel{{\rm def}}{{=}}\{Z_{n}\in\Phi(\mathbb{X}_{Q (\mathbb{D}),N}):n\in\mathbb{N}\}\) be the convergent sequence, that is, \(\lim_{n\to\infty}Z_{n}=Z\in\Phi(\mathbb{X}_{\mathbb{D},N})\). Given a continuous multiset function \(f:\mathbb{X}_{\mathbb{D},N}\to\operatorname{codim}(f)\), let \(\rho:\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\to\operatorname{codim}(\rho)\subset f (\mathbb{X}_{\mathbb{D},N})\) be defined in Corollary 1. The sequence \(\rho(\mathcal{Z})\stackrel{{\rm def}}{{=}}\{\rho(Z_{n}):n\in \mathbb{N}\}\) is Cauchy in compact metric space \((f(\mathbb{X}_{\mathbb{D},N}),\|\cdot\|_{2})\)._
**Theorem 10**.: _(Attenborough, 2003) A Cauchy sequence in a compact metric space is convergent to a point in the metric space._
**Lemma 10**.: _Let \(\mathcal{Z}\stackrel{{\rm def}}{{=}}\{Z_{n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) be the convergent sequence, that is, \(\lim_{n\to\infty}Z_{n}=Z\in\Phi(\mathbb{X}_{\mathbb{D},N})\). Given a continuous multiset function \(f:\mathbb{X}_{\mathbb{D},N}\to\operatorname{codim}(f)\), let \(\rho:\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\to\operatorname{codim}(\rho)\subset f(\mathbb{X}_{\mathbb{D},N})\) be defined in Corollary 1. The sequence \(\rho(\mathcal{Z})\stackrel{{\rm def}}{{=}}\{\rho(Z_{n}):n\in\mathbb{N}\}\) is convergent to a unique point in \(f(\mathbb{X}_{\mathbb{D},N})\). The term \(\lim_{n\to\infty}\rho(Z_{n})\) only depends on \(Z=\lim_{n\to\infty}Z_{n}\)._
#### i.1.1 Proof of Lemma 9
**Fact 8**.: _Every convergent sequence is Cauchy. Hence, the convergent sequence \(\mathcal{Z}\stackrel{{\rm def}}{{=}}\{Z_{n}:n\in\mathbb{N}\}\) is Cauchy in \((\Phi(\mathbb{X}_{Q(\mathbb{D}),N}),\|\cdot\|_{F})\)._
From Fact 8, for any \(\delta>0\), there exists \(N(\delta)\in\mathbb{N}\) such that \(\|Z_{n_{1}}-Z_{n_{2}}\|_{F}<\delta\) for all \(n_{1},n_{2}>N(\delta)\). Therefore, we have
\[\forall n_{1},n_{2}>N(\delta):\|\Phi(X_{n_{1}})-\Phi(X_{n_{2}})\|_{F}<\delta. \tag{18}\]
where \(X_{n}=\Phi^{-1}(Z_{n})\) for all \(n\in\mathbb{N}\). The set \(\mathbb{X}_{Q(\mathbb{D}),N}\) is an \(l\)-identifiable subset of \(\mathbb{X}_{\mathbb{D},N}\).
**Proposition 11**.: _Let \(\mathbb{X}_{\mathbb{R}^{D}/l,N}\) be an \(l\)-identifiable set, and \(\Phi(\mathbb{X}_{\mathbb{R}^{D}/l,N})=\{\Phi(X):X\in\mathbb{X}_{\mathbb{R}^{D }/l,N}\}\) where \(\Phi\) is defined in equations (13) and (14). The function \(\Phi^{-1}:\Phi(\mathbb{X}_{\mathbb{R}^{D}/l,N})\to\mathbb{X}_{\mathbb{R}^{D}/l,N}\) is defined in the proof of Lemma 8. We claim that \(\Phi^{-1}\) is a continuous function on its domain._
From Proposition 11, \(\Phi^{-1}\) is a continuous function on \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\). Since \(f\) is a continuous multiset function on its domain, the function \(\rho=f\circ\Phi^{-1}\) is continuous on \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\). By definition of continuity, for any \(\varepsilon>0\) and \(\Phi(X)\) where \(X\in\mathbb{X}_{Q(\mathbb{D}),N}\), there exists \(\delta(\varepsilon)>0\) such that
\[\forall X^{\prime}\in\mathbb{X}_{Q(\mathbb{D}),N}:\|\Phi(X)-\Phi(X^{\prime}) \|_{F}<\delta(\varepsilon)\to\|f(X)-f(X^{\prime})\|_{2}<\varepsilon, \tag{19}\]
where \(X=\Phi^{-1}\circ\Phi(X)\) and \(X^{\prime}=\Phi^{-1}\circ\Phi(X^{\prime})\).
Comparing equation (18) to the left-hand-side of equation (19) -- letting \(Z_{n_{1}}=\Phi(X)\) and \(Z_{n_{2}}=\Phi(X^{\prime})\) -- we have
\[\forall n_{1},n_{2}>N(\delta(\varepsilon)):\|f\circ\Phi^{-1}(Z_{n_{1}})-f\circ \Phi^{-1}(Z_{n_{2}})\|_{2}<\varepsilon,\]
that is, \(f\circ\Phi^{-1}(\mathcal{Z})=\rho(\mathcal{Z})\) is a Cauchy sequence; see Corollary 1 and proof of Theorem 5. Finally, Lemma 5 and the following fact show that \(f(\mathbb{X}_{\mathbb{D},N})\) is indeed a compact set, that is, \((f(\mathbb{X}_{\mathbb{D},N}),\|\cdot\|_{2})\) is a sequentially compact metric space.
**Fact 9**.: _(Pugh and Pugh 2002) The image of a compact set under continuous map is a compact set._
**Proof of Proposition 11** Let \(\varepsilon>0\) and \(Z\in\Phi(\mathbb{X}_{\mathbb{R}^{D}/l,N})\) -- that is, \(Z=\Phi(X)\in\mathbb{C}^{D\times N}\) for a unique \(X\in\mathbb{X}_{\mathbb{R}^{D}/l,N}\). We define \(\mathbb{D}_{\Phi}(\varepsilon,Z)=\{Z^{\prime}\in\Phi(\mathbb{X}_{\mathbb{R}^{D }/l,N}):\|Z-Z^{\prime}\|_{F}<\varepsilon\}\). For any \(Z^{\prime}\in\mathbb{D}_{\Phi}(\varepsilon,Z)\), we have
\[d_{M}(\Phi^{-1}(Z),\Phi^{-1}(Z^{\prime}))\stackrel{{\rm(a)}}{{\leq}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}(X,\Phi^{-1}\circ(Z+\mathrm{d}Z))\] \[\stackrel{{\rm(b)}}{{=}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}\Big{(}X,\Phi^{-1}(\begin{pmatrix}\Phi_{\mathrm{deep}}(\{\{e_{1}^{\top}x+l(x)j:x\in X\}\})\\ \vdots\\ \Phi_{\mathrm{deep}}(\{\{e_{D}^{\top}x+l(x)j:x\in X\}\})\end{pmatrix}+\mathrm{d}Z)\Big{)}\] \[\stackrel{{\rm(c)}}{{=}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}\Big{(}X,\mathrm{sortvec}(\begin{pmatrix}\{\{e_{1}^{\top}x+l(x)j+r_{1}(x,\mathrm{d}z_{1};X):x\in X\}\}\\ \vdots\\ \{\{e_{D}^{\top}x+l(x)j+r_{D}(x,\mathrm{d}z_{D};X):x\in X\}\}\end{pmatrix})\Big{)}\]
where (a) is due to the fact that we have \(\|Z-Z^{\prime}\|_{F}<\varepsilon\) and \(\Phi^{-1}=\mathrm{sortvec}\circ\Phi_{\mathrm{deep}}^{-1}\) is well-defined for \(Z,Z^{\prime}\in\Phi(\mathbb{X}_{\mathbb{R}^{D}/l,N})\), (b) is due to the definition of \(\Phi\) -- see equations (13) and (14) -- and the fact that \(Z=\Phi(X)\), (c) is due to letting \(\mathrm{d}z_{d}=e_{d}^{\top}\mathrm{d}Z\in\mathbb{C}^{N}\) where \(e_{d}\) is the \(d\)-th standard basis vector of \(\mathbb{R}^{D}\) -- for all \(d\in[D]\) -- and the fact that \(\Phi_{\mathrm{deep}}^{-1}\) is a continuous function (Zaheer et al., 2017), that is,
\[\forall d\in[D],X\in\mathbb{X}_{\mathbb{R}^{D}/l,N},x\in X:\lim_{\mathrm{dz} \in\mathbb{C}^{N}:\mathrm{dz}\to 0}r_{d}(x,\mathrm{dz};X)=0.\]
For any \(\varepsilon>0\), \(x\in X\), and \(d\in[D]\), there exists a finite \(\delta_{d}(\varepsilon,x;X)=\sup_{\mathrm{dz}\in\mathbb{D}(\varepsilon)}|r_{d} (x,\mathrm{dz};X)|\) where \(\mathbb{D}(\varepsilon)=\{z\in\mathbb{C}^{N}:\|z\|_{2}<\varepsilon\}\) and \(\lim_{\varepsilon\to 0}\delta_{d}(\varepsilon,x;X)=0\). For any \(\varepsilon>0\) and \(X\in\mathbb{X}_{\mathbb{R}^{D}/l,N}\), we have
\[\delta^{*}(\varepsilon,X)\stackrel{{\rm def}}{{=}}\max_{d\in[D], x\in X}\delta_{d}(\varepsilon,x;X)=\sup_{d\in[D],x\in X,\mathrm{dz}\in\mathbb{D}( \varepsilon)}|r_{d}(x,\mathrm{dz};X)|, \tag{20}\]
where \(\lim_{\varepsilon\to 0}\delta^{*}(\varepsilon,X)=0\). Let \(X=\{\{x_{n}:n\in[N]\}\}\). Then, we have
\[\forall d\in[D]:\mathrm{sort}(\{\{e_{d}^{\top}x+l(x)j:x\in X\}\})=\big{(}e_{ d}^{\top}x_{\pi(n)}\big{)}_{n\in[N]}\in\mathbb{R}^{N}\]
where \(\pi:[N]\to[N]\) is a permutation operator such that \(l(x_{\pi(1)})\leq\cdots\leq l(x_{\pi(N)})\). Even though the permutation operator \(\pi\) may not be unique, sort is a well-defined function; see the proof of Theorem 2. We let \(S(X)\stackrel{{\rm def}}{{=}}\{\varepsilon:\delta^{*}(\varepsilon,X )<\psi(X)\}\) and \(\psi(X)\stackrel{{\rm def}}{{=}}\min_{\begin{subarray}{c}x,x^{ \prime}\in X\\ x\neq x^{\prime}\end{subarray}}\frac{1}{2}|l(x)-l(x^{\prime})|>0\) where \(l:\mathbb{R}^{D}\to\mathbb{R}\) is the continuous identifier function. From equation (20), we have
\[\forall d\in[D],\varepsilon\in S(X),x\in X,\mathrm{dz}\in\mathbb{D}( \varepsilon):\mathrm{Im}(r_{d}(x,\mathrm{dz};X))<\delta^{*}(\varepsilon,X)< \psi(X),\]
that is,
\[\forall d\in[D],\varepsilon\in S(X),x\in X,\mathrm{dz}\in\mathbb{D}( \varepsilon):\mathrm{Im}(r_{d}(x,\mathrm{dz};X))<\min_{\begin{subarray}{c}x, x^{\prime}\in X\\ x\neq x^{\prime}\end{subarray}}\frac{1}{2}|l(x)-l(x^{\prime})|.\]
Therefore, \(\mathrm{dZ}\) perturbs the imaginary components of \(\{\{e_{d}^{\top}x+l(x)j+r_{d}(x,\mathrm{dz}_{d};X):x\in X\}\}\) (for any \(d\in[D]\)) by at most \(\delta^{*}(\varepsilon,X)<\min_{\begin{subarray}{c}x,x^{\prime}\in X\\ x\neq x^{\prime}\end{subarray}}\frac{1}{2}|l(x)-l(x^{\prime})|\) and distinct elements do not switch place after adding the perturbation \(\mathrm{dZ}\). More precisely, for all \(d\in[D]\), \(\varepsilon\in S(X)\), and \(\mathrm{dz}\in\mathbb{D}(\varepsilon)\), we have
\[\mathrm{sort}(\{\{e_{d}^{\top}x+l(x)j+r_{d}(x,\mathrm{dz};X):x\in X\}\})= \big{(}e_{d}^{\top}x_{\pi^{\prime}(n)}+\mathrm{Re}(r_{d}(x_{\pi^{\prime}(n)}, \mathrm{dz};X))\big{)}_{n\in[N]}\in\mathbb{R}^{N},\]
where \(\pi^{\prime}:[N]\to[N]\) is such that for all \(\varepsilon\in S(X)\), we have
\[\forall\mathrm{dz}\in\mathbb{D}(\varepsilon):l(x_{\pi^{\prime}(1)})+\mathrm{ Im}(r_{d}(x_{\pi^{\prime}(1)},\mathrm{dz};X))\leq\cdots\leq l(x_{\pi^{\prime}(N)})+ \mathrm{Im}(r_{d}(x_{\pi^{\prime}(N)},\mathrm{dz};X)).\]
Since \(\mathrm{Im}(r_{d}(x,\mathrm{dz};X))<\min_{\begin{subarray}{c}x,x^{\prime}\in X \\ x\neq x^{\prime}\end{subarray}}\frac{1}{2}|l(x)-l(x^{\prime})|\), we also have the following inequalities:
\[l(x_{\pi^{\prime}(1)})\leq\cdots\leq l(x_{\pi^{\prime}(N)}). \tag{21}\]
**Remark 9**.: _The permutation operator \(\pi^{\prime}\) may vary with \(\mathrm{dz}\) and \(x\), if two (or more) elements of \(\{\{l(x):x\in X\}\}\) are identical. A proper notation should be \(\pi^{\prime}(\mathrm{dz},x;X)\). For simplicity in notation, we avoid expressing this proper parameterization._
**Remark 10**.: _The perturbation \(\mathrm{dz}\) may switch the rank (or position) of two elements only if they are equal to each other, that is, if \(l(x_{\pi^{\prime}(1)})=l(x_{\pi^{\prime}(2)})\), then we may have \(l(x_{\pi^{\prime}(1)})+\mathrm{Im}(r_{d}(x_{\pi^{\prime}(1)},\mathrm{dz};X))< l(x_{\pi^{\prime}(2)})+\mathrm{Im}(r_{d}(x_{\pi^{\prime}(2)},\mathrm{dz};X))\). This does not provide any issue, since \(l(x_{\pi^{\prime}(1)})\leq l(x_{\pi^{\prime}(2)})\). In short, independent of \(\mathrm{dz}\) and \(x\), \(\pi^{\prime}\) is such that \(l(x_{\pi^{\prime}(1)})\leq\cdots\leq l(x_{\pi^{\prime}(N)})\)._
From equation (21) and the definition of \(\pi\), we have \(l(x_{\pi(n)})=l(x_{\pi^{\prime}(n)})\) for all \(n\in[N]\) -- even though, we may have \(\pi\neq\pi^{\prime}\). From Definition 2, since \(l(x_{\pi(n)})=l(x_{\pi^{\prime}(n)})\), we have \(x_{\pi^{\prime}(n)}=x_{\pi(n)}\) for all \(n\in[N]\). Consequently, for all \(d\in[D]\), \(\varepsilon\in S(X)\) and \(\mathrm{dz}\in\mathbb{D}(\varepsilon)\), we have
\[\mathrm{sort}(\{\{e_{d}^{\top}x+l(x)j+r_{d}(x,\mathrm{dz};X):x\in X\}\}) =\big{(}e_{d}^{\top}x_{\pi^{\prime}(n)}+\mathrm{Re}(r_{d}(x_{ \pi^{\prime}(n)},\mathrm{dz};X))\big{)}_{n\in[N]}\] \[\mathrm{sort}(\{\{e_{d}^{\top}x+l(x)j:x\in X\}\}) =\big{(}e_{d}^{\top}x_{\pi(n)}\big{)}_{n\in[N]}=\big{(}e_{d}^{ \top}x_{\pi^{\prime}(n)}\big{)}_{n\in[N]}.\]
Therefore, even though there are multiple permutation operators that sort the elements of \(\{\{e_{d}^{\top}x+l(x)j+r_{d}(x,\mathrm{d}z;X):x\in X\}\}\), the output of the sort function gives an ordering that remains unchanged for the distinct elements of \(X\), for any \(X\in\mathbb{X}_{\mathbb{R}^{D}/l,N}\) and \(\mathrm{d}z\in\mathbb{D}(\varepsilon)\) where \(\varepsilon\in S(X)\). Consequently, we have
\[d_{M}(\Phi^{-1}(Z),\Phi^{-1}(Z^{\prime}))\leq\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}(X,\mathrm{sortvec}(\begin{pmatrix}\{\{e_{1}^{\top}x+l(x)j+r_{1}(x,\mathrm{d}z_{1};X):x\in X\}\}\\ \vdots\\ \{\{e_{D}^{\top}x+l(x)j+r_{D}(x,\mathrm{d}z_{D};X):x\in X\}\}\end{pmatrix}))\] \[\stackrel{{\mathrm{(a)}}}{{\leq}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}(X,\{\{\begin{pmatrix}e_{1}^{\top}x_{\pi_{1}(n)}+\mathrm{Re}(r_{1}(x_{\pi_{1}(n)},\mathrm{d}z_{1};X))\\ \vdots\\ e_{D}^{\top}x_{\pi_{D}(n)}+\mathrm{Re}(r_{D}(x_{\pi_{D}(n)},\mathrm{d}z_{D};X))\end{pmatrix}:n\in[N]\}\})\] \[\stackrel{{\mathrm{(b)}}}{{\leq}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}d_{M}(\{\{\begin{pmatrix}e_{1}^{\top}x_{\pi_{1}(n)}\\ \vdots\\ e_{D}^{\top}x_{\pi_{D}(n)}\end{pmatrix}:n\in[N]\}\},\{\{\begin{pmatrix}e_{1}^{\top}x_{\pi_{1}(n)}+\mathrm{Re}(r_{1}(x_{\pi_{1}(n)},\mathrm{d}z_{1};X))\\ \vdots\\ e_{D}^{\top}x_{\pi_{D}(n)}+\mathrm{Re}(r_{D}(x_{\pi_{D}(n)},\mathrm{d}z_{D};X))\end{pmatrix}:n\in[N]\}\})\] \[\stackrel{{\mathrm{(c)}}}{{\leq}}\sup_{\begin{subarray}{c}\mathrm{d}Z\in\mathbb{C}^{D\times N}:\\ Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\end{subarray}}\sqrt{\sum_{n\in[N]}\|\begin{pmatrix}\mathrm{Re}(r_{1}(x_{\pi_{1}(n)},\mathrm{d}z_{1};X))\\ \vdots\\ \mathrm{Re}(r_{D}(x_{\pi_{D}(n)},\mathrm{d}z_{D};X))\end{pmatrix}\|_{2}^{2}}\] \[\stackrel{{\mathrm{(d)}}}{{\leq}}\sqrt{DN}\sup_{d\in[D],x\in X,\mathrm{d}z\in\mathbb{D}(\varepsilon)}|r_{d}(x,\mathrm{d}z;X)|=\sqrt{DN}\delta^{*}(\varepsilon,X)\]
where (a) uses permutation operators \(\pi_{d}:[N]\to[N]\), which depend on the elements of \(\{\{r_{d}(x,\mathrm{d}z_{d};X):x\in X\}\}\) but satisfy \(x_{\pi_{d}(n)}=x_{\pi(n)}\) for all \(n\in[N]\) and \(d\in[D]\), (b) follows from the fact that \(x_{\pi_{d}(n)}=x_{\pi(n)}\) for all \(n\in[N]\) and \(d\in[D]\), (c) follows from the definition of the matching distance \(d_{M}\), and (d) follows from the fact that if \(\mathrm{d}Z\in\mathbb{C}^{D\times N}\) is such that \(Z+\mathrm{d}Z\in\mathbb{D}_{\Phi}(\varepsilon,Z)\), then its individual rows \(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{D}\in\mathbb{C}^{N}\) have norms upper bounded by \(\varepsilon\), that is, \(\mathrm{d}z_{d}\in\mathbb{D}(\varepsilon)\) and \(|\mathrm{Re}(r_{d}(x,\mathrm{d}z_{d};X))|\leq|r_{d}(x,\mathrm{d}z_{d};X)|\) for all \(d\in[D]\).
**Continuity Statement.** For any \(Z=\Phi(X)\in\Phi(\mathbb{X}_{\mathbb{R}^{D},N})\) and \(\delta>0\), there exists a positive \(\epsilon(\delta)\in\{\varepsilon^{\prime}\in S(X):\sqrt{DN}\delta^{*}( \varepsilon^{\prime},X)<\delta\}\) where
\[\forall Z^{\prime}\in\Phi(\mathbb{X}_{\mathbb{R}^{D},N}):\|Z-Z^{\prime}\|_{F} <\epsilon(\delta)\to d_{M}(\Phi^{-1}(Z),\Phi^{-1}(Z^{\prime}))<\delta.\]
#### i.1.2 Proof of Lemma 10
Let \(\mathcal{Z}_{1}=\{Z_{1,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) and \(\mathcal{Z}_{2}=\{Z_{2,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) be two sequences such that \(\lim_{n\to\infty}Z_{1,n}=\lim_{n\to\infty}Z_{2,n}=Z\).
From Lemma 9, the following limits are well-defined:
\[\lim_{n\to\infty}\rho(Z_{1,n})=f_{1},\ \lim_{n\to\infty}\rho(Z_{2,n})=f_{2}\in f (\mathbb{X}_{\mathbb{D},N}).\]
We construct \(\mathcal{Z}=\{Z_{n}:n\in\mathbb{N}\}\) in \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) where \(Z_{2n}=Z_{1,n}\) and \(Z_{2n+1}=Z_{2,n}\) for all \(n\in\mathbb{N}\). By construction, we have \(\lim_{n\to\infty}Z_{n}=Z\). Since all convergent sequences are Cauchy, \(\mathcal{Z}\) is a Cauchy sequence. Therefore, from our discussion in the proof of Lemma 9, the sequence \(\rho(\mathcal{Z})\) must converge to \(f^{*}\in f(\mathbb{X}_{\mathbb{D},N})\).
**Fact 10**.: _Every subsequence of a convergent sequence converges to the same limit as the original sequence._
Both \(\rho(\mathcal{Z}_{1})\) and \(\rho(\mathcal{Z}_{2})\) are subsequences of the convergent sequence \(\rho(\mathcal{Z})\). Therefore, we have
\[\lim_{n\to\infty}\rho(Z_{1,n})=\lim_{n\to\infty}\rho(Z_{2,n})=\lim_{n\to\infty} \rho(Z_{n})=f^{*},\]
that is, \(f_{1}=f_{2}\). Therefore, the limit of \(\rho(\mathcal{Z})\) only depends on the limit of the sequence \(\mathcal{Z}\).
### Proof of Proposition 10
We want to show that, for any \(\Phi(X)\in\Phi(\mathbb{X}_{\mathbb{D},N})\) and \(\varepsilon>0\), there is \(\delta(\varepsilon)>0\) such that
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N}:\|\Phi(X)-\Phi(X^{\prime})\|_{ F}<\delta(\varepsilon)\to\|\rho_{e}\circ\Phi(X)-\rho_{e}\circ\Phi(X^{\prime})\|<\varepsilon. \tag{22}\]
We first use the definition of \(\rho_{e}\) to reformulate the left-hand-side of equation (22) in terms of convergent sequences in \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\). This is formalized in Lemma 11.
**Lemma 11**.: _Let \(X,X^{\prime}\in\mathbb{X}_{\mathbb{D},N}\). There exist convergent sequences \(\mathcal{Z}_{x}\stackrel{{\rm def}}{{=}}\{Z_{x,n}\in\Phi(\mathbb{ X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}\stackrel{{\rm def}}{{=}}\{Z_{y,n}\in\Phi(\mathbb{ X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) and \(N_{x}(\delta),N_{y}(\delta)\in\mathbb{N}\) such that_
\[\forall n>N_{x}(\delta):\ \|Z_{x,n}-\Phi(X)\|_{2}<\delta\] \[\forall n>N_{y}(\delta):\ \|Z_{y,n}-\Phi(X^{\prime})\|_{2}<\delta.\]
_for any \(\delta>0\). If \(\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\delta\), then we have_
\[\forall n>N(\delta)\stackrel{{\rm def}}{{=}}\max\{N_{x}(\delta), N_{y}(\delta)\}:\ \|Z_{x,n}-Z_{y,n}\|_{2}<3\delta.\]
As the result of Lemma 11, the left-hand-side of equation (22) gives us the following inequality:
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N}:\|\Phi(X)-\Phi(X^{\prime})\|_{ F}<\delta,n>N(\delta)\to\|Z_{x,n}-Z_{y,n}\|_{2}<3\delta,\]
where \(N(\delta)\in\mathbb{N}\), \(\mathcal{Z}_{x}=\{Z_{x,n}:n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}:n\in\mathbb{N}\}\) are convergent sequences in \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) (in Lemma 11), that is,
\[\lim_{n\to\infty}Z_{x,n}=\Phi(X),\ \lim_{n\to\infty}Z_{y,n}=\Phi(X^{\prime}) \in\Phi(\mathbb{X}_{\mathbb{D},N}).\]
In Lemma 11, we prove that the convergent sequences \(\mathcal{Z}_{x}\) and \(\mathcal{Z}_{y}\) become arbitrarily close to each other as \(\delta\to 0\). In Lemma 12, we use the fact that \(\rho\) (not \(\rho_{e}\)) is a continuous function on the noncompact domain \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\), and argue that \(\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}\) converges to zero as \(\delta\to 0\).
**Lemma 12**.: _For all \(Z\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) and \(\delta>0\), there exists a \(\gamma(\delta)>0\) such that_
\[\forall Z^{\prime}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):\|Z-Z^{\prime}\|_{F}< \gamma(\delta)\to\|\rho(Z)-\rho(Z^{\prime})\|_{2}<\delta.\]
_For any \(\delta>0\), we have_
\[\forall n>N^{{}^{\prime}}(\delta):\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}<\delta.\]
_where \(N^{{}^{\prime}}(\delta)=N(\min\{\frac{\delta}{3},\frac{\gamma(\delta)}{3}\})\), and \(N(\delta)\in\mathbb{N}\), convergent sequences \(\mathcal{Z}_{x}=\{Z_{x,n}:n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}:n\in\mathbb{N}\}\) are defined in Lemma 11._
We now use Lemma 12 to show that for all \(\delta>0\), we have
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N}:\|\Phi(X)-\Phi(X^{\prime})\|_{F}< \delta,n>N^{{}^{\prime}}(\delta)\to\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}<\delta,\]
where \(\mathcal{Z}_{x}=\{Z_{x,n}:n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}:n\in\mathbb{N}\}\) are the convergent sequences in \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) and \(N^{{}^{\prime}}(\delta)\) is defined in Lemma 12.
**Lemma 13**.: _Let \(\mathcal{Z}_{x}=\{Z_{x,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) be the convergent sequences in Lemma 11. For any \(\delta>0\), there exists \(N^{\prime}_{x}(\delta),N^{\prime}_{y}(\delta)\in\mathbb{N}\) such that_
\[\forall n>N^{\prime}_{x}(\delta):\ \|\rho\circ Z_{x,n}-\rho_{e} \circ\Phi(X)\|_{2}<\delta\] \[\forall n>N^{\prime}_{y}(\delta):\ \|\rho\circ Z_{y,n}-\rho_{e} \circ\Phi(X^{\prime})\|_{2}<\delta.\]
_Let \(\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}<\delta\) for all \(n>N^{{}^{\prime\prime}}(\delta)\stackrel{{\rm def}}{{=}}\max\{N^{ \prime}_{x}(\delta),N^{\prime}_{y}(\delta),N^{{}^{\prime}}(\delta)\}\). Then, we have_
\[\forall n>N^{{}^{\prime\prime}}(\delta):\ \|\rho_{e}\circ\Phi(X)-\rho_{e} \circ\Phi(X^{\prime})\|_{2}<3\delta.\]
Combining the results of Lemmas 11 to 13 we arrive at the following result:
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N}:\|\Phi(X)-\Phi(X^{\prime})\|_{ F}<\delta\to\|\rho_{e}\circ\Phi(X)-\rho_{e}\circ\Phi(X^{\prime})\|_{2}<3\delta,\]
that is, \(\delta(\varepsilon)=\frac{\varepsilon}{3}\) in equation (22), and \(\rho_{e}\) is a continuous function on the compact domain \(\Phi(\mathbb{X}_{\mathbb{D},N})\).
#### i.2.1 Proof of Lemma 11
Let \(X,X^{\prime}\in\mathbb{X}_{\mathbb{D},N}\). Since \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) is a dense subset of \(\Phi(\mathbb{X}_{\mathbb{D},N})\) (see Lemma 1), there exist sequences \(\mathcal{Z}_{x}\stackrel{{\rm def}}{{=}}\{Z_{x,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}\stackrel{{\rm def}}{{=}}\{Z_{y,n}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):n\in\mathbb{N}\}\) such that
\[\lim_{n\to\infty}Z_{x,n}=\Phi(X),\ \lim_{n\to\infty}Z_{y,n}=\Phi(X^{\prime}) \in\Phi(\mathbb{X}_{\mathbb{D},N}),\]
and
\[\rho_{e}\circ\Phi(X)=\lim_{n\to\infty}\rho(Z_{x,n}),\ \rho_{e}\circ\Phi(X^{\prime})=\lim_{n\to\infty}\rho(Z_{y,n})\in \operatorname{codim}(\rho_{e})\subseteq f(\mathbb{X}_{\mathbb{D},N}).\]
That is, there exists \(N_{x}(\delta),N_{y}(\delta)\in\mathbb{N}\) such that
\[\forall n>N_{x}(\delta):\ \|Z_{x,n}-\Phi(X)\|_{2}<\delta\] \[\forall n>N_{y}(\delta):\ \|Z_{y,n}-\Phi(X^{\prime})\|_{2}<\delta.\]
for any \(\delta>0\). If \(\|\Phi(X)-\Phi(X^{\prime})\|_{2}<\delta\), then for all \(n>N(\delta)\), we have
\[\|Z_{x,n}-Z_{y,n}\|_{2} \stackrel{{\rm(a)}}{{\leq}}\|Z_{x,n}-\Phi(X)\|_{2}+ \|Z_{y,n}-\Phi(X^{\prime})\|_{2}+\|\Phi(X)-\Phi(X^{\prime})\|_{2}\] \[<\delta+\delta+\delta=3\delta,\]
where \(N(\delta)\stackrel{{\rm def}}{{=}}\max\{N_{x}(\delta),N_{y}( \delta)\}\) and (a) follows from the triangle inequality.
#### i.2.2 Proof of Lemma 12
The function \(\Phi^{-1}\) is continuous on its noncompact domain \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\); see Proposition 11. Therefore, \(\rho=f\circ\Phi^{-1}\) is a continuous function on \(\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\). By definition of continuity, for all \(Z\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N})\) and \(\delta>0\), there exists a \(\gamma(\delta)>0\) such that
\[\forall Z^{\prime}\in\Phi(\mathbb{X}_{Q(\mathbb{D}),N}):\|Z-Z^{\prime}\|_{F}< \gamma(\delta)\to\|\rho(Z)-\rho(Z^{\prime})\|_{2}<\delta. \tag{23}\]
Let \(\mathcal{Z}_{x}=\{Z_{x,n}:n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}:n\in\mathbb{N}\}\) be the convergent sequences in Lemma 11. For all \(\delta>0\), we have
\[\forall X^{\prime}\in\mathbb{X}_{\mathbb{D},N}:\|\Phi(X)-\Phi(X^{\prime})\|_{F }<\delta,n>N(\delta)\to\|Z_{x,n}-Z_{y,n}\|_{2}<3\delta.\]
For any \(\delta>0\), we let \(N^{\prime}(\delta)=N(\min\{\frac{\delta}{3},\frac{\gamma(\delta)}{3}\})\) where \(N(\delta)\in\mathbb{N}\) is defined in Lemma 11. By definition, we have
\[\forall n>N^{\prime}(\delta):\ \|Z_{x,n}-Z_{y,n}\|_{2}<\min\{\delta,\gamma( \delta)\}\leq\gamma(\delta).\]
Since \(\rho\) is a continuous map, from equation (23), we arrive at the following inequality:
\[\forall n>N^{\prime}(\delta):\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}<\delta.\]
#### i.2.3 Proof of Lemma 13
The sequences \(\mathcal{Z}_{x}=\{Z_{x,n}:n\in\mathbb{N}\}\) and \(\mathcal{Z}_{y}=\{Z_{y,n}:n\in\mathbb{N}\}\) are convergent, that is,
\[\lim_{n\to\infty}Z_{x,n}=\Phi(X),\ \lim_{n\to\infty}Z_{y,n}=\Phi(X^{\prime}) \in\Phi(\mathbb{X}_{\mathbb{D},N}).\]
Since we have \(\rho_{e}\circ\Phi(X)=\lim_{n\to\infty}\rho(Z_{x,n})\) and \(\rho_{e}\circ\Phi(X^{\prime})=\lim_{n\to\infty}\rho(Z_{y,n})\), there exist \(N^{\prime}_{x}(\delta),N^{\prime}_{y}(\delta)\in\mathbb{N}\) such that
\[\forall n>N^{\prime}_{x}(\delta):\ \|\rho\circ Z_{x,n}-\rho_{e} \circ\Phi(X)\|_{2}<\delta\] \[\forall n>N^{\prime}_{y}(\delta):\ \|\rho\circ Z_{y,n}-\rho_{e} \circ\Phi(X^{\prime})\|_{2}<\delta.\]
If \(\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}<\delta\) for all \(n>N^{{}^{\prime\prime}}(\delta)\stackrel{{\rm def}}{{=}}\max\{N^{ \prime}_{x}(\delta),N^{\prime}_{y}(\delta),N^{\prime}(\delta)\}\), then, from the triangle inequality and Lemma 12, we have
\[\|\rho_{e}\circ\Phi(X)-\rho_{e}\circ\Phi(X^{\prime})\|_{2} \leq\|\rho(Z_{x,n})-\rho(Z_{y,n})\|_{2}+\|\rho(Z_{x,n})-\rho_{e} \circ\Phi(X)\|_{2}+\|\rho(Z_{y,n})-\rho_{e}\circ\Phi(X^{\prime})\|_{2}\] \[<\delta+\delta+\delta=3\delta.\]
Proof of Proposition 4
For \(K,N\in\mathbb{N}\), let \(T,T^{\prime}\in\mathbb{T}^{l}_{N,K}\) be such that \(S(T)=S(T^{\prime})\), that is,
\[\{(e^{\top}_{n_{1}}l(T),\alpha^{1}_{n_{1}}(T)):n_{1}\in[N]\}=\{(e^{\top}_{n_{1} }l(T^{\prime}),\alpha^{1}_{n_{1}}(T^{\prime})):n_{1}\in[N]\},\]
where \(e_{n}\) is the \(n\)-th standard basis vector of \(\mathbb{R}^{N}\), for \(n\in[N]\). By definition of \(l\)-identifiable tensors, all elements of \(\{\{e^{\top}_{n}l(T):n\in[N]\}\}\) are unique. Therefore, we have
\[\forall n_{1}\in[N]:\ e^{\top}_{n_{1}}l(T)=e^{\top}_{\pi(n_{1})}l(T^{\prime}),\text{ and }\alpha^{1}_{n_{1}}(T)=\alpha^{1}_{\pi(n_{1})}(T^{\prime}),\]
for a unique permutation operator \(\pi:[N]\to[N]\).
**Lemma 14**.: _For all \(k\in[K]\), we have_
\[\forall n_{1},n_{2},\ldots,n_{k}\in[N]:\alpha^{k}_{n_{1},n_{2},\ldots,n_{k}}(T )=\alpha^{k}_{\pi(n_{1}),\pi(n_{2}),\ldots,\pi(n_{k})}(T^{\prime}),\]
_where \(\pi:[N]\to[N]\) is a unique permutation operator._
Proof.: The claim holds for \(k=1\). We prove this statement by induction. Let us assume the claim is true for \(k\in[K-1]\). We want to show that it also holds for \(k+1\), that is,
\[\forall n_{1},n_{2},\ldots,n_{k+1}\in[N]:\alpha^{k+1}_{n_{1},n_{2},\ldots,n_{k +1}}(T)=\alpha^{k+1}_{\pi(n_{1}),\pi(n_{2}),\ldots,\pi(n_{k+1})}(T^{\prime}).\]
From the definition of \(\alpha^{k}\), we have
\[\forall n_{1},\ldots,n_{k+1}\in[N]:\ e^{\top}_{n_{k+1}}l(T)=e^{\top}_{\pi(n_{ k+1})}l(T^{\prime}),\text{ and }\alpha^{k+1}_{n_{1},\ldots,n_{k+1}}(T)=\alpha^{k+1}_{\pi(n_{1}),\ldots,\pi(n_{k+1})}(T ^{\prime})\]
-- which follows from the fact that elements of \(\{\{e^{\top}_{n}l(T):n\in[N]\}\}\) are unique. This concludes the proof.
From Lemma 14, we have
\[\forall n_{1},\ldots,n_{K}\in[N]:\alpha^{K}_{n_{1},\ldots,n_{K}}(T)=\alpha^{K}_{\pi(n_{1}),\ldots,\pi(n_{K})}(T^{\prime}),\]
that is, \(T=\pi(T^{\prime})\).
Now let \(T,T^{\prime}\in\mathbb{T}^{l}_{N,K}\) be such that \(T=\pi(T^{\prime})\) where \(\pi:[N]\to[N]\) is a permutation operator. By definition, we have
\[\forall n_{1},\ldots,n_{K}\in[N]:\alpha^{K}_{n_{1},\ldots,n_{K}}(T)=\alpha^{K }_{\pi(n_{1}),\ldots,\pi(n_{K})}(T^{\prime}),\]
and
\[\forall n\in[N]:e^{\top}_{n}l(T)=e^{\top}_{n}l(\pi(T^{\prime}))=e^{\top}_{\pi (n)}l(T^{\prime}).\]
For all \(n_{1},\ldots,n_{K-1}\in[N]\), we have
\[\alpha^{K-1}_{n_{1},\ldots,n_{K-1}}(T) =\{(e^{\top}_{n_{K}}l(T),\alpha^{K}_{n_{1},\ldots,n_{K}}(T)):n_{K }\in[N]\}\] \[=\{(e^{\top}_{\pi(n_{K})}l(T^{\prime}),\alpha^{K}_{\pi(n_{1}), \ldots,\pi(n_{K})}(T^{\prime})):n_{K}\in[N]\}\] \[=\{(l_{n_{K}}(T^{\prime}),\alpha^{K}_{\pi(n_{1}),\ldots,\pi(n_{K-1 }),n_{K}}(T^{\prime})):n_{K}\in[N]\}\] \[=\alpha^{K-1}_{\pi(n_{1}),\ldots,\pi(n_{K-1})}(T^{\prime})\]
Using a simple argument by induction, we arrive at the statement in Lemma 14. Therefore, we have
\[S(T) =\{(e_{n_{1}}^{\top}l(T),\alpha_{n_{1}}^{1}):n_{1}\in[N]\}=\{(e_{\pi (n_{1})}^{\top}l(T^{\prime}),\alpha_{\pi(n_{1})}^{1}(T^{\prime})):n_{1}\in[N]\}\] \[=\{(e_{n_{1}}^{\top}l(T^{\prime}),\alpha_{n_{1}}^{1}(T^{\prime})) :n_{1}\in[N]\}\] \[=S(T^{\prime})\]
Proof of Theorem 7
**Definition 15**.: _Let \(K,N\in\mathbb{N}\). For all \(k\in[K]\), let \(\mathbb{D}_{k}\) be a domain and \(\phi_{k}:\mathbb{D}_{k}\to\operatorname{codim}(\phi_{k})\), we define the following multiset function_
\[\Phi_{k}\big{(}\{\{x_{n}\in\mathbb{D}_{k}:n\in[N]\}\}\big{)}=\sum_{n\in[N]}\phi _{k}(x_{n}),\]
_and \(\operatorname{codim}(\Phi_{k})=\{\sum_{n\in[N]}\phi_{k}(x_{n}):x_{n}\in \mathbb{D}_{k},\forall n\in[N]\}\)._
Let us first show that the proposed sum-decomposable model is injective on \(\mathbb{T}_{N,K}^{l}\). Let \(K,N\in\mathbb{N}\) and \(T,T^{\prime}\in\mathbb{T}_{N,K}^{l}\) where
\[\sum_{n_{1}\in[N]}\phi_{1}(e_{n_{1}}^{\top}l(T),\beta_{n_{1}}^{1}(T))=\sum_{n _{1}\in[N]}\phi_{1}(e_{n_{1}}^{\top}l(T^{\prime}),\beta_{n_{1}}^{1}(T^{\prime })),\]
that is,
\[\Phi_{1}(\{\{(e_{n_{1}}^{\top}l(T),\beta_{n_{1}}^{1}(T)):n_{1}\in[N]\}\})=\Phi _{1}(\{\{(e_{n_{1}}^{\top}l(T^{\prime}),\beta_{n_{1}}^{1}(T^{\prime})):n_{1} \in[N]\}\})\]
Let us **assume** that \(\phi_{1}\) is such that the corresponding \(\Phi_{1}\) is an injective multiset function (see Definition 15) -- we shall discuss its sufficient condition later in the proof. Since \(\{\{e_{n}^{\top}l(T):n\in[N]\}\}\) has all distinct elements for all \(T\in\mathbb{T}_{N,K}^{l}\), we have \(e_{n_{1}}^{\top}l(T)=e_{\pi(n_{1})}^{\top}l(T^{\prime})\) for a unique permutation operator \(\pi:[N]\to[N]\) and for all \(n_{1}\in[N]\). Therefore, we have
\[\forall n_{1}\in[N]:\beta_{n_{1}}^{1}(T)=\beta_{\pi(n_{1})}^{1}(T^{\prime})\]
**Lemma 15**.: _Let \(k\in[K]\) and_ **assume**_\(\{\Phi_{k}:k\in[K]\}\) are injective multiset functions over their domains, that is,_
\[\forall k\in[K]:\phi_{k}:\mathbb{D}_{k}\to\operatorname{codim}(\phi_{k})\]
_where \(\mathbb{D}_{k}=\operatorname{codim}(l)\times\operatorname{codim}(\Phi_{k+1})\) and \(\mathbb{D}_{K}=\operatorname{codim}(l)\times\mathbb{R}^{D}\). Then, for all \(n_{1},\ldots,n_{k}\in[N]\), we have \(\beta_{n_{1},\ldots,n_{k}}^{k}(T)=\beta_{\pi(n_{1}),\ldots,\pi(n_{k})}^{k}(T^ {\prime})\) where \(\pi:[N]\to[N]\) is a unique permutation operator and \(k\in[K]\)._
Proof.: The claim holds for \(k=1\). We prove this statement by induction. Let us assume this claim is true for \(k\in[K-1]\). We want to show that it also holds for \(k+1\), that is,
\[\forall n_{1},n_{2},\ldots,n_{k+1}\in[N]:\beta_{n_{1},n_{2},\ldots,n_{k+1}}^{k +1}(T)=\beta_{\pi(n_{1}),\pi(n_{2}),\ldots,\pi(n_{k+1})}^{k+1}(T^{\prime}).\]
From the definition of \(\beta^{k}\), we have
\[\forall n_{1},n_{2},\ldots,n_{k}\in[N]:\sum_{n_{k+1}\in[N]}\phi_{k+1}(e_{n_{k +1}}^{\top}l(T),\beta_{n_{1}\ldots n_{k+1}}^{k+1}(T))=\sum_{n_{k+1}\in[N]}\phi _{k+1}(e_{n_{k+1}}^{\top}l(T^{\prime}),\beta_{n_{1}\ldots n_{k+1}}^{k+1}(T^{ \prime})),\]
that is,
\[\Phi_{k+1}(\{\{(e_{n_{k+1}}^{\top}l(T),\beta_{n_{1}\ldots n_{k+1}}^{k+1}(T)):n_ {k+1}\in[N]\}\})=\Phi_{k+1}(\{\{(e_{n_{k+1}}^{\top}l(T^{\prime}),\beta_{n_{1} \ldots n_{k+1}}^{k+1}(T^{\prime})):n_{k+1}\in[N]\}\}).\]
for all \(n_{1},n_{2},\ldots,n_{k}\in[N]\). Since \(\Phi_{k+1}\) is an injective multiset function, we have
\[\forall n_{1},\ldots,n_{k+1}\in[N]:\ e_{n_{k+1}}^{\top}l(T)=e_{\pi(n_{k+1})}^{ \top}l(T^{\prime}),\text{ and }\beta_{n_{1},\ldots,n_{k+1}}^{k+1}(T)=\beta_{\pi(n_{1}),\ldots,\pi(n_{k+1})}^{k +1}(T^{\prime})\]
-- which follows from the fact that elements of \(\{\{e_{n}^{\top}l(T):n\in[N]\}\}\) are unique. This concludes the proof.
From Lemma 15, we arrive at
\[\forall n_{1},\ldots,n_{K}\in[N]:\beta^{K}_{n_{1},n_{2},\ldots,n_{K}}(T)=\beta^{K }_{\pi(n_{1}),\pi(n_{2}),\ldots,\pi(n_{K})}(T^{\prime}),\]
for a unique permutation operator \(\pi:[N]\to[N]\), that is, \(T=\pi(T^{\prime})\) and \(S(T)=S(T^{\prime})\); see Proposition 4.
Using induction, one can easily verify that given \(S(T)\), we can compute \(\sum_{n_{1}\in[N]}\phi_{1}(e^{\top}_{n_{1}}l(T),\beta^{1}_{n_{1}}(T))\). Therefore, the following function is well-defined and injective:
\[\forall T\in\mathbb{T}^{l}_{N,K}:m\circ S(T)=\sum_{n_{1}\in[N]}\phi_{1}(l_{n_ {1}}(T),\beta^{1}_{n_{1}}(T)),\]
that is, if \(m\circ S(T)=m\circ S(T^{\prime})\) then we have \(S(T)=S(T^{\prime})\) where \(T,T^{\prime}\in\mathbb{T}^{l}_{N,K}\).
Now we define the function \(f_{s}:S(\mathbb{T}^{l}_{N,K})\to\operatorname{codim}(f)\) as follows:
\[\forall T\in\mathbb{T}^{l}_{N,K}:f_{s}\circ S(T)\stackrel{{ \mathrm{def}}}{{=}}f(T).\]
Since \(f\) is permutation-invariant, the function \(f_{s}\) is well-defined, that is, \(f_{s}\circ S(T)=f(T)=f(\pi(T))=f_{s}\circ S(\pi(T))\) for any permutation operator \(\pi:[N]\to[N]\). Since \(m\) is an injective function over its domain, it is invertible on it. Now we define the following function:
\[\forall u\in m\circ S(\mathbb{T}^{l}_{N,K}):\rho(u)\stackrel{{ \mathrm{def}}}{{=}}f_{s}\circ m^{-1}(u).\]
For any \(u\in m\circ S(\mathbb{T}^{l}_{N,K})\), we have \(u=\sum_{n_{1}\in[N]}\phi_{1}(e^{\top}_{n_{1}}l(T),\beta^{1}_{n_{1}}(T))\) where \(T\in\mathbb{T}^{l}_{N,K}\), that is,
\[\forall T\in\mathbb{T}^{l}_{N,K}:\rho(u)=\rho\big{(}\sum_{n_{1}\in[N]}\phi_{1} (e^{\top}_{n_{1}}l(T),\beta^{1}_{n_{1}}(T))\big{)}=f_{s}\circ m^{-1}\circ m \circ S(T)=f(T).\]
**Sufficient conditions for injective multiset functions \(\{\Phi_{k}:k\in[K]\}\).**
(1) If \(l(T)\in\mathbb{R}^{N\times M}\), then we use the result in Theorem 8 to ensure the injectivity of \(\Phi_{k}\), for all \(k\in[K]\). The function \(\phi_{k}\) is defined on domain \(\mathbb{D}_{k}=\operatorname{codim}(l)\times\operatorname{codim}(\Phi_{k+1})\) where \(\mathbb{D}_{K}=\operatorname{codim}(l)\times\mathbb{R}^{D}\), \(\operatorname{codim}(l)\subset\mathbb{R}^{M}\), and \(\operatorname{codim}(\Phi_{k+1})\subset\mathbb{R}^{D_{k+1}}\). From Theorem 8, \(D_{k}=\binom{N+D_{k+1}}{N}-1\) ensures the injectivity of \(\Phi_{k}\), for all \(k\in[K]\).
(2) If \(l(T)\in\mathbb{Q}^{N\times M}\), then we use the result in Theorem 5 to ensure the injectivity of \(\Phi_{k}\), for all \(k\in[K]\). This is due to the fact that rational-valued vectors are identifiable (see Proposition 3). From Theorem 5, \(D_{k}=2N(M+D_{k+1})\) ensures the injectivity of \(\Phi_{k}\), for all \(k\in[K]\).
Supplementary Discussion
As discussed in the main text, the important step in showing the existence of a sum-decomposable representation is proving that the multiset encoding function \(\Phi:\mathrm{dom}(\Phi)\to\mathrm{codom}(\Phi)\) is an injective map, so that \(\rho=f\circ\Phi^{-1}\) is well-defined over its admissible inputs \(\mathrm{codom}(\Phi)\).
**Proposition 12** (Fereydounian et al. 2022).: _Consider the following continuous map 1:_
Footnote 1: This is a trivially altered version of the function in (Fereydounian et al., 2022).
\[\forall x\in\mathbb{R}^{D},d_{1},d_{2}\in[D],n\in[N]:\big{(}\phi(x)\big{)}_{d_{1},d_{2},n}=\begin{cases}\mathrm{Re}\{(x_{d_{1}}+x_{d_{2}}\sqrt{-1})^{n}\}&\text{ if }d_{2}>d_{1}\\ \mathrm{Im}\{(x_{d_{1}}+x_{d_{2}}\sqrt{-1})^{n}\}&\text{ if }d_{1}>d_{2}\\ 0&\text{ otherwise}.\end{cases}\]
_The map \(\phi:\mathbb{R}^{D}\to\mathrm{codom}(\phi)\subset\mathbb{R}^{D\times D\times N}\) defines the following injective multiset function:_
\[\forall X\in\mathbb{X}_{\mathbb{R}^{D},N}:\Phi(X)=\sum_{x\in X}\phi(x).\]
We argue that the result in Proposition 12 is not valid for all multisets, as the following example suggests.
**Example 3**.: _Consider the following distinct sets:_
\[X=\{\{\begin{bmatrix}1\\ 1\\ 1\end{bmatrix},\begin{bmatrix}3\\ 2\\ 1\end{bmatrix},\begin{bmatrix}1\\ 2\\ 2\end{bmatrix},\begin{bmatrix}3\\ 1\\ 2\end{bmatrix}\}\},\ X^{\prime}=\{\{\begin{bmatrix}1\\ 2\\ 1\end{bmatrix},\begin{bmatrix}3\\ 1\\ 1\end{bmatrix},\begin{bmatrix}3\\ 2\\ 2\end{bmatrix},\begin{bmatrix}1\\ 1\\ 2\end{bmatrix}\}\}.\]
_One can readily verify that \(\Phi(X)=\Phi(X^{\prime})\), where \(\Phi\) is defined in Proposition 12. The main insight behind this example is the fact that \(\{\{(e_{d}^{\top}x_{n},e_{d^{\prime}}^{\top}x_{n}):n\in[N]\}\}=\{\{(e_{d}^{\top}x_{n}^{\prime},e_{d^{\prime}}^{\top}x_{n}^{\prime}):n\in[N]\}\}\) for all distinct \(d,d^{\prime}\in[D]\), where \(X=\{\{x_{n}:n\in[N]\}\}\), \(X^{\prime}=\{\{x_{n}^{\prime}:n\in[N]\}\}\), \(D=3\), and \(N=4\). This latter equality does indeed show \(X=X^{\prime}\) **if both multisets contain distinct vectors with distinct elements, namely, sets of distinct vectors**._
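For concreteness, the following short numerical check (our own sketch, not part of the original text) implements the encoding of Proposition 12 and confirms that \(\Phi(X)=\Phi(X^{\prime})\) for the two multisets of Example 3 even though \(X\neq X^{\prime}\).

```python
import numpy as np

def phi(x, N):
    """Encoding from Proposition 12: (phi(x))_{d1,d2,n}; the 1-based indices d1, d2 of the
    paper are mapped to 0-based indices here, which preserves the d2 > d1 / d1 > d2 cases."""
    D = len(x)
    out = np.zeros((D, D, N))
    for d1 in range(D):
        for d2 in range(D):
            if d1 == d2:
                continue
            z = complex(x[d1], x[d2])                  # x_{d1} + x_{d2} * sqrt(-1)
            for n in range(1, N + 1):
                out[d1, d2, n - 1] = (z ** n).real if d2 > d1 else (z ** n).imag
    return out

def Phi(X):
    return sum(phi(x, len(X)) for x in X)              # sum over the multiset, N = |X|

X = [(1, 1, 1), (3, 2, 1), (1, 2, 2), (3, 1, 2)]       # multisets of Example 3
X_prime = [(1, 2, 1), (3, 1, 1), (3, 2, 2), (1, 1, 2)]
print(np.allclose(Phi(X), Phi(X_prime)), sorted(X) != sorted(X_prime))  # True True
```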
The key element in proving Theorem 8 is to construct an injective \(\Phi\), which guarantees the existence of \(\rho=f\circ\Phi^{-1}\). Even assuming the input multisets contain vectors with distinct elements, the above result does not guarantee the continuity of \(\rho\). Furthermore, one can easily show that \(\mathrm{codom}(\Phi)\) is not a compact set. To show this, note that the domain of \(\Phi\) does not include the multiset \(X\) from the example above. Now, one can construct a sequence of multisets \(X_{n}\) where \(\lim_{n\to\infty}X_{n}=X\) such that, for all \(n\in\mathbb{N}\), all elements of \(X_{n}\) are distinct and have distinct values, that is, \(X_{n}\in\mathrm{dom}(\Phi)\). Since \(\Phi\) is a continuous map, \(\{\Phi(X_{n}):n\in\mathbb{N}\}\) is a Cauchy sequence in \(\mathrm{codom}(\Phi)\) whose limit does not belong to \(\mathrm{codom}(\Phi)\), that is, the codomain of \(\Phi\) is not compact. |
2303.13911 | Cross-Platform Comparison of Arbitrary Quantum Processes | In this work, we present a protocol for comparing the performance of
arbitrary quantum processes executed on spatially or temporally disparate
quantum platforms using Local Operations and Classical Communication (LOCC).
The protocol involves sampling local unitary operators, which are then
communicated to each platform via classical communication to construct quantum
state preparation and measurement circuits. Subsequently, the local unitary
operators are implemented on each platform, resulting in the generation of
probability distributions of measurement outcomes. The max process fidelity is
estimated from the probability distributions, which ultimately quantifies the
relative performance of the quantum processes. Furthermore, we demonstrate that
this protocol can be adapted for quantum process tomography. We apply the
protocol to compare the performance of five quantum devices from IBM and the
"Qianshi" quantum computer from Baidu via the cloud. Remarkably, the
experimental results reveal that the protocol can accurately compare the
performance of the quantum processes implemented on different quantum
computers, requiring significantly fewer measurements than those needed for
full quantum process tomography. We view our work as a catalyst for
collaborative efforts in cross-platform comparison of quantum computers. | Congcong Zheng, Xutao Yu, Kun Wang | 2023-03-24T10:51:11Z | http://arxiv.org/abs/2303.13911v1 | # Cross-Platform Comparison of Arbitrary Quantum Processes
###### Abstract
In this work, we present a protocol for comparing the performance of arbitrary quantum processes executed on spatially or temporally disparate quantum platforms using Local Operations and Classical Communication (LOCC). The protocol involves sampling local unitary operators, which are then communicated to each platform via classical communication to construct quantum state preparation and measurement circuits. Subsequently, the local unitary operators are implemented on each platform, resulting in the generation of probability distributions of measurement outcomes. The max process fidelity is estimated from the probability distributions, which ultimately quantifies the relative performance of the quantum processes. Furthermore, we demonstrate that this protocol can be adapted for quantum process tomography. We apply the protocol to compare the performance of five quantum devices from IBM and the "Qianshi" quantum computer from Baidu via the cloud. Remarkably, the experimental results reveal that the protocol can accurately compare the performance of the quantum processes implemented on different quantum computers, requiring significantly fewer measurements than those needed for full quantum process tomography. We view our work as a catalyst for collaborative efforts in cross-platform comparison of quantum computers.
## I Introduction
As the field of quantum computing and quantum information gains traction, an increasing number of manufacturers are entering the market, producing their own quantum computers. However, the current generation of noisy intermediate-scale quantum (NISQ) computers, despite their potential, are still hindered by quantum noise [1]. A great challenge is how to compare the performance of the quantum computers fabricated by different manufacturers and located in different laboratories, termed as _cross-platform comparison_. This task is especially relevant when we move towards regimes where comparing to classical simulations becomes computationally challenging, and therefore a direct comparison of quantum computers is necessary.
A standard method to achieve cross-platform comparison is quantum tomography [2], in which we first reconstruct the full description of the quantum computers under investigation, and then estimate their relative fidelity from the obtained matrices. However, quantum tomography is known to be time-consuming and computationally difficult; even learning a few-qubit quantum state is already experimentally challenging [3; 4]. A more efficient way is to estimate the fidelity of the quantum computer without resorting to the full information. Indeed, a variety of estimation and verification tools [5; 6], such as fidelity estimation [7; 8; 9; 10; 11; 12] and quantum verification [13; 14; 15; 16], have been developed along this line. However, these methods assume that one can access a known theoretical target, usually simulated by classical computers. They quickly become inaccessible for quantum computers containing several hundreds or even thousands of highly entangled qubits, due to the intrinsic time complexity of classical simulation.
Recently, Elben _et al._[17] proposed the first cross-platform protocol for estimating the fidelity of quantum states, which are possibly generated by spatially and temporally separated quantum computers. This protocol requires only local measurements in randomized product bases and classical communication between quantum computers. Numerical simulation shows that it consumes significantly fewer measurements than full quantum state tomography. It is expected to be applicable to state-of-the-art quantum computers consisting of a few tens of qubits. Later on, Knorzer _et al._[18] extended Elben's protocol to the cross-platform comparison of quantum networks, assuming the existence of quantum links that can teleport quantum states. Nevertheless, a quantum link transferring quantum states of many qubits with high accuracy between two distant quantum computers is out of reach in the near future.
In this work, by elaborating the core idea of [17], we present a novel protocol for the cross-platform comparison of spatially and temporally separated quantum processes. The protocol uses only single-qubit unitary gates and classical communication between quantum computers, without requiring quantum links or ancilla qubits. This approach allows for accurate estimation of the performance of quantum devices manufactured in separate laboratories and companies using different technologies. Furthermore, the protocol can be used to monitor the stable functioning of target quantum computers over time. We apply the protocol to compare the performance of five quantum devices from IBM and the "Qianshi" quantum computer from Baidu via the cloud. Our experimental results reveal that our protocol can accurately compare the performance of arbitrary quantum processes. Although the sample complexity of our protocol still scales exponentially with the number of qubits, it has a significantly smaller exponent factor compared with that of quantum process tomography. Overall, our protocol serves as a novel application of the powerful randomized measurement toolbox [19].
The rest of the paper is organized as follows. Section II reviews the cross-platform quantum state comparison protocol in [17] and introduces the quantum process performance metric. Section III elaborates the main result, a new protocol for cross-platform comparing arbitrary quantum processes. Particularly, we summarize the similarities and differences between our protocol and that of [18]. Section IV reports a thorough experimental cross-platform comparison on spatially and temporally separated quantum computers, along with a comprehensive investigation of the experimental data. The Appendices summarize technical details of the main text.
## II Preliminaries
### Cross-platform comparison of quantum states
In quantum information, fidelity is an important metric that is widely used to characterize the closeness between quantum states. There are many different proposals for the definition of state fidelity [20]. In this work, we will concentrate on the _max fidelity_, formally defined as [20; 17]
\[F_{\max}(\rho_{1},\rho_{2}):=\frac{\mathrm{Tr}[\rho_{1}\rho_{2}]}{\max\{ \mathrm{Tr}[\rho_{1}^{2}],\mathrm{Tr}[\rho_{2}^{2}]\}}, \tag{1}\]
where \(\rho_{i}\) is an \(n\)-qubit quantum state produced by the quantum computer, \(i=1,2\).
Elben _et al._[17] proposed a randomized measurement protocol to estimate \(F_{\max}\), which functions as follows. First, we construct an \(n\)-qubit unitary \(U=\bigotimes_{k=1}^{n}U_{k}\), where each \(U_{k}\) is identically and independently sampled from a single-qubit set \(\mathcal{X}_{2}\) satisfying unitary \(2\)-design [21; 22]. This information will be classically communicated to the quantum computers, possibly spatially or temporally separated, that produce the quantum states \(\rho_{1}\) and \(\rho_{2}\), respectively. Then, each quantum computer executes the unitary \(U\), performs a computational basis measurement, and records the measurement outcome \(\mathbf{s}\). Repeating the above procedure for fixed \(U\) a number of times, we are able to obtain two probability distributions over the outcomes of the form \(\mathrm{Pr}_{U}^{(1)},\mathrm{Pr}_{U}^{(2)}\), where the superscript \(i\) represents that the distribution is obtained from quantum state \(\rho_{i}\). Next, we repeat the whole procedure for many different random unitaries \(U\), yielding a set of probability distributions \(\{\mathrm{Pr}_{U}^{(1)},\mathrm{Pr}_{U}^{(2)}\}_{U}\). From the experimental data, we estimate the overlap between \(\rho_{i}\) and \(\rho_{j}\) as [17]
\[\mathrm{Tr}[\rho_{i}\rho_{j}]=2^{n}\sum_{\mathbf{s},\mathbf{s}^{\prime}\in\{0,1\}^{n} }(-2)^{-\mathcal{D}[\mathbf{s},\mathbf{s}^{\prime}]}\overline{\mathrm{Pr}_{U}^{(i)}[ \mathbf{s}]\mathrm{Pr}_{U}^{(j)}[\mathbf{s}^{\prime}]}, \tag{2}\]
where \(\overline{\cdots}\) denotes the ensemble average over the sampled unitaries \(U\) and \(\mathcal{D}[\mathbf{s},\mathbf{s}^{\prime}]\) denotes the hamming distance between two bitstrings \(\mathbf{s}\) and \(\mathbf{s}^{\prime}\). Specially, \(\mathrm{Tr}[\rho_{1}\rho_{2}]\) can be estimated from (2) by setting \(i=1\) and \(j=2\), whereas the purities \(\mathrm{Tr}[\rho_{1}^{2}]\) and \(\mathrm{Tr}[\rho_{2}^{2}]\) can be obtained by setting \(i=j=1\) and \(i=j=2\), respectively. Using the above estimated quantities, we successfully compute the max fidelity \(F_{\max}(\rho_{1},\rho_{2})\).
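As an illustration, the following Python sketch implements the estimators above. It is our own simplified rendering (names such as `overlap_estimate` are ours), and it assumes the per-unitary outcome probabilities \(\mathrm{Pr}_{U}[\mathbf{s}]\) have already been estimated accurately from repeated shots, ignoring the finite-shot bias corrections used in [17] when estimating purities from the same data.

```python
import numpy as np
from itertools import product

def hamming(s, t):
    return sum(a != b for a, b in zip(s, t))

def overlap_estimate(probs_i, probs_j, n):
    """Estimate Tr[rho_i rho_j] via Eq. (2) from per-unitary outcome distributions.

    probs_i, probs_j: lists (one entry per sampled unitary U) of dicts mapping
    n-bit outcome strings to estimated probabilities Pr_U[s]."""
    outcomes = [''.join(b) for b in product('01', repeat=n)]
    vals = []
    for p_i, p_j in zip(probs_i, probs_j):      # ensemble average over the sampled unitaries U
        acc = sum((-2.0) ** (-hamming(s, t)) * p_i.get(s, 0.0) * p_j.get(t, 0.0)
                  for s in outcomes for t in outcomes)
        vals.append(acc)
    return 2 ** n * float(np.mean(vals))

def max_state_fidelity(probs_1, probs_2, n):
    """F_max of Eq. (1), with the overlap and both purities estimated from the same data."""
    o12 = overlap_estimate(probs_1, probs_2, n)
    p1 = overlap_estimate(probs_1, probs_1, n)
    p2 = overlap_estimate(probs_2, probs_2, n)
    return o12 / max(p1, p2)
```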
Using experimental data from [23], Elben _et al._ showcased the experiment-theory fidelities and experiment-experiment fidelities of highly entangled quantum states prepared via quench dynamics in a trapped ion quantum simulator as a proof of principle [17]. Recently, Zhu _et al._ reported thorough cross-platform comparison of quantum states in four ion-trap and five superconducting quantum platforms, with detailed analysis of the results and an intriguing machine learning approach to explore the data [24].
### Quantum process performance metric
A quantum process, also known as a quantum operation or a quantum channel, is a mathematical description of the evolution of a quantum system. It is mathematically formulated as a completely positive and trace-preserving (CPTP) linear map on the quantum states [25]. The Choi-Jamiolkowski isomorphism provides a unique way to represent quantum processes as quantum states in a larger Hilbert space. Formally, the Choi state of an \(n\)-qubit quantum process \(\mathcal{E}\) is defined as [26]
\[\eta_{\mathcal{E}}:=(\mathcal{I}\otimes\mathcal{E})|\psi_{+}\rangle\!\langle \psi_{+}|, \tag{3}\]
where \(\mathcal{I}\) is the identity channel and \(|\psi_{+}\rangle:=1/\sqrt{2^{n}}\sum_{i}|ii\rangle\) is a maximally entangled state of a bipartite quantum system composed of two \(n\)-qubit subsystems.
One lesson we can learn from the cross-platform state comparison protocol is that we must choose a process metric before comparing two quantum processes. Gilchrist _et al._[27] have introduced a systematic way to generalize a metric originally defined on quantum states to a corresponding metric on quantum processes, utilizing the Choi-Jamiolkowski isomorphism. Specifically, the _max fidelity_ between two \(n\)-qubit quantum processes \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\), implemented on different quantum platforms, is defined as
\[F_{\max}(\mathcal{E}_{1},\mathcal{E}_{2}):=F_{\max}(\eta_{\mathcal{E}_{1}},\eta _{\mathcal{E}_{2}}), \tag{4}\]
where \(\eta_{\mathcal{E}}\) is the Choi state of quantum process \(\mathcal{E}\). This metric fulfills the axioms for quantum process fidelities following the argument in [27]. It is reasonable to believe that, at least to some extent, this metric reflects whether the two quantum processes implement the same quantum evolution.
In the following, we propose an experimentally efficient protocol to estimate this metric. This protocol makes use of only single-qubit unitaries and classical communication, and can thus be executed on spatially and temporally separated quantum devices. This enables cross-platform comparison of arbitrary quantum processes.
## III Cross-platform comparison
In this section, we first provide a simple example to illustrate the necessity of cross-platform comparison. Then, we introduce a protocol for estimating the max process fidelity that is conceptually straightforward yet experimentally challenging. Next, we propose a modification to the protocol that
employs randomized input states and provide a detailed explanation of the approach. Furthermore, we demonstrate that our protocol can be extended to accomplish full process tomography. Our protocol is motivated by the observation that even identical quantum computers cannot produce identical outcomes on each run due to the intrinsic randomness of quantum mechanics, but they do generate identical probability distributions from a statistical perspective.
Cross-platform comparison of quantum computers is essential for at least two reasons. Firstly, comparing the actual implementation with an idealized theoretical simulation can be challenging, as classical simulations become computationally demanding with an increasing number of qubits. Secondly, due to the presence of varying forms of quantum noise across different quantum platforms, the actual implementation of quantum processes can vary significantly, even if they maintain the same process fidelity with respect to the ideal target. To illustrate this point, consider the following example. Suppose Alice has a superconducting quantum computer and Bob has a trapped-ion quantum computer. They implement the single-qubit Hadamard gate \(\mathcal{H}(\rho)=H\rho H^{\dagger}\) on their respective quantum computers. However, Alice's implementation \(\mathcal{E}_{1}\) suffers from depolarizing noise, yielding \(\mathcal{E}_{1}(\rho)=(1-p_{1})H\rho H^{\dagger}+p_{1}\mathbb{1}/2\), where \(p_{1}=7/30\). On the other hand, Bob's implementation \(\mathcal{E}_{2}\) suffers from dephasing noise, such that \(\mathcal{E}_{2}(\rho)=(1-p_{2})H\rho H^{\dagger}+p_{2}\Delta(\rho)\), where \(p_{2}=1/5\) and \(\Delta(\cdot)\) is the dephasing operation. After simple calculations, we obtain \(F_{\max}(\mathcal{E}_{1},\mathcal{H})=F_{\max}(\mathcal{E}_{2},\mathcal{H}) \approx 0.808\) and \(F_{\max}(\mathcal{E}_{1},\mathcal{E}_{2})\approx 0.978\). Despite \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) having the same fidelity level when compared to the ideal target \(\mathcal{H}\), a discernible difference exists between them. Therefore, solely comparing the fidelity of a quantum process to an ideal reference is insufficient, and a direct comparison between quantum processes is warranted.
### Ancilla-assisted cross-platform comparison
In this section, we revisit a conceptually simple approach for estimating the max process fidelity defined in Eq. (4), which was recently proposed in [18]. The key observation is that this fidelity can be seen as the max state fidelity between the Choi states of the corresponding quantum processes. To construct the Choi state of the \(n\)-qubit quantum process \(\mathcal{E}\), we need to introduce an additional \(n\)-qubit clean auxiliary system. Using the auxiliary system, we prepare a \(2n\)-qubit maximally entangled state \(|\psi_{+}\rangle\) and apply \(\mathcal{E}\) to half of the whole system, which prepares the Choi state of \(\mathcal{E}\). We can then estimate the max state fidelity using the procedure introduced in Section II.1. The complete protocol is illustrated in Figure 1(a)-(c).
We refer to this protocol as the _ancilla-assisted cross-platform comparison_ because it requires additional clean ancilla qubits to prepare the Choi state of the quantum process. To perform this protocol, a maximally entangled state is required as input, resulting in a two-fold overhead when comparing \(2n\)-qubit states instead of \(n\)-qubit states. Consequently, this protocol may not be practical in scenarios with limited quantum computing resources. Furthermore, preparing high-fidelity maximally entangled states can be experimentally challenging, which may negatively impact the accuracy of the protocol.
### Ancilla-free cross-platform comparison
Figure 1: Two protocols to estimate the max process fidelity \(F_{\max}\) between quantum processes implemented on different quantum platforms. (a) _Ancilla-assisted protocol_: Prepare the maximally entangled state, execute the target quantum process, and perform the randomized measurements given by \(\left(\bigotimes_{k=1}^{n}U_{1}^{(k)}\right)\otimes\left(\bigotimes_{k=1}^{n}U_{2}^{(k)}\right)\). (b) _Ancilla-free protocol_: Randomly sample a computational basis state \(|\mathbf{s}\rangle\), execute the unitaries \(\bigotimes_{k=1}^{n}U_{1}^{(k)T}\), execute the target quantum process, and perform the randomized measurements given by \(\bigotimes_{k=1}^{n}U_{2}^{(k)}\). (c) Run the quantum circuits constructed in (a) or (b) on platform \(\mathcal{S}_{i}\) to obtain the probability distribution \(\Pr_{U}^{(i)}[\mathbf{s},\mathbf{k}]\). The max process fidelity \(F_{\max}(\mathcal{E}_{i},\mathcal{E}_{j})\) is inferred from the probability distributions (see text).
To overcome the limitations of the ancilla-assisted protocol, we propose an efficient and ancilla-free approach for estimating the max process fidelity. Our protocol does not require any additional qubits or the preparation of maximally entangled states. The key observation is that the auxiliary system in the ancilla-assisted protocol only needs to perform randomized measurements. After the measurement, the auxiliary system collapses to one eigenstate of the sampled measurement operator. Based on the identity \((\ket{u}\!\!\bra{u}\otimes\mathbb{1})\ket{\psi_{+}}=\frac{1}{\sqrt{2^{n}}}\ket{u}\otimes\ket{u^{*}}\), where \(\mathbb{1}\) is the identity matrix, and the deferred measurement principle [28], we can eliminate the auxiliary system by preparing computational basis states and applying the transposed unitary operator on the main system. Please refer to Appendix A for a detailed analysis.
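The identity above is easy to verify numerically. The snippet below is purely illustrative (variable names are ours): it checks that projecting the first register of \(|\psi_{+}\rangle\) onto a state \(|u\rangle\) leaves the second register in the conjugate state \(|u^{*}\rangle\), up to the factor \(1/\sqrt{2^{n}}\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
d = 2 ** n

# |psi+> on 2n qubits, with component delta_{ij} / sqrt(d) at index i*d + j
psi_plus = np.eye(d, dtype=complex).reshape(d * d) / np.sqrt(d)

# A random pure state |u> on n qubits
u = rng.normal(size=d) + 1j * rng.normal(size=d)
u = u / np.linalg.norm(u)

# Left-hand side: (|u><u| x I) |psi+>
proj = np.kron(np.outer(u, u.conj()), np.eye(d))
lhs = proj @ psi_plus

# Right-hand side: |u> x |u*> / sqrt(d)
rhs = np.kron(u, u.conj()) / np.sqrt(d)

print(np.allclose(lhs, rhs))  # True
```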
We refer to the new protocol as the _ancilla-free cross-platform comparison_ and it works as follows. We consider two \(n\)-qubit quantum processes \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) realized on different quantum platforms \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\), whose Choi states are \(\eta_{1}\) and \(\eta_{2}\), respectively. The protocol, illustrated in Figure 1(b)-(c), consists of three main steps: sampling unitaries, running circuits, and post-processing.
**Step 1. Sampling unitaries:** Construct two \(n\)-qubit unitaries \(U_{i}=\bigotimes_{k=1}^{n}U_{i}^{(k)}\), \(i=1,2\), where each \(U_{i}^{(k)}\) is identically and independently sampled from a single-qubit set \(\mathcal{X}_{2}\) satisfying unitary \(2\)-design. The information of \(U_{i}\) is then communicated to both platforms via classical communication.
**Step 2. Running circuits:** After receiving the information about the sampled unitaries, each platform \(\mathcal{S}_{i}\) (\(i=1,2\)) initializes its quantum system in a computational basis state \(\ket{\mathbf{s}}\) and applies the first unitary \(U_{1}\) to \(\ket{\mathbf{s}}\). Subsequently, \(\mathcal{S}_{i}\) implements the quantum process \(\mathcal{E}_{i}\) and applies the second unitary \(U_{2}\). Finally, \(\mathcal{S}_{i}\) performs the projective measurement in the computational basis and obtains an outcome \(\mathbf{k}\). Repeating the above procedure many times, we obtain two probability distributions \(\Pr_{K|\mathbf{s},U_{1},U_{2}}^{(1)}\) and \(\Pr_{K|\mathbf{s},U_{1},U_{2}}^{(2)}\) over the measurement outcomes \(\mathbf{k}\) for the fixed computational state \(\ket{\mathbf{s}}\) and unitaries \(U_{1}\) and \(U_{2}\). By exhausting the computational states and repeatedly sampling the unitaries, we obtain two probability distributions \(\Pr_{K,S|U_{1},U_{2}}^{(i)}\) with respect to the sampled unitaries and computational state inputs. For simplicity, we abbreviate \(\Pr_{K,S|U_{1},U_{2}}^{(i)}\) to \(\Pr_{U}^{(i)}\).
**Step 3. Post-processing:** From the experimental data, we estimate the overlap between the Choi states \(\eta_{i}\) and \(\eta_{j}\) for \(i,j=1,2\) as
\[\begin{split}\mathrm{Tr}[\eta_{i}\eta_{j}]=4^{n}\sum_{\mathbf{s}, \mathbf{s}^{\prime},\mathbf{k},\mathbf{k}^{\prime}\in\{0,1\}^{n}}&(-2)^{- \mathcal{D}[\mathbf{s},\mathbf{s}^{\prime}]-\mathcal{D}[\mathbf{k},\mathbf{k}^{\prime}]}\\ &\times\overline{\Pr_{U}^{(i)}[\mathbf{s},\mathbf{k}]\mathrm{Pr}_{U}^{(j) }[\mathbf{s}^{\prime},\mathbf{k}^{\prime}]}.\end{split} \tag{5}\]
where \(\overline{\cdots}\) denotes the ensemble average over the sampled unitaries \(U_{1}\) and \(U_{2}\). This is proven in Appendix A. By setting \(i=1\) and \(j=2\), we can estimate the overlap \(\mathrm{Tr}[\eta_{1}\eta_{2}]\) from the above equation, which is the second-order cross-correlation of the probabilities \(\Pr_{U}^{(1)}\) and \(\Pr_{U}^{(2)}\). We can obtain the purities \(\mathrm{Tr}[\eta_{1}^{2}]\) and \(\mathrm{Tr}[\eta_{2}^{2}]\) by setting \(i=j=1\) and \(i=j=2\), respectively. These are the second-order autocorrelations of the probabilities. Using the estimated quantities, we compute the max process fidelity \(F_{\max}(\mathcal{E}_{1},\mathcal{E}_{2})\) in Eq. (4).
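As an illustration of Steps 1-3, the sketch below classically simulates the ancilla-free protocol for single-qubit processes and evaluates the estimator (5). It is a minimal example rather than the experimental implementation: Haar-random single-qubit unitaries stand in for the \(2\)-design, exact outcome probabilities replace finite-shot estimates, and the example channels and function names are our own choices.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(d=2):
    z = (rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_channel(kraus_ops, rho):
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def joint_probs(kraus_ops, u1, u2):
    """Pr_U[s, k] for one platform: prepare |s>, apply U1^T, the channel, U2, measure."""
    d = u1.shape[0]
    probs = np.zeros((d, d))
    for s in range(d):
        ket = np.zeros(d, dtype=complex); ket[s] = 1.0
        vec_in = u1.T @ ket
        rho_out = u2 @ apply_channel(kraus_ops, np.outer(vec_in, vec_in.conj())) @ u2.conj().T
        probs[s, :] = np.real(np.diag(rho_out)) / d  # uniform weight 1/d over the inputs s
    return probs

def choi_overlap(kraus_i, kraus_j, n=1, num_unitaries=3000):
    """Estimate Tr[eta_i eta_j] via Eq. (5); here n = 1, so single unitaries suffice."""
    d = 2 ** n
    bits = list(itertools.product((0, 1), repeat=n))
    ham = lambda a, b: sum(x != y for x, y in zip(a, b))
    total = 0.0
    for _ in range(num_unitaries):
        u1, u2 = haar_unitary(d), haar_unitary(d)  # for n > 1, use tensor products of single-qubit unitaries
        p_i, p_j = joint_probs(kraus_i, u1, u2), joint_probs(kraus_j, u1, u2)
        for (a, s), (b, t) in itertools.product(enumerate(bits), repeat=2):
            for (c, k), (e, l) in itertools.product(enumerate(bits), repeat=2):
                total += (-2.0) ** (-ham(s, t) - ham(k, l)) * p_i[a, c] * p_j[b, e]
    return (4 ** n) * total / num_unitaries

# Two implementations of the identity gate: one with 10% bit-flip noise, one ideal
X = np.array([[0, 1], [1, 0]], dtype=complex)
kraus_a = [np.sqrt(0.9) * np.eye(2, dtype=complex), np.sqrt(0.1) * X]
kraus_b = [np.eye(2, dtype=complex)]
print("estimated Tr[eta_a eta_b]:", choi_overlap(kraus_a, kraus_b))  # close to the exact value 0.9
```

Applying the same routine with identical channels returns the process purities, so all quantities needed for \(F_{\max}\) follow from the collected probability distributions.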
There are several important points to note about our protocol. First, when classical simulation is available, the protocol can be used to compare the experimentally implemented process to the theoretical simulation, providing a useful tool for experiment-theory comparison. Second, our protocol can also estimate the process purity \(\mathrm{Tr}[\eta_{\mathcal{E}}^{2}]\) of a quantum process \(\mathcal{E}\), which measures the extent to which \(\mathcal{E}\) preserves the purity of the quantum state. This is an important measure for characterizing quantum processes, and our protocol provides an efficient way to estimate it. Finally, it is worth noting that the definition of max process fidelity is not unique, and different approaches exist [27; 29]. Our protocol, based on statistical correlations of randomized inputs and measurements, can be readily extended to any metric that depends solely on the process overlap \(\mathrm{Tr}[\eta_{\mathcal{E}_{1}}\eta_{\mathcal{E}_{2}}]\) and the process purities \(\mathrm{Tr}[\eta_{\mathcal{E}_{1}}^{2}]\) and \(\mathrm{Tr}[\eta_{\mathcal{E}_{2}}^{2}]\). This makes our protocol highly versatile and applicable to a wide range of quantum computing scenarios.
### Randomized quantum process tomography
Here we argue that our protocol is applicable for full quantum process tomography. It is worth noting that in Ref. [30], a method was proposed for performing full quantum state tomography using randomized measurements. For an \(n\)-qubit quantum process \(\mathcal{E}\), we can first construct the Choi state of \(\mathcal{E}\) and then use the proposed protocol to obtain the full information of the Choi state \(\eta_{\mathcal{E}}\). However, as previously mentioned, this method is not efficient and is impractical due to the imperfect preparation of maximally entangled states and the requirement for an additional \(n\)-qubit auxiliary system.
Likewise, we may use the randomized input states trick introduced in Section III.2 to overcome the above issues. Specifically, based on the experimental data \(\mathrm{Pr}_{U}\) collected in Section III.2, the full information of an unknown \(n\)-qubit quantum process \(\mathcal{E}\) can be obtained via
\[\begin{split}\eta_{\mathcal{E}}=4^{n}\sum_{\mathbf{s},\mathbf{s}^{\prime}, \mathbf{k},\mathbf{k}^{\prime}\in\{0,1\}^{n}}&(-2)^{-\mathcal{D}[\mathbf{s}, \mathbf{s}^{\prime}]-\mathcal{D}[\mathbf{k},\mathbf{k}^{\prime}]}\\ &\times\overline{\Pr_{U}[\mathbf{s},\mathbf{k}]U^{\dagger}|\mathbf{s}^{\prime} \mathbf{k}^{\prime}\rangle\!\langle\mathbf{s}^{\prime}\mathbf{k}^{\prime}|U},\end{split} \tag{6}\]
where \(U=U_{1}\otimes U_{2}\) and \(\overline{\cdots}\) denotes the ensemble average over the sampled unitaries \(U_{1}\) and \(U_{2}\) as before. This is proven in Appendix B.
### Comparison with previous works
Knorzer _et al._[18] have recently introduced a new set of protocols that enable pair-wise comparisons between distant nodes in a quantum network. The authors propose four cross-platform state comparison schemes as alternatives to Elben's protocol [17], each of which relies on the presence of quantum links. In addition to this, they present three protocols, referred to as M1, M2, and M3, which facilitate cross-platform comparisons of quantum processes, assuming that a cross-platform state comparison protocol is available.
We will now explain how our protocol differs from M1, M2, and M3. While M1 involves an ancilla-assisted comparison protocol that we have rephrased in Section III.1, our protocol does not rely on ancilla qubits. Similarly, M3 involves
a series of entanglement tests that are fundamentally different from our protocol. Although our protocol and M2 share some similarities, such as the absence of ancilla qubits and the need to sample random unitaries and computational basis states, there are notable differences. Specifically, our protocol only needs to sample from a _single-qubit_ unitary \(2\)-design, can accurately estimate the max fidelity, and can compare the performance of arbitrary quantum processes. On the other hand, M2(i) estimates the average gate fidelity and requires sampling from a _multi-qubit_ unitary \(2\)-design, which can be resource-intensive as the number of qubits increases. M2(iii) is conceptually straightforward but can only estimate the ability of quantum processes to preserve quantum information in the computational basis. Additionally, M2(i) and M2(iii) are limited to comparing the performance of unitary quantum processes.
## IV Experiments
In this section, we report experimental results on cross-platform comparison of quantum processes across various spatially and temporally separated quantum devices. First, we demonstrate the efficacy of our protocol in comparing the H and CNOT gates implemented on different platforms with their ideal counterparts obtained from classical simulation. Next, we monitor the stability of the "Qianshi" quantum computer from Baidu over a week with our protocol. Finally, we conduct an extensive numerical analysis to determine the expected number of experimental runs required to obtain reliable results. All the experiments are conducted using the Quantum Error Processing toolkit developed on the Baidu Quantum Platform [31].
### Comparing spatially separated quantum processes
We utilize our ancilla-free cross-platform comparison protocol to assess the performance of H and CNOT gates implemented on seven distinct platforms that are freely accessible to the public over the internet. These platforms include six superconducting quantum computers, namely _ibmq_quito_ (IBM_1), _ibmq_oslo_ (IBM_2), _ibmq_lima_ (IBM_3), _ibm_nairobi_ (IBM_4), _ibmq_manila_ (IBM_5), and _baidu_qianshi_ (BD_1), as well as the _baidu ideal simulator_ (IDEAL), which is intended for experiment-theory comparisons.
First of all, it is noteworthy that the random Pauli basis measurements \(\{X,Y,Z\}\) are equivalent to randomized measurements with the single-qubit Clifford group [24; 32]. The single-qubit Clifford group forms a unitary \(2\)-design, and it can be employed to conduct complete process tomography of \(n\)-qubit quantum processes. This equivalence enables us to sample directly from the \(3^{n}\) Pauli preparation and \(3^{n}\) Pauli measurement unitaries in our experiments.
To begin with, we utilize our protocol to compare the performance of the single-qubit H gate across seven quantum platforms. To achieve this, we create \(2^{1}\times N_{U}=20\) random circuits and execute \(M_{\rm shots}=500\) projective measurements for each circuit on each platform. Furthermore, we employ the same protocol to compare the performance of the CNOT gate implemented on these platforms. To accomplish this, we generate \(2^{2}\times N_{U}=400\) random circuits and perform \(M_{\rm shots}=500\) repetitions for each quantum circuit on each platform. The performance matrices for the H and CNOT gates are presented in Figure 2.
Figure 2: The performance matrices for the single-qubit H and two-qubit CNOT gates generated from seven different quantum platforms. The entry in the \(i\)-th row and \(j\)-th column of the matrix represents the max process fidelity between platform-\(i\) and platform-\(j\). The entries in the upper right corner are visualized in pie chart format. (a) The performance matrix of the H gate. Each entry is inferred from \(2^{1}\cdot N_{U}=20\) random circuits and each circuit is repeated \(M_{\rm shots}=500\) times. (b) The performance matrix of the CNOT gate. Each entry is inferred from \(2^{2}\cdot N_{U}=20\) random circuits and each circuit is repeated \(M_{\rm shots}=500\) times.
The experimental results make it clear that, while some quantum devices may achieve fidelities that are comparable to those of the ideal simulator, there remains a significant discrepancy between them. This emphasizes the importance of directly comparing the performance of quantum devices with each other, rather than relying solely on comparisons to an ideal simulator, as such comparisons may not be adequate.
### Comparing temporally separated quantum processes
Our protocol is also useful for monitoring the stable performance of quantum devices. To this end, we employ the ancilla-free cross-platform comparison protocol to assess the stability of H and CNOT gates implemented on Baidu's "Qianshi" quantum computer (BD_1) over the course of one week. The experimental settings for the H and CNOT gates are identical to those used in the previous section. Specifically, for the single-qubit H gate, we create \(2^{1}\times N_{U}=20\) random circuits daily and execute \(M_{\rm shots}=500\) projective measurements for each circuit on "Qianshi". For the two-qubit CNOT gate, we create \(2^{2}\times N_{U}=400\) random circuits daily and execute \(M_{\rm shots}=500\) projective measurements for each circuit on "Qianshi". The performance matrices of the single-qubit H and two-qubit CNOT gates generated from the daily data of "Qianshi" are shown in Figure 3.
After analyzing the cross-platform fidelities presented in Figure 3, we discover several noteworthy features. First, we observe that the stability of the H gate is considerably higher than that of the CNOT gate on "Qianshi," which aligns with the expectation that two-qubit gates are harder to implement and maintain in a superconducting quantum computer than single-qubit gates. Additionally, on the last day of the week (DAY_7), there is a significant drop in the performance of the CNOT gate. After consulting with researchers from Baidu's Quantum Computing Hardware Laboratory, it was determined that the instability was caused by a sudden halt of the dilution cooling system. After the system was restarted, all native quantum gates had to be re-calibrated to achieve optimal performance. Furthermore, it was observed that the temperature variation had a negligible impact on the H gate. Such observations might help experimenters identify potential hardware issues.
### Scaling of the required number of experimental runs
In practice, the accuracy of the estimated fidelity is unavoidably subject to statistical error, as a result of the finite number of random circuits (\(2^{n}\cdot N_{U}\)) and the finite number of projective measurements (\(M_{\rm shots}\)) performed per random circuit. Therefore, it is experimentally crucial to consider how the total number of experimental runs, \(2^{n}\cdot N_{U}\cdot M_{\rm shots}\), which constitutes the measurement budget, must scale in order to suppress the statistical error below a prespecified threshold \(\epsilon\) when evaluating the performance of an \(n\)-qubit quantum process. In the following, we present numerical simulations investigating this behavior.
Figure 3: The performance matrices of the single-qubit H and two-qubit CNOT gates generated from the daily data of Baidu’s “Qianshi” quantum computer for one week. The entry in the \(i\)-th row and \(j\)-th column of the matrix represents the max process fidelity between platform-\(i\) and platform-\(j\). The entries in the upper right corner are visualized in pie chart format. (a) The performance matrix of the H gate. Each entry is inferred from \(2^{1}\cdot N_{U}=20\) random circuits and each circuit is repeated \(M_{\rm shots}=500\) times. (b) The performance matrix of the CNOT gate. Each entry is inferred from \(2^{2}\cdot N_{U}=400\) random circuits and each circuit is repeated \(M_{\rm shots}=500\) times.
In Figure 4, numerical results for the average statistical error as a function of the measurement budget \(2^{n}\cdot N_{U}\cdot M_{\rm shots}\) are presented and the scaling of the measurement budget with respect to the system size \(n\) is derived. To remain consistent with the previous experiments, we choose the H gate when \(n=1\) and the CNOT gate when \(n=2\) in the simulation. Note that in this case the ideal fidelity \(F_{\rm max}=1\) is known. We repeat our protocol on the ideal simulator \(5\) times for each point in the figure and record the mean of the statistical errors \(|\widetilde{F}_{\rm max}-1.0|\). We find that the statistical error scales as \(|\widetilde{F}_{\rm max}-1.0|\sim 1/(2^{n}N_{U}M_{\rm shots})\), where \(\widetilde{F}_{\rm max}\) is the max process fidelity estimated via simulation.
Now we investigate the scaling of the required number of experimental runs, \(2^{n}M_{\rm shots}\), per unitary to estimate the max fidelity \(\widetilde{F}_{\rm max}\) within an average statistical error of \(\epsilon=0.05\) while fixing \(N_{U}\) to \(100\). We apply our protocol to two very different types of quantum processes, with different numbers of qubits \(n\): (i) a highly entangled quantum process corresponding to an \(n\)-qubit GHZ state preparation circuit (Entangled) and (ii) a completely local quantum process composed of \(n\) single-qubit rotation gates (Non-Entangled). The numerical results are presented in Figure 5. From the fitted data, we find that \(2^{n}M_{\rm shots}\sim 2^{bn}\), where \(b=2.02\pm 4\times 10^{-4}\) for the entangled case and \(b=1.94\pm 2\times 10^{-4}\) for the non-entangled case. The analysis shows that our ancilla-free cross-platform protocol requires a total number of experimental runs that scales as \(2^{n}N_{U}M_{\rm shots}\sim 2^{bn}\) with \(b\approx 2\). This scaling, though exponential, is significantly milder than that of full quantum process tomography (QPT), which has an exponent \(b\geq 4\) [33].
## V Conclusions
We have proposed an ancilla-free cross-platform protocol that enables the performance comparison of arbitrary quantum processes, using only single-qubit unitaries and classical communication. This protocol is thus suitable for comparing quantum processes that are independently manufactured over different times and locations, built by different teams using different technologies. We have experimentally demonstrated the cross-platform protocol on six remote quantum computers fabricated by IBM and Baidu, and monitored the stable functioning of Baidu's "Qianshi" quantum computer over one week. The experimental results reveal that our protocol accurately compares the performance of different quantum computers with significantly fewer measurements than quantum process tomography. Additionally, we have shown that our protocol is applicable to quantum process tomography.
However, some problems must be further explored to make the cross-platform protocols more practical. Firstly, the sample complexity of these protocols lacks theoretical guarantees, thereby necessitating the empirical selection of experimental parameters. To address this challenge, it may be possible to adapt techniques from [34]. Secondly, it is vital to make the protocols robust against state preparation and measurement errors. One possible solution is to apply quantum error mitigation methods [35; 36; 37; 38; 39] to alleviate quantum errors and increase the estimation accuracy. We suggest that ideas and insights from randomized benchmarking [40] and quantum gateset tomography [41] might be helpful for designing error robust cross-platform protocols.
Figure 5: Scaling of the minimal number of required experimental runs \(2^{n}M_{\rm shots}\) to estimate \(\widetilde{F}_{\rm max}\) up to a fixed statistical error of \(0.05\) as a function of the number of qubits \(n\). The number of random unitaries is fixed to \(N_{U}=100\). The target quantum process is taken to be the \(n\)-qubit GHZ state preparation circuit for the entangled case and the rotation circuit composed of \(n\) single-qubit rotation gates for the non-entangled case. The data is obtained via numerical simulation.
## Acknowledgements
Part of this work was done when C. Z. was a research intern at Baidu Research. We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. This work was partially supported by the National Science Foundation of China (Nos. 61871111 and 61960206005).
|
2305.01043 | Multiphasic stochastic epidemic models | At the onset of the Covid-19 pandemic, a number of non-pharmaceutical
interventions have been implemented in order to reduce transmission, thus
leading to multiple phases of transmission. The disease reproduction number
$R_t$, a way of quantifying transmissibility, has been a key part in assessing
the impact of such interventions. We discuss the distinct types of transmission
models used and how they are linked. We consider a hierarchical stochastic
epidemic model with piece-wise constant $R_t$, appropriate for modelling the
distinct phases of the epidemic and quantifying the true disease magnitude. The
location and scale of $R_t$ changes are inferred directly from data while the
number of transmissibility phases is allowed to vary. We determine the model
complexity via appropriate Poisson point process and Dirichlet process-type
modelling components. The models are evaluated using synthetic data sets and
the methods are applied to freely available data from California and New York
states as well as the United Kingdom and Greece. We estimate the true infected
cases and the corresponding $R_t$, among other quantities, and independently
validate the proposed approach using a large seroprevalence study. | Petros Barmpounakis, Nikolaos Demiris | 2023-05-01T19:04:09Z | http://arxiv.org/abs/2305.01043v1 | # Multiphasic stochastic epidemic models
###### Abstract
At the onset of the Covid-19 pandemic, a number of non-pharmaceutical interventions have been implemented in order to reduce transmission, thus leading to multiple phases of transmission. The disease reproduction number \(R_{t}\), a way of quantifying transmissibility, has been a key part in assessing the impact of such interventions. We discuss the distinct types of transmission models used and how they are linked. We consider a hierarchical stochastic epidemic model with piece-wise constant \(R_{t}\), appropriate for modelling the distinct phases of the epidemic and quantifying the true disease magnitude. The location and scale of \(R_{t}\) changes are inferred directly from data while the number of transmissibility phases is allowed to vary. We determine the model complexity via appropriate Poisson point process and Dirichlet process-type modelling components. The models are evaluated using synthetic data sets and the methods are applied to freely available data from California and New York states as well as the United Kingdom and Greece. We estimate the true infected cases and the corresponding \(R_{t}\), among other quantities, and independently validate the proposed approach using a large seroprevalence study.
The emergence in early 2020 of Covid-19, an infectious disease caused by the virus SARS-CoV2, has placed health systems around the globe under immense pressure. In March 2020, the World Health Organization declared Covid-19 a global pandemic, and as of the end of September 2022 more than 6.5 million people have died from the disease or its complications. At the beginning of the pandemic, in the absence of available vaccines or suitable medication, the majority of governments around the globe resorted to Non-Pharmaceutical Interventions (NPIs) in an attempt to stop the exponential spread of the virus and reduce transmissibility. Such NPIs involved measures such as work-from-home policies, school and university closures, stay-at-home guidance for people in high-risk groups and full lockdowns.
These measures had an effect on reducing transmissibility and resulted in spreading trajectories that could not be properly described by standard epidemic models due to the resulting multiphasic nature of transmission. The first systematic technique to assess these interventions was due to Flaxman et al. (2020), who proposed a renewal equation model whose infection dynamics were modelled through a multilevel framework incorporating NPIs. We amend this model by inferring the points in time at which transmissibility changes, as well as the magnitude of infectiousness, in a data-driven manner. We determine the model complexity using priors based upon variations of the Poisson process (PP) and the Dirichlet process (DP) via their stick-breaking constructions (Miller and Harrison (2018); Sethuraman (1994)).
Several models have been proposed in the literature for the estimation of multiphasic infectious diseases, particularly Covid-19. Briefly, a stochastic Susceptible-Exposed-Infectious-Removed (SEIR) model with a regression framework for the effect of the NPIs on transmissibility is used in Knock et al. (2021) while Birrell et al. (2021), Li et al. (2021) and Chatzilena et al. (2022) use stochastic SEIR models where the transmission mechanism is described by a system of non-linear ordinary differential equations and the transmission rate is modelled by a diffusion process. Modelling the transmission rate as a random walk facilitates gradual and smooth changes in time. A piecewise linear quantile trend model was proposed by Jiang et al. (2021), a kernel-based SIR model distinguishing the different phases of the transmissibility in space was developed by Geng et al. (2021) while Wistuba et al. (2022) incorporated splines to estimate the reproduction number in Germany.
Simpler forms of deterministic and stochastic multiphasic epidemic models have been considered before. In the context of modelling SARS-CoV2 transmission, Flaxman et al. (2020) used an approach with a fixed number, location and scale of the \(R_{t}\) changes. Related work based upon variations of Dirichlet process mixtures is presented in Hu and Geng (2021) and Creswell et al. (2023). In the former, the authors used a Mixture of finite mixtures (MFM) model on a Susceptible-Infected-Recovered-Susceptible model, while in the latter the authors used a suitably modified Pitman-Yor process but only for the scenario of fitting to the observed cases, thus dispensing with the effort to estimate the complete epidemic burden and the suitable adjustment for the reproduction number. The main advantage of the proposed methodology is the intuitive characterization of the epidemic in terms of multiple phases of transmissibility. The number and magnitude of the distinct phases are determined purely by data without explicitly using information about policy changes and NPIs. This approach should be central to a retrospective assessment of the NPIs: an evidence-based method for estimating the timing and effect of those interventions, minimising the risk of introducing several types of bias.
The paper is organized as follows. In section 1 we define the proposed compartmental process, elucidate its equivalence with renewal process-based models and describe the observation regimes of the data. In section 2 we complete the model definition by characterising the complexity regimes. Section 3 assesses the proposed models via simulation experiments while section 4 contains the application to data from California and New York state, the United Kingdom and Greece. The paper concludes with discussion.
## 1 Modelling Disease Transmission
### Model Definition and Related Characterisations
The methodology for modelling the time-varying disease transmissibility has been implemented under two distinct but equivalent models, the compartmental Susceptible-Infectious-Removed (SIR) model and the seemingly simpler time-since-infection model with population susceptibility reduction. Here we define both models and delineate their equivalence.
The model assumes that the population has size \(n\), is closed (demographic changes during the course of the epidemic are ignored), homogeneous and homogeneously mixing. In the stochastic SIR model, an infected individual makes contact with any other individual on day \(t\) at the points of a Poisson process with time-varying intensity \(\frac{\lambda_{t}}{n}\). This scaling is commonly adopted as it makes the contact process independent of the size of the population (e.g., Andersson and Britton, 2000). If these (close) contacts of an infected individual occur with a susceptible individual, they result in an infection. Each individual remains infectious for a random time period \(Y\). All Poisson processes in this construction are assumed to be independent. The disease reproduction number is defined as \(R_{t}=\lambda_{t}*E[Y]\), \(t=1,\ldots,T\), where \(T\) is the time horizon of the study.
For this model the expected number of new infections \(c_{t+1}\) at day \(t+1\) is given by:
\[E[c_{t+1}]=S_{t}*\frac{\lambda_{t}}{n}*I_{t}*\Delta_{t+1-t}, \tag{1}\]
with \(I_{t}\) denoting the active set of infectives:
\[I_{t}=\sum_{s=0}^{t}\sum_{j=1}^{c_{s}}P(Y_{j}>t-s) \tag{2}\]
and \(P(Y_{j}>t-s)\) the probability that individual \(j\) infected on day \(s\) remains infectious on day \(t\). This probability is implicitly determined by the disease characteristics. Then (1) can be rewritten as
\[E[c_{t+1}]=S_{t}*\frac{R_{t}}{n}*\frac{\sum_{s=0}^{t}\sum_{j=1}^{c_{s}}P(Y_{j }>t-s)}{E[Y]}=\frac{S_{t}}{n}*R_{t}*\sum_{s=0}^{t}c_{s}*g_{s}(t), \tag{3}\]
where \(g_{s}(t)=\frac{P(Y>t-s)}{E[Y]}\) is called the generation interval which defines the time from infection of an individual until the first infection they generate, see for example Ake Svensson (2015), Ake Svensson (2007) and Champredon et al. (2018). Note that equation (3) is used in the commonly adopted technique of Cori et al. (2013) for estimating the instantaneous reproduction number. In that approach, the term \(\frac{S_{t}}{n}\) which accounts for the depletion of the susceptible population is ignored since the aim is somewhat different.
One should also consider potential 'superspreading' events, in which certain individuals infect unusually large numbers of secondary cases (Shen et al., 2004; Lipsitch et al., 2003). We account for this variability by assuming that the individual reproduction number is gamma distributed with mean \(R_{t}\) and dispersion parameter \(k\), yielding \(c_{t}\sim NegativeBinomial(E[c_{t}],k)\) (Lloyd-Smith et al., 2005).
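As a minimal illustration of how the renewal equation (3) and the negative binomial observation model combine in a forward simulation, consider the sketch below. The discretisation of the generation interval, the dispersion value and all function names are illustrative choices of ours; the Gamma mean and standard deviation follow the values quoted in Section 4.1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def discretised_interval(mean, sd, length):
    """Daily weights of a Gamma-distributed interval (a simple discretisation)."""
    shape, scale = (mean / sd) ** 2, sd ** 2 / mean
    w = np.diff(stats.gamma.cdf(np.arange(length + 1), a=shape, scale=scale))
    return w / w.sum()

def renewal_step(cases, R_t, S_t, n, g, k=0.5):
    """Expected new infections from Eq. (3), followed by a negative binomial draw."""
    t = len(cases)
    lags = np.arange(1, min(t, len(g)) + 1)
    mean = (S_t / n) * R_t * np.sum(np.asarray(cases)[t - lags] * g[lags - 1])
    mean = max(float(mean), 1e-9)
    return int(rng.negative_binomial(k, k / (k + mean)))  # mean `mean`, dispersion k

g = discretised_interval(mean=6.5, sd=4.4, length=20)  # generation interval (Section 4.1)
n_pop = 1_000_000
cases = [20]                                           # seed infections on day 0
for day in range(1, 100):
    S_t = n_pop - sum(cases)
    cases.append(renewal_step(cases, R_t=1.3, S_t=S_t, n=n_pop, g=g))
```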
#### 1.1.1 The Disease Reproduction Number
The reproduction number \(R_{t}\) is of great practical interest as it is used to assess if the epidemic is growing or shrinking. Here we consider two distinct instances of reproduction number. The effective reproduction number \(R_{e}(t)=S_{t}*R_{t}\) describes the expected number of secondary cases generated by an infectious individual. Then \(R_{e}(t)>1\) and \(R_{e}(t)<1\) indicate that the epidemic is growing or shrinking respectively and reducing \(R_{e}(t)\) below unity is the typical target of public health authorities. In contrast, \(R_{t}\) quantifies contacts that may not always result in new infections, due to mixing with the immune proportion of the population. Therefore, \(R_{t}>1\) does not necessarily mean that the epidemic is growing. A detailed discussion about reproduction numbers can be found in Pellis et al. (2022).
### Observation Regimes
We consider two distinct observation regimes: one where the observed number of cases corresponds to the total number of infections, explained below, and one where the total number of infections is indirectly estimated, outlined in 1.2.2.
#### 1.2.1 Observed Infections
The regime where the total number of infections is observed may be of interest in its own right but may also be used for certain transmissible diseases, for example in the analysis of influenza-like illness data when seroprevalence study information is available. Epidemic models are attractive for analysing such data and are naturally defined in terms of infector-infectee pairs and the timing of such events. In reality, however, this type of data is rarely available. Disease monitoring is based on the daily reported infections, which are known to be susceptible to multiple problems, including a time lag between the timing of infection and symptom onset or testing positive.
In the case of Covid-19 a large proportion of the population experiences asymptomatic or mild disease (Ward et al., 2021) leading to severe under-reporting. Inference about the reproduction number can be robust when the reported cases are used if depletion of the susceptible population is accounted for, or if the observed proportion of cases remains constant over time. One way to validate this assumption is by sequentially performing seroprevalence studies to estimate the true disease prevalence and the proportion of unreported incidences. However, regular such information was not available in most countries. In the following subsection, we describe an alternative approach that dispenses with the need for this assumption.
#### 1.2.2 Unobserved Cases
The case where infections may not be directly observed has been studied in a different context by Demiris et al. (2014). In the case of the pandemic, it became immediately apparent that the observed number of infections only partially accounts for the complete epidemic burden. An alternative technique was proposed by Flaxman et al. (2020) where the true cases were estimated by back-calculating infections from the daily reported deaths which are likely less prone to under-reporting. This method has the additional advantage of yielding an estimate of \(S_{t}\). We adopt this approach for the second level of our model and the daily deaths are linked with the true cases via:
\[\begin{split} d_{t}&\sim NegativeBinomial(E[d_{t}],k)\\ E[d_{t}]&=IFR*\sum_{i=0}^{t-1}c_{t-i}*\pi(i)\end{split} \tag{4}\]
Accurate estimates of the infection fatality ratio (\(IFR\)) and the time-from-infection-to-death distribution (\(\pi(i)\)) are necessary for estimating incidence, treated here as a latent parameter. The \(IFR\) and \(\pi(i)\) parameters may be calculated independently from external data or in a single stage, leveraging additional evidence from seroprevalence studies as illustrated in Section 4.
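A corresponding sketch of the observation model (4) is given below; it links a synthetic infection curve to expected and simulated daily deaths. The infection-to-death distribution uses the mean and standard deviation quoted in Section 4.1, while the \(IFR\), the dispersion parameter and the toy infection curve are purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def discretised_interval(mean, sd, length):
    shape, scale = (mean / sd) ** 2, sd ** 2 / mean
    w = np.diff(stats.gamma.cdf(np.arange(length + 1), a=shape, scale=scale))
    return w / w.sum()

pi = discretised_interval(19.0, 8.5, 60)       # infection-to-death distribution (Section 4.1)
IFR = 0.01                                     # illustrative value, not the paper's estimate
cases = np.exp(0.05 * np.arange(120)) * 10     # a toy exponentially growing infection curve

def expected_deaths(cases, ifr, pi):
    """E[d_t] = IFR * sum_i c_{t-i} * pi(i), the convolution in Eq. (4)."""
    T = len(cases)
    e_d = np.zeros(T)
    for t in range(T):
        i = np.arange(0, min(t, len(pi) - 1) + 1)
        e_d[t] = ifr * np.sum(cases[t - i] * pi[i])
    return e_d

e_d = expected_deaths(cases, IFR, pi)
k = 0.5                                        # dispersion, illustrative
observed = rng.negative_binomial(k, k / (k + np.maximum(e_d, 1e-9)))
```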
## 2 Epidemic Complexity Determination
The number of phases may be treated as a fixed but unknown integer or as a random quantity to be modelled and estimated from data. We describe two such models in the following two subsections.
### Deterministic Number of Phases
For the models described above, 'model complexity' refers to the number of epidemic phases. In Flaxman et al. (2020) the number of phases was selected a priori and the times when the reproduction number \(R_{t}\) changed were also
predefined. The locations of these points were informed by the NPIs implemented by each government leading to a piece-wise constant reproduction number \(R_{t}\), effectively assuming immediate effect of those NPIs. We also consider that \(R_{t}\) is a piece-wise constant function and we amend this transmission mechanism by inferring the location and magnitude of \(R_{t}\) changes directly from the data. The number, \(K\), of epidemic phases is investigated using models with different \(K\) values and the best model is selected using the Watanabe-Akaike information criterion (WAIC) (Watanabe, 2013) and Leave-one-out cross-validation (LOO) (Vehtari et al., 2017). The model is defined as follows:
\[R_{t}=\left\{\begin{array}{l}r_{1},\quad t\leq T_{1}\\...\\ r_{j+1},\quad T_{j}<t\leq T_{j+1}\\...\\ r_{K},\quad T_{K-1}<t\leq T\\ \end{array}\right.\]
\[r_{j} \sim f(\cdot),\quad r_{j}\in(0,\infty),\quad j=1,...,K \tag{5}\] \[T_{i+1} =T_{i}+e_{i}\] \[T_{1} \sim\operatorname{Uniform}\left(3,T\right)\] \[e_{i} \sim\operatorname{Uniform}\left(0,100\right),\quad i=1,...,K-1\]
### Stochastic Number of Phases
Under the Bayesian paradigm, a natural but not trivial way is to treat the model complexity, here the number of epidemic phases K, as a parameter and learn its posterior distribution. The 'reversible jump' algorithm (e.g., Richardson and Green, 1997) could be used to explore the joint space of K and within-K models. Here we adopt a different approach and model \(K\) as a characteristic of two stochastic models, the Poisson process (PP) and variations of the Dirichlet process (DP) (Ferguson, 1973). For both processes, we use the stick-breaking representation, see Miller and Harrison (2013) and Sethuraman (1994) for the PP and DP respectively, facilitating inference for \(K\). The directed acyclic graph (Figure 1) represents the general structure of our modelling framework.
Estimating the number of phases of the epidemic and the associated location and magnitude of the \(R_{t}\) changes can lead to identifiability problems for \(R_{t}\) and its generative quantities, notably the total number of infections. In order to overcome such issues we explore both a single and a multi-stage modelling procedure (e.g., Bhatt et al., 2020). In the latter, at the first stage, the latent disease cases are estimated using a Gaussian Process (GP) model and then the medians of these latent cases are treated as data with likelihood given in (3). The GP for the estimation of cases is presented in the supplementary material.
#### 2.2.1 Poisson Point Process-based Model
We consider that the arrival of new phases in the time horizon (0,T) is driven by a time-homogeneous Poisson process with rate \(\lambda\), with K growing linearly with time. Hence, following the first epidemic phase, the number, K-1, of new phases follows a Poisson distribution with rate \(\lambda\ast T\) while the duration of each phase a-priori follows an Exponential distribution with rate \(\lambda\). We follow Miller and Harrison (2013) and use the representation:
\[R_{t} =r_{z_{t}} \tag{6}\] \[r_{j} \sim f(\cdot),\quad r_{j}\in(0,\infty),\quad j=1,...,K\] \[z_{t} \sim\operatorname{Categorical}\left(\pi_{1:K}\right),\quad t=1,...,T\] \[\pi_{K} =1-\sum_{k=1}^{K}\pi_{k},\quad K=min\{j:\sum_{i=1}^{j}T_{i}\geq T\}\] \[\pi_{k} =\frac{T_{k}}{T},\quad k=1,...,K-1\] \[T_{i} \sim\operatorname{Exponential}\left(\lambda\right),\quad i=1,..., K_{max}\] \[\lambda \sim\operatorname{Gamma}\left(0.02,1\right)\]
truncating K at \(Kmax=100\), far higher than data-supported estimates.
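To see how the construction in (6) behaves a priori, the short simulation below draws phase durations, truncates them at the study horizon, forms the phase weights and assigns each day to a phase. It is an illustrative prior simulation only; fixing \(\lambda\) for the illustration, the choice of \(f(\cdot)\) for the \(r_{j}\) and the variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(3)
T, K_max = 250, 100

lam = 0.02                                       # fixed at the prior mean of Gamma(0.02, 1) for illustration
durations = rng.exponential(scale=1.0 / lam, size=K_max)
cum = np.cumsum(durations)
K = int(np.argmax(cum >= T)) + 1 if cum[-1] >= T else K_max

pi = np.empty(K)                                 # phase weights: pi_k = T_k / T, remainder to the last phase
pi[:K - 1] = durations[:K - 1] / T
pi[K - 1] = 1.0 - pi[:K - 1].sum()

z = rng.choice(K, size=T, p=pi)                  # z_t ~ Categorical(pi), one label per day
r = rng.gamma(shape=2.0, scale=0.5, size=K)      # r_j ~ f(.); an illustrative Gamma choice
R_t = r[z]
print("number of phases K =", K)
```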
#### 2.2.2 Dirichlet Process-based Model
An alternative model for the number of phases is based on the DP and its stick-breaking construction:
\[\begin{split} R_{t}&=r_{z_{t}}\\ r_{j}&\sim f(\cdot),\quad r_{j}\in(0,\infty),\quad j=1,...,L\\ z_{t}&\sim\mathrm{Categorical}\left(w_{1:L}\right), \quad t=1,...,T\\ w_{L}&=\prod_{k<L}(1-v_{k}),\quad K=\sum_{k=1}^{L}I \{w_{k}\geq 0\}\\ w_{l}&=v_{l}*\prod_{j=1}^{l-1}(1-v_{j}),\quad l=2, 3,...,L-1\\ w_{1}&=v_{1},\quad v_{i}\sim\mathrm{Beta}\left(1, \theta\right),\quad i=1,...,L-1\\ \theta&\sim\mathrm{Gamma}\left(1,1\right)\end{split} \tag{7}\]
where \(L\) is the truncation point of the DP, set here to 36. Here K is increasing with the scaling parameter \(\theta\).
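A corresponding prior simulation of the stick-breaking construction in (7) is sketched below; as before, the choice of \(f(\cdot)\) and the way the number of occupied phases is reported are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(4)
T, L = 250, 36

theta = rng.gamma(shape=1.0, scale=1.0)          # theta ~ Gamma(1, 1)
v = rng.beta(1.0, theta, size=L - 1)             # v_i ~ Beta(1, theta)

w = np.empty(L)
w[0] = v[0]
w[1:L - 1] = v[1:] * np.cumprod(1.0 - v)[:-1]    # w_l = v_l * prod_{j<l}(1 - v_j)
w[L - 1] = np.prod(1.0 - v)                      # remaining stick mass for the last atom

z = rng.choice(L, size=T, p=w)                   # phase label for each day
r = rng.gamma(shape=2.0, scale=0.5, size=L)      # r_j ~ f(.), illustrative choice
R_t = r[z]
print("occupied phases:", len(np.unique(z)))
```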
## 3 Simulation Experiments
Simultaneously learning the parameters and the dimension of a model is typically a challenging statistical task. Here we adopt a simulation-based approach to inference whose details are given in the supplement. We assess the performance of our methods by simulating epidemics of various characteristics for 250 days. The epidemic model defined in (1) was used for simulating daily infections and deaths. The population size was set at \(10^{8}\) with \(IFR=2\%\). The discretized
Figure 1: Directed acyclic graph of the model. Ellipses denote parameters to be learned by the model. The number of phases K is estimated by the DP/PP model or via model selection criteria.
infectious period and the infection-to-death interval are described in the supplementary material. The epidemic was simulated with 5 distinct increasing/decreasing phases resembling the observed Covid 19 outbreaks. The time-varying reproduction number was set as follows:
\[R_{t}=\left\{\begin{array}{ll}1.5,&t\leq 60\\ 0.95,&60<t\leq 100\\ 1.35,&100<t\leq 150\\ 0.8,&150<t\leq 200\\ 1.8,&200<t\leq 250\end{array}\right.\]
Using the model in (5) with the daily deaths as data, WAIC and LOO selected the model with 5 changepoints. Models with varying numbers of changepoints (3, 4, 5 and 6) incorrectly identified the first 10 days of the simulation as a distinct phase (Figure 2). This can be attributed to the lack of information at the start, a common issue in epidemic models. Following this period, the model with 5 changepoints correctly identifies the different epidemic phases, including the timing and magnitude of the changes. The total daily infections (Figure 2) are also accurately recovered. Inference was initiated on the day that 10 cumulative deaths were observed. Plots for the other models may be found in the supplementary material.
In addition to the finding that the models correctly select the right complexity, it is interesting to summarise the model behaviour under misspecification. Broadly, these findings may be summarised as follows: when we fix the number of phases to be smaller than the true one, the model correctly recovers the early phases but averages over the final ones, leading to poorly fitted models. In contrast, when \(K\) is fixed to be larger than the true value, we essentially recover the true patterns and obtain a good fit. Hence, slightly overestimating model complexity does not materially affect the recovery of the true signal. A list of detailed results is outlined in the supplement.
When fitting the models with a stochastic number of phases to daily infections, both the PP and DP models are precisely estimating the number of epidemic phases, the time of change and the true \(R_{t}\) value (Figure 3). The model was run for 100000 iterations and 8 chains. The analysis based on observing deaths is included in the supplementary material. Briefly, the intermediate phases of the epidemic are well estimated while the first and final phases are recovered with noise. The level of smoothing introduced by the cubic spline affects the noisy estimation of the cases; the lower the degrees of freedom the smoother the estimation of cases and subsequently the reproduction number.
Figure 2: Simulation and estimates based on observing deaths
## 4 Real-data Application
### Data Description and Preprocessing
The models were fitted to daily reported deaths from two US states, California and New York, and two European countries, the United Kingdom and Greece. The data are accessible from Johns Hopkins University and the ECDC, and the time horizon ran to the end of June 2021, when many NPIs were lifted. Due to a lack of data availability, the model does not account for reinfections. The age-standardized \(IFR\) for each country was informed by the meta-analysis of the COVID-19 Forecasting Team (2022), accounting for time, geography and population characteristics. We allowed the \(IFR\) to vary over time, accounting for the age structure of those infected, the burden on health systems and changes in treating the disease. The infection-to-death time and the generation interval were given Gamma distributions with (mean, standard deviation) set to (19, 8.5) and (6.5, 4.4) days, respectively.
### Analyses and Results
_California_ was one of the first US states to report cases on the 26th of January, 2020. A state of emergency was declared on March 4, 2020, and mass/social gatherings were banned while a mandatory statewide stay-at-home order was issued on March 19, 2020. We fitted the model to daily deaths and, using WAIC/LOO, selected 6 changepoints. Figures 4 and 5 suggest that \(R_{e}(t)\) was reduced after imposing restrictions and fell below the critical value of 1 after April 2020, when school closure was decided for the remainder of the 2019-2020 academic year. The epidemic remained under control until the summer of 2020 when \(R_{e}(t)\) jumped slightly above 1 following a gradual relaxation of measures. On August 31, 2020, a new set of measures called 'Blueprint for a Safer Economy' was applied and all models show that they were effective, alongside the gained immunity of the population, at reducing the effective reproduction number below one and keeping the epidemic under control until the first half of October 2020. All models estimate a sharp increase in \(R_{e}(t)\), which resulted in an increase in the daily reported cases and deaths between November 2020 and January 2021. Nighttime curfew and regional stay-at-home orders were announced at the start of December 2020, after which \(R_{e}(t)\) remained stable and began declining. The initiation of the vaccination program in early 2021 brought the epidemic under control with \(R_{e}(t)\) remaining below 1.
_New York_ state had, by April 10 2020, more confirmed cases than any country outside the US and was heavily affected at the start of the pandemic, with daily recorded deaths reaching a thousand in April. On March 15 all New York City schools were closed and on March 20 a state-wide stay-at-home order was declared. As a result, the models show a drop of \(R_{e}(t)\) below 1 from mid-March 2020 until August 2020 (Figures 4 and 5). The best-performing model based on WAIC and LOO had 7 changepoints (8 distinct phases). This model estimates that after the summer of 2020, \(R_{e}(t)\)
Figure 3: True (solid line) and estimated reproduction number \(R_{t}\) with 95% Cr.I. (dashed line) based on observing infections
remained above 1 up until the start of 2021 with a small increase during November and the holiday season. The DP and PP models show similar estimates for \(R_{e}(t)\) (Figure 5).
For the _United Kingdom_ a model with 8 changepoints was selected by WAIC and LOO. Until early March 2020, when a lockdown was imposed, we estimate that \(R_{t}\approx 3.5\) (Figure 4). These measures were lifted in early June and during the lockdown \(R_{e}(t)\) remained below 1, so the epidemic remained under control. After the summer \(R_{e}(t)\) increased above 1 and the so-called rule of six was imposed, while on November 5, 2020, the second lockdown was announced. The number of reported deaths was reduced after the initiation of the vaccination program on January 4 2021. Virtually identical estimates for the UK \(R_{e}(t)\) are inferred by the DP and PP models (figures and additional details in the supplementary material).
We conducted an independent (or 'external') validation of the model performance based upon REACT-2, an antibody prevalence study conducted in the UK with the participation of more than 100000 adults (Ward et al., 2021). This is a unique opportunity as it took place in early July 2020, when waning immunity was unlikely, and provides a reasonable
Figure 4: Estimation of Effective Reproduction Number \(R_{e}(t)\) with 50% CrI. (solid and dashed lines) based on observing deaths, fixed number of phases model.
estimate of the total disease burden up to that time. The estimated prevalence for the adult population (children were excluded) was 6.0% (95% CI: 5.8, 6.1) and our estimate for the whole population is 7.5% (95% Cr.I.: 5.7, 10.) (Figure 6), well compatible with that independent estimate.
For _Greece_ WAIC and LOO selected the 7-changepoint model. In the starting phase, we estimate \(R_{e}(t)=3.36\) \((sd=0.88)\) and a decrease below 1 in the first half of March 2020 (Figure 4). On March 10 the government suspended most activities, including educational, shopping and recreational, while a week later all nonessential movement was restricted. The \(R_{e}(t)\) estimate remained below 1 until early June 2020, when it increased following the lifting of restrictions. During the summer \(R_{e}(t)\) remained above 1 until November 2020, as a spike in cases during October led to new measures. Similar estimates for \(R_{e}(t)\) are obtained by the DP and PP models (Supplementary material).
The computation time was similar for the PP and DP models with the DP being faster. More importantly, we get valuable insights on the effectiveness of the measures imposed by the governments. For New York and the UK it appears that the NPIs predate the reductions in transmissibility. California and Greece adopted the measures before a
Figure 5: Estimation of Effective Reproduction Number \(R_{e}(t)\) with 95% Cr.I. (solid and dashed lines) based on observing deaths, multi-stage approach.
large first wave, like other EU countries and US states. All regions were similar when these measures were relaxed: multiple epidemic waves emerged and the estimated \(R_{e}(t)\) remained above 1.
The results of our simulation experiments corroborate the findings of the application to real data from different areas. The time-ordering of the data facilitates avoiding label-switching problems typically encountered when fitting mixture models. By selecting the number of phases we capture mortality changes in all the real-world examples (Figure 7). The DP and PP models can infer a slightly higher number of phases but the conclusions are not materially affected. This observation is in line with Rousseau and Mengersen (2011) who show a generally stable behaviour of such so-called overfitted mixture models, theoretically verifying the robust behaviour of the developed models.
## 5 Discussion
In this article, we propose 3 models for the transmission mechanism of infectious diseases with multiple epidemic phases. We use freely available data to estimate the points in time when transmissibility changes and the realised magnitude of the NPI effects. We adopt this approach since many of these interventions coexist or overlap and identifiability issues can arise when disentangling individual effects and the associated time lags. Essentially, one may retrospectively
Figure 6: Cumulative sum of estimated daily infections with 95% Cr.I. (dashed lines) and the estimation of REACT-2 with 95% C.I. (solid lines) for the United Kingdom
assess the effect of the NPIs by comparing the changes in the reproduction number with the dates that these measures were imposed. Selecting the number of phases requires multiple runs and the computation time can be an issue when nowcasting is essential for decision-making. Estimating model complexity via the DP and PP models represents an alternative approach that is computationally efficient and statistically robust.
The DP and PP models can estimate more epidemic phases and this issue is discussed in detail in Rousseau and Mengersen (2011) and Miller and Harrison (2013). In our setting, this effect essentially relates to the start and end of the epidemic and the inherent challenges of limited information. At the start of the epidemic, such uncertainty dictates that estimates should be interpreted with caution. In the end, this is less of an issue and is mostly due to the time lag between cases and deaths. When one is working with the observed infections these issues are largely removed and inference is typically accurate throughout the duration of the data as indicated by our simulation experiments.
The models developed in this work are assuming a homogeneous and homogeneously mixing population, like most of the work studying SARS-CoV2 transmission. This may be appropriate for large populations such as working at the
Figure 7: Reported (triangles) and estimated deaths with 50% Cr.I. (solid and dashed lines) based on observing deaths, fixed number of phases model.
state or country level since functional central limit theorems can reasonably be thought of as applicable (e.g., Andersson and Britton, 2000). Our models can naturally be extended when more detailed information is available and this is the subject of current research.
## Acknowledgments
This article is part of the first author's doctoral thesis, co-financed by Greece and the European Union (European Social Fund-ESF) through the Operational Programme 'Human Resources Development, Education and Lifelong Learning' in the context of the Act 'Enhancing Human Resources Research Potential by undertaking a Doctoral Research' Sub-action 2: IKY Scholarship Programme for PhD candidates in the Greek Universities.
The authors are grateful to Kostas Kalogeropoulos and Petros Dellaportas for useful comments on an earlier version of this article.
|
2307.00524 | Large Language Models Enable Few-Shot Clustering | Unlike traditional unsupervised clustering, semi-supervised clustering allows
users to provide meaningful structure to the data, which helps the clustering
algorithm to match the user's intent. Existing approaches to semi-supervised
clustering require a significant amount of feedback from an expert to improve
the clusters. In this paper, we ask whether a large language model can amplify
an expert's guidance to enable query-efficient, few-shot semi-supervised text
clustering. We show that LLMs are surprisingly effective at improving
clustering. We explore three stages where LLMs can be incorporated into
clustering: before clustering (improving input features), during clustering (by
providing constraints to the clusterer), and after clustering (using LLMs
post-correction). We find incorporating LLMs in the first two stages can
routinely provide significant improvements in cluster quality, and that LLMs
enable a user to make trade-offs between cost and accuracy to produce desired
clusters. We release our code and LLM prompts for the public to use. | Vijay Viswanathan, Kiril Gashteovski, Carolin Lawrence, Tongshuang Wu, Graham Neubig | 2023-07-02T09:17:11Z | http://arxiv.org/abs/2307.00524v1 | # Large Language Models Enable Few-Shot Clustering
###### Abstract
Unlike traditional unsupervised clustering, semi-supervised clustering allows _users_ to provide meaningful structure to the data, which helps the clustering algorithm to match the user's intent. Existing approaches to semi-supervised clustering require a significant amount of feedback from an expert to improve the clusters. In this paper, we ask whether a large language model can _amplify_ an expert's guidance to enable query-efficient, _few-shot_ semi-supervised text clustering. We show that LLMs are surprisingly effective at improving clustering. We explore three stages where LLMs can be incorporated into clustering: before clustering (improving input features), during clustering (by providing constraints to the clusterer), and after clustering (using LLMs post-correction). We find incorporating LLMs in the first two stages can routinely provide significant improvements in cluster quality, and that LLMs enable a user to make trade-offs between cost and accuracy to produce desired clusters. We release our code and LLM prompts for the public to use.1
Footnote 1: [https://github.com/viswavi/few-shot-clustering](https://github.com/viswavi/few-shot-clustering)
## 1 Introduction
Unsupervised clustering aims to do an impossible task: organize data in a way that satisfies a domain expert's needs without any specification of what those needs are. Clustering, by its nature, is fundamentally an _underspecified_ problem. According to Caruana (2013), this underspecification makes clustering "probably approximately useless."
Semi-supervised clustering, on the other hand, aims to solve this problem by enabling the domain expert to guide the clustering algorithm (Bae et al., 2020). Prior works have introduced different types of interaction between an expert and a clustering algorithm, such as initializing clusters with hand-picked seed points (Basu et al., 2002), specifying pairwise constraints (Basu et al., 2004; Zhang et al., 2019), providing feature feedback (Dasgupta and Ng, 2010), splitting or merging clusters (Awasthi et al., 2013), or locking one cluster and refining the rest (Coden et al., 2017). These interfaces have all been shown to give experts control of the final clusters. However, they require significant effort from the expert. For example, in a simulation that uses split/merge, pairwise constraint, and lock/refine interactions (Coden et al., 2017), it took between 20 and 100 human-machine interactions to get _any_ clustering algorithm to produce clusters that fit the human's needs. Therefore, for large, real-world datasets with a large number of possible clusters, the feedback cost required by interactive clustering algorithms can be immense.
Building on a body of recent work that uses Large Language Models (LLMs) as noisy simulations of human decision-making (Fu et al., 2023; Horton, 2023; Park et al., 2023), we propose a different approach for semi-supervised text clustering. In particular, we answer the following research question: _Can an expert provide a few demonstrations of their desired interaction (e.g., pairwise constraints) to a large language model, then let the LLM direct the clustering algorithm?_
Figure 1: In traditional semi-supervised clustering, a user provides a large amount of feedback to the clusterer. In our approach, the user prompts an LLM with a small amount of feedback. The LLM then generates a large amount of pseudo-feedback for the clusterer.
We explore three places in the text clustering process where an LLM could be leveraged: before clustering, during clustering, and after clustering. We leverage an LLM _before clustering_ by augmenting the textual representation. For each example, we generate keyphrases with an LLM, encode these keyphrases, and add them to the base representation. We incorporate an LLM _during clustering_ by adding cluster constraints. Adopting a classical algorithm for semi-supervised clustering, pairwise constraint clustering, we use an LLM as a pairwise constraint pseudo-oracle. We then explore using an LLM _after clustering_ by correcting low-confidence cluster assignments using the pairwise constraint pseudo-oracle. In every case, the interaction between a user and the clustering algorithm is enabled by a prompt written by the user and provided to a large language model.
We test these three methods on five datasets across three tasks: canonicalizing entities, clustering queries by intent, and grouping tweets by topic. We find that, compared to traditional K-Means clustering on document embeddings, using an LLM to enrich each document's representation empirically improves cluster quality on every metric for all datasets we consider. Using an LLM as a pairwise constraint pseudo-oracle can also be highly effective when the LLM is capable of providing pairwise similarity judgements but requires a larger number of LLM queries to be effective. However, LLM post-correction provides limited upside. Importantly, LLMs can also approach the performance of _traditional semi-supervised clustering with a human oracle_ at a fraction of the cost.
Our work stands out from recent deep-learning-based text clustering methods (Zhang et al., 2021, 2023) in its remarkable simplicity. Using an LLM to expand documents' representation or correct clustering outputs can be added as a plug-in to _any text clustering algorithm_ using _any set of text features_, while our pseudo-oracle pairwise constraint clustering approach requires using K-Means as the underlying clustering algorithm. In our investigation of what aspect of the LLM prompt is most responsible for the clustering behavior, we find that just using an instruction alone (with no demonstrations) adds significant value. This can motivate future research directions for integrating natural language instructions with a clustering algorithm.
## 2 Methods to Incorporate LLMs
In this section, we describe the methods that we use to incorporate LLMs into clustering.
### Clustering via LLM Keyphrase Expansion
Before any cluster is produced, experts typically know what aspects of each document they wish to capture during clustering. Instead of forcing clustering algorithms to mine such key factors from scratch, it could be valuable to globally highlight these aspects (and thereby specify the task emphases) beforehand. To do so, we use an LLM to make every document's textual representation _task-dependent_, by enriching and expanding it with evidence relevant to the clustering need. Specifically, each document is passed through an LLM which generates keyphrases, these keyphrases are encoded by an embedding model, and the keyphrase embedding is then concatenated to the original document embedding.
We generate keyphrases using GPT-3 (specifically, gpt-3.5-turbo-0301). We provide a short prompt to the LLM, starting with an instruction (e.g. _"I am trying to cluster online banking queries based on whether they express the same intent. For each query, generate a comprehensive set of keyphrases that could describe its intent, as a JSON-formatted list."_). The instruction is followed by four demonstrations of keyphrases (example shown in Figure 2). Examples of full prompts are shown in Appendix B.
We then encode the generated keyphrases into a single vector, and concatenate this vector with the original document's text representation. To disentangle the knowledge from an LLM with the benefits of a better encoder, we encode the keyphrases using the same encoder as the original text.2
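For concreteness, the expansion step can be sketched as follows. This is an illustrative sketch rather than our exact implementation: the encoder name is a stand-in, and `generate_keyphrases` is a placeholder for the GPT-3.5 call and its prompt.

```python
# Illustrative sketch of keyphrase expansion; the LLM call is a placeholder.
import json
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the task-specific encoder

def generate_keyphrases(query):
    """Placeholder for the GPT-3.5 call: in practice the instruction plus a few
    demonstrations are sent to the LLM and its JSON-formatted list is parsed."""
    canned_response = '["balance inquiry", "check account balance"]'
    return json.loads(canned_response)

def expand_representation(query):
    doc_vec = encoder.encode(query)                                   # document embedding
    key_vec = encoder.encode(", ".join(generate_keyphrases(query)))   # same encoder as the text
    return np.concatenate([doc_vec, key_vec])                         # expanded, task-dependent features
```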
Figure 2: We expand document representations by concatenating them with keyphrase embeddings. The keyphrases are generated by a large language model.
### Pseudo-Oracle Pairwise Constraint Clustering
We explore the situation where a user conceptually describes which kinds of points to group together and wants to ensure the final clusters follow this grouping.
Arguably, the most popular approach to semi-supervised clustering is _pairwise constraint clustering_, where an oracle (e.g. a domain expert) selects pairs of points which _must_ be linked or _cannot_ be linked (Wagstaff and Cardie, 2000), such that more abstract clustering needs of experts can be implicitly induced from the concrete feedback.
We use this paradigm to investigate the potential of LLMs to amplify expert guidance during clustering, using an LLM as a _pseudo-oracle_.
To select pairs to classify, we take different strategies for entity canonicalization and for other text clustering tasks. For text clustering, we adapt the Explore-Consolidate algorithm (Basu et al., 2004) to first collect a diverse set of pairs from embedding space (to identify pairs of points that must be linked), then collect points that are nearby to already-chosen points (to find pairs of points that cannot be linked). For entity canonicalization, where there are so many clusters that very few pairs of points must be linked, we simply identify the closest distinct pairs of points in embedding space.
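The pair-selection step for text clustering can be sketched as below. The sketch follows the spirit of the procedure described above (farthest-first exploration followed by consolidation around already-selected points); the pairing of each new point with its nearest previously selected point and the query budgets are illustrative assumptions.

```python
import numpy as np

def select_pairs(X, n_explore, n_consolidate, seed=0):
    """Illustrative Explore-Consolidate-style selection of document pairs to label.
    X holds one embedding per document; each returned pair is sent to the pseudo-oracle."""
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(len(X)))]
    pairs = []

    def dist_to_selected():
        return np.min(np.linalg.norm(X[:, None, :] - X[selected], axis=-1), axis=1)

    for _ in range(n_explore):              # Explore: spread queries across embedding space
        d = dist_to_selected()
        new = int(np.argmax(d))
        anchor = selected[int(np.argmin(np.linalg.norm(X[selected] - X[new], axis=1)))]
        pairs.append((anchor, new))
        selected.append(new)

    for _ in range(n_consolidate):          # Consolidate: query points near the current selection
        d = dist_to_selected()
        d[selected] = np.inf
        new = int(np.argmin(d))
        anchor = selected[int(np.argmin(np.linalg.norm(X[selected] - X[new], axis=1)))]
        pairs.append((anchor, new))
        selected.append(new)
    return pairs
```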
We prompt an LLM with a brief domain-specific instruction (provided in entirety in Appendix A), followed by up to 4 demonstrations of pairwise constraints, obtained from test set labels. We use these pairwise constraints to generate clusters with the PCKMeans algorithm of Basu et al. (2004). This algorithm applies penalties for cluster assignments that violate any constraints, weighted by a hyperparameter \(w\). Following prior work (Vashishth et al., 2018), we tune this parameter on each dataset's validation split.
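The constrained assignment step of PCKMeans can be sketched as follows; this is a simplified single sweep in which \(w\) plays the role described above, while the k-means++ initialization and centroid updates around it are omitted.

```python
import numpy as np

def pckmeans_assign(X, centroids, must_link, cannot_link, labels, w):
    """One PCKMeans-style assignment sweep: each point takes the cluster that minimizes
    its squared distance plus w times the number of violated pairwise constraints."""
    new_labels = np.asarray(labels).copy()
    for i in range(len(X)):
        costs = np.sum((centroids - X[i]) ** 2, axis=1).astype(float)
        for k in range(len(centroids)):
            # Count constraints involving point i that assignment to cluster k would violate.
            violated = sum(1 for (a, b) in must_link
                           if i in (a, b) and new_labels[b if a == i else a] != k)
            violated += sum(1 for (a, b) in cannot_link
                            if i in (a, b) and new_labels[b if a == i else a] == k)
            costs[k] += w * violated
        new_labels[i] = int(np.argmin(costs))
    return new_labels
```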
### Using an LLM to Correct a Clustering
We finally consider the setting where one has an existing set of clusters, but wants to improve their quality with minimal local changes. We use the same pairwise constraint pseudo-oracle as in section 2.2 to achieve this, and we illustrate this procedure in Figure 3.
We identify the _low-confidence points_ by finding the \(k\) points with the least margin between the nearest and second-nearest clusters (setting \(k=500\) for our experiments). We textually represent each cluster by the entities nearest to the centroid of that cluster in embedding space. For each low-confidence point, we first ask the LLM whether or not this point is correctly linked to any of the representative points in its currently assigned cluster. If the LLM predicts that this point should not be linked to the current cluster, we consider the 4 next-closest clusters in embedding space as candidates for reranking, sorted by proximity. To rerank the current point, we ask the LLM whether this point should be linked to the representative points in each candidate cluster. If the LLM responds positively, then we reassign the point to this new cluster. If the LLM responds negatively for all alternative choices, we maintain the existing cluster assignment.
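The correction pass can be sketched as follows; `llm_links` is a placeholder for the pairwise pseudo-oracle prompt, and the number of representative texts per cluster is an illustrative choice.

```python
import numpy as np

def llm_correct(X, labels, centroids, texts, llm_links, n_low_conf=500, n_candidates=4):
    """Sketch of LLM post-correction. `llm_links(text, cluster_texts)` stands in for the
    pairwise pseudo-oracle and returns True if the point should join that cluster."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)                      # clusters sorted by proximity
    rows = np.arange(len(X))
    margin = dists[rows, order[:, 1]] - dists[rows, order[:, 0]]

    def representatives(k):                                # texts nearest to centroid k
        members = np.where(labels == k)[0]
        return [texts[j] for j in members[np.argsort(dists[members, k])[:3]]]

    for i in np.argsort(margin)[:n_low_conf]:              # least-confident points first
        if llm_links(texts[i], representatives(labels[i])):
            continue                                       # LLM confirms the current cluster
        for k in order[i, 1:1 + n_candidates]:             # rerank the next-closest clusters
            if llm_links(texts[i], representatives(k)):
                labels[i] = k                              # reassign on a positive answer
                break
    return labels
```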
## 3 Tasks
### Entity Canonicalization
Task.In _entity canonicalization_, we must group a collection of noun phrases \(M=\{m_{i}\}_{1}^{N}\) into subgroups \(\{C_{j}\}_{1}^{K}\) such that \(m_{1}\in C_{j}\) and \(m_{2}\in C_{j}\) if and only if \(m_{1}\) and \(m_{2}\) refer to the same entity. For example, the noun phrases _President Biden (\(m_{1}\))_, _Joe Biden (\(m_{2}\))_ and _the 46th U.S. President (\(m_{3}\))_ should be clustered in one group (e.g., \(C_{1}\)). The set of noun phrases \(M\) are usually the nodes of an "open knowledge graph" produced by an OIE system.3 Unlike the related task of entity linking (Bunescu and Pasca, 2006; Milne and Witten,
Figure 3: After performing clustering, we identify low-confidence points. For these points, we ask an LLM whether the current cluster assignment is correct. If the LLM responds negatively, we ask the LLM whether this point should instead be linked to any of the top-5 nearest clusters, and correct the clustering accordingly.
2008), we do not assume that any curated knowledge graph, gazetteer, or encyclopedia contains all the entities of interests.
Entity canonicalization is valuable for motivating the challenges of semi-supervised clustering. Here, there are hundreds or thousands of clusters and relatively few points per cluster, making this a difficult clustering task that requires lots of human feedback to be effective.
Datasets.We experiment with two datasets:
* _OPIEC59k_(Shen et al., 2022) contains 22K noun phrases (with 2138 unique entity surface forms) belonging to 490 ground truth clusters. The noun phrases are extracted by MinIE (Gashteovski et al., 2017, 2019), and the ground truth entity clusters are anchor texts from Wikipedia that link to the same Wikipedia article.
* _ReVerb45k_(Vashishth et al., 2018) contains 15.5K mentions (with 12295 unique entity surface forms) belonging to 6700 ground truth clusters. The noun phrases are the output of the ReVerb (Fader et al., 2011) system, and the "ground-truth" entity clusters come from automatically linking entities to the Freebase knowledge graph. We use the version of this dataset from Shen et al. (2022), who manually removed samples containing labeling errors.
Canonicalization Metrics.We follow the standard metrics used by Shen et al. (2022):
* _Macro Precision and Recall_
* Prec: For what fraction of predicted clusters is every element in the same gold cluster?
* Rec: For what fraction of gold clusters is every element in the same predicted cluster?
* _Micro Precision and Recall_
* Prec: How many points are in the same gold cluster as the majority of their predicted cluster?
* Rec: How many points are in the same predicted cluster as the majority of their gold cluster?
* _Pairwise Precision and Recall_
* Prec: How many pairs of points predicted to be linked are truly linked by a gold cluster?
* Rec: How many pairs of points linked by a gold cluster are also predicted to be linked?
We finally compute the harmonic mean of each pair to obtain _Macro F1_, _Micro F1_, and _Pairwise F1_.
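As a concrete illustration, the pairwise metrics can be computed directly from predicted and gold cluster assignments; the snippet below is a minimal sketch (the macro and micro variants are analogous).

```python
from itertools import combinations

def pairwise_f1(pred, gold):
    """Pairwise precision, recall and F1 over all pairs of points, as defined above.
    pred[i] and gold[i] are the predicted and gold cluster ids of point i."""
    pred_pairs = {(i, j) for i, j in combinations(range(len(pred)), 2) if pred[i] == pred[j]}
    gold_pairs = {(i, j) for i, j in combinations(range(len(gold)), 2) if gold[i] == gold[j]}
    both = len(pred_pairs & gold_pairs)
    prec = both / len(pred_pairs) if pred_pairs else 0.0
    rec = both / len(gold_pairs) if gold_pairs else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

# e.g. pairwise_f1([0, 0, 1, 1], [0, 0, 0, 1]) -> 0.4  (precision 1/2, recall 1/3)
```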
### Text Clustering
Task.We then consider the case of clustering short textual documents. This clustering task has been extensively studied in the literature (Aggarwal and Zhai, 2012).
Datasets.We use three datasets in this setting:
* _Bank77_(Casanueva et al., 2020) contains 3,080 user queries for an online banking assistant from 77 intent categories.
* _CLINC_ (Larson et al., 2019) contains 4,500 user queries for a task-oriented dialog system from 150 intent categories, after removing "out-of-scope" queries (as in Zhang et al., 2023).
* _Tweet_(Yin and Wang, 2016) contains 2,472 tweets from 89 categories.
Metrics.Following prior work (Zhang et al., 2021), we compare our text clusters to the ground truth using normalized mutual information and accuracy (obtained by finding the best alignment between ground truth and predicted clusters using the Hungarian algorithm (Kuhn, 1955)).
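A minimal sketch of these two metrics is shown below, assuming cluster labels are 0-indexed integers; the alignment step uses `scipy`'s Hungarian-algorithm implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(pred, gold):
    """Accuracy after optimally matching predicted clusters to gold clusters."""
    pred, gold = np.asarray(pred), np.asarray(gold)
    contingency = np.zeros((pred.max() + 1, gold.max() + 1), dtype=int)
    for p, g in zip(pred, gold):
        contingency[p, g] += 1
    rows, cols = linear_sum_assignment(contingency, maximize=True)
    return contingency[rows, cols].sum() / len(pred)

# NMI requires no alignment: normalized_mutual_info_score(gold, pred)
```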
## 4 Baselines
### K-Means on Embeddings
We build our methods on top of a baseline of K-means clustering (Lloyd, 1982) over encoded data with k-means++ cluster initialization (Arthur and Vassilvitskii, 2007). We choose the features and number of cluster centers that we use by task, largely following previous work.
Entity Canonicalization.Following prior work (Vashishth et al., 2018; Shen et al., 2022), we cluster
Figure 4: Using the CMVC architecture, we encode a knowledge graph-based “fact view” and a text-based “context-view” to represent each entity.
individual entity mentions (e.g. "ever since the ancient Greeks founded the city of _Marseille_ in 600 BC.") by representing unique surface forms (e.g. "Marseille") globally, irrespective of their particular mention context. After clustering unique surface forms, we compose this cluster mapping onto the individual mentions (extracted from individual sentences) to obtain mention-level clusters.
We build off of the "multi-view clustering" approach of Shen et al. (2022), and represent each noun phrase using textual mentions from the Internet and the "open" knowledge graph extracted from an OIE system, as shown in Figure 4. They use a BERT encoder (Devlin et al., 2019) to represent the textual context where an entity occurs (called the "context view"), and a TransE knowledge graph encoder (Bordes et al., 2013) to represent nodes in the open knowledge graph (called the "fact view"). They improve these encoders by fine-tuning the BERT encoder using weak supervision of coreferent entities and improving the knowledge graph representations using data augmentation on the knowledge graph. These two views of each entity are then combined to produce a representation.
In their original paper, they propose an alternating multi-view K-Means procedure where cluster assignments that are computed in one view are used to initialize cluster centroids in the other view. After a certain number of iterations, if the per-view clusterings do not agree, they perform a "conflict resolution" procedure to find a final clustering with low inertia in both views. One of our secondary contributions is a simplification of this algorithm. We find that by simply using their fine-tuned encoders, concatenating the representations from each view, and performing K-Means clustering with K-Means++ initialization (Arthur and Vassilvitskii, 2007) in a shared vector space, we can match their reported performance.
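Our simplified procedure thus amounts to the sketch below; the array inputs stand for the fine-tuned per-view representations, and any per-view scaling details are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_entities(context_view, fact_view, n_clusters, seed=0):
    """Concatenate the context-view and fact-view embeddings of each surface form and
    run K-Means with k-means++ initialization in the shared vector space."""
    X = np.concatenate([context_view, fact_view], axis=1)
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=seed)
    return km.fit_predict(X)
```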
Finally, regarding the number of cluster centers, following the Log-Jump method of Shen et al. (2022), we choose 490 and 6687 clusters for OPIEC59k and ReVerb45k, respectively.
Intent Clustering.For the Bank77 and CLINC datasets, we follow Zhang et al. (2023) and encode each user query using the Instructor encoder. We use a simple prompt to guide the encoder: "Represent utterances for intent classification". Again following previous work, we choose 150 and 77 clusters for CLINC and Bank77, respectively.
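Concretely, encoding with Instructor follows the package's documented pattern, as in the sketch below; the model size and the example queries are illustrative.

```python
# Instruction-guided encoding of intent-classification queries (illustrative).
from InstructorEmbedding import INSTRUCTOR

model = INSTRUCTOR("hkunlp/instructor-large")
instruction = "Represent utterances for intent classification"
queries = ["How do I reset my card PIN?", "My transfer has not arrived yet"]
embeddings = model.encode([[instruction, q] for q in queries])  # one vector per query
```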
Tweet Clustering.Following Zhang et al. (2021), we encode each tweet using a version of DistilBERT (Sanh et al., 2019) finetuned for sentence similarity classification4 (Reimers and Gurevych, 2019). We use 89 clusters (Zhang et al., 2021).
Footnote 4: This model is distilbert-base-nli-stsb-mean-tokens on HuggingFace.
### Clustering via Contrastive Learning
In addition to the methods described in Section 2, we also include two other methods for text clustering, where previously reported: SCCL (Zhang et al., 2021) and ClusterLLM (Zhang et al., 2023). Both use contrastive learning of deep encoders to improve clusters, making these significantly more complicated and compute-intensive than our proposed methods. SCCL combines deep embedding clustering (Xie et al., 2015) with unsupervised contrastive learning to learn features from text. ClusterLLM uses LLMs to improve the learned features. After running hierarchical clustering, they also use triplet feedback from the LLM ("is point A more similar to point B or point C?") to decide the cluster granularity from the cluster hierarchy and generate a flat set of clusters. To compare effectively with these approaches, we use the same encoders reported for SCCL and ClusterLLM in prior works: Instructor (Su et al., 2022) for Bank77 and CLINC and DistilBERT (finetuned for sentence similarity classification) (Sanh et al., 2019; Reimers and Gurevych, 2019) for Tweet.
## 5 Results
### Summary of Results
We summarize empirical results for entity canonicalization in Table 1 and text clustering in Table 2. We find that using the LLM to expand textual representations is the most effective, achieving state-of-the-art results on both canonicalization datasets and significantly outperforming a K-Means baseline for all text clustering datasets. Pairwise constraint K-means, when provided with 20,000 pairwise constraints pseudo-labeled by an LLM, achieves strong performance on 3 of 5 datasets (beating the current state-of-the-art on OPIEC59k). Below, we conduct more in-depth analyses on what makes each method (in-)effective.
### LLMs excel at text expansion
In Table 1 and Table 2, we see that the "Keyphrase Clustering" approach is our strongest approach, achieving the best results on 3 of 5 datasets (and giving comparable performance to the next strongest method, pseudo-oracle PCKMeans, on the other 2 datasets). This suggests that LLMs are useful for expanding the contents of text to facilitate clustering.
What makes LLMs useful in this capacity? Is it the ability to specify task-specific modeling instructions, the ability to implicitly specify a similarity function via demonstrations, or do LLMs contain knowledge that smaller neural encoders lack?
We answer this question with an ablation study. For OPIEC59k and CLINC, we consider the "Keyphrase Clustering" technique but omit either the instruction or the demonstration examples from the prompt. For CLINC, we also compare with K-Means clustering on features from the Instructor model, which allows us to specify a short instruction to a small encoder. We find empirically that providing either instructions or demonstrations in the prompt to the LLM enables the LLM to improve cluster quality, but that providing both gives the most consistent positive effect. Qualitatively, providing instructions but omitting demonstrations
| Method | OPIEC59k: Macro F1 | Micro F1 | Pair F1 | Avg | ReVerb45k: Macro F1 | Micro F1 | Pair F1 | Avg |
|---|---|---|---|---|---|---|---|---|
| Optimal Clust. | 80.3 | 97.0 | 95.5 | 90.9 | 84.8 | 93.5 | 92.1 | 90.1 |
| CMVC | 52.8 | 90.7 | 84.7 | 76.1 | 66.1 | 87.9 | 89.4 | 81.1 |
| KMeans | 53.5 ± 0.0 | 91.0 ± 0.0 | 85.6 ± 0.0 | 76.7 | 69.6 ± 0.0 | 89.1 ± 0.0 | 89.3 ± 0.0 | 82.7 |
| PCKMeans | 58.7 ± 0.0 | 91.5 ± 0.0 | 86.1 ± 0.0 | 78.7 | 72.0 ± 0.0 | 88.5 ± 0.0 | 87.0 ± 0.0 | 82.5 |
| LLM Correction | 58.7 | 91.5 | 85.2 | 78.4 | 69.9 | 89.2 | 88.4 | 82.5 |
| Keyphrase Clust. | **60.3** ± 0.0 | **92.5** ± 0.0 | **87.3** ± 0.0 | **80.0** | **72.3** ± 0.0 | **90.2** ± 0.0 | **90.0** ± 0.0 | **84.2** |
Table 1: Comparing methods for integrating LLMs into entity canonicalization. “CMVC” refers to the multi-view clustering method of Shen et al. (2022), while “KMeans” refers to our simplified reimplementation of the same method. Where applicable, standard deviations are obtained by running clustering 5 times with different seeds.
| Method | Bank77 Acc | Bank77 NMI | CLINC Acc | CLINC NMI | Tweet Acc | Tweet NMI |
|---|---|---|---|---|---|---|
| SCCL | – | – | – | – | 78.2 | 89.2 |
| ClusterLLM | 71.2 | – | 83.8 | – | – | – |
| KMeans | 64.0 ± 0.0 | 81.7 ± 0.0 | 77.7 ± 0.0 | 91.5 ± 0.0 | 57.5 ± 0.0 | 80.6 ± 0.0 |
| PCKMeans | 59.6 ± 0.0 | 79.6 ± 0.0 | **79.6** ± 0.0 | 92.1 ± 0.0 | **65.3** ± 0.0 | **85.1** ± 0.0 |
| LLM Correction | 64.1 | 81.9 | 77.8 | 91.3 | 59.0 | 81.5 |
| Keyphrase Clustering | **65.3** ± 0.0 | **82.4** ± 0.0 | 79.4 ± 0.0 | **92.6** ± 0.0 | 62.0 ± 0.0 | 83.8 ± 0.0 |
Table 2: Comparing methods for integrating LLMs into text clustering. “SCCL” refers to Zhang et al. (2021) while “ClusterLLM” refers to Zhang et al. (2023). We use the same base encoders as those methods in our experiments. Where applicable, standard deviations are obtained by running clustering 5 times with different seeds.
| Method | OPIEC59k Avg F1 | CLINC Acc | CLINC NMI |
|---|---|---|---|
| Keyphrase Clust. | **80.0** | **79.4** ± 0.0 | **92.6** ± 0.0 |
| w/o Instructions | 79.1 | 78.4 ± 0.0 | 92.7 ± 0.0 |
| w/o Demonstrations | 79.8 | 78.7 ± 0.0 | 91.8 ± 0.0 |
| Instructor-base | – | 74.8 ± 0.0 | 90.7 ± 0.0 |
| Instructor-large | – | 77.7 ± 0.0 | 91.5 ± 0.0 |
| Instructor-XL (Su et al., 2022) | – | 77.2 ± 0.0 | 91.9 ± 0.0 |
| Instructor-XL (GPT-3.5 prompt) | – | 70.8 ± 0.0 | 88.6 ± 0.0 |
Table 3: We compare the effect of LLM intervention without demonstrations or without instructions. We see that GPT-3.5-based Keyphrase Clustering outperforms instruction-finetuned encoders of different sizes, even when we provide the same prompt.
leads to a larger set of keyphrases with less consistency, while providing demonstrations without any instructions leads to a more focused group of keyphrases that sometimes fail to reflect the desired aspect (e.g. topic vs. intent).
Why is keyphrase clustering using GPT-3.5 in the instruction-only ("without demonstrations") setting better than Instructor, which is an instruction-finetuned encoder? While GPT-3.5's size is not published, GPT-3 contains 175B parameters, and Instructor-base/large/xl contain 110M, 335M, and 1.5B parameters, respectively. The modest scaling curve suggests that scale is not solely responsible.
Our prompts for Instructor are brief (e.g. "Represent utterances for intent classification"), while our prompts for GPT-3.5 (in Appendix B) are very detailed. Instructor-XL does not handle long prompts well; in the bottom row of Table 3, we see that Instructor-XL performs poorly when given the same prompt that we give to GPT-3.5. We speculate that today's instruction-finetuned encoders are insufficient to support the detailed, task-specific prompts that facilitate few-shot clustering.
### The limitations of LLM post-correction
LLM post-correction consistently provides small gains across datasets and metrics - between 0.1 and 5.2 absolute points of improvement. In Table 4, we see that when we provide the top 500 most-uncertain cluster assignments to the LLM to reconsider, the LLM only reassigns points in a small minority of cases. Though the LLM pairwise oracle is usually accurate, it is disproportionately inaccurate for points where the original clustering already had low confidence.
### How much does LLM guidance cost?
We've shown that using an LLM to guide the clustering process can improve cluster quality. However, large language models can be expensive; using a commercial LLM API during clustering imposes additional costs to the clustering process.
In Table 5, we summarize the pseudo-labeling cost of collecting LLM feedback using our three approaches. Among these, pseudo-labeling pairwise constraints using an LLM (where the LLM must classify 20K pairs of points) incurs the greatest LLM API cost. While PCKMeans and LLM Correction both query the LLM the same number of times for each dataset, Keyphrase Clustering's cost scales linearly with the size of the dataset, making it infeasible for clustering very large corpora.
### Using an LLM as a pseudo-oracle is cost-effective
Using large language models increases the cost of clustering. Does the improved performance justify this cost? By employing a human expert to guide the clustering process instead of a large language model, could one achieve better results at a comparable cost?
Since pseudo-labeling pairwise constraints requires the greatest API cost in our experiments, we take this approach as a case study. Given a sufficient amount of pseudo-oracle feedback, we see in Figure 5 that pairwise constraint K-means is able to yield an improvement in Macro F1 (suggesting better purity of clusters) without dramatically reducing Pairwise or Micro F1.
Is this cost reasonable? For the $41 spent on the OpenAI API for OPIEC59k (as shown in Table 5), one could hire a worker for 3.7 hours of labeling time, assuming an $11-per-hour wage (Hara et al., 2017). We observe that an annotator can label roughly 3 pairs per minute. Then, $41 in worker wages would generate <700 human labels at the same cost as 20K GPT-3.5 labels.
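The arithmetic behind this comparison is simple (using the wage and labeling rate assumed above):

```python
# Back-of-the-envelope check of the labeling-budget comparison.
budget_usd, hourly_wage, pairs_per_minute = 41, 11, 3
hours = budget_usd / hourly_wage              # about 3.7 hours of annotation time
human_labels = pairs_per_minute * 60 * hours  # about 670 pairs, i.e. fewer than 700
print(round(hours, 1), round(human_labels))   # -> 3.7 671
```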
Based on the feedback curve in Figure 5, we see
| | OPIEC59k | CLINC | Tweet |
|---|---|---|---|
| Data Size | 2138 | 4500 | 2472 |
| # of LLM Reassignments | 109 | 149 | 78 |
| Accuracy of Reassignments | 55.0 | 57.0 | 89.7 |
| Overall Accuracy of Pairwise Pseudo-Oracle | 86.7 | 95.0 | 96.8 |
Table 4: When re-ranking the top 500 points in each dataset, the LLM rarely disagrees with the original clustering, and when it does, it is frequently wrong.
| Dataset | Data Size | PCKMeans (USD) | Correction (USD) | Keyphrase (USD) |
|---|---|---|---|---|
| OPIEC59k | 2138 | $42.03 | $12.73 | $2.24 |
| ReVerb45k | 12295 | $33.81 | $10.24 | $10.66 |
| Bank77 | 3080 | $10.25 | $3.38 | $1.23 |
| CLINC | 4500 | $9.77 | $2.80 | $0.95 |
| Tweet | 2472 | $11.28 | $3.72 | $0.99 |
Table 5: We compare the pseudo-labeling costs of different LLM-guided clustering approaches. We used OpenAI’s gpt-3.5-turbo-0301 API in June 2023.
that GPT-3.5 is remarkably more effective than a true pairwise constraint oracle at this price point; unless at least 2500 pairs labeled by a true oracle are provided, pairwise constraint KMeans fails to deliver any value for entity canonicalization. This suggests that if the goal is maximizing empirical performance, querying an LLM is more cost-effective than employing a human labeler.
## 6 Conclusion
We find that using LLMs in simple ways can provide consistent improvements to the quality of clusters for a variety of text clustering tasks. We find that LLMs are most consistently useful as a means of enriching document representations, and we believe that our simple proof-of-concept should motivate more elaborate approaches for document expansion via LLMs.
## 7 Acknowledgements
This work was supported by a fellowship from NEC Research Laboratories. We are grateful to Wiem Ben Rim, Saujas Vaduguru, and Jill Fain Lehman for their guidance. We also thank Chenyang Zhao for providing valuable feedback on this work.
|
2305.00349 | Causal effects of intervening variables in settings with unmeasured
confounding | We present new results on average causal effects in settings with unmeasured
exposure-outcome confounding. Our results are motivated by a class of
estimands, e.g., frequently of interest in medicine and public health, that are
currently not targeted by standard approaches for average causal effects. We
recognize these estimands as queries about the average causal effect of an
intervening variable. We anchor our introduction of these estimands in an
investigation of the role of chronic pain and opioid prescription patterns in
the opioid epidemic, and illustrate how conventional approaches will lead
to unreplicable estimates with ambiguous policy implications. We argue that our
alternative effects are replicable and have clear policy implications, and
furthermore are non-parametrically identified by the classical frontdoor
formula. As an independent contribution, we derive a new semiparametric
efficient estimator of the frontdoor formula with a uniform sample boundedness
guarantee. This property is unique among previously-described estimators in its
class, and we demonstrate superior performance in finite-sample settings.
Theoretical results are applied with data from the National Health and
Nutrition Examination Survey. | Lan Wen, Aaron L. Sarvet, Mats J. Stensrud | 2023-04-29T22:05:17Z | http://arxiv.org/abs/2305.00349v3 | # Causal effects of intervening variables in settings with unmeasured confounding
###### Abstract.
We present new results on average causal effects in settings with unmeasured exposure-outcome confounding. Our results are motivated by a class of estimands, e.g., frequently of interest in medicine and public health, that are currently not targeted by standard approaches for average causal effects. We recognize these estimands as queries about the average causal effect of an _intervening_ variable. We anchor our introduction of these estimands in an investigation of the role of chronic pain and opioid prescription patterns in the opioid epidemic, and illustrate how conventional approaches will lead to unreplicable estimates with ambiguous policy implications. We argue that our alternative effects are replicable and have clear policy implications, and furthermore are non-parametrically identified by the classical frontdoor formula. As an independent contribution, we derive a new semiparametric efficient estimator of the frontdoor formula with a uniform sample boundedness guarantee. This property is unique among previously-described estimators in its class, and we demonstrate superior performance in finite-sample settings. Theoretical results are applied with data from the National Health and Nutrition Examination Survey.
_Key words: Causal inference; Double robustness; Estimands; Frontdoor formula; Intervening variable; Separable effect_
## 1 Introduction
Unmeasured confounding and ill-defined interventions are major contributing factors to the replication crisis for policy-relevant parameters, e.g., in medical research. When there is unmeasured confounding, standard covariate-adjustment approaches will lead to biases that would likely differ across studies. Further, when an analysis is based on variables that do not correspond to well-defined interventions, it is nearly impossible for future analyses and experiments to ensure that these variables are operationalized identically with the original study. In each case, replication is nearly impossible.
To counteract these challenges, there exists a diverse set of strategies to confront unmeasured confounding between exposure and outcome, which leverage the measurement of auxiliary variables, including instruments and other proxies (Angrist et al., 1996; Lipsitch et al., 2010; Tchetgen Tchetgen et al., 2020), as well as mediators (Pearl, 2009; Fulcher et al., 2020). At the same time, there is a long history of calls for an "interventionist" approach to causal analyses in statistics (Holland, 1986; Dawid, 2000; Richardson and Robins, 2013; Robins et al., 2022) wherein investigators focus on the effects of manipulable, or _intervenable_, variables. Use of such variables ensures that the targets of inference are clearly defined by variables representing interventions that can be implemented in principle (see e.g., Hernan, 2005; Hernan and VanderWeele, 2011; Galea, 2013). Furthermore, the seemingly disparate challenges of ill-defined interventions and unmeasured exposure-outcome confounding are often considered separately, but systematically co-occur in practice: when an exposure variable does not correspond to a well-defined intervention, investigators will often face exceptional challenges in sufficiently describing and measuring the causes of that exposure. In such cases, investigators will often also have little confidence in the assumption of no unmeasured confounding (Hernan and Taubman, 2008).
In this article we consider jointly the twin challenges of ill-defined interventions and unmeasured confounding. Building on new results for a generalized theory of separable effects (Robins et al., 2022; Robins and Richardson, 2010; Stensrud et al., 2021), our contributions concern effects of an _intervening_ variable: a manipulable descendant of a (possibly ill-defined) exposure (or treatment) that precedes the outcome. We argue that such average causal effects, rather than those of (possibly ill-defined) exposures, are frequently of interest in practice. We derive new results on their interpretation, identification and estimation. In doing so, we develop a novel semiparametric efficient estimator for the canonical front-door functional with superior finite sample performance properties.
### Related approaches
Our results are related to previous work on identification of causal effects in the presence of unmeasured confounding. As we expound in Section 3, causal effect of an _intervening_ variable may be identified by the frontdoor formula, which coincidentally also allows non-parametric identification of the average causal effect (ACE) of exposure on outcome, even in the presence of unmeasured exposure-outcome confounding (Pearl, 1993, 2009). Yet meaningful applications of the frontdoor criterion to study the ACE of exposure on outcome have been scarce. One problem is that conventional identification of the ACE by the frontdoor criterion requires a strict exclusion restriction, which is often infeasible: an investigator must measure at least one mediator intersecting each causal pathway from the exposure to the outcome. In contrast to conventional approaches, however, the definition of _intervening_ variables often renders such exclusion restrictions uniquely plausible.
Our results are also related to the work by Fulcher et al. (2020), who gave conditions under which the frontdoor formula identifies the so-called Pure Intervention Indirect Effect (PIIE). Fulcher et al. (2020) interpreted the PIIE as a "contrast between the observed outcome mean for the population and the population outcome mean if contrary to fact the mediator had taken the value that it would have in the absence of exposure." Thus, unlike the conventional identification result for the ACE, the frontdoor formula can be used to identify the PIIE
even in the presence of a direct effect of the exposure on the outcome not mediated by an intermediate variable (or intermediate variables). We describe the assumptions for the two aforementioned estimands in Section 4. To fix ideas about the relation to the previous work on the frontdoor formula, we introduce the following running example.
**Example 1** (Chronic pain and opioid use, Inoue et al., 2022).: Chronic pain is associated with use and overdosing of opioids, which subsequently can lead to death. Moreover, chronic pain can also affect mortality outside of its effect on opioid use (Dowell et al., 2016), e.g., by causing long-term stress and undesirable lifestyle changes. Inoue et al. (2022) studied the effect of chronic pain (exposure to pain versus no pain) on mortality (outcome) mediated by opioid use. Chronic pain is notoriously treatment-resistant, and unmeasured confounders between chronic pain and mortality may include social, physiologic, and psychological factors (Inoue et al., 2022). Using data from the National Health and Nutrition Examination Survey (NHANES) from 1999-2004 with linkage to mortality databases through 2015, investigators studied a causal effect related to the PIIE. Specifically, Inoue et al. (2022) considered a "path-specific frontdoor effect" that they interpreted as "the change in potential outcomes that follows a change in the mediator (opioid) which was caused by changing the exposure."
As indicated by Inoue et al. (2022) there likely exist unmeasured common causes of the exposure and the outcome in our example. However, it is unclear how one could intervene on the exposure variable, chronic pain, or even whether any intervention on chronic pain can possibly be well-defined. Therefore, the estimands in our example have dubious public health implications (Holland, 1986). In contrast, we will suggest effects of _intervening_ variables, which do correspond to interventions that are feasible to implement in practice. To motivate our intervention, consider a doctor who determines if a patient should receive opioids to relieve symptoms. The doctor's decision could be affected by whether the patient has chronic pain. However, the current guidelines by the Centers for Disease Control and Prevention (CDC) (Dowell et al., 2016; Inoue et al., 2022) suggests that chronic pain status should no longer be used to determine opioid assignment. Thus, there is interest in evaluating outcomes
under a modified prescription policy (the intervention of interest), such that the doctor disregards a patient's chronic pain in their decisions on opioid prescriptions. To implement this modified prescription policy, doctors could e.g., be asked to not consider a patient as having chronic pain in the prescription process. This modified policy does not require us to conceptualize interventions on a patient's chronic pain - an intervention that would have been hard, or impossible, to specify. However, an analyst with access to observational data might reasonably assume a deterministic relation between a patient's chronic pain status and the _doctor's perception_ of the chronic pain status: an _intervening_ variable that in turn determines the doctor's prescription in the observed data.
The remainder of the article is organized as follows. In Section 2 we describe the observed and counterfactual data structure. In Section 3 we precisely define our interventionist estimand and derive identification results. In Section 4 we relate our identification results to non-parametric conditions that allow identification of relevant estimands that have been proposed in the past, and discuss the plausibility of these conditions. In Section 5 we present a new semiparametric estimator of the frontdoor formula. In Section 6 we give simulation results and illustrate that our estimator performs well in finite sample settings. In Section 7, we apply our new results to study the effect of opioid prescription policies on chronic pain using data from NHANES. We discuss the practical implications of our work in Section 8.
## 2. Observed and counterfactual data structure
Consider a study of \(n\) iid individuals randomly sampled from a large superpopulation. Let \(A\) denote the observed binary exposure taking values \(a^{\dagger}\) or \(a^{\circ}\) (e.g., chronic pain), let \(M\) (e.g., opioid usage) denote a mediator variable, and let \(Y\) denote an outcome of interest, which can be binary, categorical or continuous (e.g., probability of survival at 3 years or 5 years). Furthermore, suppose that \(L\) is a vector of pre-exposure covariates measured at baseline, which can include common
causes of \(A,\,M\) and \(Y\). To simplify the presentation, we will assume throughout that all covariates are discrete, that is, the variables have distributions that are absolutely continuous with respect to a counting measure. However, our arguments extend to settings with continuous covariates and the Lebesgue measure. We indicate counterfactuals in superscripts. In particular, let \(M^{a}\) and \(Y^{a}\) denote the counterfactual mediator and outcome variables if, possibly contrary to fact, the exposure had taken a value \(a\) for \(a{\in}\{a^{\dagger},a^{\circ}\}\). Extensions to discrete exposure variables with more than two levels are discussed in Web Appendix H.
## 3. Effects of the intervening variable
In the analysis of the chronic pain example, Inoue et al. (2022) concluded that their results are relevant to pain management, arguing that the findings "highlight the importance of careful guideline-based chronic pain management to prevent death from possibly inappropriate opioid prescriptions driven by chronic pain." However, this argument does not translate to an intervention on chronic pain \(A\); rather, we interpret their policy concern as one that directly involves a modifiable _intervening_ variable: a care provider's _perception_ of the patient's chronic pain in standard-of-care pain management decisions, say \(A_{M}\). If we define \(A_{M}\) as binary (taking values \(a^{\dagger}\) or \(a^{\circ}\)), and assume that a care provider's natural consideration corresponds exactly with a patient's chronic pain experience, then in this setting, \(A{=}A_{M}\) with probability one in the observed data. Despite this feature of the observed data, we could nevertheless conceive an intervention that modifies this _intervening_ variable \(A_{M}\) without changing the non-modifiable exposure \(A\), and thus preserve the investigators direct policy concerns at this causal estimand-formulation stage of the analysis. Our consideration of the chronic pain example motivates estimands under interventions on the modifiable _intervening_ variable \(A_{M}\), not \(A\), where (i) \(A_{M}\) is deterministically equal to \(A\) in the observed data, and (ii) \(A_{M}\) captures the effects of \(A\) on \(Y\) through \(M\). These two features of our
estimand are also features of separable effects (Robins and Richardson, 2010; Robins et al., 2020; Stensrud et al., 2021a).
**Definition 1** (The intervening variable estimand).: _The intervening variable estimand is defined as the expected counterfactual outcome \(E(Y^{a_{M}})\), where the intervening variable \(a_{M}\) can take values \(a^{\dagger}\) or \(a^{\circ}\) in the sample space of \(A\)._
As such, we can define a causal effect of an _intervening_ variable \(A_{M}\) on an outcome \(Y\) as a contrast between \(E(Y^{a_{M}=a^{\dagger}})\) and another causal estimand such as \(E(Y^{a_{M}=a^{\circ}})\) or \(E(Y)\). As in the separable effects literature, the conceptual elaboration in the chronic pain example is amenable to graphical representation. Consider an extended causal directed acyclic graph (DAG) (Robins and Richardson, 2010), which not only includes \(A\) but also the _intervening_ variable \(A_{M}\). Figure 1a shows this extended DAG with node set \(V\)=(\(U\),\(L\),\(A\),\(A_{M}\),\(M\),\(Y\)). The bold arrow from \(A\) to \(A_{M}\) in Figure 1a indicates a deterministic relationship in the observed data. The deterministic relationship encodes that, in the observed data, with probability one under \(f(v)\), either \(A\)=\(A_{M}\)=\(a^{\dagger}\) or \(A\)=\(A_{M}\)=\(a^{\circ}\). These graphs will be used to illustrate our subsequent results, where we give formal conditions under which the effects of an intervention that sets \(A_{M}\) to \(a_{M}\), as represented in the Single World Interventions Graph (SWIG) Figure 1b, can be identified. Note that the absence of an arrow in a SWIG encodes the absence of individual level effects as described in Richardson and Robins (2013). Causal effects of intervening variables are of interest beyond our running chronic pain example. We illustrate another application in the following example considering race (an exposure often under-theorized in statistical analyses) and job interview discrimination. In Web Appendix A, we also consider an obstetrics example from Fulcher et al. (2020).
**Example 2** (Race and job interview discrimination).: Consider the extended graph in Figure 1a, where \(A\) denotes the race of an individual, \(A_{M}\) denotes the race that an individual indicates on a job application for a company, \(M\) denotes the indicator that the individual receives an interview, and \(Y\) denotes whether an individual is hired for the job (1 if the individual is hired and 0 otherwise). Further, the unmeasured variable \(U\) includes complex
historical processes that ultimately affect both a person's perceived or declared race on the job market and may influence whether or not they are hired by the company. Several studies have documented that resume 'whitening' (e.g., deleting any references or connotations to a non-White race) can increase an applicant's chance of receiving an interview (Kang et al., 2016; Gerdeman, 2017). However, it is unclear if this strategy leads to a higher chance of actually being hired, as e.g., unconscious bias can also affect whether or not an individual is hired, even if the candidate's background and qualifications are the same as other candidates. Thus, in a group of individuals with the same qualifications, the difference between \(E(Y)\) and \(E(Y^{a_{M}=\text{white}})\) could indicate the effect among non-White candidates of resume whitening in the screening process on the probability of being hired.
To formally state our identifiability conditions of an intervening variable estimand, we first invoke the assumption of a deterministic relationship between the exposure and the intervening variable in the observed data.
**Assumption 1** (Intervening variable determinism).: \(A\)=\(A_{M}\) _w.p.1._
We also invoke a positivity condition which only involves observable laws and requires that for all joint values of \(L\), there is a positive probability of observing \(A\)=\(a\) and \(M\)=\(m\), \(\forall a\),\(m\).
**Assumption 2** (Positivity).: \(f(m,a|l)\)\(>\)0_, \(\forall(m,\;a,\;l)\)\(\in\)_supp\((M,\;A,\;L)\)._
Further, we use a consistency assumption stating that the interventions on the intervening variable \(A_{M}\) is well-defined.
**Assumption 3** (Consistency).: _If \(A_{M}\)=\(a_{M}\), then \(M^{a_{M}}\)=\(M\),\(Y^{a_{M}}\)=\(Y\), \(\forall\)\(a_{M}\)\(\in\)supp\((A)\)._
Following Stensrud et al. (2021), let "\((G)\)" refer to a future trial where \(A_{M}\) is randomly assigned, and consider the following dismissible component conditions.
**Assumption 4** (Dismissible component conditions).:
\[Y(G)\perp\!\!\!\perp A_{M}(G)\mid A(G),L(G),M(G), \tag{1}\]
\[M(G)\perp\!\!\!\perp A(G)\mid A_{M}(G),L(G). \tag{2}\]
Assumption 4 can fail either when conditional independence (1) fails (e.g., when \(A_{M}\) exerts effects on \(Y\) not intersected by \(M\)) or when conditional independence (2) fails (e.g., when \(A\) exerts effects on \(M\) not intersected by \(A_{M}\)). Furthermore, Assumption 4 will require that \(L\) includes certain common causes of other variables; it is sufficient that \(L\) captures all common causes of \(M\) and \(Y\) and all common causes of \(A\) and \(M\). Note that the dismissible component conditions imply so-called partial isolation conditions (see Stensrud et al., 2021a).
**Theorem 1**.: _The average counterfactual outcome under an intervention on \(A_{M}\) is identified by the frontdoor formula (3) under Assumptions 1-4. That is,_
\[\Psi\coloneqq E(Y^{a_{M}=a^{\dagger}})=\sum_{m,l}f(m\mid a^{\dagger},l)f(l)\sum_{a}E(Y\mid L=l,A=a,M=m)f(a\mid l). \tag{3}\]
As we formally show in Web Appendix D, our causally manipulable estimand \(E(Y^{a_{M}=a^{\dagger}})\) is identified by (3) even when \(A\) is a direct cause of \(Y\), not mediated by \(M\). Henceforth, we refer to the right-hand-side of (3) as the generalized frontdoor formula as it allows for some baseline covariates \(L\). To the best of our knowledge, Pearl's original frontdoor formula (Pearl, 1993, 2009) did not consider \(L\). Note that Fulcher et al. (2020) used the term generalized frontdoor criterion (not formula) to describe the identification conditions for the PIIE, which we discuss in Section 4. They argued that their "identification criterion generalizes Judea Pearl's front door criterion as it does not require no direct effect of exposure not mediated by the intermediate variable". However, the cross-world exchangeability Assumption 9 is necessary to identify PIIE, but it is not necessary to identify the ACE. Thus, Fulcher et al.
(2020) added an (untestable) assumption that is unnecessarily restrictive when the aim is to identify the ACE.
To motivate our estimation results in Section 5, we emphasize that, in the absence of covariates \(L\), the frontdoor formula can be expressed as
\[\Psi\coloneqq E(Y^{a_{M}=a^{\dagger}})=\sum_{m}f(m\mid a^{\dagger})\sum_{a}E(Y\mid A=a,M=m)f(a) \tag{4}\]
\[=P(A=a^{\dagger})E(Y\mid A=a^{\dagger})+P(A=a^{\circ})\sum_{m}E(Y\mid A=a^{\circ},M=m)f(m\mid a^{\dagger}).\]
Thus, \(E(Y^{a_{M}=a^{\dagger}})\) is a weighted average of a conditional mean, \(E(Y|A{=}a^{\dagger})\), and a sum that appears in known identification formulae for both natural (pure) and separable effects of exposure or treatment on \(Y\)(Robins and Richardson, 2010; Robins et al., 2020; Stensrud et al., 2021a, 2022a). The decomposition of the frontdoor formula in (4) is instrumental in formulating new semiparametric estimators (see Web Appendix B for further details). In Web Appendix C, we give identification and estimation results for our new causally manipulable estimand in absence of \(L\), and in Web Appendix D, we provide further intuition for the identification results based on a weighted decomposition of the frontdoor formula.
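For intuition, a naive plug-in estimator of (4) can be written directly from empirical frequencies; the sketch below is illustrative only (binary \(A\), discrete \(M\), no covariates) and is not the semiparametric estimator developed in Section 5.

```python
import numpy as np

def frontdoor_plugin(A, M, Y, a_dag=1):
    """Naive plug-in estimate of E(Y^{a_M = a_dag}) via decomposition (4), assuming binary A,
    discrete M, and positivity (every (a, m) stratum is non-empty)."""
    A, M, Y = map(np.asarray, (A, M, Y))
    p_a = np.mean(A == a_dag)
    term1 = p_a * Y[A == a_dag].mean()                       # P(A = a_dag) E(Y | A = a_dag)
    term2 = 0.0
    for m in np.unique(M):
        f_m = np.mean(M[A == a_dag] == m)                    # f(m | a_dag)
        term2 += f_m * Y[(A != a_dag) & (M == m)].mean()     # E(Y | A = other level, M = m)
    return term1 + (1 - p_a) * term2
```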
## 4. Other estimands that can be identified using the frontdoor formula
To give context to our identification results, we review sufficient assumptions ensuring that the frontdoor formula identifies the ACE and the PIIE (Pearl, 2009; Fulcher et al., 2020). For exposure levels \(a^{\dagger}\) and \(a^{\circ}\), the ACE and PIIE are given respectively by
\[E(Y^{a^{\dagger}})\text{ vs. }E(Y^{a^{\circ}}),\text{ \ and \ }E(Y)\text{ vs. }E(Y^{M^{a^{ \dagger}},A}).\]
Beyond Assumption 2 (Positivity), consider the following assumptions.
**Assumption 5** (Consistency).: _If \(A{=}a\) and \(M{=}m\), then_
\[M^{a}{=}M,\quad Y^{a}{=}Y,\quad Y^{a,m}{=}Y,\;\forall(m,\;a){\in}\mathrm{supp}( M,\;A).\]
**Assumption 6** (Exposure - Mediator Exchangeability).: \(M^{a}\,{\perp\!\!\!\perp}\,A{\mid}L\)_, \(\forall\;a\,\in\,\mathrm{supp}(A)\)._
Consistency Assumption 5 requires well-defined interventions on both \(A\) and \(M\), which seems to be implausible in our example, and Exchangeability Assumption 6 ensures that the exposure-mediator association is unconfounded, which holds for instance in the SWIG in Figure 2c. Both assumptions are used in the identification of the ACE and the PIIE in Fulcher et al. (2020), but are not strictly necessary for frontdoor identification of the ACE (see Didelez, 2018 for assumptions without intervention on \(M\)).
### Identification of the ACE
In addition to Assumptions 2, 5 and 6, the following two conditions are sufficient for the identification of the ACE using the generalized frontdoor formula (3).
**Assumption 7** (No Direct Effect).: \(Y^{a,m}{=}Y^{m}\).
**Assumption 8** (Mediator-Outcome Exchangeability).: \(Y^{a,m}\,{\perp\!\!\!\perp}\,M^{a}\mid L,A\).
Assumption 7 ensures that \(A\) only affects \(Y\) through \(M\) which holds, for instance, if the green arrow is removed from the graphs in Figure 2a-2c. Assumption 8 ensures that the mediator-outcome association is unconfounded which holds, for instance, in the SWIG in Figure 2c. Together, Assumptions 2 and 5-8 allow unmeasured confounders between exposure and outcome given measured covariates such that \(Y^{a}\,{\perp\!\!\!\perp}\,A{\mid}L\) as illustrated in a Directed Acyclic Graph (DAG) in Figure 2a and SWIGs in Figures 2b-2c (see Web Appendix B for details).
### Identification of the Pure Intervention Indirect Effect
Fulcher et al. (2020) introduced the PIIE, which they defined as a contrast between the observed mean outcome and the counterfactual outcome if, possibly contrary to fact, the mediator had taken the value that it would have when exposure equals \(a^{\dagger}\). An example of such a contrast is \(E(Y)-E(Y^{M^{a^{\dagger}},A})\). Inoue et al. (2022) studied a closely related estimand, \(E(Y^{M^{a^{\dagger}},A})-E(Y^{M^{a^{\circ}},A})\), which they called the path specific frontdoor effect.
Instead of imposing Assumptions 7-8, Fulcher et al. (2020) only relied on Assumptions 2, 5-6 and the following cross-world exchangeability assumption to identify the PIIE:
**Assumption 9** (Cross-world exchangeability).: \(Y^{a,m}\operatorname{\hbox{\hbox to 0.0pt{$\perp$}\hbox{$\perp$}}}M^{a^{ \dagger}}\,|\,A,L\) _for all values of \(a,a^{\dagger},m\)._
Assumption 6 is a statement about the absence of discernible single-world confounding between \(M\) and \(A\) given \(L\), while Assumption 9 is a statement about the absence of indiscernible cross-world confounding between \(Y\) and \(M\) conditional on \(A\) and \(L\).
While the identification strategy proposed by Fulcher et al. (2020) permits the presence of unmeasured common causes of exposure and the outcome, and a direct effect of the exposure on the outcome, their remaining assumptions are not innocuous. First, consistency (Assumption 5) is likely to be violated in our running example; for instance, it is not clear how to imagine well-defined interventions on chronic pain. Second, cross-world independence assumptions like Assumption 9 are controversial because they are untestable even in principle (Robins and Richardson, 2010; Dawid, 2000). The untestable cross-world assumption is needed because of the more fundamental fact that the PIIE is a cross-world estimand: a counterfactual quantity that cannot be realized in a single world (Richardson and Robins, 2013). Because the PIIE can never be directly observed, it is not clear how this estimand can be used as a justification for real-world decisions. This is reflected in our difficulty in finding any correspondence between the PIIE and the substantive questions that were of direct interest to the investigators in our example. More generally, in settings where cross-world estimands often are advocated, other estimands seem to better correspond to questions of
practical interest. In particular, Robins and Richardson (2010) illustrated that the practical relevance of natural (pure) effects tends to be justified by different estimands - defined by interventions on modified treatments - using an example on smoking and nicotine. Several other examples have since been given in different settings where other causal effects (e.g., principal stratum) have been advocated in the past (Didelez, 2018; Stensrud et al., 2022a).
Analogous to Robins and Richardson (2010)'s interventionist estimands, which often are appropriate when pure (natural) effects are purportedly of interest, our identification result for \(E(Y^{a_{M}})\) can be interpreted as interventionist estimands that are appropriate when the PIIE is purportedly of interest: instead of assuming well-defined interventions on an unmodifiable exposure (Assumption 5) for a parameter only identified under empirically unverifiable cross-world assumptions (Assumption 9), we consider an intervention on a modifiable exposure \(A_{M}\) that is identifiable under conditions that, in principle, are empirically testable.
## 5. New estimators based on new representations of the efficient influence function of the frontdoor formula
Because our proposed estimand \(E(Y^{a_{M}})\) is identified by the generalized frontdoor formula, we can apply existing estimators for the generalized frontdoor formula, such as the Augmented Inverse Probability Weighted (AIPW) semiparametric estimator in Fulcher et al. (2020). However, an issue with the existing semiparametric estimator is that it does not guarantee that the estimate is bounded by the parameter space, which often results in poor performance in finite samples. In particular, this is a concern in our example, where the outcome is a binary indicator of mortality. Hence, here we develop new semiparametric estimators that are guaranteed to be bounded by the parameter space.
Suppose that the observed data \(\mathcal{O}\)=(\(L\),\(A\),\(M\),\(Y\)) follow a law \(P\) which is known to belong to \(\mathcal{M}\)={\(P_{\theta}\):\(\theta\)\(\in\)\(\Theta\)}, where \(\Theta\) is the parameter space. The efficient influence function \(\varphi^{\text{eff}}(\mathcal{O})\) for
a causal parameter \(\Psi\!\equiv\!\Psi(\theta)\) in a non-parametric model \(\mathcal{M}_{\text{np}}\) that imposes no restrictions on the law of \(\mathcal{O}\) other than positivity is given by \(d\Psi(\theta_{t})/dt|_{t=0}\!=\!E\{\varphi^{\text{eff}}(\mathcal{O})S(\mathcal{O })\}\), where \(d\Psi(\theta_{t})/dt|_{t=0}\) is known as the pathwise derivative of the parameter \(\Psi\) along any parametric submodel of the observed data distribution indexed by \(t\), and \(S(\mathcal{O})\) is the score function of the parametric submodel evaluated at \(t\)=0 (Newey, 1994; Van Der Vaart, 2000).
We will first re-express our causal estimand - identified by the generalized frontdoor formula - as a weighted average of two terms given by \(\psi_{2}\) and \(\psi_{3}\):
\[\Psi{=}P(A{=}a^{\dagger})\underbrace{E(Y\mid A{=}a^{\dagger})}_{\psi_{2}}{+}P(A{=}a^{\circ})\underbrace{\sum_{m,l}E(Y\mid A{=}a^{\circ},M{=}m,L{=}l)f(m\mid a^{\dagger},l)f(l\mid a^{\circ})}_{\psi_{3}}.\]

The efficient influence function of \(\Psi\) in \(\mathcal{M}_{\text{np}}\) is then given by

\[\varphi^{\text{eff}}(\mathcal{O}) = I(A{=}a^{\dagger})Y{+}I(A{=}a^{\circ})\psi_{3}{+}\frac{I(A{=}a^{\circ})P(A{=}a^{\circ}|L)P(A{=}a^{\dagger}|M,L)}{P(A{=}a^{\dagger}|L)P(A{=}a^{\circ}|M,L)}\{Y{-}b_{0}(M,L)\}+ \tag{5}\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}|L)}{P(A{=}a^{\dagger}|L)}\{b_{0}(M,L){-}h_{\dagger}(L)\}{+}I(A{=}a^{\circ})\{h_{\dagger}(L){-}\psi_{3}\}{-}\Psi,\]

where in the equation, \(b_{0}(M,L){=}E(Y|M,L,A{=}a^{\circ})\), \(h_{\dagger}(L){=}E(b_{0}(M,L)|A{=}a^{\dagger},L)\), and the sum of the third, fourth and fifth terms equals \(P(A{=}a^{\circ})\) times the efficient influence function for \(\psi_{3}\). The efficient influence function (5) can also be re-expressed as
\[\varphi^{\text{eff}}(\mathcal{O}) = I(A\text{$=$}a^{\dagger})Y\text{$+$}I(A\text{$=$}a^{\circ})\psi_ {3}\text{$+$}\frac{I(A\text{$=$}a^{\circ})f(M|A\text{$=$}a^{\dagger},L)}{f(M|A \text{$=$}a^{\circ},L)}\{Y\text{$-$}b_{0}(M,L)\}+ \tag{6}\] \[\frac{I(A\text{$=$}a^{\dagger})P(A\text{$=$}a^{\circ}|L)}{P(A \text{$=$}a^{\dagger}|L)}\{b_{0}(M,L)\text{$-$}h_{\dagger}(L)\}\text{$+$}I(A \text{$=$}a^{\circ})\{h_{\dagger}(L)\text{$-$}\psi_{3}\}\text{$-$}\Psi,\]
A proof for the efficient influence function can be found in Web Appendix E. After some algebra, it can be shown that (6) has an alternative representation given by Equation (5) in Theorem 1 in Fulcher et al. (2020). As such, any regular and asymptotically linear estimator \(\hat{\Psi}\) of \(\Psi\) in \(\mathcal{M}_{\text{np}}\) will satisfy the following property: \(\sqrt{n}(\hat{\Psi}\text{$-$}\Psi)\)\(=\)\(n^{-1/2}\)\(\sum_{i=1}^{n}\)\(\varphi^{\text{eff}}(\mathcal{O}_{i})\)\(+\)\(o_{p}(1)\). Furthermore, all regular and asymptotically linear estimators in \(\mathcal{M}_{\text{np}}\) are asymptotically equivalent and attain the semiparametric efficiency bound (Bickel et al., 1998).
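To make the notation in (6) concrete, the following minimal sketch (ours, with a toy data-generating process and a single binary covariate \(L\)) assembles the terms of (6) with fully nonparametric nuisance estimates and averages the uncentered influence function; it is meant only as an illustration of the representation and is not the weighted ICE estimator developed in the next subsection.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy binary data (ours): L is a measured covariate, U an unmeasured A-Y confounder.
L = rng.binomial(1, 0.4, n)
U = rng.binomial(1, 0.5, n)
A = rng.binomial(1, expit(-1 + L + 1.5 * U))
M = rng.binomial(1, expit(-1 + 2 * A - 0.5 * L))
Y = rng.binomial(1, expit(-2 + M + 0.5 * A + L - U))

a_dag, a_circ = 1, 0

# Nonparametric nuisance estimates over the binary strata.
b0   = {(m, l): Y[(A == a_circ) & (M == m) & (L == l)].mean() for m in (0, 1) for l in (0, 1)}
f_m  = {(m, a, l): (M[(A == a) & (L == l)] == m).mean() for m in (0, 1) for a in (0, 1) for l in (0, 1)}
h_dg = {l: sum(b0[m, l] * f_m[m, a_dag, l] for m in (0, 1)) for l in (0, 1)}
pA   = {l: (A[L == l] == a_dag).mean() for l in (0, 1)}

b0_i   = np.array([b0[m, l] for m, l in zip(M, L)])
hdag_i = np.array([h_dg[l] for l in L])
pA_i   = np.array([pA[l] for l in L])
ratio  = np.array([f_m[m, a_dag, l] / f_m[m, a_circ, l] for m, l in zip(M, L)])
psi3   = hdag_i[A == a_circ].mean()

# Sample average of the uncentered influence function, i.e. phi^eff(O) + Psi in (6).
estimate = np.mean(
    (A == a_dag) * Y
    + (A == a_circ) * psi3
    + (A == a_circ) * ratio * (Y - b0_i)
    + (A == a_dag) * (1 - pA_i) / pA_i * (b0_i - hdag_i)
    + (A == a_circ) * (hdag_i - psi3)
)
print("estimate of Psi:", estimate)
```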
### Semiparametric estimators for the generalized frontdoor formula
Writing the efficient influence function for the generalized frontdoor formula (\(\Psi\)) given in Expressions (5) or (6) motivates estimators that guarantee sample-boundedness. A weighted iterative conditional expectation (Weighted ICE) estimator that guarantees sample-boundedness is presented in the following algorithm. In what follows, we let \(\mathbb{P}_{n}(X)\)\(=\)\(n^{-1}\sum_{i=1}^{n}X_{i}\) and let \(g^{-1}\) denote a known inverse link function satisfying \(\inf(\mathbf{Y})\)\(\leq\)\(g^{-1}(u)\)\(\leq\)\(\sup(\mathbf{Y})\), for all \(u\), where \(\mathbf{Y}\) is the sample space of \(Y\) (e.g., a logit link for dichotomous \(Y\)).
In Algorithm 1, steps 3, 4 and 5 ensure that the estimates for \(\psi_{3}\)\(=\)\(E\{h_{\dagger}(L)|A\)\(=\)\(a^{\circ}\}\) are sample bounded. Moreover, in Step 6 it is clear that \(\hat{\Psi}_{WICE}\) is a convex combination of \(Y\) and estimates for \(\psi_{3}\), both of which are bounded by the range of the outcome \(Y\). Thus, \(\hat{\Psi}_{WICE}\) will also be sample-bounded. In Web Appendix F, we prove that the proposed
estimator, which is based on the efficient influence function given by (5) and (6), is robust against three classes of model misspecification scenarios.
**Theorem 3**.: _Under standard regularity conditions, the weighted ICE estimator \(\hat{\Psi}_{WICE}\) where a model for \(P(A{=}a\left|M,L\right)\) is specified will be consistent and asymptotically normal under the union model \(\mathcal{M}_{union}{=}\mathcal{M}_{1}\cup\mathcal{M}_{2}\cup\mathcal{M}_{3}\) where we define:_
1. _Model_ \(\mathcal{M}_{1}\)_: working models for_ \(P(A{=}a\left|M,L\right)\) _and_ \(P(A{=}a\left|L\right)\) _are correctly specified._
2. _Model_ \(\mathcal{M}_{2}\)_: working models for_ \(b_{0}(M,L)\) _and_ \(h_{\dagger}(L)\) _are correctly specified._
3. _Model_ \(\mathcal{M}_{3}\)_: working models for_ \(b_{0}(M,L)\) _and_ \(P(A{=}a\left|L\right)\) _are correctly specified._
_The weighted ICE estimator \(\hat{\Psi}_{WICE}\) where a model for \(P(M{=}m\left|A,L\right)\) is specified will be consistent and asymptotically normal under the union model \(\mathcal{M}_{union}{=}\mathcal{M}_{1}\cup\mathcal{M}_{2}\cup\mathcal{M}_{3}\), where we define: (1) Model \(\mathcal{M}_{1}\): working models for \(P(M{=}m\left|A,L\right)\) and \(P(A{=}a\left|L\right)\) are correctly specified; (2) Model \(\mathcal{M}_{2}\): working models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified; (3) Model \(\mathcal{M}_{3}\): working models for \(b_{0}(M,L)\) and \(P(A{=}a\left|L\right)\) are correctly specified. Moreover, \(\hat{\Psi}_{WICE}\) is locally efficient in the sense that it achieves the semiparametric efficiency bound for \(\Psi\) in \(\mathcal{M}_{np}\), i.e., \(E\left[\varphi^{\text{eff}}(\mathcal{O})^{2}\right]\), at the intersection model given by \(\mathcal{M}_{intersection}{=}\mathcal{M}_{1}\cap\mathcal{M}_{2}\cap\mathcal{M}_{3}\)._
Note that it is possible that the weighted ICE estimator is consistent when the models for \(b_{0}(M,L)\) and \(E(M\left|A,L\right)\) are correctly specified. For instance, this would be the case when \(Y\) and \(M\) are continuous, in which case the model \(R(L;\eta)\) for \(h_{\dagger}(L)\) can also be correctly specified in this model specification scenario.
Unlike the estimator proposed by Fulcher et al. (2020), the estimator proposed here requires specification of four models instead of three, and thus our proposed estimator is in that sense less robust against model misspecification compared to that of the AIPW estimator in Fulcher et al. (2020). Consequently, the AIPW estimator in Fulcher et al. (2020), which requires specification of only three (vs. four) working models, is robust against two (vs. three) classes of model misspecification scenarios. Specifically, it will be consistent when
at least (1) the models for \(b_{0}(M,L)\) and \(P(A{=}a|\,L)\) are correctly specified, _or_ (2) only the model for \(P(M{=}m|\,A,L)\) is correctly specified.
We view the extra model specification as a trade-off between robustness and sample-boundedness. Moreover, when (i) \(M\) is a continuous variable, as in the 'Safer deliveries' application in Fulcher et al. (2020), and/or (ii) there are multiple mediator variables (\(M_{1}\), \(M_{2}\), \(M_{3}\dots\)), the model for the mediator variable is difficult to correctly specify. In Web Appendix F, we prove that in the absence of \(L\) our weighted ICE estimator based on the efficient influence function for the (non-generalized) frontdoor formula is doubly robust in the sense that it will be consistent as long as the model for \(P(A{=}a|\,M)\) - or \(P(M{=}m|\,A)\), depending on the representation - or the model for \(E(Y|\,A{=}a^{\circ},M)\) is correctly specified. As such, for the (non-generalized) frontdoor formula in the absence of \(L\), the double robustness property of our estimator is the same as that of Fulcher et al. (2020).
In Web Appendix G, we describe a targeted maximum likelihood estimator (TMLE) for \(\Psi\), which is a variation of the weighted ICE estimator given above. The TMLE can accommodate complex machine learning algorithms for estimating all nuisance functions (Van der Laan and Rose, 2011; Chernozhukov et al., 2018; Wen et al., 2021). The advantage of using TMLE with machine learning algorithms for the nuisance functions is that the estimator is still consistent and asymptotically normal as long as the nuisance functions converge to the truth at rates faster than \(n^{-1/4}\)(Robins et al., 2008; Chernozhukov et al., 2018) (see Web Appendix J). Nevertheless, in real world applications there is no guarantee that such rates of convergence can be attained when more flexible algorithms such as neural network or random forest are used. Moreover, there is no guarantee that these machine learning methods will exhibit more or less bias compared with parametric models (Liu et al., 2020).
## 6. Simulation study
We conducted a simulation study to demonstrate that (1) our estimand, like the PIIE, is robust to unmeasured confounding between exposure and outcome, (2) our proposed estimator is more robust to model misspecification compared with estimators such as the Inverse Probability Weighted (IPW) estimator and the ICE estimator (described in Web Appendix G), and (3) unlike the AIPW estimator of Fulcher et al. (2020) our estimator can also ensure sample boundedness in any finite sample size setting.
The simulation study was based on 1000 simulated data sets of sample sizes \(n\)=100, 250 and 500. We compared the bias, standardized bias and efficiency of IPW, ICE, AIPW and weighted ICE estimators. Standardized bias is 100\(\times\)[(Average Estimate\(-\)True Parameter)/empirical standard deviation of the parameter estimates]. It has been suggested that for \(n\)=500, anything greater than an absolute standardized bias of approximately 40% will have a 'noticeable adverse impact on efficiency, coverage, and error rates' (Collins et al., 2001). The true value of \(\Psi\)=0.0144 was calculated by generating a Monte Carlo sample of size \(10^{7}\). We intentionally chose a rare outcome to compare the performance of the estimators where the AIPW estimator had a non-trivial chance of falling outside of the parameter space.
The data-generating mechanism for our simulations and model specifications are provided in Table 1. We consider four scenarios to illustrate the robustness of our proposed estimator to model misspecification: (1) all models are approximately correctly specified, (2) only the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are approximately correctly specified, (3) only the models for \(b_{0}(M,L)\) and \(P(A\)=\(a\)\(|\)\(L)\) are approximately correctly specified, and (4) only the model for \(P(M\)=\(m\)\(|\)\(A,L)\) is correctly specified _and_ the model for \(P(A\)=\(a\)\(|\)\(L)\) is approximately correctly specified. The correct mediator model in the specification scenarios is the one used in the data generation process, and the exposure and outcome models are approximately correctly specified by including pairwise interactions between all the variables to ensure flexibility.
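For reproducibility, the data-generating mechanism in Table 1 can be transcribed into code as in the sketch below (our transcription, not the authors' simulation script); the standardized-bias helper follows the definition given above, and the four estimators are then applied to each simulated replicate.

```python
import numpy as np

expit = lambda x: 1.0 / (1.0 + np.exp(-x))

def generate_data(n, rng):
    """One simulated data set from the data-generating mechanism in Table 1."""
    U  = rng.binomial(1, 0.5, n)
    L1 = rng.normal(0.0, 1.0, n)
    L2 = rng.binomial(1, expit(1 + 2 * L1))
    A  = rng.binomial(1, expit(-1 - 3 * L1 + L2 + 5 * L1 * L2 + 2 * U))
    M  = rng.binomial(1, expit(1 - A - 2 * L1 + 2 * L2 + 3 * L1 * L2))
    Y  = rng.binomial(1, expit(-4 + 2 * A + M - 2 * A * M + 2 * L1 - 2 * L2 - 5 * L1 * L2 - U))
    return dict(U=U, L1=L1, L2=L2, A=A, M=M, Y=Y)

def standardized_bias(estimates, truth):
    """100 x (mean estimate - truth) / empirical SD of the estimates (Section 6)."""
    est = np.asarray(estimates, dtype=float)
    return 100.0 * (est.mean() - truth) / est.std(ddof=1)

rng = np.random.default_rng(2023)
# 1000 replicates for each sample size considered in the simulation study.
replicates = {n: [generate_data(n, rng) for _ in range(1000)] for n in (100, 250, 500)}
```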
Table 2 shows the results from the simulation study. Consistent with our theoretical derivations, when all of the working models are (approximately) correctly specified, all of the estimators become nearly unbiased as the sample size increases. For \(n\)=500, the AIPW estimator and our proposed weighted ICE estimator are also nearly unbiased in the three model misspecification settings whereas the IPW and ICE estimators are not all unbiased. In Web Appendix I, we show an additional simulation study where the variables are all binary and thus the correct exposure and outcome models are saturated models that cannot be misspecified. The results further show the robustness of our estimator in model misspecification scenarios.
Finally, ICE, AIPW and weighted ICE have comparable standard errors when all of the models are correctly specified, but all three had lower standard errors compared with IPW. However, our results also show that AIPW estimator has poorer finite sample performance when \(n\)=100 compared with all other estimators. Moreover, for \(n\)=100, there were 90, 59 and 88 simulated data sets where estimates fell below 0 in scenarios 1, 3 and 4, respectively.
## 7. Application
Motivated by the chronic pain example, we applied our results to study the causal effect of modified prescription policies for opioids on mortality in patients with chronic pain. The intervention of interest is a new prescription policy, in which the doctor is set to ignore (i.e., consider as absent, possibly counter to fact) each patient's chronic pain for the purpose of opioid prescription decisions, and otherwise use measured covariates as usual (according to standard treatment guidelines). Thus, this policy is an intervention on a _doctor's perception_
of chronic pain, \(A_{M}\). This intervention is consistent with a recently proposed policy by the CDC that implores practicing physicians to no longer consider chronic pain as an indication for opioid therapy. While the doctors' perceptions are not explicitly measured in the observed data, we may reasonably assume that (in the absence of intervention) such perceptions perfectly correspond with the medical reality of the patient (that is, \(A\)=\(A_{M}\) almost surely).
We used the dataset from Inoue et al. (2022), which includes observations from the US NHANES that are linked to a national mortality database (National Death Index). The NHANES study consists of a series of in-depth in-person interviews, medical and physical examinations, and laboratory tests aimed at understanding various emerging needs in public health and nutrition. Sample data are from 1999-2004 and include information on individuals' chronic pain statuses (\(A\)), opioid prescriptions (\(M\)), mortality (\(Y\)), and covariates (\(L\)) including age, sex assigned at birth (male and female), race (non-Hispanic White, non-Hispanic Black, Mexican-American, or others), education levels (less than high school, high school or General Education Degree, or more than high school), poverty-income ratio (the ratio of household income to the poverty threshold), health insurance coverage, marital status, smoking, alcohol intake, and anti-depressant medication prescription. Let \(A_{M}\) denote the _intervening_ variable: the doctor's perception of the chronic pain of the patient.
Our sample included 12037 individuals. Following Inoue et al. (2022), an individual is considered to have chronic pain if they reported pain for at least three months by the International Association for the Study of Pain criteria (Merskey and Bogduk, 1994). Moreover, data on prescription medications for pain relief used in the past 30 days were collected in the in-person interview. Opioids identified through the process include codeine, fentanyl, oxycodone, pentazocine and morphine. About 16% of the individuals in the sample experienced chronic pain and approximately 5% of the individuals in the sample reported using opioids. Detailed data description and summary statistics can be found in Inoue et al. (2022).
We estimated the cumulative incidence \(E(Y^{a_{M}=0})\) and the causal contrast \(E(Y)-E(Y^{a_{M}=0})\) using ICE, IPW and our weighted ICE estimator by specifying logistic regression models for
the outcome, mediator and exposure. We used the same logistic regression models as that of Inoue et al. (2022) by adjusting for all the previously-listed measured covariates as potential confounders. All 95% confidence intervals were based on the 2.5 and 97.5 percentiles of a non-parametric bootstrap procedure with 1000 bootstrapped samples.
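The interval construction can be sketched as follows, where `fit_weighted_ice` is a hypothetical function (ours) that refits all logistic working models on a bootstrap index set and returns the estimate of \(E(Y^{a_{M}=0})\); only the percentile-bootstrap wrapper is shown.

```python
import numpy as np

def bootstrap_percentile_ci(n, estimator, n_boot=1000, alpha=0.05, seed=2023):
    """Non-parametric percentile bootstrap as used in Section 7: resample the n
    rows with replacement, re-estimate, and report the 2.5th and 97.5th
    percentiles of the bootstrap estimates."""
    rng = np.random.default_rng(seed)
    boot = [estimator(rng.integers(0, n, size=n)) for _ in range(n_boot)]
    lower, upper = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lower, upper

# Usage sketch (hypothetical): fit_weighted_ice(idx) refits the working models
# on rows idx of the analysis file and returns the point estimate.
# ci = bootstrap_percentile_ci(n=12037, estimator=fit_weighted_ice)
```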
The ICE procedure estimated \(E(Y^{a_{M}=0})\) to be 4.76% (SE=0.20, 95% CI=(4.36, 5.16)) in 3 years and 8.55% (SE=0.26, 95% CI=(8.04, 9.09)) in 5 years. The IPW procedure estimated this cumulative incidence to be 4.95% (SE=0.20, 95% CI=(4.57, 5.35)) in 3 years and 8.82% (SE=0.26, 95% CI=(8.30, 9.35)) in 5 years. The weighted ICE procedure estimated this cumulative incidence to be 4.76% (SE=0.20, 95% CI=(4.36, 5.16)) in 3 years and 8.55% (SE=0.26, 95% CI=(8.04, 9.09)) in 5 years.

Moreover, the ICE procedure estimated \(E(Y)-E(Y^{a_{M}=0})\) to be 0.22% (SE=0.29, 95% CI=(\(-\)0.32, 0.82)) in 3 years and 0.25% (SE=0.08, 95% CI=(0.09, 0.40)) in 5 years. The IPW procedure estimated this causal contrast to be 0.02% (SE=0.20, 95% CI=(\(-\)0.38, 0.40)) in 3 years and -0.03% (SE=0.26, 95% CI=(\(-\)0.55, 0.49)) in 5 years. The weighted ICE procedure estimated this causal contrast to be 0.22% (SE=0.29, 95% CI=(\(-\)0.33, 0.82)) in 3 years and 0.25% (SE=0.08, 95% CI=(0.09, 0.40)) in 5 years.
The three estimators produced similar results. As such, our analysis suggests that under an intervention on the doctor's perception of chronic pain when making decisions about opioid treatment, the cumulative incidence of death is almost identical to the cumulative incidence in the observed data after 3 years, but may decrease risk very slightly after 5 years.
## 8. Discussion
We have derived identification results that justify the use of the frontdoor formula in new settings, reflecting questions of practical interest, where unmeasured confounding is a serious
concern. Our identification results do not rely on ill-defined interventions or cross-world assumptions. Specifically, we proposed an estimand defined by an intervention on a modifiable descendant of an exposure or treatment, which we call an _intervening_ variable. Like the previously proposed PIIE, our proposed estimand is identified by the frontdoor formula even in the presence of a direct effect of the exposure on the outcome not mediated by an intermediate variable. But unlike the PIIE, our estimand is identifiable under conditions that, in principle, are empirically testable. In addition, we presented an example in which our proposed estimand is practically relevant. In this example, the exposure variable - chronic pain - was difficult, if not impossible, to intervene on. However, we argued that interventions on the intervening variable, rather than the exposure, are often of practical interest in settings where interventions on exposures are ill-defined.
When our proposed estimand is identified by the frontdoor formula in the absence of \(L\), our proposed estimator and the existing AIPW estimator of Fulcher et al. (2020) are both doubly robust in the sense that they are consistent as long as (1) the model for \(P(M{=}m\left|A\right)\) is correctly specified or (2) the model for \(b_{0}(M)\) is correctly specified. For the generalized frontdoor formula, our proposed estimator is triply robust when (1) the models for \(P(M{=}m\left|A,L\right)\) _and_ \(P(A{=}a\left|L\right)\) are correctly specified, (2) the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified, or (3) the models for \(b_{0}(M,L)\) and \(P(A{=}a\left|L\right)\) are correctly specified. Compared with the existing AIPW estimator of Fulcher et al. (2020), there is one disadvantage of our estimator for the generalized frontdoor formula: our estimator requires specification of four models instead of three in the AIPW estimator. As a result, the AIPW estimator is doubly robust when (1) the models for \(b_{0}(M,L)\) and \(P(A{=}a\left|L\right)\) are correctly specified, _or_ (2) only the model for \(P(M{=}m\left|A,L\right)\) is correctly specified. However, we view this as a trade-off to guarantee sample-boundedness, which is a desirable property in estimators based on the efficient influence function. Indeed, as stated in Robins et al. (2007), "With highly variable weights, 'boundedness' trumps unbiasedness." A key advantage of our semiparametric estimator is that estimates of \(\Psi\) are ensured to be bounded by the parameter space, regardless of the sample size and variability of inverse probability weights.
In practice it is advised to define richly parameterized models for \(P(A{=}a\,|\,L)\) and \(h_{\dagger}(L)\) to ameliorate model incompatibility issues between \(P(A{=}a\,|\,L)\) and \(P(A{=}a\,|\,M,L)\) and between \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) (Vansteelandt et al., 2007; Tchetgen Tchetgen and Shpitser, 2014). However, similar to the AIPW estimator in Fulcher et al. (2020), our proposed estimator can also accommodate machine learning algorithms through trivial modifications, which in principle can achieve \(\sqrt{n}\)-consistency as long as the nuisance functions converge at sufficiently fast rates. Moreover, when the mediator variable is continuous, the AIPW estimator of Fulcher et al. (2020) involves a preliminary estimator of a density ratio. Direct estimation of the density ratio is often cumbersome as correct specification of the probability density of \(M\) is difficult. This is a particularly challenging task when data-adaptive estimators are used for estimating high-dimensional nuisance parameters. Our proposal allows for an alternative estimation procedure to accommodate continuous mediator variables by modeling the propensity score of exposure/treatment instead. Finally, our semiparametric estimator can be applied whenever the frontdoor formula identifies the parameter of interest, which, e.g., could be the ACE, the PIIE or our interventionist estimand. Our results also motivate future methodological work. In particular, we aim to generalize our results to longitudinal settings involving time-varying treatments.
## Acknowledgement
The authors thank Dr. Kosuke Inoue for the access to the NHANES dataset used in the data application. Lan Wen is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant [RGPIN-2023-03641]. Mats J. Stensrud and Aaron L. Sarvet are supported by the Swiss National Science Foundation, grant 200021_207436.
## References
* Angrist et al. (1996) Angrist, J. D., Imbens, G. W., and Rubin, D. B. (1996). Identification of causal effects using instrumental variables. _Journal of the American statistical Association_, 91(434):444-455.
* Bickel et al. (1998) Bickel, P. J., Klaassen, C. A., Ritov, Y., and Wellner, J. A. (1998). _Efficient and adaptive estimation for semiparametric models_, volume 2. New York: Springer.
* Chernozhukov et al. (2018) Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., and Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. _The Econometrics Journal_, 21:C1-C68.
* Collins et al. (2001) Collins, L., Schafer, J., and Kam, C. (2001). A comparison of inclusive and restrictive strategies in modern missing data procedures. _Psychological Methods_, 6:330-351.
* Dawid (2000) Dawid, A. P. (2000). Causal inference without counterfactuals. _Journal of the American statistical Association_, 95(450):407-424.
* Didelez (2018) Didelez, V. (2018). Causal concepts and graphical models. In _Handbook of graphical models_, pages 353-380. CRC Press.
* Dowell et al. (2016) Dowell, D., Haegerich, T. M., and Chou, R. (2016). CDC guideline for prescribing opioids for chronic pain - United States, 2016. _JAMA_, 315(15):1624-1645.
* Fulcher et al. (2020) Fulcher, I. R., Shpitser, I., Marealle, S., and Tchetgen Tchetgen, E. J. (2020). Robust inference on population indirect causal effects: the generalized front door criterion. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 82:199-214.
* Galea (2013) Galea, S. (2013). An argument for a consequentialist epidemiology. _American journal of epidemiology_, 178(8):1185-1191.
* Gerdeman (2017) Gerdeman, D. (2017). Minorities who 'whiten' job resumes get more interviews. _Harvard Business School: Working Knowledge_.
* Hernan (2005) Hernan, M. A. (2005). Invited commentary: hypothetical interventions to define causal effects--afterthought or prerequisite? _American journal of epidemiology_, 162(7):618-620.
* Hernan and Taubman (2008) Hernan, M. A. and Taubman, S. L. (2008). Does obesity shorten life? the importance of well-defined interventions to answer causal questions. _International journal of obesity_, 32(3):S8-S14.
* Hernan and VanderWeele (2011) Hernan, M. A. and VanderWeele, T. J. (2011). Compound treatments and transportability of causal inference. _Epidemiology (Cambridge, Mass.)_, 22(3):368.
* Holland (1986) Holland, P. W. (1986). Statistics and causal inference. _Journal of the American statistical Association_, 81(396):945-960.
* Ichimura and Newey (2022) Ichimura, H. and Newey, W. K. (2022). The influence function of semiparametric estimators. _Quantitative Economics_, 13:29-61.
* Inoue et al. (2022) Inoue, K., Ritz, B., and Arah, O. A. (2022). Causal effect of chronic pain on mortality through opioid prescriptions: Application of the front-door formula. _Epidemiology_, 33(4):572-580.
* Kang et al. (2016) Kang, S. K., DeCelles, K. A., Tilcsik, A., and Jun, S. (2016). Whitened resumes: Race and self-presentation in the labor market. _Administrative Science Quarterly_, 61:469-502.
* Kennedy (2022) Kennedy, E. H. (2022). Semiparametric doubly robust targeted double machine learning: a review. _arXiv preprint arXiv:2203.06469_.
* Lipsitch et al. (2010) Lipsitch, M., Tchetgen, E. T., and Cohen, T. (2010). Negative controls: a tool for detecting confounding and bias in observational studies. _Epidemiology (Cambridge, Mass.)_, 21(3):383.
* Liu et al. (2020) Liu, L., Mukherjee, R., and Robins, J. M. (2020). On nearly assumption-free tests of nominal confidence interval coverage for causal parameters estimated by machine learning. _Statistical Science_, 35:518-539.
* Merskey and Bogduk (1994) Merskey, H. and Bogduk, N. (1994). Classification of chronic pain, IASP Task Force on Taxonomy.
* Miao et al. (2018) Miao, W., Geng, Z., and Tchetgen Tchetgen, E. J. (2018). Identifying causal effects with proxy variables of an unmeasured confounder. _Biometrika_, 105(4):987-993.
* Newey (1994) Newey, W. K. (1994). The asymptotic variance of semiparametric estimators. _Econometrica: Journal of the Econometric Society_, pages 1349-1382.
* Pearl (1993) Pearl, J. (1993). Mediating instrumental variables. _Technical report_.
* Pearl (2009) Pearl, J. (2009). _Causality_. Cambridge university press.
* Richardson and Robins (2013) Richardson, T. S. and Robins, J. M. (2013). Single world intervention graphs (swigs): A unification of the counterfactual and graphical approaches to causality. _Center for the Statistics and the Social Sciences, University of Washington Series. Working Paper_, 128(30):2013.
* Robins et al. (2008) Robins, J., Li, L., Tchetgen, E., and van der Vaart, A. (2008). Higher order influence functions and minimax estimation of nonlinear functionals. _Probability and statistics: essays in honor of David A. Freedman_, 2:335-421.
* Robins et al. (2007) Robins, J., Sued, M., Lei-Gomez, Q., and Rotnitzky, A. (2007). Comment: Performance of double-robust estimators when "inverse probability" weights are highly variable. _Statistical Science_, 22:544-559.
* Robins and Richardson (2010) Robins, J. M. and Richardson, T. S. (2010). Alternative graphical causal models and the identification of direct effects. _Causality and psychopathology: Finding the determinants of disorders and their cures_, 84:103-158.
* Robins et al. (2020) Robins, J. M., Richardson, T. S., and Shpitser, I. (2020). An interventionist approach to mediation analysis. _arXiv preprint arXiv:2008.06019_.
* Robins et al. (2022) Robins, J. M., Richardson, T. S., and Shpitser, I. (2022). An interventionist approach to mediation analysis. In _Probabilistic and Causal Inference: The Works of Judea Pearl_, pages 713-764.
* Stensrud and Dukes (2022) Stensrud, M. J. and Dukes, O. (2022). Translating questions to estimands in randomized clinical trials with intercurrent events. _Statistics in Medicine_.
* Stensrud et al. (2021a) Stensrud, M. J., Hernan, M. A., Tchetgen Tchetgen, E. J., Robins, J. M., Didelez, V., and Young, J. G. (2021a). A generalized theory of separable effects in competing event settings. _Lifetime data analysis_, 27:588-631.
* Stensrud et al. (2022a) Stensrud, M. J., Robins, J. M., Sarvet, A., Tchetgen Tchetgen, E. J., and Young, J. G. (2022a). Conditional separable effects. _Journal of the American Statistical Association_, pages 1-29.
Stensrud, M. J., Young, J. G., Didelez, V., Robins, J. M., and Hernan, M. A. (2022b). Separable effects for causal inference in the presence of competing events. _Journal of the American Statistical Association_, 117(537):175-183.
* Stensrud et al. (2021b) Stensrud, M. J., Young, J. G., and Martinussen, T. (2021b). Discussion on "Causal mediation of semicompeting risks" by Yen-Tsung Huang. _Biometrics_, 77(4):1160-1164.
* Tchetgen and Shpitser (2014) Tchetgen Tchetgen, E. J. and Shpitser, I. (2014). Estimation of a semiparametric natural direct effect model incorporating baseline covariates. _Biometrika_, 101:849-864.
* Tchetgen et al. (2020) Tchetgen Tchetgen, E. J., Ying, A., Cui, Y., Shi, X., and Miao, W. (2020). An introduction to proximal causal learning. _arXiv preprint arXiv:2009.10982_.
* Van der Laan and Rose (2011) Van der Laan, M. J. and Rose, S. (2011). _Targeted learning: causal inference for observational and experimental data_, volume 10. Springer.
* Van der Laan and Rose (2018) Van der Laan, M. J. and Rose, S. (2018). _Targeted learning in data science_. Springer.
* Van Der Vaart (2000) Van Der Vaart, A. W. (2000). _Asymptotic statistics_, volume 3. Cambridge university press.
* Vansteelandt et al. (2007) Vansteelandt, S., Rotnitzky, A., and Robins, J. (2007). Estimation of regression models for the mean of repeated outcomes under nonignorable nonmonotone nonresponse. _Biometrika_, 94:841-860.
* Wen et al. (2021) Wen, L., Marcus, J., and Young, J. (2021). Intervention treatment distributions that depend on the observed treatment process and model double robustness in causal survival analysis. _arXiv preprint arXiv:2112.00807_.
Figure 1. DAG and SWIGs of modified extended graph.
Figure 2. DAG and SWIGs of random variables under which average counterfactual outcome can be identified.
```
1:Non-parametrically compute \(P(A{=}a^{\circ})\) and \(P(A{=}a^{\dagger})\).
2:Compute the maximum likelihood estimate (MLE) \(\hat{\kappa}\) of \(\kappa\) from the observed data for the exposure or treatment model \(P(A{=}a\left|L;\kappa\right)\). In addition, compute the MLE \(\hat{\alpha}\) of \(\alpha\) from the observed data for the exposure or treatment model \(P(A{=}a\left|M,L;\alpha\right)\), or compute the MLE \(\hat{\gamma}\) of \(\gamma\) from the observed data for the mediator model \(P(M{=}m\left|A,L;\gamma\right)\).
3:In the individuals whose \(A{=}a^{\circ}\), fit a regression model \(Q(M,L;\theta){=}g^{-1}\{\theta^{T}\phi(M,L)\}\) for \(b_{0}(M,L){=}E(Y\left|M,L,A{=}a^{\circ})\) where the score function for each observation is weighted by \(\hat{W}_{1}\). Here, \(\hat{W}_{1}\) equals \[\frac{P(A{=}a^{\circ}\left|L;\hat{\kappa})P(A{=}a^{\dagger}\left|M,L;\hat{ \alpha}\right)}{P(A{=}a^{\dagger}\left|L;\hat{\kappa})P(A{=}a^{\circ}\left|M,L; \hat{\alpha}\right)}\] if \(\hat{\alpha}\) was estimated in the previous step, or \(\hat{W}_{1}\) equals \[\frac{f(M\left|A{=}a^{\dagger},L;\hat{\gamma})}{f(M\left|A{=}a^{\circ},L;\hat{ \gamma}\right)}\] if \(\hat{\gamma}\) was estimated in the previous step, and \(\phi(M,L)\) is a known function of \(M\) and \(L\). More specifically, we solve for \(\theta\) in the following estimating equations: \[\mathbb{P}_{n}\left[I(A{=}a^{\circ})\phi(M,L)\hat{W}_{1}\left\{Y-Q(M,L;\theta )\right\}\right]{=}0.\]
4:In those whose \(A{=}a^{\dagger}\), fit a model \(R(L;\eta){=}g^{-1}\{\eta^{T}\Gamma(L)\}\) for \(h_{\dagger}(L){=}E(b_{0}(M,L)\left|L,A{=}a^{\dagger})\) where the score function for each observation is weighted by \[\hat{W}_{2}{=}\frac{P(A{=}a^{\circ}\left|L;\hat{\kappa})}{P(A{=}a^{\dagger} \left|L;\hat{\kappa}\right)}.\] Here, \(\Gamma(L)\) is a known function of \(L\). More specifically, we solve for \(\eta\) in the following estimating equations: \[\mathbb{P}_{n}\left[I(A{=}a^{\dagger})\Gamma(L)\hat{W}_{2}\left\{Q(M,L;\hat{ \theta})-R(L;\eta)\right\}\right]{=}0.\]
5:In those whose \(A{=}a^{\circ}\), fit an intercept-only model \(T(\beta){=}g^{-1}(\beta)\) for \(\psi_{3}{=}E\{h_{\dagger}(L)\left|A{=}a^{\circ}\right\}\). More specifically, we solve for \(\beta\) in the following estimating equations: \[\mathbb{P}_{n}[I(A{=}a^{\circ})\left\{R(L;\hat{\eta})-T(\beta)\right\}]{=}0.\]
6:Compute \(\hat{T}{\equiv}T(\hat{\beta})\) for all observations.
7:Estimate \(\hat{\Psi}_{WICE}{=}\mathbb{P}_{n}\{I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\hat{T}\}\).
```
**Algorithm 1** Algorithm for Weighted ICE (generalized frontdoor formula)
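A minimal implementation sketch of Algorithm 1 is given below, using the exposure-model route for \(\hat{W}_{1}\); the main-effects logistic design matrices, the small Newton solver (without step-halving safeguards), and all function names are our own illustrative choices rather than the authors' implementation.

```python
import numpy as np

expit = lambda x: 1.0 / (1.0 + np.exp(-x))

def wlogit(X, y, w=None, iters=50):
    """Solve the weighted logistic score equations X^T [w * (y - expit(X b))] = 0
    by Newton-Raphson; y may itself be a fitted value in (0, 1) (quasi-binomial)."""
    w = np.ones(len(y)) if w is None else w
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = expit(np.clip(X @ b, -30, 30))
        grad = X.T @ (w * (y - p))
        hess = (X * (w * p * (1 - p))[:, None]).T @ X
        b = b + np.linalg.solve(hess + 1e-10 * np.eye(X.shape[1]), grad)
    return b

def weighted_ice(Y, A, M, L, a_dag=1, a_circ=0):
    """Sketch of Algorithm 1 with logistic working models; L may be a column or
    a matrix of baseline covariates."""
    one = np.ones(len(Y))
    XL  = np.column_stack([one, L])        # features for P(A|L) and R(L; eta)
    XML = np.column_stack([one, M, L])     # features for P(A|M,L) and Q(M,L; theta)

    # Steps 1-2: exposure models P(A = a_dag | L) and P(A = a_dag | M, L).
    pA_dag_L  = expit(XL  @ wlogit(XL,  (A == a_dag).astype(float)))
    pA_dag_ML = expit(XML @ wlogit(XML, (A == a_dag).astype(float)))

    # Step 3: weighted fit of Q(M,L; theta) for b0(M,L) among A = a_circ.
    W1 = ((1 - pA_dag_L) * pA_dag_ML) / (pA_dag_L * (1 - pA_dag_ML))
    i0, i1 = (A == a_circ), (A == a_dag)
    theta = wlogit(XML[i0], Y[i0], W1[i0])
    Q = expit(XML @ theta)

    # Step 4: weighted fit of R(L; eta) for h_dagger(L) among A = a_dag.
    W2 = (1 - pA_dag_L) / pA_dag_L
    eta = wlogit(XL[i1], Q[i1], W2[i1])
    R = expit(XL @ eta)

    # Step 5: intercept-only model for psi_3 among A = a_circ.
    T_hat = R[i0].mean()

    # Steps 6-7: convex combination of Y and T_hat, hence sample-bounded.
    return np.mean(i1 * Y + i0 * T_hat)
```

Because the final step averages \(Y\) (when \(A{=}a^{\dagger}\)) and fitted values of \(R(L;\hat{\eta})\) (when \(A{=}a^{\circ}\)), the returned estimate is automatically bounded by the range of the outcome, which is the sample-boundedness property emphasized above.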
\begin{table}
\begin{tabular}{||l l||} \hline \hline \multicolumn{3}{||c||}{Data generating mechanism} \\ \hline \(U\!\sim\) & Ber(0.5) \\ \(L_{1}\!\sim\) & Normal(0,1) \\ \(L_{2}\!\sim\) & Ber\{expit(1\!+\!2L_{1})\} \\ \(A\!\sim\) & Ber\{expit(-1\!-\!3L_{1}\!+\!L_{2}\!+\!5L_{1}L_{2}\!+\!2U)\} \\ \(M\!\sim\) & Ber\{expit(1\!-\!A\!-\!2L_{1}\!+\!2L_{2}\!+\!3L_{1}L_{2})\} \\ \(Y\!\sim\) & Ber\{expit(-4\!+\!2A\!+\!M\!-\!2AM\!+\!2L_{1}\!-\!2L_{2}\!-\!5L_{1}L_{2}\!-\! U)\} \\ \hline \multicolumn{3}{||c||}{Model misspecification} \\ \hline Scenario 2 & \(P(M\!\!=\!\!m|L,\!A;\!\gamma)\!=\!\!\expit(\gamma_{0}\!+\!\gamma_{1}A\!+\! \gamma_{2}L_{1}\!+\!\gamma_{3}L_{2})\) \\ & \(P(A\!\!=\!\!a|L;\!\kappa)\!=\!\!\expit(\kappa_{0}\!+\!\kappa_{1}L_{1}\!+\! \kappa_{2}L_{1}^{2})\) \\ Scenario 3 & \(P(M\!\!=\!\!m|L,\!A;\!\gamma)\!=\!\expit(\gamma_{0}\!+\!\gamma_{1}A\!+\! \gamma_{2}L_{2})\) \\ & \(R(L;\!\theta)\!=\!\expit(\eta_{0}\!+\!\eta_{1}L_{2})\) \\ Scenario 4 & \(Q(M,\!L;\!\theta)\!=\!\expit(\theta_{0}\!+\!\theta_{1}M\!+\!\theta_{2}L_{1}\! +\!\theta_{3}L_{2})\) \\ & \(R(L;\!\theta)\!=\!\expit(\eta_{0}\!+\!\eta_{1}L_{2}(1\!-\!L_{1}))\) \\ \hline \hline \end{tabular}
\end{table}
Table 1. Data generating mechanism and model misspecifications for scenarios in simulation study.
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c c|c c c|} \hline & \multicolumn{3}{c|}{Scenario 1} & \multicolumn{3}{c|}{Scenario 2} & \multicolumn{3}{c|}{Scenario 3} & \multicolumn{3}{c|}{Scenario 4} \\ \hline & Bias & SE & Bias\({}_{s}\) & Bias & SE & Bias\({}_{s}\) & Bias & SE & Bias\({}_{s}\) & Bias & SE & Bias\({}_{s}\) \\ \hline \multicolumn{11}{|c|}{\(n\)=100} \\ \hline IPW & 0.21 & 2.19 & 9.52 & -0.23 & 1.38 & -16.67 & 0.21 & 2.19 & 9.52 & 3.13 & 2.12 & 147.8 \\ ICE & 0.15 & 1.74 & 8.42 & 0.15 & 1.74 & 8.42 & -0.28 & 1.36 & -20.7 & 0.55 & 1.93 & 28.53 \\ AIPW & 0.49 & 1.94 & 25.03 & - & - & - & 0.61 & 2.19 & 27.71 & 0.31 & 1.72 & 18.31 \\ WICE & 0.20 & 1.81 & 10.82 & 0.15 & 1.75 & 8.57 & 0.17 & 1.96 & 8.551 & 0.23 & 1.72 & 13.52 \\ \hline \multicolumn{11}{|c|}{\(n\)=250} \\ \hline IPW & 0.12 & 1.33 & 9.05 & -0.33 & 0.86 & -38.04 & 0.12 & 1.33 & 9.05 & 3.45 & 1.33 & 259.62 \\ ICE & 0.09 & 1.05 & 8.97 & 0.09 & 1.05 & 8.97 & -0.36 & 0.92 & -38.93 & 0.39 & 1.13 & 34.17 \\ AIPW & 0.16 & 1.07 & 14.89 & - & - & - & 0.16 & 1.18 & 13.16 & 0.07 & 0.92 & 7.26 \\ WICE & 0.09 & 1.02 & 9.06 & 0.08 & 1.03 & 7.92 & 0.09 & 1.22 & 7.38 & 0.01 & 0.88 & 1.26 \\ \hline \multicolumn{11}{|c|}{\(n\)=500} \\ \hline IPW & 0.07 & 0.94 & 7.75 & -0.40 & 0.54 & -74.8 & 0.07 & 0.94 & 7.75 & 3.60 & 0.98 & 365.37 \\ ICE & 0.04 & 0.68 & 6.34 & 0.04 & 0.68 & 6.34 & -0.45 & 0.54 & -83.93 & 0.36 & 0.72 & 49.71 \\ AIPW & 0.04 & 0.68 & 5.73 & - & - & - & 0.05 & 0.88 & 5.72 & 0.03 & 0.57 & 4.89 \\ WICE & 0.05 & 0.68 & 6.96 & 0.04 & 0.67 & 6.51 & 0.04 & 0.83 & 5.33 & 0.01 & 0.55 & 1.02 \\ \hline \end{tabular}
\end{table}
Table 2: Simulation results: Bias, standard error (SE), and standardized bias (Bias\({}_{s}\)) are multiplied by 100, and true value was \(\Psi\)=0.0144. For \(n\)=100, 90, 59 and 88 simulated data sets had estimates that fell below 0 in scenarios 1, 3 and 4, respectively. For \(n\)=250, 16, 38 and 14 simulated data sets had estimates that fell below 0 in scenarios 1, 3 and 4, respectively. For \(n\)=500, 5, 31, and 1 simulated data set(s) had estimates that fell below 0 in scenarios 1, 3 and 4, respectively.
# Web Appendix for "Causal Effects of Intervening Variables in Settings With Unmeasured Confounding"
A. Example from Fulcher et al. (2020)
**Example 1** (The safer deliveries program, (Fulcher et al., 2020)).: The 'Safer deliveries' program was designed to reduce the relatively high rates of maternal and neonatal mortality in Zanzibar, Tanzania. The program provided counselling to pregnant women preparing for delivery. Women deemed to be in a high pregnancy "risk category", based on a mobile device algorithm, were instructed to deliver at a referral hospital, a specialty healthcare resource that generally incurred higher expenses for the women's family. Then, given the recommended delivery location, the mobile device algorithm also calculated an amount that they recommended the family should save in anticipation of future obstetric care costs (i.e. a "tailored savings recommendation"). At a later point in the study, the amount that the families actually saved for this purpose was recorded in the data (i.e. the "actual savings").
Using data from the 'Safer deliveries' program, Fulcher et al. (2020) aimed to evaluate "the effectiveness of this tailored savings recommendation by risk category on actual savings". They reported estimates of the PIIE of delivery risk (high risk versus low/medium risk exposure) on actual savings at delivery (outcome), mediated by a recommended savings amount calculated by the mobile device algorithm. As noted in Fulcher et al. (2020), there may be unmeasured confounding between a participant's recommended risk category and actual savings at delivery, for example by socioeconomic factors and individual's health-seeking behaviour.
Fulcher et al. (2020) argued that the PIIE in Example A.1 was an appropriate estimand to study "the effectiveness of this tailored savings recommendation" for pregnant women. However, it is not clear that the plain English justification translates to a PIIE defined by interventions on a woman's recommended risk category, an exposure that is non-intervenable or of limited scientific interest. To interpret the results of their analysis we either have to consider:
1. obstetric risk category to be _defined_ as a composite of various embodied socio-demographic and clinical features, in which case intervention on obstetric risk category can only be defined as interventions on the constituent components used to characterize the exposure. Such an intervention would be difficult to imagine as all of these embodied socio-demographic and clinical features may be hard to identify; or
2. obstetric risk category to be _defined_ as a conceptually distinct feature from the measured socio-demographic and clinical features used to compute it, and thus possibly manipulable separately from these features. In this case the "risk category" variable would simply be no more or less than the computed "risk category" that appears on the screens of mobile devices, and these risk categories could be manipulated simply by intervening on the software run on these devices. However, such interventions would be of a substantively different nature with profound differences in interpretation and will have different implications for policy-makers. Moreover, the exposure "risk category" would not be susceptible to unmeasured confounding.
Considering effects of intervening variables ameliorates this ambiguity and also clarifies assumptions. Specifically, an intervention that avoids these difficulties would be to fix the _output_ value from the algorithm, so that it recommends a delivery location as usual, but the patient's recommended savings amount is based on the delivery location the original algorithm would recommend if that patient had been deemed to be at low risk for obstetric complications. We can explicitly define the algorithm's computed risk category as \(A_{M}\) (a modifiable _intervening_ variable) that is distinct from the patient's non-modifiable embodied risk category (\(A\)). In the observed data, \(A\)=\(A_{M}\) with probability 1. However, we could conceive an intervention that modifies this intervening variable \(A_{M}\) without changing the exposure \(A\).
B. Proofs of identification of the frontdoor formula
The proof for the frontdoor identifying formula (4) is given as follows:
\[E(Y^{a^{\dagger}}) = \sum_{m}E(Y^{a^{\dagger}}|M^{a^{\dagger}}\!=\!m)P(M^{a^{\dagger}}\!=\! m)\] \[= \sum_{m}E(Y^{a^{\dagger}}|M^{a^{\dagger}}\!=\!m)P(M\!=\!m|A\!=\!a^{ \dagger})\ \ (\mbox{By A5, A6})\] \[= \sum_{m}E(Y^{a^{\dagger},m}|M^{a^{\dagger}}\!=\!m)p(m|a^{\dagger}) \ \ (\mbox{By A6})\] \[= \sum_{a,m}E(Y^{a^{\dagger},m}|A\!=\!a,M^{a^{\dagger}}\!=\!m)p(a)p( m|a^{\dagger})\ \ (\mbox{By A5})\] \[= \sum_{m}p(m|a^{\dagger})\sum_{a}E(Y|A\!=\!a,M\!=\!m)p(a)\ \ ( \mbox{By A6, A7, A8})\]
Alternatively, we also provide a slightly different proof:
\[E(Y^{a^{\dagger}}) = \sum_{u}E(Y^{a^{\dagger}}|U\!=\!u)f(u)\] \[= \sum_{u}E(Y^{a^{\dagger}}|A\!=\!a^{\dagger},U\!=\!u)f(u)\] \[= \sum_{u}E(Y|A\!=\!a^{\dagger},U\!=\!u)f(u)\] \[= \sum_{u,m}E(Y|A\!=\!a^{\dagger},U\!=\!u,M\!=\!m)f(m|a^{\dagger},u)f(u)\] \[= \sum_{m}f(m|a^{\dagger})\!\sum_{u}E(Y|A\!=\!a^{\dagger},U\!=\!u,M\!=\!m)f(u)\] \[= \sum_{m}f(m|a^{\dagger})\!\sum_{u}E(Y|U\!=\!u,M\!=\!m)f(u)\] \[= \sum_{m}f(m|a^{\dagger})\!\sum_{a}E(Y|A\!=\!a,M\!=\!m)f(a).\]
Aside from probability laws, we note the following conditions that are used in the proof above: line 2 follows by conditional exchangeability of \(Y^{a^{\dagger}}\) and \(A\) conditional on \(U\) seen in the SWIG in Figure 2b and follows from Assumption 6\({}^{1}\); line 3 follows by consistency that is implied from recursive substitution of underlying NPSEMs; line 5 follows from Assumption 6\({}^{2}\); line 6 follows from Assumptions 7 and 8 and can be seen from the conditional independence of \(Y\) and \(A\) given \(U\) and \(M\) as seen in DAG 2a\({}^{3}\); and line 7 follows from the SWIG in Figure 2c where it can be seen that \(E(Y^{m})=\sum_{u}E(Y|U\!=\!u,M\!=\!m)f(u)=\sum_{a}E(Y|A\!=\!a,M\!=\!m)f(a)\) as \(U\) or \(A\) blocks the backdoor path from \(M\) to \(Y^{m}\). Alternatively, line 7 holds algebraically by realizing the following:
Footnote 3: Assumption 7 ensures that there is no direct path from \(A\) to \(Y\) not mediated by \(M\). In addition, since there is no unmeasured common causes of mediator-outcome by Assumption 8, the only path from \(A\) to \(Y^{a^{\dagger}}\) is a backdoor path via \(U\) and a frontdoor path via \(M\).
\[\sum_{a}E(Y|A\!=\!a,M\!=\!m)f(a) =\sum_{a,u}E(Y|A\!=\!a,M\!=\!m,U\!=\!u)f(u|a,m)f(a)\] \[=\sum_{u}E(Y|U\!=\!u,M\!=\!m)\sum_{a}f(u|a)f(a)\] \[=\sum_{u}E(Y|U\!=\!u,M\!=\!m)f(u).\]
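As a numerical sanity check on this identification argument, the following sketch (with a toy structural model of our own choosing in which \(U\) confounds \(A\) and \(Y\), \(A\) affects \(Y\) only through \(M\), and there is no unmeasured \(M\)-\(Y\) confounding) compares the interventional mean \(E(Y^{a^{\dagger}})\), obtained by actually setting \(A\!=\!a^{\dagger}\), with the frontdoor plug-in computed from \((A,M,Y)\) alone.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy structural model (ours) satisfying the frontdoor conditions.
U = rng.binomial(1, 0.5, n)
A = rng.binomial(1, expit(-1 + 2 * U))
M = rng.binomial(1, expit(-1 + 2 * A))
Y = rng.binomial(1, expit(-2 + 1.5 * M - U))

# "Truth" E(Y^{a=1}) by intervention: set A to 1, redraw M and then Y, keeping U.
M1 = rng.binomial(1, expit(-1 + 2 * 1), n)
Y1 = rng.binomial(1, expit(-2 + 1.5 * M1 - U))
truth = Y1.mean()

# Frontdoor plug-in computed from (A, M, Y) alone, with U discarded.
frontdoor = sum(
    np.mean(M[A == 1] == m)
    * sum(Y[(A == a) & (M == m)].mean() * np.mean(A == a) for a in (0, 1))
    for m in (0, 1))

print(truth, frontdoor)  # agree up to Monte Carlo error
```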
**Remark 1** (The front door formula is a weighted average).: _Consider a binary treatment \(A\) taking values \(a^{\dagger}\) and \(a^{\circ}\). It can be trivially shown that \(E(Y^{a^{\dagger}})\) is a weighted average of \(E(Y|A\!=\!a^{\dagger})\) and a separable estimand of treatment on \(Y\) denoted by \(E(Y^{a_{M}=a^{\dagger},a_{Y}=a^{\circ}})\) that can be identified from the observed data that e.g., follow an extended DAG seen in Figure 2a. This extended DAG results from splitting the treatment node \(A\) into two sub-components, namely \(A_{M}\) and \(A_{Y}\). The bolded arrows from \(A\) to \(A_{M}\) and \(A_{Y}\) indicate a deterministic relationship.4 More specifically, the aforementioned two estimands are weighted by the probability of receiving treatment \(a^{\dagger}\) and \(a^{\circ}\) such that \(E(Y^{a^{\dagger}})\) equals_
Footnote 4: Specifically in the observed data, \(A\)=\(A_{M}\)=\(A_{Y}\)=1 with probability 1, and \(A\)=\(A_{M}\)=\(A_{Y}\)=0 with probability 1.
\[P(A\!=\!a^{\dagger})E(Y|A\!=\!a^{\dagger})+P(A\!=\!a^{\circ})\underbrace{\sum _{m}E(Y|A\!=\!a^{\circ},M\!=\!m)f(m|a^{\dagger})}_{E(Y^{a_{M}=a^{\dagger},a_{Y }=a^{\circ}})},\] (B.1)
_as stated in the main text (Equation (4)). We utilize this decomposition in deriving efficient influence functions for the frontdoor formula. We note that both estimands \(E(Y|A\!=\!a^{\dagger})\) and \(E(Y^{a_{M}=a^{\dagger},a_{Y}=a^{\circ}})\) allow for a direct path from \(A\) to \(Y\) not mediated through \(M\). Moreover, \(E(Y|A\!=\!a^{\dagger})\) is identified even if there are unmeasured common causes of \(A\) and \(Y\). This decomposition implies that when all individuals in the observed data take treatment \(a^{\dagger}\), then \(E(Y^{a^{\dagger}})\!=\!E(Y|A\!=\!a^{\dagger})\) and when all individuals in the observed data take treatment \(a^{\circ}\), then \(E(Y^{a^{\dagger}})\!=\!E(Y^{a_{M}=a^{\dagger},a_{Y}=a^{\circ}})\) (although this would not be identified from observed data unless \(f(m|a^{\dagger})\) is known a priori for all \(m\)). Note that when \(E(Y^{a^{\dagger}})\!=\!E(Y^{a_{M}=a^{\dagger},a_{Y}=a^{\circ}})\), \(A\!=\!A_{Y}\!=\!a^{\circ}\) for all observations, and thus intervening on \(A_{Y}\) (and creating a mediated path from \(A\!\longrightarrow\!A_{Y}\!\rightarrow\!Y\)) is unnecessary._
C. Identification and estimation of the new causally manipulable estimand in the absence of \(L\)
In this section, we will assume that \(L\) is the empty set.
**Theorem C.1**.: _The average counterfactual outcome under an intervention on \(A_{M}\) is identified by the frontdoor formula (4) under Assumptions 1, 2 and 4, that is,_
\[E(Y^{a_{M}=a^{\dagger}})\!=\!\sum_{m}\!P(M\!=\!m|A\!=\!a^{\dagger})\underbrace{ \sum_{a}E(Y|A\!=\!a,M\!=\!m)P(A\!=\!a).}_{(**)}\]
Suppose that the observed data \(\mathcal{O}\!=\!(A,M,Y)\) follow a law \(P\) which is known to belong to \(\mathcal{M}\!=\!\{P_{\theta}\!:\!\theta\!\in\!\Theta\}\), where \(\Theta\) is the parameter space. The efficient influence function \(\varphi^{\mathrm{eff}}(\mathcal{O})\) for a causal parameter \(\Psi\!\equiv\!\Psi(\theta)\) in a non-parametric model \(\mathcal{M}_{\mathrm{np}}\) that imposes no restrictions on the law of \(\mathcal{O}\) other than positivity is given by \(d\Psi(\theta_{t})/dt|_{t=0}\!=\!E\{\varphi^{\mathrm{eff}}(\mathcal{O})S( \mathcal{O})\}\), where \(d\Psi(\theta_{t})/dt|_{t=0}\) is known as the pathwise derivative of the parameter \(\Psi\) along any parametric submodel of the observed data distribution indexed by \(t\), and \(S(\mathcal{O})\) is the score function of the parametric submodel evaluated at \(t\!=\!0\) (Newey, 1994; Van Der Vaart, 2000).
The frontdoor formula can be re-expressed as a weighted average,
\[\Psi\!=\!P(A\!=\!a^{\dagger})E(Y|A\!=\!a^{\dagger})\!+\!P(A\!=\!a^{\circ}) \!\sum_{m}\!E(Y|A\!=\!a^{\circ},M\!=\!m)f(m|a^{\dagger}),\] (C.2)
and thus the efficient influence function can be broken into two components. Using the chain rule, the efficient influence function of \(\Psi\!=\!\sum_{m}\!f(m|a^{\dagger})\!\sum_{a}E(Y|A\!=\!a,M\!=\!m)f(a)\) can be derived by finding the efficient influence function of (1) \(\psi_{1}\!=\!P(A\!=\!a)\), (2) \(\psi_{2}\!=\!E(Y|A\!=\!a^{\dagger})\) and (3) \(\psi_{3}\!=\!\sum_{m}\!E(Y|A\!=\!a^{\circ},M\!=\!m)f(m|a^{\dagger})\). We will use the fact that \(\psi_{3}\) is an established identifying formula for \(E(Y^{a_{M}=a^{\dagger},a_{Y}=a^{\circ}})\), a term in the identification formula for a separable effect, which is equal to the same functional of the observed data law \(p(o)\) as \(E(Y^{a^{\circ}}|A\!=\!a^{\dagger})\) in the average treatment effect on the treated (ATT) if \(a^{\circ}\!=\!0\) and \(a^{\dagger}\!=\!1\) (Tchetgen and Shpitser, 2012).
**Theorem C.2**.: _The efficient influence function \(\varphi^{\mathrm{eff}}(\mathcal{O})\) of the frontdoor formula in \(\mathcal{M}_{\mathrm{np}}\) is given by_
\[\varphi^{\mathrm{eff}}(\mathcal{O})\!=\!\left[I(A\!=\!a^{\dagger}) \!-\!P(A\!=\!a^{\dagger})\right]\!\psi_{2}\!+\!P(A\!=\!a^{\dagger})\frac{I(A\! =\!a^{\dagger})}{P(A\!=\!a^{\dagger})}(Y\!-\!\psi_{2})+\] \[\frac{P(A\!=\!a^{\circ})}{P(A\!=\!a^{\dagger})}\!\left[I(A\!=\!a^ {\circ})\frac{P(A\!=\!a^{\dagger}|M)}{P(A\!=\!a^{\circ}|M)}\{Y\!-\!b_{0}(M)\} \!+\!I(A\!=\!a^{\dagger})\{b_{0}(M)\!-\!\psi_{3}\}\right]\!+\] \[\left[I(A\!=\!a^{\circ})\!-\!P(A\!=\!a^{\circ})]\psi_{3},\]
_where the terms in red are the efficient influence function for \(P(A{=}a^{\dagger})\psi_{2}\), the terms in blue are the efficient influence function for \(P(A{=}a^{\circ})\psi_{3}\), and \(b_{0}(M){=}E(Y|A{=}a^{\circ},M)\). The efficient influence function can be reduced to the following,_
\[\varphi^{\rm eff}(\mathcal{O}){=}I(A{=}a^{\dagger})Y+I(A{=}a^{ \circ})\psi_{3}+\\ \frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\bigg{[}I(A{=}a^{ \circ})\frac{P(A{=}a^{\dagger}|\,M)}{P(A{=}a^{\circ}|\,M)}\{Y-b_{0}(M)\}+I(A{=}a ^{\dagger})\{b_{0}(M)-\psi_{3}\}\bigg{]}-\Psi.\] (C.3)
_which can be re-expressed as:_
\[\varphi^{\rm eff}(\mathcal{O}){=}I(A{=}a^{\dagger})Y+I(A{=}a^{ \circ})\psi_{3}+\\ \bigg{[}I(A{=}a^{\circ})\frac{f(M|\,a^{\dagger})}{f(M|\,a^{ \circ})}\{Y-b_{0}(M)\}+\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ})}{P(A{=}a^{ \dagger})}\{b_{0}(M)-\psi_{3}\}\bigg{]}-\Psi,\] (C.4)
_by realizing that \(I(A{=}a^{\circ})P(A{=}a^{\circ})P(A{=}a^{\dagger}|\,M)\{P(A{=}a^{\dagger})P(A{= }a^{\circ}|\,M)\}^{-1}{=}I(A{=}a^{\circ})f(M|\,a^{\dagger})\{f(M|\,a^{\circ}) \}^{-1}\)._
After some algebra, it can be shown that (C.4) can be written as the form of the efficient influence function for \(\Psi\) given by Equation (5) in Theorem 1 in Fulcher et al. (2020) with \(C{=}\emptyset\). Following Theorem 1 in Fulcher et al. (2020), the semiparametric efficiency bound for \(\Psi\) in \(\mathcal{M}_{\rm np}\) is given by \(\mathrm{var}\big{(}\varphi^{\rm eff}\big{)}\).
#### C.1 On semiparametric estimators for the frontdoor formula
Writing the efficient influence function for the frontdoor formula (\(\Psi\)) given in Expressions (C.3) or (C.4) allows us to construct estimators that guarantee sample-boundedness. A weighted iterative conditional expectation (Weighted ICE) estimator that guarantees sample-boundedness is given in the following algorithm. In what follows, we let \(\mathbb{P}_{n}(X){=}n^{-1}{\sum_{i=1}^{n}}X_{i}\) and let \(g^{-1}\) denote a known inverse link function\({}^{5}\).
Footnote 5: For instance, if \(Y\) is dichotomous, then \(g\) is the logit link function.
```
1:Non-parametrically compute \(P(A{=}a^{\circ})\) and \(P(A{=}a^{\dagger})\).
2:Compute the MLEs \(\hat{\alpha}\) of \(\alpha\) from the observed data for the treatment model \(P(A{=}a|\,M;\alpha)\), or compute the MLEs \(\hat{\gamma}\) of \(\gamma\) from the observed data for the mediator model \(P(M{=}m|\,A;\gamma)\).
3:In the individuals whose \(A{=}a^{\circ}\), fit a regression model \(Q(M;\theta){=}g^{-1}\{\theta^{T}\phi(M)\}\) for \(b_{0}(M){=}E(Y|\,M,A{=}a^{\circ})\) where the score function for each observation is weighted by \(\hat{W}_{1}\) where \(\hat{W}_{1}\) equals \[\frac{P(A{=}a^{\circ})P(A{=}a^{\dagger}|\,M;\hat{\alpha})}{P(A{=}a^{\dagger})P( A{=}a^{\circ}|\,M;\hat{\alpha})}\]
if \(\hat{\alpha}\) was estimated in the previous step, or \(\hat{W}\) equals
\[\frac{f(M\!\mid\!A\!=\!\!a^{\dagger}\!;\hat{\gamma})}{f(M\!\mid\!A\!=\!\!a^{ \circ}\!;\!\hat{\gamma})}\]
if \(\hat{\gamma}\) was estimated in the previous step. Moreover, \(\phi(M)\) is a known function of \(M\). More specifically, we solve for \(\theta\) in the following estimating equations:
\[\mathbb{P}_{n}\left[I(A\!\!=\!\!a^{\circ})\phi(M)\hat{W}_{1}\{Y\!-\!Q(M;\! \theta)\}\right]\!\!=\!\!0\]
4: In those whose \(A\!\!=\!\!a^{\dagger}\), fit an intercept-only model \(T(\beta)\!\!=\!\!g^{-1}(\beta)\) for \(\psi_{3}\!\!=\!\!E\{b_{0}(M)\!\mid\!A\!\!=\!\!a^{\dagger}\}\), where the score function for each observation is weighted by
\[\hat{W}_{2}\!\!=\!\!\frac{P(A\!\!=\!\!a^{\circ})}{P(A\!\!=\!\!a^{\dagger})}.\]
More specifically, we solve for \(\beta\) in the following estimating equations:
\[\mathbb{P}_{n}\left[I(A\!\!=\!\!a^{\dagger})\hat{W}_{2}\!\left\{Q(M;\!\hat{ \theta})\!-\!T(\beta)\right\}\right]\!\!=\!\!0\]
5: Compute \(\hat{T}\!\equiv\!\!T(\hat{\beta})\) for all observations.
6: Estimate \(\hat{\Psi}_{WICE}\!\!=\!\!\mathbb{P}_{n}\{I(A\!\!=\!\!a^{\dagger})Y\!+\!I(A\! \!=\!\!a^{\circ})\hat{T}\}\)
Steps 3 and 4 ensure that the estimates for \(\psi_{3}\!\!=\!\!E\{b_{0}(M)\!\mid\!A\!\!=\!\!a^{\dagger}\}\) are sample bounded. Step 6 confirms that \(\hat{\Psi}_{WICE}\) is a convex combination of \(Y\) and estimates for \(\psi_{3}\), both of which are bounded by the range of the outcome \(Y\). Thus, \(\hat{\Psi}_{WICE}\) will also be sample-bounded. For instance if the outcome is binary, then \(\hat{\Psi}_{WICE}\) will always be bounded between 0 and 1. Note that estimators based on Expression (C.3) are more convenient to construct than estimators based on Expression (C.4) when (1) \(M\) is continuous, and/or (2) there are multiple mediator variables (\(M_{1}\), \(M_{2}\), \(M_{3}\dots\)).
In Web Appendix F, we prove that an estimator based on the efficient influence function given by (C.3) is doubly robust in the sense that it will be consistent as long as the model for \(P(A\!\!=\!\!a\!\mid\!M)\) or the model for \(b_{0}(M)\!\!=\!\!E(Y\!\mid\!A\!\!=\!\!a^{\circ}\!,\!M)\) is correctly specified, and an estimator based on the efficient influence function given by (C.4) is doubly robust in the sense that it will be consistent as long as the model for \(P(M\!\!=\!\!m\!\mid\!A)\) or the model for \(b_{0}(M)\) is correctly specified.
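To make the algorithm above concrete, below is a minimal sketch of the weighted ICE estimator for the (non-generalized) frontdoor functional, assuming a binary treatment, a continuous outcome with the identity link, a logistic regression model for \(P(A{=}a\mid M)\), and a linear model for \(b_{0}(M)\). All function and variable names are illustrative and are not taken from any accompanying code.

```python
# Minimal sketch of the Weighted ICE estimator above, assuming binary A,
# continuous Y (identity link), and mediator features M.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

def weighted_ice_frontdoor(A, M, Y, a_dagger=1, a_circ=0):
    A = np.asarray(A)
    Y = np.asarray(Y, dtype=float)
    M = np.asarray(M, dtype=float).reshape(len(A), -1)

    # Step 1: nonparametric treatment marginals P(A = a).
    p_dagger = np.mean(A == a_dagger)
    p_circ = np.mean(A == a_circ)

    # Step 2: treatment model P(A = a_dagger | M) via logistic regression.
    prop = LogisticRegression().fit(M, (A == a_dagger).astype(int))
    p_dagger_M = prop.predict_proba(M)[:, 1]
    p_circ_M = 1.0 - p_dagger_M

    # Step 3: regression of Y on M among A = a_circ, weighted by
    # W1 = P(a_circ) P(a_dagger | M) / {P(a_dagger) P(a_circ | M)}.
    W1 = p_circ * p_dagger_M / (p_dagger * p_circ_M)
    idx0 = (A == a_circ)
    b0 = LinearRegression().fit(M[idx0], Y[idx0], sample_weight=W1[idx0])
    b0_hat = b0.predict(M)

    # Step 4: weighted intercept-only fit of b0_hat among A = a_dagger.
    # The weight W2 = P(a_circ)/P(a_dagger) is constant, so for the identity
    # link this reduces to a plain mean, which is automatically sample-bounded.
    psi3_hat = np.mean(b0_hat[A == a_dagger])

    # Steps 5-6: plug into the convex combination of Y and psi3_hat.
    return np.mean((A == a_dagger) * Y + (A == a_circ) * psi3_hat)
```

Because the final estimate averages observed outcomes and fitted values of \(\psi_{3}\), it stays within the range of \(Y\), matching the sample-boundedness argument above.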
## Appendix D Interventionist identification with the frontdoor formula
Consider an extended causal DAG, which includes \(A\) and also the variables \(A_{M}\) and \(A_{Y}\), where the bold arrow from \(A\) to \(A_{M}\) indicates a deterministic relationship. That is, Figure J.2a is the extended DAG of such a split with \(V{=}(U,A,A_{M},A_{Y},M,Y)\), and in the observed data, with probability one under \(f(v)\), either \(A{=}A_{M}{=}A_{Y}{=}1\) or \(A{=}A_{M}{=}A_{Y}{=}0\).
Here and henceforth we use "\((G)\)" to indicate that the variables refer to the hypothetical trial where \(A_{M}\) is randomly assigned, as illustrated in Figure J.2b. Consider an intervention that sets \(A_{M}(G)\) to \(a_{M}{=}a^{\dagger}\). The average counterfactual outcome under such an intervention is indeed identified as shown in the following:
\[E(Y^{a_{M}=a^{\dagger}})\!\!=\!\!E(Y(G)^{a_{M}=a^{\dagger}})\] \[=\!\sum_{a}\!E(Y(G)^{a_{M}=a^{\dagger}}\!\mid\!A_{Y}(G)\!\!=\!\!a )P(A_{Y}(G)\!\!=\!\!a)\] \[=\!\sum_{a}\!E(Y(G)^{a_{M}=a^{\dagger}}\!\mid\!A_{Y}(G)\!\!=\!\!a,\!A_{M}(G)\!\!=\!\!a^{\dagger})P(A_{Y}(G)\!\!=\!\!a)\] \[=\!\sum_{a}\!E(Y(G)\!\mid\!A_{Y}(G)\!\!=\!\!a,\!A_{M}(G)\!\!=\!\!a ^{\dagger})P(A_{Y}(G)\!\!=\!\!a)\] \[=\!\sum_{a,m}\!E(Y(G)\!\mid\!A_{Y}(G)\!\!=\!\!a,\!A_{M}(G)\!\!=\! \!a^{\dagger},\!M(G)\!\!=\!\!m)\] \[\qquad P(M(G)\!\!=\!\!m\!\mid\!A_{Y}(G)\!\!=\!\!a,\!A_{M}(G)\!\! =\!\!a^{\dagger})P(A_{Y}(G)\!\!=\!\!a)\] \[=\!\sum_{m}\!P(M(G)\!\!=\!\!m\!\mid\!A\!=\!\!a^{\dagger})\!\sum_ {a}\!E(Y\!\mid\!A\!\!=\!\!a,\!M\!=\!\!m)P(A\!\!=\!\!a)\]
Aside from probability laws, we note the following conditions that are used in the proof above: equality 1 follows from consistency that is implied from recursive substitution; equality 3 follows by definition of \(G\) and can be seen via d-separation that follows from intervention on \(A_{M}(G)\) in Figure J.2b; equality 4 holds by consistency; equality 6 holds by the conditional independence \(M(G)\!\!\perp\!\!\!\perp\!A_{Y}(G)\!\mid\!A_{M}(G)\) and by the conditional independence \(Y(G)\!\!\perp\!\!\!\perp\!\!A_{M}(G)\!\mid\!A_{Y}(G),\!M(G)\), both of which follows from d-separation in Figure J.2b; and equality 7 holds by definition of \(G\), consistency and determinism such that the event \(\{A_{Y}(G)\!\!=\!\!a\}\) is the same as the event \(\{A(G)\!\!=\!\!a\}\) in \(G\) and that \(\{A_{Y}\!\!=\!\!a,\!A_{M}\!\!=\!\!a\}\) is the same as the event \(\{A\!\!=\!\!a\}\) in the observed data. Of course, since we are not intervening on \(A_{Y}(G)\), we can remove it from the extended DAG as below. This will only require
us to define one particular variable \(A_{M}\) that is deterministically equal to \(A\) in the observed data.
[Figure 1 about here.]
[Figure 2 about here.]
### Removing \(A_{y}\) from the extended DAG
The above results still hold if we remove \(A_{Y}\) from the extended DAG given in Figure J.2a. The modified extended DAG with \(A_{Y}\) removed is presented in Figure 1a, where again we use "\((G)\)" to indicate that the variables refer to the hypothetical trial where \(A_{M}\) is randomly assigned. In particular, conditioning sets that include \(A(G)\)=\(a\),\(A_{M}(G)\)=\(a^{\dagger}\) refer to the hypothetical trial \(G\) in which \(A_{M}\) is randomly assigned6. Then, the proof proceeds as follows:
Footnote 6: or imagine a trial where the only arrow into \(A_{M}\) is from \(A\) (which may not be deterministic).
\[E(Y^{a_{M}=a^{\dagger}}) = E(Y(G)^{a_{M}=a^{\dagger}})\] \[= \sum_{a}E(Y(G)^{a_{M}=a^{\dagger}}|A(G)\text{=}a)P(A(G)\text{=}a)\] \[= \sum_{a}E(Y(G)^{a_{M}=a^{\dagger}}|A(G)\text{=}a,A_{M}(G)\text{=}a ^{\dagger})P(A(G)\text{=}a)\] \[= \sum_{a}E(Y(G)|A(G)\text{=}a,A_{M}(G)\text{=}a^{\dagger},M(G)\text {=}m)\] \[\qquad P(M(G)\text{=}m|A(G)\text{=}a,A_{M}(G)\text{=}a^{\dagger} )P(A(G)\text{=}a)\] \[= \sum_{a,m}P(M(G)\text{=}m|A(G)\text{=}a^{\dagger},A_{M}(G)\text {=}a^{\dagger})\] \[\qquad E(Y(G)|A(G)\text{=}a,A_{M}(G)\text{=}a,M(G)\text{=}m)P(A (G)\text{=}a)\] \[= \sum_{m}P(M\text{=}m|A\text{=}a^{\dagger})\sum_{a}E(Y\,|A\text{=} a,M\text{=}m)P(A\text{=}a)\]
Aside from probability laws, we note the following conditions that are used in the proof above: equality 1 holds by consistency that follows by recursive substitution; equality 3 holds by definition of \(G\) and can be seen via d-separation - \(Y(G)^{a_{M}=a^{\dagger}}\,\hbox to 0.0pt{\perp}\mskip 2.0mu {\perp}\,A_{M}(G)\,|\,A(G)\) - that follows by the intervention on \(A_{M}(G)\) in DAG for \(G\); equality 4 holds by consistency; equality 6 holds by d-separation following Assumption (4) where \(Y(G)\,\hbox to 0.0pt{\perp}\mskip 2.0mu {\perp}\,A_{M}(G)\,|\)\(A(G)\),\(M(G)\) and \(M(G)\,\hbox to 0.0pt{\perp}\mskip 2.0mu {\perp}\,A(G)\,|\,A_{M}(G)\) (since we are assuming for now that \(L(G)\text{=}\emptyset\));
and equality 7 holds by definition of \(G\), consistency and determinism in the observed data such that the event {\(A\)=\(a\),\(A_{M}\)=\(a\)} is the same as the event {\(A\)=\(a\)}.
### A causally manipulable interpretation of the Pure Intervention Indirect Effect in the general scenario
As before, consider splitting the treatment node \(A\) on a DAG given in Figure 2a into two sub-components, namely \(A_{M}\) and \(A_{Y}\). Figure 3a is the extended DAG of such a split with \(V\)=(\(L\),\(U\),\(A\),\(A_{M}\),\(A_{Y}\),\(M\),\(Y\)). This extended DAG is analogous to the extended DAG described in Section 3 but generalized to include measured common causes (\(L\)) of \(A\), \(M\) and \(Y\). Analogous to Section D.1, we can also remove \(A_{Y}\) in the extended DAG and the results above would still hold to identify the average counterfactual outcome under an intervention on \(A_{M}\) that sets it equal to \(a^{\dagger}\). Again, consider an intervention on \(A_{M}(G)\) that sets it equal to \(a_{M}\)=\(a^{\dagger}\) as shown in Figure 3b7. The average counterfactual outcome under such an intervention is indeed identified in the following:
Footnote 7: we can also imagine a trial where there are non-deterministic arrows from \(A\) and \(L\) into \(A_{M}\).
\[E(Y^{a_{M}=a^{\dagger}}) = E(Y(G)^{a_{M}=a^{\dagger}})\] \[= \sum_{l,a}E(Y(G)^{a_{M}=a^{\dagger}}|L(G)=l,A(G)=a)P(A(G)=a|L(G)=l) P(L(G)=l)\] \[= \sum_{l,a}E(Y(G)^{a_{M}=a^{\dagger}}|A_{M}(G)=a^{\dagger},L(G)=l, A(G)=a)P(A(G)=a|L(G)=l)P(L(G)=l)\] \[= \sum_{l,a}E(Y(G)|\,A_{M}(G)=a^{\dagger},L(G)=l,A(G)=a)P(A(G)=a|L(G)=l) P(L(G)=l)\] \[= \sum_{m,l,a}E(Y(G)|\,M(G)=m,A_{M}(G)=a^{\dagger},L(G)=l,A(G)=a)\] \[\qquad P(M(G)=m|\,A_{M}(G)=a^{\dagger},A(G)=a,L(G)=l)P(A(G)=a|\,L (G)=l)P(L(G)=l)\] \[= \sum_{m,l}P(M(G)=m|\,A_{M}(G)=a^{\dagger},A(G)=a^{\dagger},L(G)=l )P(L(G)=l)\] \[\qquad\sum_{a}E(Y(G)|\,M(G)=m,L(G)=l,A(G)=a,A_{M}(G)=a)P(A(G)=a|\,L (G)=l)\] \[= \sum_{m}P(M=m|\,A=a^{\dagger},L=l)P(L=l)\sum_{a}E(Y|\,M=m,L=l,A= a)P(A=a|\,L=l)\]
Aside from probability laws, we note the following conditions that are used in the proof above: equality 1 holds by consistency that follows by recursive substitution; equality 3 follows by definition of \(G\) and can be seen via d-separation that follows from intervention on \(A_{M}(G)\) in \(G\) shown in Figure 3b (which also holds if there are arrows from \(L(G)\)
to \(A_{M}(G)\) and \(A(G)\) to \(A_{M}(G)\)); equality 4 holds by consistency; equality 6 holds by the dismissible component conditions, i.e., the conditional independences \(M(G)\perp\!\!\!\perp A(G)\mid A_{M}(G),L(G)\) and \(Y(G)\perp\!\!\!\perp A_{M}(G)\mid A(G),M(G),L(G)\);
## Appendix E Derivation of efficient influence function in Section 5
Proof.: Note that for binary treatment variables, the generalized frontdoor formula \(\Psi{=}\sum_{m,l}f(m|a^{\dagger},l)\sum_{a}E(Y|A{=}a,M{=}m,L{=}l)f(a|l)f(l)\) is equivalent to the following:
\[\Psi\!=\!P(A\!\!=\!\!a^{\dagger})E(Y|A\!\!=\!\!a^{\dagger})+P(A\!\!=\!\!a^{ \circ})\sum_{m,l}\!E(Y|M\!\!=\!\!m,\!L\!=\!\!l,\!A\!=\!\!a^{\circ})f(m|a^{ \dagger},\!l)f(l|a^{\circ}).\]
The efficient influence function in the nonparametric model \(\mathcal{M}_{NP}\) is defined as the unique mean zero, finite variance random variable \(\varphi^{\text{eff}}(\mathcal{O})\) such that
\[\frac{d\Psi(\theta_{t})}{dt}\Big{|}_{t=0}\!=\!E\{\varphi^{\text{eff}}(\mathcal{ O})S(\mathcal{O})\}\]
where \(\mathcal{O}\!\!=\!\!(L,\!A,\!M,\!Y)\), \(d\Psi(\theta_{t})/dt|_{t=0}\) is known as the pathwise derivative of parameter \(\Psi\) along a parametric submodel indexed by \(t\), and \(S(\mathcal{O})\) is the score function of the parametric submodel evaluated at \(t\!\!=\!\!0\).
The efficient influence function of \(\Psi\) can be realized by finding the efficient influence function of (1) \(\psi_{1}\!\!=\!\!P(A\!\!=\!\!a)\), (2) \(\psi_{2}\!\!=\!\!E(Y|A\!\!=\!\!a^{\dagger})\) and (3) \(\psi_{3}\!\!=\!\!\sum_{m,l}\!E(Y|M\!\!=\!\!m,\!L\!=\!\!l,\!A\!=\!\!a^{\circ})f (m|a^{\dagger},\!l)f(l|a^{\circ})\). In particular, using differentiation rules, the efficient influence function is given by:
\[\varphi^{\text{eff}}(\mathcal{O})\!\!=\!\underbrace{[I(A\!\!=\!\!a^{\dagger} )\!-\!P(A\!\!=\!\!a^{\dagger})]}_{(*)}\!\psi_{2}\!+\!\psi_{2}^{\text{eff}}P(A \!\!=\!\!a^{\dagger})\!+\!\psi_{3}^{\text{eff}}P(A\!\!=\!\!a^{\circ})\!+\! \underbrace{[I(A\!\!=\!\!a^{\circ})\!-\!P(A\!\!=\!\!a^{\circ})]}_{(**)}\!\psi_ {3}\]
where expression \((*)\) is the efficient influence function of \(P(A\!\!=\!\!a^{\dagger})\) and expression \((**)\) is the efficient influence function of \(P(A\!\!=\!\!a^{\circ})\). Moreover,
\[\frac{d\psi_{2}(\theta_{t})}{dt}\Big{|}_{t=0}\!=\!E\{YS(Y|A\!\!=\!\!a^{ \dagger})|A\!\!=\!\!a^{\dagger}\}\] \[=\!E\big{[}\big{\{}Y\!-\!E(Y|A\!\!=\!\!a^{\dagger})\big{\}}S(Y|A \!\!=\!\!a^{\dagger})|A\!\!=\!\!a^{\dagger}\big{]}\] \[=\!E\bigg{[}\frac{I(A\!\!=\!\!a^{\dagger})(Y\!\!-\!\psi_{2})}{P( A\!\!=\!\!a^{\dagger})}S(\mathcal{O})\bigg{]}\]
and
\[\frac{d\psi_{3}(\theta_{t})}{dt}\Big{|}_{t=0}=\underbrace{E\big{(}E\big{[}E\{YS(Y\mid A{=}a^{\circ},M,L)\mid A{=}a^{\circ},M,L\}\mid A{=}a^{\dagger},L\big{]}\mid A{=}a^{\circ}\big{)}}_{(\mathrm{I})}+\] \[\underbrace{E\big{[}E\big{\{}E(Y\mid A{=}a^{\circ},M,L)S(M\mid A{=}a^{\dagger},L)\mid A{=}a^{\dagger},L\big{\}}\mid A{=}a^{\circ}\big{]}}_{(\mathrm{II})}+\] \[\underbrace{E\big{[}E\big{\{}E(Y\mid A{=}a^{\circ},M,L)\mid A{=}a^{\dagger},L\big{\}}S(L\mid A{=}a^{\circ})\mid A{=}a^{\circ}\big{]}}_{(\mathrm{III})}\]
We look at each term separately. First, we consider term \((\mathrm{I})\):
\[(\mathrm{I})=E\left(E\left[E\{YS(Y\mid A{=}a^{\circ},M,L)\mid A{=}a^{\circ},M,L\}\mid A{=}a^{\dagger},L\right]\mid A{=}a^{\circ}\right)\] \[=E\left(E\left[E\left\{Y\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ}\mid L,M)}S(Y\mid A,M,L)\mid M,L\right\}\frac{I(A{=}a^{\dagger})}{P(A{=}a^{\dagger}\mid L)}\mid L\right]\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ})}\right)\] \[=E\left(E\left[E\left\{Y\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ}\mid L,M)}S(Y\mid A,M,L)\mid M,L\right\}\frac{P(A{=}a^{\dagger}\mid L,M)}{P(A{=}a^{\dagger}\mid L)}\mid L\right]\frac{P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\circ})}\right)\] \[=E\left(E\left[E\left\{\big{(}Y-b_{0}(M,L)\big{)}\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ}\mid L,M)}S(Y\mid A,M,L)\mid M,L\right\}\frac{P(A{=}a^{\dagger}\mid L,M)}{P(A{=}a^{\dagger}\mid L)}\mid L\right]\frac{P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\circ})}\right)\] \[=E\left[\big{\{}Y-b_{0}(M,L)\big{\}}\frac{I(A{=}a^{\circ})P(A{=}a^{\dagger}\mid L,M)P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\circ}\mid L,M)P(A{=}a^{\dagger}\mid L)P(A{=}a^{\circ})}S(\mathcal{O})\right]\]
which also equals
\[E\left[\big{\{}Y-b_{0}(M,L)\big{\}}\frac{I(A{=}a^{\circ})f(M\mid A{=}a^{\dagger},L)}{P(A{=}a^{\circ})f(M\mid A{=}a^{\circ},L)}S(\mathcal{O})\right].\]
Next, we consider term \((\mathrm{II})\):
\[(\mathrm{II})=E\left[E\left\{\underbrace{E(Y\mid A{=}a^{\circ},M,L)}_{b_{0}(M,L)}S(M\mid A{=}a^{\dagger},L)\mid A{=}a^{\dagger},L\right\}\mid A{=}a^{\circ}\right]\] \[=E\left[E\left\{b_{0}(M,L)\frac{I(A{=}a^{\dagger})}{P(A{=}a^{\dagger}\mid L)}S(M\mid A,L)\mid L\right\}\frac{P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\circ})}\right]\] \[=E\left[\big{\{}b_{0}(M,L)-E\big{(}b_{0}(M,L)\mid L,A\big{)}\big{\}}\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\dagger}\mid L)P(A{=}a^{\circ})}S(\mathcal{O})\right]\]
Finally, we consider term \((\mathrm{III})\):
\[(\mathrm{III})=E\left[\underbrace{E\big{\{}E(Y\mid A{=}a^{\circ},M,L)\mid A{=}a^{\dagger},L\big{\}}}_{h_{\dagger}(L)}S(L\mid A{=}a^{\circ})\mid A{=}a^{\circ}\right]\] \[=E\left\{h_{\dagger}(L)\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ})}S(L\mid A)\right\}\] \[=E\left\{\big{(}h_{\dagger}(L)-\psi_{3}\big{)}\frac{I(A{=}a^{\circ})}{P(A{=}a^{\circ})}S(\mathcal{O})\right\}\]
Thus, putting everything together and after some further algebraic simplification, we can see that the efficient influence function is indeed given by:
\[\varphi^{\text{eff}}(\mathcal{O}) = I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi_{3}+\frac{I(A{=}a^{\circ})P(A{=}a^{\dagger}\mid M,L)P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\circ}\mid M,L)P(A{=}a^{\dagger}\mid L)}\{Y-b_{0}(M,L)\}+\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\dagger}\mid L)}\{b_{0}(M,L)-h_{\dagger}(L)\}+I(A{=}a^{\circ})\{h_{\dagger}(L)-\psi_{3}\}-\Psi\]
It is trivial to realize that this Expression of the efficient influence function can also be re-expressed as the following:
\[\varphi^{\text{eff}}(\mathcal{O}) = I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi_{3}+\frac{I(A{=}a^{\circ})f(M\mid A{=}a^{\dagger},L)}{f(M\mid A{=}a^{\circ},L)}\{Y-b_{0}(M,L)\}+\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}\mid L)}{P(A{=}a^{\dagger}\mid L)}\{b_{0}(M,L)-h_{\dagger}(L)\}+I(A{=}a^{\circ})\{h_{\dagger}(L)-\psi_{3}\}-\Psi\]
## Appendix F Robustness against model misspecification
### Non-generalized front door formula
We show that an estimator based on Equation (C.3) is doubly robust in the sense that it will be consistent as long as
1. the model for \(P(A{=}a\!\mid\!M)\) is correctly specified, or
2. the model for \(E(Y\!\mid\!A{=}a^{\circ},M)\) is correctly specified.
and that an estimator based on Equation (C.4) is doubly robust in that it will be consistent as long as
1. the model for \(P(M{=}m\!\mid\!A)\) is correctly specified, or
2. the model for \(E(Y\!\mid\!A{=}a^{\circ},M)\) is correctly specified.
We consider an estimator based on Equation (C.3). Suppose that \(\alpha^{*}\), \(\theta^{*}\) and \(\beta^{*}\) are probability limits of \(\alpha\), \(\theta\) and \(\beta\), respectively. Furthermore, let \(b_{0}^{*}(M){=}Q(M;\theta^{*})\) where as before \(Q(M;\theta)\) is a model for \(b_{0}(M){=}E(Y\mid M,A{=}a^{\circ})\), and let \(\psi_{3}^{*}{=}T(\beta^{*})\) where \(T(\beta)\) is a non-parametric model for \(\psi_{3}{=}E\{b_{0}(M)\mid A{=}a^{\dagger}\}\). Under Equation (C.3), it suffices to show that
\[E\Bigg{(}I(A{=}a^{\dagger})Y{+}I(A{=}a^{\circ})\psi_{3}^{*}{+}\] \[\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\bigg{[}I(A{=}a^{\circ })\frac{P(A{=}a^{\dagger}\!\mid\!M;\!\alpha^{*})}{P(A{=}a^{\circ}\!\mid\!M;\! \alpha^{*})}\{Y{-}b_{0}^{*}(M)\}{+}I(A{=}a^{\dagger})\{b_{0}^{*}(M){-}\psi_{3} ^{*}\}\bigg{]}{-}\Psi\Bigg{)}{=}0\]
under scenario **(1)** where \(\alpha^{*}{=}\alpha\) and thus \(P(A{=}a^{\dagger}\!\mid\!M;\!\alpha^{*}){=}P(A{=}a^{\dagger}\!\mid\!M)\), **or** under scenario **(2)** where \(\theta^{*}{=}\theta\) and thus \(b_{0}^{*}(M){=}b_{0}(M)\) and \(\psi_{3}^{*}{=}\psi_{3}\).
Proof.: Suppose first that only the model for \(P(A{=}a\!\mid\!M)\) is correctly specified. Then,
\[E\Bigg{(}I(A{=}a^{\dagger})Y{+}I(A{=}a^{\circ})\psi_{3}^{*}{+}\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\bigg{[}I(A{=}a^{\circ})\frac{P(A{=}a^{\dagger}\mid M;\alpha^{*})}{P(A{=}a^{\circ}\mid M;\alpha^{*})}\{Y{-}b_{0}^{*}(M)\}{+}I(A{=}a^{\dagger})\{b_{0}^{*}(M){-}\psi_{3}^{*}\}\bigg{]}{-}\Psi\Bigg{)}\] \[=E\Bigg{(}P(A{=}a^{\dagger}\mid M)E(Y\mid A{=}a^{\dagger},M){+}P(A{=}a^{\circ}\mid M)\psi_{3}^{*}{+}\] \[\qquad\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\bigg{[}P(A{=}a^{\circ}\mid M)\frac{P(A{=}a^{\dagger}\mid M;\alpha^{*})}{P(A{=}a^{\circ}\mid M;\alpha^{*})}\{b_{0}(M){-}b_{0}^{*}(M)\}{+}P(A{=}a^{\dagger}\mid M)\{b_{0}^{*}(M){-}\psi_{3}^{*}\}\bigg{]}{-}\Psi\Bigg{)}\] \[=\sum_{m}\Bigg{(}P(A{=}a^{\dagger}\mid M{=}m)E(Y\mid A{=}a^{\dagger},M{=}m){+}P(A{=}a^{\circ}\mid M{=}m)\psi_{3}^{*}{+}\] \[\qquad\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\Big{[}P(A{=}a^{\dagger}\mid M{=}m;\alpha^{*})b_{0}(m){-}P(A{=}a^{\dagger}\mid M{=}m)\psi_{3}^{*}\Big{]}\Bigg{)}P(M{=}m){-}\Psi\] \[=E(Y\mid A{=}a^{\dagger})P(A{=}a^{\dagger}){+}\sum_{m}P(A{=}a^{\circ})P(M{=}m\mid A{=}a^{\dagger})b_{0}(m){-}\Psi\] \[=0\]
Next, suppose that only the model for \(E(Y|A{=}a^{\circ},M)\) is correctly specified. Then,
\[E\Bigg{(}I(A{=}a^{\dagger})Y{+}I(A{=}a^{\circ})\psi_{3}^{*}{+}\] \[\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\bigg{[}I(A{=}a^{ \circ})\frac{P(A{=}a^{\dagger}|M;\alpha^{*})}{P(A{=}a^{\circ}|M;\alpha^{*})} \{Y{-}b_{0}^{*}(M)\}{+}I(A{=}a^{\dagger})\{b_{0}^{*}(M){-}\psi_{3}^{*}\}\bigg{]} {-}\Psi\Bigg{)}\] \[= E\Bigg{(}P(A{=}a^{\dagger}|M)E(Y|A{=}a^{\dagger},M){+}P(A{=}a^{ \circ}|M)\psi_{3}^{*}{+}\] \[\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\Bigg{[}P(A{=}a^{ \circ}|M)\frac{P(A{=}a^{\dagger}|M;\alpha^{*})}{P(A{=}a^{\circ}|M;\alpha^{*})} \{\underbrace{b_{0}(M){-}b_{0}^{*}(M)}_{=0}\}{+}P(A{=}a^{\dagger}|M)\{b_{0}^{* }(M){-}\psi_{3}^{*}\}\Bigg{]}{-}\Psi\Bigg{)}\] \[= \sum_{m}\Bigg{(}P(A{=}a^{\dagger}|M{=}m)E(Y|A{=}a^{\dagger},M{=}m ){+}P(A{=}a^{\circ}|M{=}m)\psi_{3}^{*}{+}\] \[\frac{P(A{=}a^{\circ})}{P(A{=}a^{\dagger})}\big{[}P(A{=}a^{ \dagger}|M{=}m)\{b_{0}^{*}(M){-}\psi_{3}^{*}\}\big{]}\Bigg{)}P(M{=}m){-}\Psi\] \[= E(Y|A{=}a^{\dagger})P(A{=}a^{\dagger}){+}\sum_{m}P(A{=}a^{\circ} )P(M{=}m|A{=}a^{\dagger})b_{0}(m){-}\Psi\] \[= 0\]
The proof of double robustness for estimators based on Equation (C.4) follows analogously to the proof shown above.
### Generalized front door formula
Our proposed estimator based on the efficient influence function given by (5) and (6) is robust against 3 classes of model misspecification scenarios. Specifically, the weighted ICE estimator where a model for \(P(A{=}a|M,L)\) is specified will be consistent when at least one of the following holds:
1. the models for \(P(A{=}a|M,L)\)_and_\(P(A{=}a|L)\) are correctly specified, or
2. the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified, or
3. the models for \(b_{0}(M,L)\) and \(P(A{=}a|L)\) are correctly specified.
The weighted ICE estimator where a model for \(P(M{=}m\,|A,L)\) is specified will be consistent when at least one of the following holds
1. the models for \(P(M{=}m\,|A,L)\)_and_\(P(A{=}a\,|L)\) are correctly specified, or
2. the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified, or
3. the models for \(b_{0}(M,L)\) and \(P(A{=}a\,|L)\) are correctly specified.
We will prove robustness for an estimator based on (6) where a model for \(P(M{=}m\,|A,L)\) is specified. Proof of robustness for estimator based on (5) where a model for \(P(A{=}a\,|M,L)\) is specified follows analogously.
Suppose that \(\gamma^{*}\), \(\kappa^{*}\), \(\theta^{*}\), \(\eta^{*}\) and \(\beta^{*}\) are probability limits of \(\gamma\), \(\kappa\), \(\theta\), \(\eta\) and \(\beta\), respectively. Furthermore, let \(b_{0}^{*}(M,L){=}Q(M,L;\theta^{*})\) where as before \(Q(M,L;\theta)\) is a model for \(b_{0}(M,L){=}E(Y\mid M,L,A{=}a^{\circ})\), let \(h_{\dagger}^{*}(L){=}R(L;\eta^{*})\) where \(R(L;\eta)\) is a model for \(h_{\dagger}(L)\), and let \(\psi_{3}^{*}{=}T(\beta^{*})\) where \(T(\beta)\) is a non-parametric model for \(\psi_{3}{=}E\{h_{\dagger}(L)\mid A{=}a^{\circ}\}\).
Under Equation (6), it suffices to show that
\[E\Biggl{(}I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi_{3}^{*}+\frac{ I(A{=}a^{\circ})f(M\,|A{=}a^{\dagger},L;\gamma^{*})}{f(M\,|A{=}a^{\circ},L; \gamma^{*})}\{Y-b_{0}^{*}(M,L)\}+\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}\,|L;\kappa^{*})}{P(A{=}a^ {\dagger}\,|L;\kappa^{*})}\{b_{0}^{*}(M,L)-h_{\dagger}^{*}(L)\}+I(A{=}a^{\circ} )\{h_{\dagger}^{*}(L)-\psi_{3}^{*}\}-\Psi\Biggr{)}{=}0\]
under scenario **(1)** where \(\gamma^{*}{=}\gamma\) and \(\kappa^{*}{=}\kappa\) and thus \(P(M{=}m\,|A,L;\gamma^{*}){=}P(M{=}m\,|A,L)\) and \(P(A{=}a^{\dagger}\,|L;\kappa^{*}){=}P(A{=}a^{\dagger}\,|L)\), **or** under scenario **(2)** where \(\theta^{*}{=}\theta\) and \(\eta^{*}{=}\eta\) and thus \(b_{0}^{*}(M,L){=}b_{0}(M,L)\), \(h_{\dagger}^{*}(L){=}h_{\dagger}(L)\) and \(\psi_{3}^{*}{=}\psi_{3}\), **or** under scenario **(3)** where \(\theta^{*}{=}\theta\) and \(\kappa^{*}{=}\kappa\) and thus \(b_{0}^{*}(M,L){=}b_{0}(M,L)\) and \(P(A{=}a^{\dagger}\,|L;\kappa^{*}){=}P(A{=}a^{\dagger}\,|L)\).
Proof.: Suppose first that only the models for \(P(M{=}m\,|A,L)\)_and_\(P(A{=}a\,|L)\) are correctly specified. Then,
\[E\Biggl{(}I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi_{3}^{*}+\frac{I(A{=}a^{\circ})f(M\,|A{=}a^{\dagger},L;\gamma^{*})}{f(M\,|A{=}a^{\circ},L;\gamma^{*})}\{Y-b_{0}^{*}(M,L)\}+\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}\,|L;\kappa^{*})}{P(A{=}a^{\dagger}\,|L;\kappa^{*})}\{b_{0}^{*}(M,L)-h_{\dagger}^{*}(L)\}+I(A{=}a^{\circ})\{h_{\dagger}^{*}(L)-\psi_{3}^{*}\}-\Psi\Biggr{)}\] \[=E\Biggl{(}\sum_{m}P(A{=}a^{\dagger}\,|L)E(Y\,|A{=}a^{\dagger},M{=}m,L)f(m\,|a^{\dagger},L)+P(A{=}a^{\circ}\,|L)\psi_{3}^{*}+\] \[\qquad\sum_{m}\frac{P(A{=}a^{\circ}\,|L)f(m\,|A{=}a^{\dagger},L;\gamma^{*})}{f(m\,|A{=}a^{\circ},L;\gamma^{*})}\{b_{0}(m,L)-b_{0}^{*}(m,L)\}f(m\,|a^{\circ},L)+\] \[\qquad\frac{P(A{=}a^{\dagger}\,|L)P(A{=}a^{\circ}\,|L;\kappa^{*})}{P(A{=}a^{\dagger}\,|L;\kappa^{*})}\Big{\{}\sum_{m}b_{0}^{*}(m,L)f(m\,|a^{\dagger},L)-h_{\dagger}^{*}(L)\Big{\}}+\] \[\qquad P(A{=}a^{\circ}\,|L)\{h_{\dagger}^{*}(L)-\psi_{3}^{*}\}-\Psi\Biggr{)}\] \[=E\Biggl{(}\sum_{m}P(A{=}a^{\dagger}\,|L)E(Y\,|A{=}a^{\dagger},M{=}m,L)f(m\,|a^{\dagger},L)+\sum_{m}P(A{=}a^{\circ}\,|L)f(m\,|A{=}a^{\dagger},L;\gamma^{*})b_{0}(m,L)\Biggr{)}-\Psi\] \[=P(A{=}a^{\dagger})E(Y\,|A{=}a^{\dagger})+P(A{=}a^{\circ})\sum_{m,l}b_{0}(m,l)f(m\,|a^{\dagger},l)f(l\,|a^{\circ})-\Psi\] \[=0\]
Next, suppose that only the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified. Then,
\[E\Bigg{(}I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi^{*}_{3}{+}\frac {I(A{=}a^{\circ})f(M|A{=}a^{\dagger},L;\gamma^{*})}{f(M|A{=}a^{\circ},L;\gamma^ {*})}\{Y-b^{*}_{0}(M,L)\}{+}\] \[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}|L;\kappa^{*})}{P(A{=}a^{ \dagger}|L;\kappa^{*})}\{b^{*}_{0}(M,L)-h^{*}_{\dagger}(L)\}{+}I(A{=}a^{\circ })\{h^{*}_{\dagger}(L)-\psi^{*}_{3}\}{-}\Psi\Bigg{)}\] \[= E\Bigg{(}\sum_{m}P(A{=}a^{\dagger}|L)E(Y|A{=}a^{\dagger},M{=}m,L )f(m|a^{\dagger},L)+\underline{P(A{=}a^{\circ}|L)\overline{\psi}^{*}_{3}}{+}\] \[\sum_{m}\frac{P(A{=}a^{\circ}|L)f(m|A{=}a^{\dagger},L;\gamma^{*}) }{f(m|A{=}a^{\circ},L;\gamma^{*})}\{\underbrace{b_{0}(m,L){-}b^{*}_{0}(m,L)}_{ \rightleftharpoons 0}f(m|a^{\circ},L)+\] \[\frac{P(A{=}a^{\dagger}|L)P(A{=}a^{\circ}|L;\kappa^{*})}{P(A{=}a^ {\dagger}|L;\kappa^{*})}\left\{\underbrace{\sum_{m}b^{*}_{0}(m,L)f(m|a^{\dagger },L){-}h^{*}_{\dagger}(L)}_{=0}\right\}+\] \[P(A{=}a^{\circ}|L)\{h^{*}_{\dagger}(L){-}\not{\psi}^{*}_{3}\}{-} \Psi\Bigg{)}\] \[= E\Bigg{(}\sum_{m}P(A{=}a^{\dagger}|L)E(Y|A{=}a^{\dagger},M{=}m,L )f(m|a^{\dagger},L)+\sum_{m}P(A{=}a^{\circ}|L)f(m|A{=}a^{\dagger},L;\gamma^{*} )b_{0}(m,L)\Bigg{)}\] \[= P(A{=}a^{\dagger})E(Y|A{=}a^{\dagger})+P(A{=}a^{\circ})\sum_{m, l}b_{0}(m,l)f(m|a^{\dagger},l)f(l|a^{\circ})\] \[= 0\]
Finally, suppose that only the models for \(b_{0}(M,L)\) and \(P(A{=}a|L)\) are correctly specified. Then,
\[E\Bigg{(}I(A{=}a^{\dagger})Y+I(A{=}a^{\circ})\psi^{*}_{3}{+}\frac {I(A{=}a^{\circ})f(M|A{=}a^{\dagger},L;\gamma^{*})}{f(M|A{=}a^{\circ},L;\gamma^ {*})}\{Y-b^{*}_{0}(M,L)\}{+}\]
\[\frac{I(A{=}a^{\dagger})P(A{=}a^{\circ}|L;\kappa^{*})}{P(A{=}a^{ \dagger}|L;\kappa^{*})}\{b_{0}^{*}(M,L)-h_{\dagger}^{*}(L)\}+I(A{=}a^{\circ})\{h_{ \dagger}^{*}(L)-\psi_{3}^{*}\}-\Psi\}\] \[= E\Bigg{(}\sum_{m}P(A{=}a^{\dagger}|L)E(Y|A{=}a^{\dagger},M{=}m,L )f(m|a^{\dagger},L)+\underline{P(A{=}a^{\circ}|L)\overline{\psi}_{3}^{*}}+\] \[\sum_{m}\frac{P(A{=}a^{\circ}|L)f(m|A{=}a^{\dagger},L;\gamma^{*}) }{f(m|A{=}a^{\circ},L;\gamma^{*})}\{\underbrace{b_{0}(m,L)-b_{0}^{*}(m,L)}_{ =0}\}f(m|a^{\circ},L)+\] \[\underbrace{\underline{P(A{=}a^{\dagger}|L)P(A{=}a^{\circ}|L; \kappa^{*})}}_{\underline{P(A{=}a^{\dagger}|L;\overline{\kappa^{*}})}}\Bigg{\{} \sum_{m}b_{0}^{*}(m,L)f(m|a^{\dagger},L)-h_{\dagger}^{*}(L)\Bigg{\}}+\] \[P(A{=}a^{\circ}|L)\{h_{\dagger}^{*}(L)-\not{\psi}_{3}^{*}\}-\Psi \Bigg{)}\] \[= E\Bigg{(}\sum_{m}P(A{=}a^{\dagger}|L)E(Y|A{=}a^{\dagger},M{=}m,L )f(m|a^{\dagger},L)+\] \[P(A{=}a^{\circ}|L;\kappa^{*})\Bigg{\{}\sum_{m}b_{0}^{*}(m,L)f(m|a^ {\dagger},L)-h_{\dagger}^{*}(L)\Bigg{\}}+\underline{P(A{=}a^{\circ}|L)\not{h} _{\dagger}^{*}(L)}-\Psi\Bigg{)}\] \[= P(A{=}a^{\dagger})E(Y|A{=}a^{\dagger})+P(A{=}a^{\circ})\sum_{m,l }b_{0}(m,l)f(m|a^{\dagger},l)f(l|a^{\circ})\] \[= 0\]
## Appendix G Other relevant estimators
### Inverse probability weighted estimator
We describe one class of inverse probability weighted estimator that was used in the simulation and data analysis (see Fulcher et al., 2020 for other inverse probability weighted estimators). Specifically, we can solve for \(\Psi_{IPW}\) in the following IPW estimator to estimate \(\Psi\):
\[\mathbb{P}_{n}\Bigg{[}\frac{I(A{=}a^{\dagger})}{f(A\left|L;\hat{\kappa})}\left\{ \sum_{a}E(Y\left|A{=}a,M,L;\hat{\theta})f(a\left|L;\hat{\kappa}\right)-\Psi_{ IPW}\right\}\right]{=}0\]
where \(\mathbb{P}_{n}(X){=}n^{-1}{\sum_{i=1}^{n}}X_{i}\) and \(E(Y\left|A,M,L;\hat{\theta})\) is an estimate of \(E(Y\left|A,M,L\right)\) such that \(E(Y\left|A{=}a^{\circ},M,L\right){=}b_{0}(M,L)\).
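Since \(\Psi_{IPW}\) enters the estimating equation linearly, it can be computed as a ratio of weighted means. The sketch below assumes a binary exposure and takes pre-computed nuisance estimates as inputs; the argument names are illustrative.

```python
# Minimal sketch of the IPW estimator above for binary A, given
#   pA1_L = P(A = a_dagger | L; kappa-hat) and
#   mu1, mu0 = E(Y | A = a, M, L; theta-hat) evaluated at a_dagger / a_circ.
import numpy as np

def ipw_frontdoor(A, pA1_L, mu1, mu0, a_dagger=1):
    A = np.asarray(A)
    pA1_L = np.asarray(pA1_L, dtype=float)
    mu1 = np.asarray(mu1, dtype=float)
    mu0 = np.asarray(mu0, dtype=float)
    ind = (A == a_dagger).astype(float)
    # Sum over a of E(Y | A = a, M, L) f(a | L) for each observation.
    mixed = mu1 * pA1_L + mu0 * (1.0 - pA1_L)
    w = ind / pA1_L
    # Solving P_n[ w * (mixed - Psi) ] = 0 for Psi gives a ratio of means.
    return np.sum(w * mixed) / np.sum(w)
```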
### Iterative conditional expectation estimator
The ICE estimator that was used in the simulation and data analysis follows from the weighted ICE procedure, whereby we set \(\hat{W}{=}1\) for all regression steps.
### Targeted maximum likelihood estimator
```
1:Non-parametrically compute \(P(A{=}a^{\circ})\) and \(P(A{=}a^{\dagger})\).
2:Compute the MLEs \(\hat{\kappa}\) of \(\kappa\) from the observed data for the treatment model \(P(A{=}a\left|L;\kappa\right)\). In addition, compute the MLEs \(\hat{\alpha}\) of \(\alpha\) from the observed data for the treatment model \(P(A{=}a\left|M,L;\alpha\right)\), or compute the MLEs \(\hat{\gamma}\) of \(\gamma\) from the observed data for the mediator model \(P(M{=}m\left|A,L;\gamma\right)\)
3:(A) In the individuals whose \(A{=}a^{\circ}\), fit a regression model \(Q(M,L;\theta){=}\)\(g^{-1}\{\theta^{T}\phi(M,L)\}\) for \(b_{0}(M,L){=}E(Y\left|M,L,A{=}a^{\circ}\right)\).
3:(B) Update the previous regression. Specifically, in the individuals whose \(A{=}a^{\circ}\), fit an intercept-only regression model \(Q^{*}(M,L;\hat{\theta},\delta){=}g^{-1}\{\hat{\theta}^{T}\phi(M,L){+}\delta\}\) where the score function for each observation is weighted by \(\hat{W}_{1}\) (defined previously). More specifically, we solve for \(\delta\) in the following estimating equations: \[\mathbb{P}_{n}\Big{[}I(A{=}a^{\circ})\hat{W}_{1}\Big{\{}Y{-}Q^{*}(M,L;\hat{\theta},\delta)\Big{\}}\Big{]}{=}0\]
4:(A) In those whose \(A{=}a^{\dagger}\), fit a regression model \(R(L;\eta){=}g^{-1}\{\eta^{T}\Gamma(L)\}\) for \(h_{\dagger}(L){=}\)\(E(b_{0}(M,L)\left|L,A{=}a^{\dagger})\) using \(Q^{*}(M,L;\hat{\theta},\hat{\delta})\) (as outcome) from the last step.
4:(B) Update the previous regression. Specifically, in those whose \(A{=}a^{\dagger}\), fit an intercept-only regression model \(R^{*}(L;\nu){=}g^{-1}\{\hat{\eta}^{T}\Gamma(L){+}\nu\}\) where the score function for each observation is weighted by \(\hat{W}_{2}\). More specifically, we solve for \(\nu\) in
the following estimating equations:
\[\mathbb{P}_{n}\Big{[}I(A\!\!=\!\!a^{\dagger})\hat{W}_{2}\Big{\{}Q^{*}(M,L;\hat{\theta},\hat{\delta}){-}R^{*}(L;\nu)\Big{\}}\Big{]}\!\!=\!\!0\]
5: In those whose \(A{=}a^{\circ}\), fit another regression model \(T(\beta){=}g^{-1}(\beta)\) for \(\psi_{3}{=}E\{h_{\dagger}(L)|\,A{=}\,\,a^{\circ}\}\) with just an intercept. More specifically, we solve for \(\beta\) in the following estimating equations:
\[\mathbb{P}_{n}[I(A{=}a^{\circ})\{R^{*}(L;\hat{\nu}){-}T(\beta)\}]{=}0\]
6: Compute \(\hat{T}{\equiv}T(\hat{\beta})\) for all observations.
7: Estimate \(\hat{\Psi}_{TMLE}{=}\mathbb{P}_{n}\{I(A{=}a^{\dagger})Y{+}I(A{=}a^{\circ})\hat{T}\}\)
Machine learning algorithms can be incorporated into steps 3 and 4. In step 3(A), we can alternatively estimate \(b_{0}(M,L)\) using some function \(Q(M,L;\theta)\) with parameters \(\theta\) (possibly infinitely dimensional) obtained via some machine learning procedure. In step 3(B), instead of treating \(\hat{\theta}^{T}\phi(M,L)\) as an offset, we can set logit(\(Q(M,L;\hat{\theta})\)) as an offset. Analogous modification can be applied to step 4.
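As an illustration of the fluctuation step 3(B) for a dichotomous outcome: the weighted, intercept-only, offset update amounts to solving a one-dimensional estimating equation in \(\delta\). The sketch below solves it by bisection instead of calling a GLM routine; the function names and the bisection approach are illustrative choices, not part of the algorithm itself.

```python
# Minimal sketch of step 3(B): solve
#   P_n[ I(A = a_circ) * W1 * { Y - expit(logit(Q_hat) + delta) } ] = 0
# for the scalar fluctuation parameter delta (binary Y, logit link).
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def fluctuate(Y, Q_hat, W1, is_a_circ, lo=-10.0, hi=10.0, tol=1e-10):
    Y = np.asarray(Y, dtype=float)
    W1 = np.asarray(W1, dtype=float)
    is_a_circ = np.asarray(is_a_circ, dtype=float)
    Q_hat = np.clip(np.asarray(Q_hat, dtype=float), 1e-12, 1 - 1e-12)
    offset = np.log(Q_hat / (1.0 - Q_hat))  # logit of the initial fit

    def score(delta):
        return np.mean(is_a_circ * W1 * (Y - expit(offset + delta)))

    # The score is monotone decreasing in delta, so bisection suffices.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    delta_hat = 0.5 * (lo + hi)
    return expit(offset + delta_hat)  # fluctuated predictions Q*(M, L)
```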
## Appendix H Extensions to discrete exposure variables with more than two levels
Extensions to discrete exposure variables with more than two levels are straightforward. To see this, we can show that our estimand can be written as follows:
\[\Psi\!=\!P(A\!=\!a^{\dagger})E(Y\!\mid\!A\!=\!a^{\dagger})\!+\!\!\sum_{\forall a ^{\circ}\neq a^{\dagger}}\!P(A\!=\!a^{\circ})\!\sum_{m,l}\!E(Y\!\mid\!M\!=\!m,L \!=\!l,A\!=\!a^{\circ})f(m\!\mid\!a^{\dagger},\!l)f(l\!\mid\!a^{\circ}).\]
It then follows that in this extension, the efficient influence function \(\varphi^{\rm eff}(\mathcal{O})\) for \(\mathcal{O}{=}(L,A,M,Y)\) is given by
\[\varphi^{\rm eff}(\mathcal{O})\!=\!I(A\!=\!a^{\dagger})Y\!+\!\! \sum_{\forall a^{\circ}\neq a^{\dagger}}\!I(A\!=\!a^{\circ})\psi_{3}+\] \[\sum_{\forall a^{\circ}\neq a^{\dagger}}\!\left[\frac{I(A\!=\!a^{ \circ})P(A\!=\!a^{\dagger}\!\mid\!M,\!L)P(A\!=\!a^{\circ}\!\mid\!L)}{P(A\!=\!a ^{\circ}\!\mid\!M,\!L)P(A\!=\!a^{\dagger}\!\mid\!L)}\{Y\!-\!b_{0}(M,\!L)\}+\right.\] \[\left.\frac{I(A\!=\!a^{\dagger})P(A\!=\!a^{\circ}\!\mid\!L)}{P(A\! =\!a^{\dagger}\!\mid\!L)}\{b_{0}(M,\!L)\!-\!h_{\dagger}(L)\}+\right.\] \[\left.I(A\!=\!a^{\circ})\{h_{\dagger}(L)\!-\!\psi_{3}\}\right]\!-\!\Psi,\]
which can also be re-expressed as
\[\varphi^{\rm eff}(\mathcal{O})\!=\!I(A\!=\!a^{\dagger})Y\!+\!\! \sum_{\forall a^{\circ}\neq a^{\dagger}}\!I(A\!=\!a^{\circ})\psi_{3}+\] \[\sum_{\forall a^{\circ}\neq a^{\dagger}}\!\left[\frac{I(A\!=\!a^{ \circ})f(M\!\mid\!A\!=\!a^{\dagger},\!L)}{f(M\!\mid\!A\!=\!a^{\circ},\!L)}\{Y \!-\!b_{0}(M,\!L)\}+\right.\] \[\left.\frac{I(A\!=\!a^{\dagger})P(A\!=\!a^{\circ}\!\mid\!L)}{P(A \!=\!a^{\dagger}\!\mid\!L)}\{b_{0}(M,\!L)\!-\!h_{\dagger}(L)\}+\right.\] \[\left.I(A\!=\!a^{\circ})\{h_{\dagger}(L)\!-\!\psi_{3}\}\right]\!-\!\Psi.\]
The weighted estimator will still be sample-bounded, but will need to be slightly modified in the following way:
**Algorithm 3** Algorithm for Weighted ICE (generalized frontdoor formula for discrete exposure with more than two levels)
```
1:Non-parametrically compute \(P(A\!=\!a)\) for all values of \(a\!\in\!\mathcal{A}\).
2: Compute the MLEs \(\hat{\kappa}\) of \(\kappa\) from the observed data for the treatment model \(P(A{=}a|\)\(L;\kappa)\). In addition, compute the MLEs \(\hat{\alpha}\) of \(\alpha\) from the observed data for the treatment model \(P(A{=}a|M,L;\alpha)\), or compute the MLEs \(\hat{\gamma}\) of \(\gamma\) from the observed data for the mediator model \(P(M{=}m|A,L;\gamma)\)
3: For all levels of \(a^{\circ}\!\in\!\mathcal{A}\) that is not equal to \(a^{\dagger}\), do the following: 1. In the individuals whose \(A{=}a^{\circ}\), fit a regression model \(Q_{a^{\circ}}(M,L;\theta_{a^{\circ}}){=}\)\(g^{-1}\{\theta_{a^{\circ}}^{T}\phi_{a^{\circ}}(M,L)\}\) for \(b_{0,a^{\circ}}(M,L){=}E(Y|M,L,A{=}a^{\circ})\) where the score function for each observation is weighted by \(\hat{W}_{1,a^{\circ}}\) where \(\hat{W}_{1,a^{\circ}}1\) equals \[\frac{P(A{=}a^{\circ}|L;\hat{\kappa})P(A{=}a^{\dagger}|M,L;\hat{\alpha})}{P(A {=}a^{\dagger}|L;\hat{\kappa})P(A{=}a^{\circ}|M,L;\hat{\alpha})}\] if \(\hat{\alpha}\) was estimated in the previous step, or \(\hat{W}_{1,a^{\circ}}\) equals \[\frac{f(M|A{=}a^{\dagger},L;\hat{\gamma})}{f(M|A{=}a^{\circ},L;\hat{\gamma})}\] if \(\hat{\gamma}\) was estimated in the previous step. Moreover, \(\phi_{a^{\circ}}(M,L)\) is a known function of \(M\) and \(L\). More specifically, we solve for \(\theta\) in the following estimating equations: \[\mathbb{P}_{n}\Big{[}I(A{=}a^{\circ})\phi_{a^{\circ}}(M,L)\hat{W}_{1,a^{\circ }}\left\{Y{-}Q_{a^{\circ}}(M,L;\theta_{a^{\circ}})\right\}\Big{]}{=}0\] 2. In those whose \(A{=}a^{\dagger}\), fit a regression model \(R_{a^{\circ}}(L;\eta_{a^{\circ}}){=}g^{-1}\{\eta_{a^{\circ}}^{T}\Gamma_{a^{ \circ}}(L)\}\) for \(h_{\dagger}(L){=}E(b_{a^{\circ}}(M,L)|L,A{=}a^{\dagger})\) where the score function for each observation is weighted by \[\hat{W}_{2,a^{\circ}}{=}\frac{P(A{=}a^{\circ}|L;\hat{\kappa})}{P(A{=}a^{ \dagger}|L;\hat{\kappa})}.\] Here, \(\gamma_{a^{\circ}}(L)\) is a known function of \(L\). More specifically, we solve for \(\eta\) in the following estimating equations: \[\mathbb{P}_{n}\Big{[}I(A{=}a^{\dagger})\Gamma_{a^{\circ}}(L)\hat{W}_{2,a^{ \circ}}\Big{\{}Q_{a^{\circ}}(M,L;\hat{\theta}_{a^{\circ}}){-}R_{a^{\circ}}(L; \eta_{a^{\circ}})\Big{\}}\Big{]}{=}0\] 3. In those whose \(A{=}a^{\circ}\), fit another regression model \(T(\beta_{a^{\circ}}){=}g^{-1}(\beta_{a^{\circ}})\) for \(\psi_{3}{=}\)\(E\{h_{\dagger,a^{\circ}}(L)|A{=}a^{\circ}\}\) with just an intercept. More specifically, we solve for \(\beta_{a^{\circ}}\) in the following estimating equations: \[\mathbb{P}_{n}[I(A{=}a^{\circ})\{R_{a^{\circ}}(L;\hat{\eta}_{a^{\circ}}){-}T( \beta_{a^{\circ}})\}]{=}0\] 4. Compute \(\hat{T}_{a^{\circ}}{\equiv}T(\hat{\beta}_{a^{\circ}})\) for all observations.
4: Estimate \(\hat{\Psi}_{WICE}{=}\mathbb{P}_{n}\Big{\{}I(A{=}a^{\dagger})Y{+}{\sum_{\forall a^{\circ}\neq a^{\dagger}}}I(A{=}a^{\circ})\hat{T}_{a^{\circ}}\Big{\}}\)
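A minimal sketch of the final aggregation in step 4, assuming that the level-specific intercepts \(\hat{T}_{a^{\circ}}\) from step 3 are stored in a dictionary keyed by exposure level (the names below are illustrative):

```python
# Minimal sketch of step 4: combine the level-specific intercepts T_hat[a_circ]
# across all exposure levels other than a_dagger.
import numpy as np

def aggregate_multilevel(A, Y, T_hat, a_dagger):
    A = np.asarray(A)
    Y = np.asarray(Y, dtype=float)
    out = (A == a_dagger) * Y
    for a_circ, t in T_hat.items():  # every level other than a_dagger
        out = out + (A == a_circ) * float(t)
    return np.mean(out)
```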
## Appendix I Additional simulation study
The data-generating mechanism for our second simulation study and the model specifications are provided in Table 1. We consider four model specification scenarios to illustrate the robustness of our proposed estimator to model misspecification: (1) all models are correctly specified, (2) only the models for \(b_{0}(M,L)\) and \(h_{\dagger}(L)\) are correctly specified, (3) only the models for \(b_{0}(M,L)\) and \(P(A{=}a\mid L)\) are correctly specified, and (4) only the models for \(P(M{=}m\mid A,L)\) _and_ \(P(A{=}a\mid L)\) are correctly specified. The correct mediator model in the specification scenarios is the one used in the data generation process, and the exposure and outcome models are approximately correctly specified by including pairwise interactions between all the variables to ensure flexibility.
Table 2 shows the results from the simulation study. As expected by our theoretical derivations, when all of the working models are correctly specified, all of the estimators are nearly unbiased. The AIPW estimator and our proposed weighted ICE estimator are also nearly unbiased in the three model misspecification settings whereas the IPW and ICE estimators are not all unbiased.
[Table 1 about here.]
[Table 2 about here.]
## Appendix J Asymptotic properties
In observational studies, model misspecification in the estimation of nuisance functions can induce biased estimates of the ACE. In recent years, there has been an explosion in developing flexible data-adaptive methods (e.g. kernel smoothing, generalized additive models, ensemble learners, random forests) combined with doubly robust estimators that can reduce the risk of model misspecification and provide valid causal inference. These machine learning techniques offer more protection against model misspecification than parametric models.
From a first-order expansion of a singly robust plug-in estimator (the IPW and ICE estimators), it can be shown that we require the nuisance parameter estimators to converge to the truth at rate \(n^{-1/2}\). This rate is not attainable for non-parametric conditional mean functions. When doubly robust estimators are used with data-adaptive methods, however, this issue largely disappears, as doubly robust estimators enjoy the small bias property (Newey et al., 2004).
In this section we will examine the Remainder or Bias term from the following decomposition. For notational brevity, we suppress \(\mathcal{O}\) in the equations below. For generality, suppose that \(\Psi(\hat{P})\) is an estimator that solves the estimating equations based on the efficient influence function. We have that
\[\sqrt{n}(\Psi(\hat{P})-\Psi(P)) = \sqrt{n}\Big{[}\mathbb{P}_{n}(\varphi^{eff}(\hat{P}))-P(\varphi^{eff}(\hat{P}))\Big{]}+\sqrt{n}\Big{[}\Psi(\hat{P})+P(\varphi^{eff}(\hat{P}))-\Psi(P)\Big{]}\] \[= \mathbb{G}_{n}(\varphi^{eff}(P))+\mathbb{G}_{n}[\varphi^{eff}(\hat{P})-\varphi^{eff}(P)]+\sqrt{n}\Big{[}\Psi(\hat{P})+P(\varphi^{eff}(\hat{P}))-\Psi(P)\Big{]}\] \[= \underbrace{\mathbb{G}_{n}(\varphi(P))}_{T_{1}}+\underbrace{\mathbb{G}_{n}[\varphi(\hat{P})-\varphi(P)]}_{T_{2}}+\sqrt{n}\left[\underbrace{\Psi(\hat{P})+P(\varphi^{eff}(\hat{P}))-\Psi(P)}_{R}\right]\]
where \(\mathbb{G}_{n}[X]=\sqrt{n}(\mathbb{P}_{n}-P)(X)\) for any \(X\) and we define \(\varphi(\mathcal{O};\tilde{P})=\varphi^{eff}(\mathcal{O};\tilde{P})+\Psi(\tilde{P})\) for any \(\tilde{P}\). The first term, \(T_{1}\), is a centered sample average which converges to a mean zero Normal distribution by the central limit theorem. The second term is known as an empirical process term, which can be shown to be \(o_{p}(1)\) if we assume that the nuisance functions and their corresponding estimators are not too complex and belong to a Donsker class. Alternatively, one can use sample splitting and cross-fitting to overcome issues with overfitting (Chernozhukov et al., 2018).
The last term is known as the remainder or bias term. We will need to show that \(\sqrt{n}R=o_{p}(1)\) under some conditions on the convergence rates of the nuisance functions.
\[\Psi(\hat{P})+P(\varphi^{eff}(\hat{P}))-\Psi(P)=\] \[E_{P}\left[\underbrace{\frac{I(A{=}a^{\circ})\hat{f}(M\,|a^{ \dagger},L)}{\hat{f}(M\,|a^{\circ},L)}(Y{-}\hat{b}_{0}(M,L))}_{(A)}+\underbrace{ \frac{I(A{=}a^{\dagger})\hat{f}(a^{\circ}\,|L)}{\hat{f}(a^{\dagger}\,|L)}(\hat {b}_{0}(M,L){-}\hat{h}_{\dagger}(L))+I(A{=}a^{\circ})\hat{h}_{\dagger}(L)-P(A{=} a^{\circ})\psi_{3}}_{(B)}\right]\]
We examine the terms in blue (henceforth denoted as (A)) and term in red (henceforth denoted as (B)) in detail. Starting with the term in (B):
\[(B) {=}E_{P}\left[I(A{=}a^{\circ})(\hat{h}_{\dagger}(L){-}\,h_{ \dagger}(L))+\frac{I(A{=}a^{\dagger})\hat{f}(a^{\circ}\,|L)}{\hat{f}(a^{ \dagger}\,|L)}(\hat{b}_{0}(M,L){-}\,\hat{h}_{\dagger}(L))\right]\] \[{=}E_{P}\left[I(A{=}a^{\circ})(\hat{h}_{\dagger}(L){-}\,h_{ \dagger}(L))+\frac{I(A{=}a^{\dagger})\hat{f}(a^{\circ}\,|L)}{\hat{f}(a^{ \dagger}\,|L)}\Big{\{}E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{\dagger},L){-}\,\hat{h}_ {\dagger}(L)\Big{\}}\right]\] \[{=}E_{P}\left[f(a^{\circ}\,|L))(\hat{h}_{\dagger}(L){-}\,h_{ \dagger}(L))+\frac{f(a^{\dagger}\,|L)\hat{f}(a^{\circ}\,|L)}{\hat{f}(a^{ \dagger}\,|L)}\Big{\{}E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{\dagger},L){-}\,\hat{h}_ {\dagger}(L)\Big{\}}\right]+\] \[{\qquad f(a^{\circ}\,|L)\Big{\{}E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{ \dagger},L){-}\,E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{\dagger},L)\Big{\}}\] \[{=}E_{P}\left[\Big{\{}E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{\dagger},L) {-}\,\hat{h}_{\dagger}(L)\Big{\}}\left\{\frac{f(a^{\dagger}\,|L)\hat{f}(a^{ \circ}\,|L)}{\hat{f}(a^{\dagger}\,|L)}{-}\,f(a^{\circ}\,|L)\right\}\right]+\] \[{=}E_{P}\left[\big{\{}E_{P}(\hat{b}_{0}(M,L)\,|A{=}a^{\dagger},L) {-}\,\hat{h}_{\dagger}(L)+\,h_{\dagger}(L){-}\,h_{\dagger}(L)\big{\}}\Big{\{} \hat{f}(a^{\circ}\,|L){-}\,f(a^{\circ}\,|L)\Big{\}}\frac{1}{\hat{p}(a^{\dagger} \,|L)}\right]\] \[{=}E_{P}\left[\big{\{}h_{\dagger}(L){-}\,\hat{h}_{\dagger}(L) \big{\}}\left\{\hat{f}(a^{\circ}\,|L){-}\,f(a^{\circ}\,|L)\right\}\frac{1}{ \hat{p}(a^{\dagger}\,|L)}\right]+\] \[{\qquad E_{P}\left[E_{P}\big{\{}E_{P}(\hat{b}_{0}(M,L){-}\,b_{0}( M,L)\,|A{=}a^{\dagger},L)\big{\}}\left\{\hat{f}(a^{\circ}\,|L){-}\,f(a^{ \circ}\,|L)\right\}\frac{1}{\hat{p}(a^{\dagger}\,|L)}\right]+\] \[{\qquad\underbrace{E_{P}\left\{E_{P}\left(\hat{b}_{0}(M,L){-}\,b_ {0}(M,L)\,|A{=}a^{\dagger},L\right)I(A{=}a^{\circ})\right\}}_{(B.2)}}\]
We keep in mind the term in purple (term (B.2)) as we expand upon term (A):
\[(A) {=}E_{P}\left[\frac{I(A{=}a^{\circ})\hat{f}(M\,|a^{\dagger},L)}{ \hat{f}(M\,|a^{\circ},L)}(Y{-}\,\hat{b}_{0}(M,L))\right]\] \[{=}E_{P}\left[\frac{I(A{=}a^{\circ})\hat{f}(M\,|a^{\dagger},L)}{ \hat{f}(M\,|a^{\circ},L)}\Big{(}b_{0}(M,L){-}\,\hat{b}_{0}(M,L)\Big{)}\right]\]
\[= E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\frac{\hat{f}(M\left|a^{ \dagger},L)f(M\left|a^{\circ},L\right.\right\rangle}{\hat{f}(M\left|a^{\circ},L )f(M\left|a^{\dagger},L\right.\right\rangle}\right\}\!\!\left\{b_{0}(M,L)-\hat{b }_{0}(M,L)\left|A{=}a^{\dagger},L\right.\right\}\right]\right)\]
Now, adding (A) and (B.2) (term in purple) we get the following:
\[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{\hat{f}(M \left|a^{\dagger},L\right.\right\rangle}f(M\left|a^{\circ},L\right.)}{\hat{f}(M \left|a^{\circ},L\right.)f(M\left|a^{\dagger},L\right.)}-1\right\}\!\left\{b_{ 0}(M,L)-\hat{b}_{0}(M,L)\left|A{=}a^{\dagger},L\right\}\right]\right)\] \[= E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{f(M\left|a^{ \circ},L\right.)}{\hat{f}(M\left|a^{\dagger},L\right.)}\frac{1}{\hat{f}(M \left|a^{\circ},L\right.)}\right\}\!\left\{\hat{f}(M\left|a^{\dagger},L\right. )-f(M\left|a^{\dagger},L\right.)\right\}\!\left\{b_{0}(M,L)-\hat{b}_{0}(M,L) \left|A{=}a^{\dagger},L\right.\right\}\right]\right)+\] \[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{1}{\hat{f}(M \left|a^{\circ},L\right.)}\right\}\!\left\{f(M\left|a^{\circ},L\right.)-\hat{f}( M\left|a^{\circ},L\right.)\right\}\!\left\{b_{0}(M,L)-\hat{b}_{0}(M,L)\left|A{=}a^{ \dagger},L\right.\right\}\right]\right)\]
Thus, together we have (A)+(B) equals:
\[E_{P}\left[\left\{h_{\dagger}(L)-\hat{h}_{\dagger}(L)\right\} \!\left\{\hat{f}(a^{\circ}\left|L)-f(a^{\circ}\left|L\right.)\right\}\!\frac{1 }{\hat{p}(a^{\dagger}\left|L)}\right]+\] \[E_{P}\left[E_{P}\left\{E_{P}\left\{E_{P}(\hat{b}_{0}(M,L)-b_{0}( M,L)\left|A{=}a^{\dagger},L\right.)\right\}\!\left\{\hat{f}(a^{\circ}\left|L \right.)-f(a^{\circ}\left|L)\right.\right\}\!\frac{1}{\hat{p}(a^{\dagger}\left| L)}\right]+\] \[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{f(M\left|a^{ \circ},L\right.)}{f(M\left|a^{\dagger},L\right.)}\frac{1}{\hat{f}(M\left|a^{ \circ},L\right.)}\right\}\!\left\{\hat{f}(M\left|a^{\dagger},L\right.)-f(M \left|a^{\dagger},L\right.)\right\}\!\left\{b_{0}(M,L)-\hat{b}_{0}(M,L)\left|A {=}a^{\dagger},L\right.\right\}\right]\right)+\] \[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{1}{\hat{f}(M \left|a^{\circ},L\right.)}\right\}\!\left\{f(M\left|a^{\circ},L\right.)-\hat{f}( M\left|a^{\circ},L\right.)\right\}\!\left\{b_{0}(M,L)-\hat{b}_{0}(M,L)\left|A{=}a^{ \dagger},L\right.\right\}\right]\right)\]
By an application of Cauchy-Schwartz, we can show that as long as:
1. \(\left\|\hat{h}_{\dagger}(L)-h_{\dagger}(L)\right\|\)\(\left\|\hat{f}(a^{\circ}\left|L)-f(a^{\circ}\left|L\right.)\right\|{=}O_{p}(n^{-\nu})\), and
2. \(\left\|\hat{b}_{0}(M,L)-b_{0}(M,L)\right\|\)\(\left\|\hat{f}(a^{\circ}\left|L)-f(a^{\circ}\left|L\right.)\right\|{=}O_{p}(n^{-\nu})\), and
3. \(\left\|\hat{b}_{0}(M,L)-b_{0}(M,L)\right\|\)\(\left\|\hat{f}(M\left|A,L\right.)-f(M\left|A,L\right.)\right\|{=}O_{p}(n^{-\nu})\)
for \(\nu{>}1/2\) and where \(\left\|f(x)\right\|{=}\!\left\{\int\!\left|f(x)\right|^{2}dP(x)\right\}^{1/2}\), i.e. the \(L_{2}(P)\) norm. Then, \(\sqrt{n}\!\left\{\Psi(\hat{P})+P(\varphi^{eff}(\hat{P}))-\Psi(P)\right\}{=}o_{p}(1)\). This can be accomplished, for example, if the nuisance functions are each consistently estimated at a rate of \(n^{-1/4}\) or faster.
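For instance, if we additionally assume (as is standard) that the estimated propensity satisfies \(\hat{p}(a^{\dagger}\mid L)\geq c>0\), the first remainder term is bounded via Cauchy-Schwartz by

\[\left|E_{P}\left[\{h_{\dagger}(L)-\hat{h}_{\dagger}(L)\}\{\hat{f}(a^{\circ}\mid L)-f(a^{\circ}\mid L)\}\frac{1}{\hat{p}(a^{\dagger}\mid L)}\right]\right|\leq\frac{1}{c}\,\big{\|}\hat{h}_{\dagger}-h_{\dagger}\big{\|}\,\big{\|}\hat{f}(a^{\circ}\mid\cdot)-f(a^{\circ}\mid\cdot)\big{\|}=O_{p}(n^{-\nu}),\]

which is \(o_{p}(n^{-1/2})\) when \(\nu{>}1/2\); the remaining terms are bounded in the same way using conditions 2 and 3.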
Note that \(h_{\dagger}(L){=}\sum_{m}b_{0}(m,L)f(m\mid a^{\dagger},L)\). In our estimators, we propose estimating \(h_{\dagger}(L)\) by regressing \(b_{0}(M,L)\) on \(L\) in those whose \(A{=}a^{\dagger}\) to ensure sample-boundedness. However, if we instead estimate \(h_{\dagger}(L)\) by calculating \(\sum_{m}\hat{b}_{0}(m,L)\hat{f}(m\mid a^{\dagger},L)\) explicitly (as in Fulcher et al., 2020), then we can show that the remainder term reduces to the following asymptotically8:
\[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{f(M\left|a^{ \circ},L\right)}{f(M\left|a^{\dagger},L)}\frac{1}{f^{*}(M\left|a^{\circ},L)} \right.\right\}\right\{f^{*}(M\left|a^{\dagger},L)-f(M\left|a^{\dagger},L) \right.\right\}\left\{b_{0}(M,L)-b_{0}^{*}(M,L)\left|A=a^{\dagger},L\right. \right\}\right]\right)+\] \[E_{P}\left(I(A{=}a^{\circ})E_{P}\left[\left\{\frac{1}{f^{*}(M \left|a^{\circ},L\right)}\right\}\{f(M\left|a^{\circ},L)-f^{*}(M\left|a^{ \circ},L)\right.\right\}\left\{b_{0}(M,L)-b_{0}^{*}(M,L)\left|A=a^{\dagger},L \right.\right\}\right]\right)+o_{p}(1)\]
where \(f^{*}(M\left|A,L)\) and \(b_{0}^{*}(M,L)\) denote the limiting values of \(\hat{f}(M\left|A,L)\) and \(\hat{b}_{0}(M,L)\). This gives intuition to why the augmented inverse probability weighted estimator proposed in Fulcher et al. (2020) is consistent when models for \(b_{0}(M,L)\) and \(P(A{=}a\left|L\right.)\) are correctly specified, or when the model for \(P(M{=}m\left|A,L\right.)\) is correctly specified.
|
2305.18506 | Generalization Ability of Wide Residual Networks | In this paper, we study the generalization ability of the wide residual
network on $\mathbb{S}^{d-1}$ with the ReLU activation function. We first show
that as the width $m\rightarrow\infty$, the residual network kernel (RNK)
uniformly converges to the residual neural tangent kernel (RNTK). This uniform
convergence further guarantees that the generalization error of the residual
network converges to that of the kernel regression with respect to the RNTK. As
direct corollaries, we then show $i)$ the wide residual network with the early
stopping strategy can achieve the minimax rate provided that the target
regression function falls in the reproducing kernel Hilbert space (RKHS)
associated with the RNTK; $ii)$ the wide residual network can not generalize
well if it is trained till overfitting the data. We finally illustrate some
experiments to reconcile the contradiction between our theoretical result and
the widely observed ``benign overfitting phenomenon'' | Jianfa Lai, Zixiong Yu, Songtao Tian, Qian Lin | 2023-05-29T15:01:13Z | http://arxiv.org/abs/2305.18506v1 | # Generalization Ability of Wide Residual Networks
###### Abstract
In this paper, we study the generalization ability of the wide residual network on \(\mathbb{S}^{d-1}\) with the ReLU activation function. We first show that as the width \(m\rightarrow\infty\), the residual network kernel (RNK) uniformly converges to the residual neural tangent kernel (RNTK). This uniform convergence further guarantees that the generalization error of the residual network converges to that of the kernel regression with respect to the RNTK. As direct corollaries, we then show \(i)\) the wide residual network with the early stopping strategy can achieve the minimax rate provided that the target regression function falls in the reproducing kernel Hilbert space (RKHS) associated with the RNTK; \(ii)\) the wide residual network can not generalize well if it is trained till overfitting the data. We finally illustrate some experiments to reconcile the contradiction between our theoretical result and the widely observed "benign overfitting phenomenon"
Early Stopping · Generalization Error · Neural Tangent Kernel · Residual Networks · Uniform Convergence
## 1 Introduction
Deep neural networks have led to many great successes in various fields. It is widely observed that the performance of the network is highly dependent on the architecture of the network [17, 12, 33]. A prominent architecture, leading to a major improvement in the current deep learning, is the residual network, also known as ResNet [12]. The residual network introduced the skip connection, which makes training a network with hundreds or thousands of layers possible. Since [12] invented the residual network, it has been widely applied in various fields and has obtained incredible success.
Many works tried to explain the success of residual networks, however, most of them focus on the optimization aspects. For example, thanks to the skip connection, [34] showed that residual networks can avoid the vanishing gradient problem; [20] showed that it is easier to train a network with skip connections since the loss landscape is smoother compared with the network without skip connections; [25] showed that the network with skip connections can avoid
spurious local minima. However, it is still unclear how the skip connection affects the generalization ability of neural networks.
Jacot et al. [15] made a seminal observation that the training process of wide neural networks can be well approximated by that of kernel regression with respect to the NTK as the width \(m\to\infty\). More precisely, they first interpreted the gradient flow of neural networks as a gradient flow associated to a time-varying kernel regression problem and this time-varying kernel is called the neural network kernel (NNK). They then observed that as the width \(m\to\infty\), the NNK pointwisely converges to the neural tangent kernel (NTK), a time-invariant kernel during the training process. Inspired by the idea of the NTK, [18; 21] theoretically proved that the generalization error of wide fully-connected neural networks can be approximated by that of kernel regression with respect to the NTK. In other words, one can study the generalization ability of wide neural networks by studying the generalization ability of kernel regression with respect to the NTK.
The generalization ability of kernel regression was an active field two decades ago. With the polynomial decay of the eigenvalues associated to the kernel, [4; 24; 40] showed the spectral algorithms, including the early-stopped kernel regression with gradient flow, are minimax optimal under some mild conditions. [30; 2; 6; 22] and [23] considered the generalization performance of kernel ridgeless regression in low dimensional and high dimensional data respectively. Some papers reinvestigated the generalization performance of kernel ridge regression under the Gaussian design assumption of the eigenfunctions [5; 16; 9; 28] and offered some elaborate results. For example, [9] depicted the generalization error of kernel ridgeless regression under different source conditions, regularization and noise levels.
Some researchers have analyzed the residual neural tangent kernel (RNTK), which was first introduced in [14]. [14] showed that the residual network kernel (RNK) at initialization converges to the RNTK as the width \(m\to\infty\). [32] further showed the stability of the RNK and that the RNK converges pointwise to the RNTK during the training process. However, the ReLU function, the commonly used activation function, does not satisfy the assumptions in [32]. [3] showed that for the input distributed uniformly on the hypersphere \(\mathbb{S}^{d-1}\), the eigenfunctions of the RNTK are the spherical harmonics and the \(k^{th}\) multiple eigenvalue of the RNTK satisfies \(\mu_{k}\asymp k^{-d}\).
Though the above works offer some understanding of the residual networks and the RNTK, they say nothing about the generalization ability of the residual networks. In this paper, we perform a study on the generalization ability of the residual network on \(\mathbb{S}^{d-1}\). In Section 3, we first show that the RNK converges to the RNTK uniformly during the training process as the width \(m\to\infty\), therefore the generalization error of the residual network is well approximated by that of kernel regression with respect to the RNTK. With this approximation, we then show in Section 4 that: \(i\)) the residual network produced by an early stopping strategy is minimax rate optimal; \(ii\)) the overfitted residual network can not generalize well. It is clear that the "benign overfitting phenomenon" found on neural networks violates the latter statement. To reconcile this contradiction, we further illustrate some experiments on the role played by the signal strength in the "benign overfitting phenomenon" in Section 5.
### Contributions
\(\bullet\)_RNK converges to RNTK uniformly._ Though [32] showed that the RNK converges point-wise to the RNTK during the training process, this is insufficient for showing that the generalization error of residual networks can be well approximated by that of kernel regression with respect to the RNTK. Moreover, their claims do not hold for the ReLU activation. In this paper, we first show that the RNK converges to the RNTK uniformly and that the dynamics of training the residual network converges to that of the RNTK regression uniformly. Thus, the generalization performance of residual networks can be well approximated by that of the RNTK regression.
\(\bullet\)_Generalization performance of residual networks._ With the assumption that the regression function \(f_{*}\in\mathcal{H}\), the reproducing kernel Hilbert space (RKHS) associated to the RNTK \(r\) defined on \(\mathbb{S}^{d-1}\), we prove that training a residual network with a properly early stopping strategy can produce a residual network achieving the minimax-optimal rate \(n^{-d/(2d-1)}\), meaning that the early stopped residual network can generalize well. On the other hand, we can show that if one trains a residual network till it overfits all training data, the resulting network generalizes poorly.
### Related works
There are two lines of work on the convergence of the NNK to the NTK. The first assumes that the activation function is twice differentiable [15; 19; 32]. After showing the convergence of the NNK to the NTK, [19] proved the convergence of the neural network function to the kernel regression function with the NTK as the width \(m\to\infty\). Following this idea, [32] extended the strategy of [19] from fully-connected neural networks to residual networks. However, they only established pointwise (rather than uniform) convergence, and the ReLU function does not satisfy their assumption. The second line of work focuses on the ReLU function.
They showed the "twice differentiability" of the ReLU function with high probability over random initialization as the width is large enough (see [11, 1, 18, 21] in detail). [11] and [1] proved the pointwise convergence for two-layer fully-connected networks and multilayer fully-connected networks, respectively. Subsequently, [18] and [21] further showed the uniform convergence for two-layer fully-connected networks and multilayer fully-connected networks, respectively. To the best of our knowledge, the uniform convergence of RNK is not solved in the previous papers.
There are a few papers analyzing the generalization error of residual networks through statistical decision theory with different nonparametric regression frameworks. For example, assuming that the data is noiseless and the regression function belongs to the flow-induced function spaces defined in [27], [37] proved that the minimum-norm interpolated solution of residual networks can achieve the optimal rate. [36] showed that there exists a regularized residual network that is nearly optimal on noisy data if the regression function belongs to the Barron space defined in [26]. Although these empirical risk minimizers are optimal in their corresponding nonparametric regression frameworks, they are hard to apply in practice since the corresponding optimization problems are highly non-linear and non-convex. Thus, these static non-parametric explanations are far from a satisfactory theory. A more practicable theory, such as a generalization theory of residual networks trained by gradient descent, is needed.
### Preliminaries
Let \(f_{*}\) be a continuous function defined on a compact subset \(\mathcal{X}\subseteq\mathbb{S}^{d-1}\), the \(d-1\) dimensional sphere satisfying \(\mathbb{S}^{d-1}:=\left\{\mathbf{x}\in\mathbb{R}^{d}:\left\|\mathbf{x}\right\|_{2}=1\right\}\). Let \(\mu_{\mathcal{X}}\) be a uniform measure supported on \(\mathcal{X}\). Suppose that we have observed \(n\) i.i.d. samples \(\mathcal{D}_{n}=\left\{\left(\mathbf{x}_{i},y_{i}\right),i\in[n]\right\}\) sampling from the model:
\[y_{i}=f_{*}(\mathbf{x}_{i})+\varepsilon_{i},\quad i=1,\ldots,n, \tag{1}\]
where \(\mathbf{x}_{i}\)'s are sampled from \(\mu_{\mathcal{X}}\), \(\varepsilon_{i}\sim\mathcal{N}(0,\sigma^{2})\) for some fixed \(\sigma>0\) and \([n]\) denotes the index set \(\{1,2,...,n\}\). We collect \(n\) i.i.d. samples into matrix \(\mathbf{X}:=(\mathbf{x}_{1},\ldots,\mathbf{x}_{n})^{T}\in\mathbb{R}^{n\times d}\) and vector \(\mathbf{y}:=(y_{1},\ldots,y_{n})^{T}\in\mathbb{R}^{n}\). We are interested in finding \(\hat{f}_{n}\) based on these \(n\) samples, which can minimize the excess risk, i.e., the difference between \(\mathcal{L}(\hat{f}_{n})=\mathbf{E}_{(\mathbf{x},y)}\Big{[}\big{(}\hat{f}_{n}(\mathbf{x} )-y\big{)}^{2}\Big{]}\) and \(\mathcal{L}(f_{*})=\mathbf{E}_{(\mathbf{x},y)}\left[\big{(}f_{*}(\mathbf{x})-y\big{)}^{2}\right]\). One can easily verify the following formula about the excess risk:
\[\mathcal{E}(\hat{f}_{n})=\mathcal{L}(\hat{f}_{n})-\mathcal{L}(f_{*})=\int_{ \mathcal{X}}\Big{(}\hat{f}_{n}(\mathbf{x})-f_{*}(\mathbf{x})\Big{)}^{2}\,\mathrm{d}\mu _{\mathcal{X}}(\mathbf{x}). \tag{2}\]
It is clear that the excess risk is an equivalent evaluation of the generalization performance of \(\hat{f}_{n}\).
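When \(\mu_{\mathcal{X}}\) is the uniform measure on the whole sphere, the excess risk (2) can be estimated numerically by Monte Carlo integration. The following sketch is illustrative only; the function names and the choice of \(10^5\) samples are ours, not part of the paper.

```python
import numpy as np

def sample_sphere(n, d, rng):
    """Draw n points uniformly from the unit sphere S^{d-1}."""
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def excess_risk(f_hat, f_star, d, n_mc=100_000, seed=0):
    """Monte Carlo estimate of Eq. (2): E_x[(f_hat(x) - f_star(x))^2]."""
    x = sample_sphere(n_mc, d, np.random.default_rng(seed))
    return float(np.mean((f_hat(x) - f_star(x)) ** 2))
```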
**Notation.** For two sequences \(a_{n},b_{n},\ n\geq 1\) of non-negative numbers, we write \(a_{n}=O(b_{n})\) (or \(a_{n}=\Omega(b_{n})\)) if there exists an absolute constant \(C>0\) such that \(a_{n}\leq Cb_{n}\) (or \(a_{n}\geq Cb_{n}\)). We also denote \(a_{n}=\Theta(b_{n})\) or \(a_{n}\asymp b_{n}\) if \(a_{n}=O(b_{n})\) and \(a_{n}=\Omega(b_{n})\) both hold. We use \(\text{poly}(x,y)\) to represent a polynomial of \(x,y\) whose coefficients are absolute constants. The ReLU function is denoted as \(\sigma(x)=\max(x,0)\).
## 2 Residual neural tangent kernel
Following the formation of [14, 3, 32], we define an \(L\)-layer fully connected residual network \(f^{m}(\mathbf{x};\mathbf{\theta})\) with the width \(m\) and the parameters \(\mathbf{\theta}\) in a recursive manner
\[\begin{split} f^{m}(\mathbf{x};\mathbf{\theta})&=\mathbf{v}^{ T}\mathbf{\alpha}^{(L)}\\ \mathbf{\alpha}^{(l)}&=\mathbf{\alpha}^{(l-1)}+a\sqrt{\frac{ 1}{m}}\mathbf{V}^{(l)}\sigma\Bigg{(}\sqrt{\frac{2}{m}}\mathbf{W}^{(l)}\mathbf{\alpha}^{(l- 1)}\Bigg{)}\\ \mathbf{\alpha}^{(0)}&=\sqrt{\frac{1}{m}}\mathbf{A}\mathbf{x} \end{split} \tag{3}\]
for \(l\in[L]\) with parameters \(\mathbf{A}\in\mathbb{R}^{m\times d}\), \(\mathbf{W}^{(l)},\mathbf{V}^{(l)}\in\mathbb{R}^{m\times m}\) and \(\mathbf{v}\in\mathbb{R}^{m}\). All the weights are initialized by the standard normal distribution, i.e., \(\mathbf{v}_{i}\), \(\mathbf{V}^{(l)}_{i,j}\), \(\mathbf{W}^{(l)}_{i,j}\), \(\mathbf{A}_{i,k}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)\) for \(i,j\in[m],k\in[d],l\in[L]\). The hyper-parameter \(a\) is a constant satisfying \(0<a<1\) (normally \(a=L^{-\gamma}\) with \(0.5<\gamma\leq 1\)[14, 3]).
Adopting the derivation in [14], we assume both \(\mathbf{A}\) and \(\mathbf{v}\) are fixed at their initial values and \(\mathbf{W}^{(l)},\mathbf{V}^{(l)}\) are learned. Thus, \(\mathbf{\theta}=\text{vec}\Big{(}\big{\{}\mathbf{W}^{(l)},\mathbf{V}^{(l)}\big{\}}_{l=1}^{L} \Big{)}\) is the training parameters and the length of \(\mathbf{\theta}\) is \(2Lm^{2}\). Given \(n\) samples \(\{(\mathbf{x}_{i},y_{i}),i\in[n]\}\)
from (1), the network is trained by gradient descent with respect to the empirical loss
\[\hat{\mathcal{L}}_{n}(f^{m})=\frac{1}{2n}\sum_{i=1}^{n}\big{(}y_{i}-f^{m}(\mathbf{x}_{i};\mathbf{\theta})\big{)}^{2}. \tag{4}\]
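For concreteness, here is a minimal NumPy sketch of the architecture (3) and the empirical loss (4). The helper names, the default \(a=0.5\), and the per-sample loop are our own choices and are not part of the formal setup.

```python
import numpy as np

def init_resnet(d, m, L, seed=0):
    """Standard-normal initialization of A, {W^(l), V^(l)}, v as in Eq. (3)."""
    rng = np.random.default_rng(seed)
    return {
        "A": rng.standard_normal((m, d)),
        "W": [rng.standard_normal((m, m)) for _ in range(L)],
        "V": [rng.standard_normal((m, m)) for _ in range(L)],
        "v": rng.standard_normal(m),
    }

def forward(params, x, a=0.5):
    """Residual network f^m(x; theta) of Eq. (3) for a single input x of shape (d,)."""
    m = params["v"].shape[0]
    alpha = params["A"] @ x / np.sqrt(m)                      # alpha^(0)
    for W, V in zip(params["W"], params["V"]):
        pre = np.sqrt(2.0 / m) * (W @ alpha)
        alpha = alpha + (a / np.sqrt(m)) * (V @ np.maximum(pre, 0.0))
    return float(params["v"] @ alpha)                         # v^T alpha^(L)

def empirical_loss(params, X, y, a=0.5):
    """Empirical squared loss of Eq. (4)."""
    preds = np.array([forward(params, xi, a) for xi in X])
    return float(0.5 * np.mean((y - preds) ** 2))
```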
It is well known that the training of neural networks is a highly non-linear problem. The NTK, a time-invariant kernel proposed by Jacot [15], can be used to analyze the training process of neural networks as the width \(m\rightarrow\infty\). The RNTK, denoted by \(r\), is first given in [14]. Then [3] showed that the RNTK \(r\) on \(\mathbb{S}^{d-1}\) is the inner-product kernel (Theorem 4.1) and simplified the expression of RNTK on \(\mathbb{S}^{d-1}\) as follows (Corollary B.2)
\[r(\mathbf{x},\mathbf{x}^{\prime})=C\sum_{l=1}^{L}B_{l+1}(\mathbf{x},\mathbf{x}^{\prime})\left[ (1+a^{2})^{l-1}\kappa_{1}\Big{(}\frac{K_{l-1}(\mathbf{x},\mathbf{x}^{\prime})}{(1+a^{ 2})^{l-1}}\Big{)}+K_{l-1}(\mathbf{x},\mathbf{x}^{\prime})\kappa_{0}\Big{(}\frac{K_{l- 1}(\mathbf{x},\mathbf{x}^{\prime})}{(1+a^{2})^{l-1}}\Big{)}\right], \tag{5}\]
where \(C=\frac{1}{2L(1+a^{2})^{L-1}}\), \(K_{0}(\mathbf{x},\mathbf{x}^{\prime})=\mathbf{x}^{T}\mathbf{x}^{\prime}\), \(B_{L+1}(\mathbf{x},\mathbf{x}^{\prime})=1\) and
\[\kappa_{0}(u)=\frac{1}{\pi}(\pi-\arccos u),\quad\kappa_{1}(u)=\frac{1}{\pi} \left(u(\pi-\arccos u)+\sqrt{1-u^{2}}\right)\]
\[K_{l}(\mathbf{x},\mathbf{x}^{\prime})=K_{l-1}(\mathbf{x},\mathbf{x}^{\prime})+a^{2}(1+a^{2})^{l-1}\kappa_{1}\bigg{(}\frac{K_{l-1}(\mathbf{x},\mathbf{x}^{\prime})}{(1+a^{2})^{l-1}}\bigg{)},\quad l=1,\ldots,L-1\]
[3] emphasized that for \(L=1\), the RNTK \(r\) is equal to the NTK of fully-connected neural networks. Thus, we only consider residual networks with \(L\geq 2\) in this context.
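As an illustration, the recursion entering (5) can be evaluated numerically for unit-norm inputs. In the sketch below the backward factors \(B_{l+1}\), whose explicit recursion is given in [3] and not reproduced in this text, are passed in as an argument and default to 1 purely as a placeholder; all function names are our own.

```python
import numpy as np

def kappa0(u):
    u = np.clip(u, -1.0, 1.0)
    return (np.pi - np.arccos(u)) / np.pi

def kappa1(u):
    u = np.clip(u, -1.0, 1.0)
    return (u * (np.pi - np.arccos(u)) + np.sqrt(1.0 - u ** 2)) / np.pi

def rntk(x, xp, L=3, a=0.5, B=None):
    """Evaluate the simplified RNTK r(x, x') of Eq. (5) for unit-norm x, x'.

    B holds the backward factors (B_2, ..., B_{L+1}) from [3]; only
    B_{L+1} = 1 is stated in the text, so by default every factor is set
    to 1 here as a placeholder assumption.
    """
    if B is None:
        B = np.ones(L)
    K = [float(np.dot(x, xp))]                       # K_0
    for l in range(1, L):                            # K_1, ..., K_{L-1}
        norm = (1.0 + a ** 2) ** (l - 1)
        K.append(K[-1] + a ** 2 * norm * kappa1(K[-1] / norm))
    C = 1.0 / (2.0 * L * (1.0 + a ** 2) ** (L - 1))
    total = 0.0
    for l in range(1, L + 1):
        norm = (1.0 + a ** 2) ** (l - 1)
        u = K[l - 1] / norm
        total += B[l - 1] * (norm * kappa1(u) + K[l - 1] * kappa0(u))
    return C * total
```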
In this paper, we consider \(r(\mathbf{x},\mathbf{x}^{\prime})\) under the uniform measure on \(\mathbb{S}^{d-1}\), which admits the following Mercer decomposition:
\[r(\mathbf{x},\mathbf{x}^{\prime})=a^{2}\sum_{k=0}^{\infty}\mu_{k}\sum_{h=1}^{N(d,k)}Y _{k,h}(\mathbf{x})Y_{k,h}(\mathbf{x}^{\prime}),\]
where \(N(d,k)\) denotes the number of spherical harmonics of frequency \(k\) and \(\{Y_{k,h}\}\) are the spherical harmonics on \(\mathbb{S}^{d-1}\). [3] showed the decay rate of \(\mu_{k}\) is \(k^{-d}\) for any fixed \(L\geq 2\). To better investigate the generalization performance of kernel regression, we rewrite \(r(\mathbf{x},\mathbf{x}^{\prime})\) as the following Mercer decomposition:
\[r(\mathbf{x},\mathbf{x}^{\prime})=\sum_{j=1}^{\infty}\lambda_{j}\phi_{j}(\mathbf{x})\phi_{ j}(\mathbf{x}^{\prime}),\]
where \(\{\lambda_{j}\}_{j=1}^{\infty}\) and \(\{\phi_{j}\}_{j=1}^{\infty}\) are the decreasing eigenvalues sequence and corresponding eigenfunctions sequence of \(r(\mathbf{x},\mathbf{x}^{\prime})\) respectively. The decay rate of \(\lambda_{j}\) is more commonly used in analyzing the generalization ability of kernel regression. With the decay rate of \(\mu_{k}\), the decay rate of \(\lambda_{j}\) can be derived and guarantees the positive definiteness of the kernel \(r\).
**Lemma 2.1**.: _Let \(\lambda_{j}\) be the eigenvalues of the RNTK \(r\) for any fixed \(L\geq 2\). Then we have \(\lambda_{j}\asymp j^{-\frac{d}{d-1}}\)._
**Corollary 2.2**.: \(r(\mathbf{x},\mathbf{x}^{\prime})\) _is positive definite under the uniform measure on \(\mathbb{S}^{d-1}\)._
The proof of Lemma 2.1 is presented in Supplementary Material.
## 3 Uniform convergence of wide residual networks
One can consider the empirical loss \(\hat{\mathcal{L}}_{n}\) as a function defined on the parameter space, which induces a gradient flow given by
\[\dot{\mathbf{\theta}}(t)=\frac{\partial}{\partial t}\mathbf{\theta}(t)=-\nabla_{\mathbf{\theta}}\hat{\mathcal{L}}_{n}(f_{t}^{m})=-\frac{1}{n}\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{X})(f_{t}^{m}(\mathbf{X})-\mathbf{y}) \tag{6}\]
where we emphasize that \(\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{X})\) is a \(2Lm^{2}\times n\) matrix. When the loss function \(\hat{\mathcal{L}}_{n}\) is viewed as a function defined on \(\mathcal{F}^{m}\), the space consisting of all residual networks \(f_{t}^{m}\), it induces a gradient flow in \(\mathcal{F}^{m}\) given by
\[\dot{f}_{t}^{m}(\mathbf{x})=\frac{\partial}{\partial t}f_{t}^{m}(\mathbf{x})=\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{x})^{T}\dot{\mathbf{\theta}}(t)=-\frac{1}{n}r_{t}^{m}(\mathbf{x},\mathbf{X})(f_{t}^{m}(\mathbf{X})-\mathbf{y}), \tag{7}\]
where \(\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{x})\) is a \(2Lm^{2}\times 1\) vector, \(r_{t}^{m}(\mathbf{x},\mathbf{X})=\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{x})^{T}\nabla_{\mathbf{ \theta}}f_{t}^{m}(\mathbf{X})\) is a \(1\times n\) vector and \(r_{t}^{m}\) is a time-varying kernel function
\[r_{t}^{m}(\mathbf{x},\mathbf{x}^{\prime})=\nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{x})^{T} \nabla_{\mathbf{\theta}}f_{t}^{m}(\mathbf{x}^{\prime}).\]
In order to prevent any potential confusion with the RNTK denoted by \(r\), we will refer to the time-varying kernel \(r_{t}^{m}\) as the RNK in this context.
The gradient flow equations (6) and (7) clearly indicate that the training process of residual networks is influenced by the random initialization of the parameters. To maintain focus and avoid unnecessary digressions, we adopt a commonly used initialization method from the existing literature [13, 8, 18, 21], which ensures that \(f_{0}^{m}(\mathbf{x})\) is initialized to zero. The details are given in the Appendix.
It is well-known that the explicit solution of the highly non-linear equations (6) and (7) is hard to find out. But one can follow the idea of the NTK approach, using the NTK regression solutions to characterize the asymptotic behavior of the exact solution of these equations. For example, the time-independent kernel RNTK offered us a simplified version of the equation (7):
\[\dot{f}_{t}^{NTK}(\mathbf{x})=\frac{\partial}{\partial t}f_{t}^{NTK}(\mathbf{x})=- \frac{1}{n}r(\mathbf{x},\mathbf{X})(f_{t}^{NTK}(\mathbf{X})-\mathbf{y}) \tag{8}\]
where \(r(\mathbf{x},\mathbf{X})=(r(\mathbf{x},\mathbf{x}_{1}),\ldots,r(\mathbf{x},\mathbf{x}_{n}))\in\mathbb{ R}^{1\times n}\). This equation is defined on the space \(\mathcal{H}\), the RKHS associated to the kernel \(r\). The equation (8) is called the gradient flow associated to the kernel regression with respect to the kernel \(r\). Similar to the special initialization of the residual network function, we assume that the initial function \(f_{0}^{NTK}(\mathbf{x})=0\). Then the equation (8) can be solved explicitly:
\[f_{t}^{NTK}(\mathbf{x})=r(\mathbf{x},\mathbf{X})r(\mathbf{X},\mathbf{X})^{-1}(\mathbf{I}-\mathrm{e}^{ -\frac{1}{n}r(\mathbf{X},\mathbf{X})t})\mathbf{y} \tag{9}\]
where \(r(\mathbf{X},\mathbf{X}):=\{r(\mathbf{x}_{i},\mathbf{x}_{j})\}_{i,j=1}^{n}\in\mathbb{R}^{n \times n}\).
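The closed-form solution (9) can be evaluated directly once the kernel matrix has been formed. The sketch below uses `scipy.linalg.expm` for the matrix exponential; the function names are ours, and the kernel is supplied by the caller (for instance the `rntk` sketch above).

```python
import numpy as np
from scipy.linalg import expm

def f_ntk_t(x_new, X, y, t, kernel):
    """Kernel-regression gradient-flow solution of Eq. (9) at training time t."""
    n = X.shape[0]
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])   # r(X, X)
    k_new = np.array([kernel(x_new, xj) for xj in X])          # r(x, X)
    A = np.eye(n) - expm(-(t / n) * K)                         # I - exp(-tK/n)
    return float(k_new @ np.linalg.solve(K, A @ y))
```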
[32] showed the RNK point-wisely converges to the RNTK during the training process, i.e., for given \(\mathbf{x},\mathbf{x^{\prime}}\in\mathcal{X}\), \(\sup_{t>0}|r_{t}^{m}(\mathbf{x},\mathbf{x^{\prime}})-r(\mathbf{x},\mathbf{x^{\prime}})|\to 0\) with high probability as the width \(m\to\infty\). However, it is insufficient for showing that the generalization error of residual networks can be well approximated by that of kernel regression with respect to RNTK. Moreover, [32] assumed the activation function satisfies that for \(\forall x,x^{\prime}\in\mathbb{R}\), \(|\sigma^{\prime}(x)-\sigma^{\prime}(x^{\prime})|\leq C|x-x^{\prime}|\) for some constant \(C>0\), which is not applicable to the ReLU function.
Let us denote by \(\lambda_{0}=\lambda_{\min}\left(r(\mathbf{X},\mathbf{X})\right)\) the minimal eigenvalue of the kernel matrix \(r\). Corollary 2.2 has shown that \(r\) is positive definite and thus \(\lambda_{0}>0\) almost surely. One of our main technical contributions is that the convergence of the RNK \(r_{t}^{m}\) to the RNTK \(r\) is uniform with respect to all \(t\geq 0\) and all \(\mathbf{x}\in\mathcal{X}\). Thus, the excess risk \(\mathcal{E}(f_{t}^{m})\) of the wide residual network \(f_{t}^{m}\) could be well approximated by the excess risk \(\mathcal{E}(f_{t}^{NTK})\) of the RNTK regression function \(f_{t}^{NTK}\).
**Theorem 3.1**.: _There exists a polynomial \(\mathrm{poly}(\cdot):\mathbb{R}^{5}\to\mathbb{R}\), such that for any given training data \(\{(\mathbf{x}_{i},y_{i}),i\in[n]\}\), any \(\epsilon>0\) and any \(\delta\in(0,1)\), when the width \(m\geq\mathrm{poly}(n,\lambda_{0}^{-1},\|\mathbf{y}\|_{2},\log(1/\delta),1/\epsilon)\), we have_
\[\sup_{t\geq 0}|\mathcal{E}(f_{t}^{m})-\mathcal{E}(f_{t}^{NTK})|\leq\epsilon\]
_holds with probability at least \(1-\delta\) with respect to the random initialization._
The key to the proof of Theorem 3.1 is to show the uniform convergence of the RNK \(r_{t}^{m}\), which is given by the following proposition:
**Proposition 3.2**.: _There exists a polynomial \(\mathrm{poly}(\cdot):\mathbb{R}^{4}\to\mathbb{R}\), such that for any given training data \(\{(\mathbf{x}_{i},y_{i}),i\in[n]\}\) and any \(\delta\in(0,1)\), when the width \(m\geq\mathrm{poly}(n,\lambda_{0}^{-1},\|\mathbf{y}\|_{2},\log(1/\delta))\), we have_
\[\sup_{t\geq 0}\sup_{\mathbf{x},\mathbf{x^{\prime}}\in\mathcal{X}}|r_{t}^{m}(\mathbf{x},\mathbf{x^{ \prime}})-r(\mathbf{x},\mathbf{x^{\prime}})|\leq O(m^{-\frac{1}{12}}\sqrt{\log(m)}), \tag{10}\]
_with probability at least \(1-\delta\)._
By applying the proof strategy in [18, Proposition 3.2, Theorem 3.1], we can utilize the Proposition 3.2 in the current paper to finish the proof of Theorem 3.1.
Proposition 3.2 shows the uniform convergence of the kernel for any \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{X}\subseteq\mathbb{S}^{d-1}\). The proof of Proposition 3.2 is divided into three parts: \(i)\) we show that the RNK \(r_{t}^{m}\) converges point-wise to the RNTK \(r\); \(ii)\) we prove the Hölder continuity of \(r_{t}^{m}\) and the Hölder continuity of \(r\); \(iii)\) we use the \(\epsilon\)-net argument to establish the uniform convergence of \(r_{t}^{m}\) to \(r\). Proposition 3.2 extends the "fully-connected network version" that appears in [21] to the considered residual network. Yet, since the structure of the residual network is much more complex than that of the fully-connected network, the proof of Proposition 3.2 is more challenging. The proof of Proposition 3.2 is presented in the Supplementary Material.
## 4 Generalization ability of wide residual networks
To facilitate a comprehensive analysis of the generalization performance of a residual network, it is essential to define the class of functions to which \(f_{*}\) belongs. In this study, we introduce the following assumption:
**Assumption 1**.: _The regression function \(f_{*}\in\mathcal{H}\) and \(\|f_{*}\|_{\mathcal{H}}\leq R\) for some constant \(R\), where \(\mathcal{H}\) is the RKHS associated to the kernel \(r\)._
Theorem 3.1 shows that \(\mathcal{E}(f_{t}^{m})\) is well approximated by \(\mathcal{E}(f_{t}^{NTK})\). Thus we can focus on studying the generalization ability of the RNTK regression function. Assumption 1 is a standard assumption appearing in the kernel regression literature (see e.g., [7; 38; 31; 4; 24]).
### Wide residual networks with early stopping achieve the minimax rate
The early stopping strategy is a widely employed technique in the training of various models, including kernel regression and neural networks, among others. Extensive research studies have provided solid theoretical foundations for the efficacy of early stopping [38; 31; 4; 24], where the determination of the optimal stopping time relies on the decay rate of eigenvalues associated with the kernel. It is worth noting that Lemma 2.1 furnishes us with the decay rate of eigenvalues for the kernel \(r\), while Theorem 3.1 establishes the assurance that the excess risk of the RNTK regression function \(f_{t}^{NTK}\) provides a reliable approximation of the excess risk of the residual network \(f_{t}^{m}\). As a result, we can derive the following Theorem 4.1.
**Theorem 4.1** (Early-stopped residual networks can generalize).: _Suppose Assumption 1 holds. For any given \(\delta\in(0,1)\), if one trains a residual network with width \(m\) that is sufficiently large and stops the gradient flow at time \(t_{*}\propto n^{d/(2d-1)}\), then for sufficiently large \(n\), there exists a constant \(C\) independent of \(\delta\) and \(n\), such that_
\[\mathcal{E}(f_{t_{*}})\leq Cn^{-\frac{d}{2d-1}}\log^{2}\frac{6}{\delta} \tag{11}\]
_holds with probability at least \(1-\delta\)._
[4] established the following minimax rate of regression over the RKHS \(\mathcal{H}\) associated to \(r\):
\[\inf_{f_{*}}\sup_{f_{*}\in\mathcal{H},\|f_{*}\|_{\mathcal{H}}\leq R}\mathbf{E} \mathcal{E}(\hat{f}_{n})=\Omega(n^{-\frac{d}{2d-1}}). \tag{12}\]
Thus, we have proved that training a wide residual network with the early stopping strategy achieves the optimal rate. The proof of Theorem 4.1 can be found in Supplementary Material.
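As a usage illustration of Theorem 4.1, the stopping time below is taken proportional to \(n^{d/(2d-1)}\). The proportionality constant, the toy data-generating function, and all parameter values are our own choices and serve only as an example; the snippet reuses the `rntk` and `f_ntk_t` sketches above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, L, a = 50, 3, 3, 0.5
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)          # samples on S^{d-1}
y = X[:, 0] + 0.1 * rng.standard_normal(n)             # toy regression data

t_star = 1.0 * n ** (d / (2 * d - 1))                  # early stopping time (constant is a tuning choice)
kern = lambda u, v: rntk(u, v, L=L, a=a)
x_test = np.array([1.0, 0.0, 0.0])
print(f_ntk_t(x_test, X, y, t_star, kern))
```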
### Overfitted residual networks generalize poorly
In this subsection, we are more interested in the generalization performance of \(f_{t}^{m}(\mathbf{x})\) for sufficiently large \(t\), such that \(f_{t}^{m}(\mathbf{x})\) (nearly) fits the given data. As \(t\to\infty\), Equation (9) becomes the kernel interpolation. Theorem 3.1 in [22] showed that the kernel interpolation generalizes poorly. The following theorem, which is a consequence of Theorem 3.1 in the current paper and Theorem 3.1 in [22], shows that overfitted residual networks generalize poorly.
**Theorem 4.2** (Overfitted residual networks generalize poorly).: _For any \(\epsilon>0\) and \(\delta\in(0,1)\), there is some constant \(c>0\) such that when \(n\) and \(m\) are sufficiently large, with the probability at least \(1-\delta\), we have_
\[\mathbf{E}\left[\liminf_{t\to\infty}\mathcal{E}(f_{t}^{m})\ \Big{|}\ \mathbf{X}\right]\geq cn^{-\epsilon}.\]
Though this theorem can not imply that the kernel interpolation is inconsistent, it actually shows that overfitted residual networks generalize poorly, which contradicts the "benign overfitting phenomenon". In the next section, we will illustrate several experiments on residual networks to reconcile this contradiction and show that the occurrence of "benign overfitting phenomenon" depends on the signal strength of the data.
## 5 Experiments
In Section 4, we have shown that the generalization error of a residual network depends on the stopping time: \(i)\) the residual network with a proper stopping time can achieve the minimax rate; \(ii)\) the residual network trained until the loss is near zero cannot generalize well. However, the second result seems to contradict the reported "benign overfitting phenomenon", where overfitted neural networks do generalize well on many datasets. Thus, for residual networks, we
need to find an explanation that can reconcile the conflict between the second result and the widely observed "benign overfitting phenomenon".
To reconcile the same conflict for fully-connected neural networks, [18] justified an insightful hypothesis. They emphasized that a subtle difference between the classification problem and the regression problem might be ignored in the reported experiments (Figure 2 in [18]). For the classification problem, they denoted \(i)\)\(t_{\text{opt}}\) as the optimal early stopping time, \(ii)\)\(t_{\text{loss}}\) as the time when the value of the loss function nears zero and \(iii)\)\(t_{\text{label}}\) as the time when the value of the label error rate nears zero. They noticed that most of the reported experiments on the "benign overfitting phenomenon" use the stopping time \(t_{\text{label}}\) and claimed that the resulting neural network can overfit the data and generalize well [29, 39]. Thus [18] justified the following hypothesis for the fully-connected network: \(i)\) if the signal strength is strong, then \(t_{\text{label}}\) is near \(t_{\text{opt}}\) and \(t_{\text{label}}\) is much earlier than \(t_{\text{loss}}\); \(ii)\) if the signal strength is weak, then \(t_{\text{label}}\) is far from \(t_{\text{opt}}\) and near \(t_{\text{loss}}\), where we can consider \(t_{\text{loss}}=\infty\). Adopting the same idea, we justify this hypothesis through various experiments for residual networks.
\(\bullet\)_Synthetic Data:_ Suppose that \(\mathbf{x}_{i},1\leq i\leq 500\) are i.i.d. uniformly sampled from \(\mathbb{S}^{2}\) and
\[y_{i}=f_{*}(\mathbf{x}_{i})=\lfloor\mathbf{x}_{i,(1)}+1\rfloor+2\lfloor\mathbf{x}_{i,(2)} +1\rfloor+4\lfloor\mathbf{x}_{i,(3)}+1\rfloor\in\{0,1,\cdots,7\}.\]
For a given \(p\in[0,1]\), we corrupt every label \(y_{i}\) of the data with probability \(p\) by a uniform random integer from \(\{0,1,\cdots,7\}\). For corrupted data with \(p\in\{0,0.1,0.2,0.3,0.4,0.5,0.6\}\), we train a residual fully-connected network (Equation (3) with \(m=1000\), \(L=5\), \(a=0.5\)) with the squared loss and the cross-entropy loss. We collect the testing accuracy based on \(10000\) testing data points with label corruption. The results are reported in Figure 1.
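The synthetic data and the label-corruption step can be reproduced as follows; the function name and the random seed are our own choices.

```python
import numpy as np

def make_synthetic(n, p, seed=0):
    """x_i uniform on S^2; labels in {0,...,7} as in Section 5,
    each replaced with probability p by a uniform random integer."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, 3))
    x /= np.linalg.norm(x, axis=1, keepdims=True)
    y = (np.floor(x[:, 0] + 1) + 2 * np.floor(x[:, 1] + 1)
         + 4 * np.floor(x[:, 2] + 1)).astype(int)
    corrupt = rng.random(n) < p
    y[corrupt] = rng.integers(0, 8, size=int(corrupt.sum()))
    return x, y

X_train, y_train = make_synthetic(500, p=0.2)
```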
Figure 1: Synthetic Data: the gap between \(t_{\text{label}}\) and \(t_{\text{opt}}\) increases as the label corruption ratio \(p\) increases. The lower figures are the error bars associated with 10 replicate experiments with the two different losses and show the mean and the standard deviation of the gap between \(t_{\text{label}}\) and \(t_{\text{opt}}\). When \(p=0\), the time gap between \(t_{\text{label}}\) and \(t_{\text{opt}}\) and the gap between the corresponding testing accuracies are extremely small, i.e., we observed "benign overfitting".
\(\bullet\)_Real Data:_ We perform the experiments on CIFAR-10 with the convolutional residual network. We use the model architecture introduced in [12, Section 4.2]. The first layer is \(3\times 3\) convolutions with 32 filters. Then we use a stack of 6 layers with \(3\times 3\) convolutions, 2 layers with 32 filters, 2 layers with 64 filters and 2 layers with 128 filters. The network ends with a global average pooling and a fully-connected layer. There are 8 stacked weighted layers in total.
We corrupt the data with \(p\in\{0,0.1,0.2,0.3,0.4,0.5,0.6\}\) and apply the Adam optimizer to train the network with an initial learning rate of 0.001 and a decay factor of 0.95 per training epoch. The results are reported in Figure 2.
The experimental results presented above provide empirical evidence that supports our hypothesis and resolves the discrepancy between the observed "benign overfitting phenomenon" and our theoretical framework for residual networks. Specifically, when the signal strength is strong, the "benign overfitting phenomenon" holds, and our theoretical results remain valid. However, when the signal strength is weak, the "benign overfitting" phenomenon no longer holds, and our theoretical findings offer an explanation for the failure of the "benign overfitting phenomenon."
## 6 Conclusion and discussion
In this study, we demonstrated that the RNK uniformly converges to the RNTK, indicating that kernel regression using the RNTK can effectively approximate the excess risk of residual networks. By analyzing the decay rate of eigenvalues associated with the RNTK, we established two key results: \(i)\) stopping the training process of residual networks at a suitable time can lead to a resulting neural network with excess risk achieving minimax optimality, and \(ii)\) an overfitted residual network may not generalize well. Additionally, we conducted experiments to address the discrepancy between our theoretical findings and the commonly observed "benign overfitting phenomenon."
Drawing on the approach of [18], our strategy can be applied to various neural network architectures. Specifically, one can first demonstrate the uniform convergence of neural network kernels (e.g., convolutional neural networks and recurrent neural networks) to the corresponding neural tangent kernels, and then examine the spectral properties of the NTK, including positive definiteness and eigenvalue decay rate. Thus, we anticipate that networks using an early stopping strategy can achieve optimal minimax rates.
|
2305.17635 | Theory of sounds in He II | A dynamical model for Landau's original approach to superfluid Helium is
presented, with two velocities but only one mass density. Second sound is an
adiabatic perturbation that involves the temperature and the roton, aka the
notoph. The action incorporates all the conservation laws, including the
equation of continuity. With only 4 canonical variables it has a higher power
of prediction than Landau's later, more complicated model, with its 8 degrees
of freedom. The roton is identified with the massless notoph. This theory gives
a very satisfactory account of second and fourth sounds.
Second sound is an adiabatic oscillation of the temperature and both vector
fields, with no net material motion. Fourth sound involves the roton, the
temperature and the density. With the experimental confirmation of
gravitational waves the relations between Hydrodynamics and Relativity and
particle physics have become more clear, and urgent. The appearance of the
Newtonian potential in irrotational hydrodynamics comes directly from
Einstein's equations for the metric. The density factor $\rho$ is essential; it
is time to acknowledge the role that it plays in particle theory.
To complete the 2-vector theory we include the massless roton mode. Although
this mode too is affected by the mass density, it turns out that the wave
function of the unique notoph propagating mode $\mathcal{N}$ satisfies the
normal massless wave equation $\Box\mathcal{N}$ = 0; the roton propagates as a
free particle in the bulk of the superfluid without meeting resistance. In this
circumstance we may have discovered the mechanism that lies behind the flow of
He-II through very thin pores. | Christian Fronsdal | 2023-05-28T05:16:05Z | http://arxiv.org/abs/2305.17635v1 | # Theory of sounds in He - II
###### Abstract
A dynamical model for Landau's original approach to superfluid Helium is presented, with two velocities but only one mass density. Second sound is an adiabatic perturbation that involves the temperature and the **roton**, _aka_ the **notoph. The action incorporates all the conservation laws, including the equation of continuity. With only 4 canonical variables it has a higher power of prediction than Landau's later, more complicated model, with its 8 degrees of freedom. The roton is identified with the massless notoph. This theory gives a very satisfactory account of second and fourth sounds.**
**Second sound is an adiabatic oscillation of the temperature and both vector fields, with no net material motion. Fourth sound involves the roton, the temperature and the density.**
**With the experimental confirmation of gravitational waves the relations between Hydrodynamics and Relativity and particle physics have become more clear, and urgent. The appearance of the Newtonian potential in irrotational hydrodynamics comes directly from Einstein's equations for the metric. The density factor \(\rho\) is essential; it is time to acknowledge the role that it plays in particle theory.**
**To complete the 2-vector theory we include the massless roton mode. Although this mode too is affected by the mass density, it turns out that the wave function of the unique notoph propagating mode \(\mathcal{N}\) satisfies the normal massless wave equation \(\Box\mathcal{N}=0\); the roton propagates as a free particle in the bulk of the superfluid without meeting resistance. In this circumstance we may have discovered the mechanism that lies behind the flow of He-II through very thin pores.**
**Table of contents**

**I. Introduction**
**II. The classical action principles**
_Hydrodynamics. Thermodynamics. Gauge theory. Speeds of sound._
**III. Dynamics of first and second sounds.**
_First sound. Second sound. Interpretation._
**IV. Fourth sound.**
**V. What comes next?**
**I. Introduction**
**The classical action for adiabatic hydro-thermo-dynamics of irrotational fluid flows allows for the well known calculation of the speed of sound, understood as an oscillation of the mass density and the velocity potential at fixed, uniform entropy (Laplace 1825) [1]. Some fluids transmit a second type of "sound" that has been interpreted as an oscillation of entropy and temperature at fixed pressure (Tisza 1938) [2]. Experiments have confirmed that the temperature is oscillating (Peshkov 1946) [3] and that the pressure is only weakly involved.**
**This paper presents an alternative interpretation of second and fourth sounds, within Landau's 2 - flow theory [4] of phonons and rotons, as an adiabatic oscillation of the temperature and the dynamical roton mode, with fixed density and entropy. The theory is an application of adiabatic thermodynamics, formulated as an action principle.**
**The dynamics of the roton field (\(\dot{\vec{X}}\)) was identified with the notoph (Rasetti and Regge 1972) [5], providing the link to Special Relativity and Quantum Theory that is needed in any mature, physical field theory.**
**A 2-form gauge field \(Y\) is related to \(\vec{X}\) by \(Y_{ij}=\epsilon_{ijk}X^{k}\). The dynamical roton is the massless field [7]**
\[{\cal N}=\rho(\vec{\bigtriangledown}\cdot\vec{X}+{\rm const}).\]
**The principal new discovery that is reported here is that second sound is an adiabatic oscillation of the temperature and \({\cal N}\).**
**Section II is a brief introduction to the ideas that have led to a dynamical formulation of Landau's theory. The speed of second sound is calculated in Section III and fourth sound is tackled in Section IV.**
**II. The classical action principles**
**Hydrodynamics**
**The essence of classical hydrodynamics is expressed by two equations, the equation of continuity,**
\[\dot{\rho}+\vec{\bigtriangledown}\cdot\rho\vec{v}=0 \tag{2.1}\]
**and the Bernoulli equation**
\[\frac{\partial}{\partial t}\vec{v}=-\vec{\bigtriangledown}\vec{v}^{2}/2-\frac{1}{ \rho}\vec{\bigtriangledown}p-\vec{\bigtriangledown}\varphi. \tag{2.2}\]
**Here \(\rho\) is the (mass) density and \(p\) is the pressure. It applies only to irrotational flows, when the velocity takes the form \(\vec{v}=-\vec{\bigtriangledown}\Phi.\) The two equations of motion are the Euler - Lagrange equations of a classical action principle. The field \(\varphi\) is the Newtonian potential.1**
Footnote 1: This theory is what remains of a relativistic theory when the relativistic scalar \(\psi\) is expanded as \(\psi=c^{2}t+\Phi+O(1/c^{2})\), \(g_{00}=c^{2}+2\varphi+O(1/c^{2})\) and the other components are Lorentzian.
**For some of the most elementary flows another branch of hydrodynamics must be invoked. In a popular, didactic experiment a glass of water is placed on a turntable. After some time the water is seen to be turning with the glass like a solid body, the surface rising towards the edge to form a meniscus. In the theory that is used to explain this phenomenon the velocity is a time derivative, \(\dot{\vec{X}}\), and the 'Bernoulli equation' takes a different form,**
\[\vec{\bigtriangledown}\dot{\vec{X}}^{2}/2-\vec{\bigtriangledown}\varphi- \frac{1}{\rho}\vec{\bigtriangledown}p=0. \tag{2.3}\]
**This theory, by itself, is not an alternative to the irrotational theory. It does not have an equation of continuity; instead \(\dot{\vec{X}}\) is subject to constraints, as expected of a vector field. The inclusion of the Newtonian potential in this equation is** _ad hoc_**: it cannot be justified by an application of General Relativity, and the vector field \(\dot{\vec{X}}\) is not affected by the transformations of the Galilei group. In conclusion, we need both types of vector fields to explain some of the simplest experiments.**
**The study of elementary applications like these is incontrovertible evidence that two kinds of flow are needed in hydrodynamics. A satisfactory description of the water glass on a turntable and of the whorls seen in the wake of ships was proposed by Onsager (1962) [6].**
**Both theories can be expressed as action principles; the irrotational Lagrangian density is**
\[{\cal L}_{1}[\rho,\Phi,\varphi]=\rho(\dot{\Phi}-\vec{\bigtriangledown}\Phi^{ 2}/2-\varphi)-W_{1}[\rho] \tag{2.4}\]
**and Eq. (2.3) - without \(\varphi\) - is the Euler-Lagrange equation of**
\[{\cal L}_{2}[\rho,\vec{X}]=\rho(\vec{\bigtriangledown}\dot{\vec{X}}^{2}/2)-W_ {2}[\rho]. \tag{2.5}\]
**The density factor is traditional in \({\cal L}_{1}\), less so in \({\cal L}_{2}\); its appearance in both is crucial. 2**
Footnote 2: That compressibility of air is what makes flight possible was understood by Leonardo da Vinci in the 15th century.
**Current hydrodynamics results from adding (2.4) and (2.5),**
\[{\cal L}_{Hydro}[\rho,\Phi,\vec{X}]={\cal L}_{1}[\rho,\Phi]+{\cal L}_{2}[\rho,\vec{X}]+\frac{\kappa\rho}{2}d\psi dY. \tag{2.6}\]
**The last term will be explained below.**
**The idea of two independent vector fields was already introduced by Landau [4] in his theory of superfluid Helium, his phonon and roton velocities fields are \(-\vec{\bigtriangledown}\Phi\) and \(\dot{\vec{X}}\).**
**The classical theory of ordinary sound is derived from the Lagrangian (2.6),**
\[{\cal L}[\rho,\Phi,\vec{X}]=\rho(\dot{\Phi}-K-\varphi)-W[\rho], \tag{2.7}\]
**with the kinetic potential**
\[K=(\vec{\bigtriangledown}\Phi)^{2}/2-\dot{\vec{X}}^{2}/2-\kappa\dot{\vec{X}}\cdot\vec{\bigtriangledown}\Phi. \tag{2.8}\]
**This Lagrangian is invariant under the transformations of the Galilei group. (The field \(\vec{X}\) is inert, up to a change of gauge.) The flow \(\rho(\kappa\dot{\vec{X}}-\vec{\bigtriangledown}\Phi)\) is identified by the property of being conserved, as expressed by equation of continuity, derived from the Lagrangian by variation of \(\Phi\).**
**As we shall show, this is a suitable action for Landau's phonons and rotons and a wide range of other applications of adiabatic hydro-thermodynamics. In the literature inspired by Landau's work on superfluids one finds that the applications make little use of roton dynamics; instead the field \(\dot{\vec{X}}\) is more or less fixed. It is, therefore, not surprising to find that the theory, in its original, non - relativistic context, is characterized by strong constraints, as has been revealed by completion of the theory (Rasetti and Regge 1972). This is what brings the number of independent variables of hydrodynamics down to just 4.**
**The completed roton theory is a relativistic gauge theory. Like electrodynamics, it was completed with its development as a quantized gauge theory (Ogievetskij and Polubarinov 1963) [7] and by Green, Schwarz and Witten (1987) [9]. Both relativity and quantum theory are needed for the formulation of unitarity. We return to this topic below.**
**Both \({\cal L}_{1}\) and \({\cal L}_{2}\) are non-relativistic limits of relativistic field theories; the former is a limit of**
\[\frac{1}{2}\rho(g^{\mu\nu}\psi_{,\mu}\psi_{,\nu}-c^{2})-W[\rho]. \tag{2.9}\]
**The non-relativistic limit includes the Newtonian potential defined by**
\[g_{00}=c^{2}+2\varphi+O(1/c^{2}),\ \ \ \psi=c^{2}t+\Phi+O(1/c^{2}) \tag{2.10}\]
**This is the origin of the Newtonian potential in Eq. (2.4) [9]. Its appearance in (2.5) cannot be justified by General Relativity.**
**Thermodynamics**
**The use of a variational principle for (adiabatic) thermodynamics is not often seen in the literature. There follows a resume that shows that the basic equations are the Euler-Lagrange equations of a simple action. This reformulation of adiabatic thermodynamics contains nothing that is unfamiliar. What it does is to set the limits of the applications; 3 it fixes the Hamiltonian, the kinetic potential and the angular momentum and it puts us in a better position to confront new applications, such as gravitational waves [11] and the speed of second sound.**
Footnote 3: It is evidently incompatible with an oscillating entropy.
**The equations that define adiabatic thermodynamics of a uniform system at rest are**
\[\frac{\partial F(T,V)}{\partial V}+P=0,\quad\frac{\partial F(T,V)}{\partial T}+S=0, \tag{2.11}\]
**where \(V\) is the volume, \(F\) is the Helmholtz free energy and \(P\) is the pressure. We prefer to formulate the theory in terms of densities,**
\[s=\rho S,\quad f(T,\rho)=\rho F(T,V).\]
**Following Callen we set the local version of Eq.s (2.11)**
\[\rho\frac{\partial f}{\partial\rho}-f=p,\quad\frac{\partial f}{\partial T}+s=0.\]
**Consider the action**
\[A_{1}[\Phi,\rho,T,S,P]=\int dt\bigg{(}\int_{\Sigma}\mathcal{L}_{1}-\int_{ \partial\Sigma}P\bigg{)}. \tag{2.12}\]
**Here \(P\) is the 3-form of pressure on the multifaceted boundary.**
**The Lagrangian density is**
\[\mathcal{L}_{1}=\rho(\dot{\Phi}-\vec{\bigtriangledown}\Phi^{2}/2-\varphi)-f( T,\rho)-sT. \tag{2.13}\]
**Assume that the specific entropy \(S\) is fixed, constant and uniform. Vary this Lagrangian with respect to local variations of \(\rho\) and \(T\), with \(S\), \(P\) and - temporarily - the boundary \(\partial\Sigma\) fixed; then the Euler - Lagrange equations are as follows.**
**Variation of \(A_{1}\) with respect to \(\Phi\) gives the equation of continuity, with \(\vec{v}=-\vec{\bigtriangledown}\Phi\);**
\[\dot{\rho}+\vec{\bigtriangledown}\cdot(\rho\vec{v})=0, \tag{2.14}\]
**Variation with respect to \(T\) gives the adiabatic relation:**
\[\frac{\partial}{\partial T}f+s=0; \tag{2.15}\]
**it can be used to eliminate the temperature.**
**Theorem. When \(s=\rho S\), \(S\) fixed, constant and uniform, then**
\[\vec{\bigtriangledown}\frac{\partial}{\partial\rho}(f+sT)=\frac{1}{\rho}\vec{ \bigtriangledown}p. \tag{2.16}\]
**Local variation of \({\cal L}_{1}\), Eq.(2.13), by \(\rho\), followed by the elimination of \(T\), leads to the Bernoulli equation in the original form, Eq. (2.2).**
\[\vec{\bigtriangledown}\dot{\Phi}-\vec{\bigtriangledown}(\vec{\bigtriangledown }\Phi)^{2}/2-\vec{\bigtriangledown}\varphi-\frac{1}{\rho}\vec{\bigtriangledown }p=0.\]
**There is a proof in Fronsdal (2020) [10].**
**Finally, variation of the boundary gives**
\[{\cal L}_{1}|_{\partial\Sigma}=P. \tag{2.17}\]
**On-shell, on the boundary,**
\[p=\rho\frac{\partial f}{\partial\rho}-f={\cal L}_{1}-\rho\frac{\partial{\cal L }_{1}}{\partial\rho}=P. \tag{2.18}\]
**The first equality agrees with the first of Eq.s (2.11), but since it is taken to hold in a wider context it may be regarded as a definition; the second is a consequence of the fact that \(-f\) is the only term in the Lagrangian density that is not linear in \(\rho\). The last equality confirms the identification of \(p\) as the pressure, an extrapolation of \(P\) from the boundary to the interior.**
**We shall replace \({\cal L}_{1}\) by \({\cal L}_{1}+{\cal L}_{2}\), as in Eq. (2.6).**
**Gauge theory**
**The gauge theory behind the roton field \(\dot{\vec{X}}\) was discovered by Rasetti and Regge [5]. It is the theory of a massless 2-form, components \((Y_{\mu\nu})\). The free action density is \(\rho dY^{2}\). For hydrodynamics the complete action density is**
\[{\cal L}_{2}[\rho,Y]=\sqrt{-g}\frac{c^{2}}{12}\rho\,dY^{2}+\frac{\kappa}{2} \rho\,dYd\psi; \tag{2.19}\]
**The 2-form \(Y_{\mu\nu}\) is related to \(\vec{X}\) by (2.20) and \(\psi\) is related to \(\Phi\) by (2.10). The non-relativistic Lagrangian in Eq. (2.12) is derived from \({\cal L}_{1}\) in Eq. (2.9), and \({\cal L}_{2}\) is a limit of \({\cal L}[\rho,Y]\) in (2.19).**
**1. The field \(\psi\) is a scalar field with a vacuum expectation value, \(\psi=\Phi+c^{2}t\). The field \(\Phi\) transforms, together with the velocity \(-\vec{\bigtriangledown}\Phi\), under the Galilei group, in the usual way, making \({\cal L}_{1}\) invariant under this group.**
**2. The components of the 2-form are**
\[Y_{ij}=\epsilon_{ijk}X^{k},\quad Y_{0i}=:\eta_{i}. \tag{2.20}\]
**The vector field \(\vec{\eta}\) is a gauge field, variation of the action with respect to \(\vec{\eta}\) gives the constraint - the gauge condition -**
\[\vec{\bigtriangledown}\wedge\vec{m}=0,\ \ \ \ \vec{m}:=\rho(\dot{\vec{X}}+\kappa\vec{ \bigtriangledown}\Phi), \tag{2.21}\]
**with the general solution**
\[\vec{m}=-\vec{\bigtriangledown}\tau.\]
**A special choice for the gauge parameter \(\tau\) is required for a massless mode to be recognized. This mode is**
\[{\cal N}:=\rho(\vec{\bigtriangledown}\cdot\vec{X}+\kappa). \tag{2.22}\]
**It is the only propagating field of this gauge theory. The free field equation is (Ref.s [11]).**
\[\Box{\cal N}=0. \tag{2.23}\]
**Remark. Non-relativistic electrodynamics, a limit when the speed of light - \(c\) - tends to infinity, makes sense only in the absence of the magnetic field. And non-relativistic hydrodynamics is the regime where \({\cal N}=0\), for this field, like \(\vec{B}\), enters the Lagrangian and the equations of motion multiplied by \(c^{2}\). Consequently, any study of a steady configuration will be one in which \({\cal N}\) is negligible.**
**III. Dynamics of first and second sound**
**First sound**
**The classical theory of (first) sound propagation rests on the Eulerian theory, with the Lagrangian density \({\cal L}_{1}\) and the two equations of motion (2.1) and (2.2). The speed of propagation is expressed in terms of the adiabatic derivative of the pressure**
\[C_{1}=\sqrt{\frac{dp(\rho,S)}{d\rho}}. \tag{3.1}\]
**This equation is used in the quoted sources (Arp _et al_ [12], Brooks and Donnelly [13], Maynard [14] ) to determine the equation of state. We shall obtain similar formulas for second and fourth sounds.**
**First sound is an oscillation of the density and the velocity potential, \(\rho\) and \(\Phi\), and \(S\), fixed. The speed of first (ordinary) sound is usually calculated for a plane wave, a first order perturbation of a static configuration with uniform density. The two Euler - Lagrange equations are**
\[\dot{\rho}-\vec{\bigtriangledown}\cdot\rho\vec{\bigtriangledown}\Phi=0,\ \ \ \ \ \dot{\Phi}-\frac{\partial(f+sT)}{\partial\rho}\bigg{|}_{T}=0. \tag{3.2}\]
**Eliminating \(T\), or using Eq. (2.16) and differentiations leads, in first order perturbation theory, to**
\[\ddot{\rho}=\rho\vec{\bigtriangledown}\cdot\dot{\vec{v}}(\rho,S),\ \ \ \ \ \ \ \ \vec{\bigtriangledown}\cdot\dot{\vec{v}}=\frac{1}{\rho}\frac{\partial p(\rho, S)}{\partial\rho}\bigg{|}_{S}\Delta\rho \tag{3.3}\]
**and to (3.1). Note: in this case \(\vec{v}=-\vec{\bigtriangledown}\Phi\), since \(\vec{X}=0\).**
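**As a numerical illustration of Eq. (3.1), the following sketch differentiates a user-supplied adiabatic equation of state by a central difference. The polytropic form used in the example is hypothetical and is not taken from the text; it only shows how the formula is evaluated.**

```python
import numpy as np

def first_sound_speed(p_of_rho, rho, drho=1e-6):
    """C_1 = sqrt(dp/drho) at fixed entropy, Eq. (3.1), by central difference."""
    dp_drho = (p_of_rho(rho + drho) - p_of_rho(rho - drho)) / (2.0 * drho)
    return np.sqrt(dp_drho)

# Hypothetical polytropic equation of state p = K * rho**gamma, for illustration only
K_poly, gamma = 1.0, 5.0 / 3.0
print(first_sound_speed(lambda r: K_poly * r ** gamma, rho=0.145))
```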
**Second sound**
**Since the notoph is a massless particle, we add a Stefan-Boltzmann term to the internal energy density:**
\[u(T,\rho)\rightarrow\tilde{u}(T,\rho,{\cal N})=u+\frac{\alpha}{k}T^{k}{\cal N},\qquad{\cal N}:=\rho(\vec{\bigtriangledown}\cdot\vec{X}+\kappa). \tag{3.4}\]
**Let \((T_{0},\rho_{0},\vec{X}_{0},{\cal N}_{0})\) be a stationary solution of the Euler - Lagrange equations for the Lagrangian**
\[{\cal L}=\rho(\dot{\Phi}-K)-f-sT-\frac{\alpha}{k}{\cal N}T^{k}\]
\[K=-\beta\dot{\vec{X}}^{2}/2+\vec{v}^{2}/2,\ \ \ \ \ \beta=1+\kappa^{2}; \tag{3.5}\]
**this is an alternative expression for Eq. (2.6) and \(\rho\vec{v}=\rho(\kappa\dot{\vec{X}}-\vec{\bigtriangledown}\Phi)\) is the conserved current.**
**Second sound is a first order, adiabatic deformation**
\[(T_{0},\dot{\vec{X}}_{0})\rightarrow(T_{0}+dT,\vec{X}_{0}+d\vec{X}),\]
**with \({\cal N}_{0}=0,\vec{v}_{0}=0\) and \(dp=0,d\rho=0,d\vec{v}=0\).**
**To the experimenter, second sound is excited by a forced oscillation of the temperature at the boundary; it is not registered by a pressure-sensitive microphone, hence \(dp\approx 0\).**
**We must review the equations that govern these oscillations.**
**1. The relevant part of the internal energy density is**
\[-f-sT-\frac{\alpha}{k}{\cal N}T^{k}+\rho\beta\dot{\vec{X}}^{2}/2\]
**In adiabatic thermodynamics, for any fixed value of S, the theory is an isolated Lagrangian action principle and \(u\) is the Hamiltonian density. For any adiabatic variation the integrated internal energy is at a minimum.**
**Variation with respect to \(T\) and \(\vec{X}\), \(\rho\) fixed and uniform, gives the two Euler - Lagrange equations. From variation of \(T\):**
\[\int dT\frac{\partial(\tilde{f}\ +sT)}{dT}\bigg{|}_{S,\rho,{\cal N}}=\int dT \bigg{(}\rho\frac{\partial(\tilde{F}+ST)}{\partial T}\bigg{|}_{S,\rho}+\frac{ \alpha}{k}T^{k}\frac{d{\cal N}}{dT}\bigg{)} \tag{3.6}\]
**In the first order of the perturbation this quantity is zero,**
\[d(\rho C_{V})-\alpha T^{k-1}d{\cal N}=0. \tag{3.7}\]
**Explanation: The internal energy density is \(\tilde{u}\); this is the quantity whose measured values give the recorded values of \(C_{V}\). But in the cited papers \(\mathcal{N}\) does not represent another variable; it is just a function of \(T\), so their \(\rho C_{V}\) includes a term \(-(\alpha/k)T^{k}(d\mathcal{N}/dT)\).**
**Derivation with respect to the time gives**
\[\rho\frac{\partial C_{V}}{\partial T}\dot{T}-\alpha T^{k-1}d\dot{\mathcal{N}} =0.\]
**This is valid to first order in perturbation theory if \(\mathcal{N}_{0}=0\), as is natural under the circumstances. Under the same conditions,**
\[\rho\frac{\partial C_{V}}{\partial T}\ddot{T}-\alpha T^{k-1}\ddot{\mathcal{N} }=0. \tag{3.8}\]
**2. From variation of \(\vec{X}\),**
\[\rho\beta d\ddot{\vec{X}}\cdot\dot{\vec{X}}-\frac{\alpha}{k}T^{k}d\mathcal{N} =0,\]
**or**
\[d\vec{X}\cdot(\beta\,d\ddot{\vec{X}}-\alpha T^{k-1}\vec{\bigtriangledown}T)=0. \tag{3.9}\]
**From (3.8) and the divergence of (3.9) follows that**
\[\rho\frac{\partial C_{V}}{\partial T}\bigg{|}_{p}\ddot{T}-\alpha T^{k-1}d \ddot{\mathcal{N}}=0,\ \ \ \ \ \beta d\ddot{\mathcal{N}}-\alpha\rho T^{k-1}\Delta T=0 \tag{3.9}\]
**and the speed \(C_{2}\) is**
\[C_{2}=\frac{\alpha}{\sqrt{\beta}}T^{k-1}\bigg{(}\frac{\partial C_{V}}{ \partial T}\bigg{|}_{p}\bigg{)}^{-1/2}. \tag{3.10}\]
**The values of \(\partial C_{V}/\partial T|_{p}\) will be taken from experimental data, for \(0<p<25MPa\).**
**Direct comparison with experimental values**
**Arp** _et al_**[**12**]****, Brooks and Donnelly** **[**13**]** **and Maynard** **[**14**]** **have collected results from many experiments. Their results for \(C_{V}\) are plotted in Fig.1, along with our very simple interpolation.4 The logarithmic singularity was placed on the \(\lambda\) line.**
Footnote 4: Our interpolation formula was needed in the lowest interval of temperature only, there was no need for the elaborate interpolation used by Arp.
**Our interpolation for \(C_{V}\) is, for \(p=0\), \(0.4<T<2.4\)**
\[C_{V}=-\ln((2.18-T)^{5/2}+10^{-30})+3.35-4.5T+1.6T^{2}. \tag{3.11}\]
**Units are joules/gram.**
**Fig.1. The lower part shows values of \(C_{V}\) (in joules) determined by measurements. The solid curve is our simple interpolation of the data. This interpolation was used to calculate the speed \(C_{2}\) of second sound, using Eq. (3.13), shown for \(k=3\) (units: m/s); it is our prediction for the speed of second sound in He-II.**
**The curves \(C_{V}(T)\) and \(C_{2}(T)\) are shown in Fig. 1. The lowest value of \(C_{V}\) on the interpolation curve is -0.033 at \(T=0.8282\). The calculation stops at \(T=0.7476\), near the point where the experimenters lose their signal (Williams and Rosenbaum 1979) [15], and the curve peaks at \(T=2.18046\). Only the overall factor, 17.5 in Eq. (3.13), could be adjusted for a best fit.**
**Similar fits were obtained for \(p=2MPa\) after a small adjustment of the parameters:**
\[C_{V}=-\ln((2.165-T)^{2.5}+10^{-30})+4.0-5.85T+2.1T^{2}. \tag{3.12}\]
**The minimum of the interpolation curve is 0.00389 at \(T=0.700\). The curve begins at \(T=0.8287\) and peaks at \(T=2.17875\), at the \(\lambda\) line.**
**Calculations have verified similar agreement for pressures of 5, 10, 15, 20 and 25 MPa.**
**Given the experimental data for the values of \(C_{V}\), the theory predicts the speed of second sound to be given up to a multiplicative constant by Eq. (3.16). These formulas, with \(k=3\), give very good fits from the \(\lambda\) line down to \(T=.8\), where \(C_{V}\) has a local minimum and the signal is lost. Fitting the overall constant factor to the experiment we find that, for \(p=0\) and for \(p=2\), 5 the final result is**
Footnote 5: In the quoted reviews velocities are given in m/sec, energy densities in joules.
\[C_{2}=\frac{\alpha}{\sqrt{\beta}}\ T^{2}\bigg{(}\frac{\partial C_{V}}{ \partial T}\bigg{)}^{-1/2},\ \ \ \.8<T<T_{\lambda}. \tag{3.13}\]
**with \(\alpha/\sqrt{\beta}=17.5\) in the units m/s and joules. Here \(C_{V}\) was taken from the tables in terms of joules and the velocities were expressed in terms of \(m/s\).**
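**A short numerical sketch of Eq. (3.13), using the interpolation (3.11) for \(C_{V}\) at \(p=0\) and the fitted factor \(\alpha/\sqrt{\beta}=17.5\). The finite-difference step and the function names are our own choices.**

```python
import numpy as np

def c_v(T):
    """Interpolation (3.11) for C_V at p = 0 (joules), valid for 0.4 < T < 2.4 K."""
    return -np.log((2.18 - T) ** 2.5 + 1e-30) + 3.35 - 4.5 * T + 1.6 * T ** 2

def c2(T, alpha_over_sqrt_beta=17.5, dT=1e-4):
    """Second sound speed, Eq. (3.13): C_2 = (alpha/sqrt(beta)) T^2 (dC_V/dT)^(-1/2), in m/s."""
    dCv_dT = (c_v(T + dT) - c_v(T - dT)) / (2.0 * dT)
    return alpha_over_sqrt_beta * T ** 2 / np.sqrt(dCv_dT)

print(c2(1.5))   # valid for 0.8 < T < T_lambda
```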
**Interpretation**
**In the term \((\alpha/k)T^{k}{\cal N}\) that we have included in the internal energy, \({\cal N}\) is the notoph amplitude. The power \(k\) in \(T^{k}\) was left open to be determined by measurements. The experimental value is \(k=3\). The factor \(T^{3}\) is proportional to the number of quanta predicted by Planck's theory. The new term in the internal energy density is thus identified as the Stefan - Boltzmann term associated with the notoph.**
**Why include \({\cal N}T^{3}\) instead of \(aT^{4}\)? The notoph is a new experience and the wisest course is to accept the value provided by the experiments, which fixes the value of \(k\) at 3.**
**We have set the parameter \({\cal N}_{0}\) equal to zero. This is because the experiments were made in vessels of a size such that boundary effects, the origin of capillary effects, are expected to be weak. The effect of varying this parameter away from zero is insignificant.**
**IV. Fourth sound**
**Fourth sound is observed in thin films and in containers packed with silicon wafers. The usual interpretation is that the "normal component" remains at rest; we shall assume that**
\[\vec{v}_{0}=0,\quad\vec{\bigtriangledown}\Phi_{1}=0,\]
**that \(S\) is fixed, constant and uniform, while \(\vec{X},\rho\) and \(T\) oscillate together. Of the four equations of motion these three are relevant for the determination of the speed;**
**1. Equation of continuity:**
\[d\dot{\rho}+\kappa\rho d(\vec{\bigtriangledown}\cdot\dot{\vec{X}})=0.\]
**To zero order in perturbation theory the system is taken to be stationary, and both terms are zero. For simplicity we shall replace \(\vec{X}\) by \({\cal N}\) as the
independent variable. Consider a plane wave perturbation, then to first order the equation reduces to**
\[\dot{\rho}_{1}+\kappa\rho_{1}(\vec{\bigtriangledown}\cdot\dot{\vec{X}}_{0})+ \kappa\rho_{0}(\vec{\bigtriangledown}\cdot\dot{\vec{X}}_{1})=0.\]
**It is clearly vital to know something about the zeroth approximation.**
**In the bulk of the fluid the field \(\dot{\vec{X}}_{0}\) is stationary and \((\vec{\bigtriangledown}\cdot\dot{\vec{X}}_{0})\) is expected to vanish; in that case, in first order of perturbation,**
\[d\rho+\kappa d{\cal N}=\rho_{0},\quad\mbox{constant;}\]
**this will allow us to eliminate the density from the Bernoulli equation.**
**2. The adiabatic condition:**
\[\left.\frac{\partial(\tilde{f}+sT)}{\partial T}\right|_{\rho,{\cal N},S}dT=\rho\left.\frac{\partial(\tilde{F}+ST)}{\partial T}\right|_{S}dT\]

\[=-\frac{\alpha}{3}T^{3}{\cal N}-\left.\frac{\partial(\tilde{f}+sT)}{\partial\rho}\right|_{T,{\cal N},S}d\rho\]
**and using Eq. (2.16):**
\[\left.\frac{\partial(\tilde{f}+sT)}{\partial\rho}\right|_{T,{\cal N},S}= \frac{1}{\rho}\frac{\partial p}{\partial\rho}\]
**to get**
\[\left.\frac{\partial(\tilde{f}+sT)}{\partial T}\right|_{\rho,{\cal N},S}dT=- \frac{\alpha}{3}T^{3}d{\cal N}-\frac{\partial p}{\partial\rho}d\rho,\]
**or**
\[dC_{V}=(\alpha T^{2}+\kappa C_{1}^{2})d{\cal N}-\rho_{0}C_{1}^{2}\]
**The time derivatives:**
\[\frac{\partial C_{V}}{\partial T}\dot{T}+\rho_{0}\frac{\partial C_{1}^{2}}{ \partial T}\dot{T}=(\alpha T^{2}+\kappa C_{1}^{2})\dot{N}\]
**3. The Bernoulli equation is,**
\[d\ddot{\vec{X}}=\alpha T^{k-1}\vec{\bigtriangledown}T,\]
**or**
\[\ddot{\cal N}-\alpha T^{2}\Delta T=0,\]
**and together they give**
\[{C_{4}}^{2}=\frac{\ddot{T}}{\Delta T}=\frac{\ddot{T}}{\dot{\cal N}}\frac{\ddot{\cal N}}{\Delta T}=\frac{\left(\frac{\alpha}{100}T^{2}+\kappa{C_{1}}^{2}\right)}{\frac{1}{10}\frac{\partial C_{V}}{\partial T}+\rho_{0}\frac{\partial}{\partial T}{C_{1}}^{2}}\,\frac{\alpha T^{2}}{100}.\]
**The unit of velocity is here \(10^{4}cm/sec\), and \(C_{V}\) is in joules. In the numerator \(\alpha=17.5\sqrt{\beta}\), as in the calculation of \(C_{2}\), and \(1/100\) converts \(\alpha\) to the new unit of speed. The factor 1/10 in the denominator is valid when \(C_{V}\) is expressed in joules, as taken from the tables.**
**Fig. 2. The relation (4.4). Red line: Interpolation by the author of measurements of \(C_{4}\). Blue line: Values of \(C_{4}\) calculated from experimental values of \(C_{1},C_{2}\) and \(C_{V}\) reported in refs. [13-15].**
**Fig. 2 summarizes the result for \(C_{4}\). The red line is the square of fourth sound, interpolated from the experimental data [13-15]. It is almost covered by the blue line, the value given by Eq. (4.4). The fit was made with only one free parameter, the physical parameter \(\kappa\) of the fluid, for the first time determined experimentally. The vertical coordinate is the square of the speed of fourth sound in units of \((10^{4}cm/sec)^{2}\).**
**The value of the parameter \(\alpha/\sqrt{\beta}\) was determined in Section III. That leaves the value of \(\kappa\) as the sole free parameter; the value determined by using the earlier value of \(\alpha\) is**
\[\kappa=0.556\pm 0.0005.\]
**If instead Eq. (4.4) is used to determine both parameters then \(\alpha/\sqrt{\beta}\) is bracketed between 17.0 and 18.0.**
**That gives a unique theory, with no free parameters, that can be used to predict the strength of capillary effects and other properties of He - II.**
**The analytic interpolations used for \(C_{1}\) and \(C_{4}\) were**
\[{C_{1}}^{2}=5.63+0.05\,[1.2-x]-0.77\,[x-1.2]^{2},\]
\[{C_{4}}^{2}=54000-50000\,[x-1.2]^{2}.\]
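As a cross-check of Fig. 2, these interpolations can be evaluated directly. The short sketch below is not part of the original computation; the identification of \(x\) with the temperature and the sampling interval are assumptions made only for illustration, and the numbers stay in the units of the two expressions above.

```python
import numpy as np

def c1_squared(x):
    # interpolation for the square of the speed of first sound, as quoted above
    return 5.63 + 0.05 * (1.2 - x) - 0.77 * (x - 1.2) ** 2

def c4_squared(x):
    # author's interpolation of the fourth-sound measurements, as quoted above
    return 54000.0 - 50000.0 * (x - 1.2) ** 2

# Assumption: x is the temperature in kelvin, sampled below T_lambda.
for x in np.linspace(1.2, 2.0, 9):
    print(f"x = {x:4.2f}   C1^2 = {c1_squared(x):6.3f}   C4^2 = {c4_squared(x):8.1f}")
```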
**V. What comes next?**
**This paper reports another application of a version of Landau's 2-vector theory of superfluids. It should be pointed out that the alternative idea of two densities has found no direct experimental support. The number "\(\rho_{s}/\rho_{n}\)" is fixed in terms of \(\rho,T\), and \(p\); it is not an independent variable. [13]**
**The need for extra variables, besides a velocity potential, a density and the temperature, was demonstrated at the end of the 17th century and yet the first viable suggestion in that direction was Landau's idea of two velocities, at first in a very narrow context.**
**The two 'versions' of hydrodynamics date from the beginning. They have been said to be equivalent, but that is evidently not the case. The rotons are strongly associated with the so-called 'Lagrangian version' of hydrodynamics. This was pointed out in an important paper by Rasetti and Regge, but that paper had repercussions in string theory only. [16]**
**Today we see the roton-notoph identification as the coming-of-age of hydrodynamics, with applications to a large class of fluid phenomena, including capillary action, flight and gravitational waves. The recently confirmed unification of General Relativity with Particle Theory has given new impetus to bringing hydrodynamics into contact with both. We hope that the present paper will stimulate a more unified approach to these important branches of physics, by showing that hydrodynamics can be approached by methods that have been proper to Particle Physics, and profit from it. The approach assumes a precise model of fluids and makes detailed predictions on its own, without special adaptations in each special case.**
**To end where we began, superfluids still pose challenges. The existence of spin is obvious but details need to be examined. The spectacular ability of He-II to penetrate very fine pores is probably a manifestation of capillary phenomenon, related to the properties of the massless notoph, but this needs to be clarified. Finally, the growing importance of notoph = roton makes it urgent to discover how to detect it, in the CMB and in the laboratory.**
**Computation codes are available on request from the author.**
**Acknowledgements**
**I thank Gary Williams for discussions and information, and Joe Rudnick for conversations. I also wish to acknowledge the crucial reference to the paper [5], by Alexander Zheltukhin. I also thank Chair David Saltzberg for support.** |
2304.00060 | Evidential Transactions with Cyberlogic | Cyberlogic is an enabling logical foundation for building and analyzing
digital transactions that involve the exchange of digital forms of evidence. It
is based on an extension of (first-order) intuitionistic predicate logic with
an attestation and a knowledge modality. The key ideas underlying Cyberlogic
are extremely simple: (1) public keys correspond to authorizations, (2)
transactions are specified as distributed logic programs, and (3) verifiable
evidence is collected by means of distributed proof search. Verifiable
evidence, in particular, is constructed from extra-logical elements such as
signed documents and cryptographic signatures. Despite this conceptual
simplicity of Cyberlogic, central features of authorization policies including
trust, delegation, and revocation of authority are definable. An expressive
temporal-epistemic logic for specifying distributed authorization policies and
protocols is therefore definable in Cyberlogic using a trusted time source. We
describe the distributed execution of Cyberlogic programs based on the
hereditary Harrop fragment in terms of distributed proof search, and we
illustrate some fundamental issues in the distributed construction of
certificates. The main principles of encoding and executing cryptographic
protocols in Cyberlogic are demonstrated. Finally, a functional encryption
scheme is proposed for checking certificates of evidential transactions when
policies are kept private. | Harald Ruess, Natarajan Shankar | 2023-03-20T10:18:43Z | http://arxiv.org/abs/2304.00060v1 | # Evidential Transactions with Cyberlogic
###### Abstract
This research has been supported by the DARPA Automated Rapid Certification Of Software (ARCOS) project under contract number HR0011043439, National Institute of Aerospace Award #C21-202017-SRI, and NSF Grant SHF-1817204. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Government, DARPA, NASA, NIA, or NSF. |
2306.11714 | Meta-Analysis of Transfer Learning for Segmentation of Brain Lesions | A major challenge in stroke research and stroke recovery predictions is the
determination of a stroke lesion's extent and its impact on relevant brain
systems. Manual segmentation of stroke lesions from 3D magnetic resonance (MR)
imaging volumes, the current gold standard, is not only very time-consuming,
but its accuracy highly depends on the operator's experience. As a result,
there is a need for a fully automated segmentation method that can efficiently
and objectively measure lesion extent and the impact of each lesion to predict
impairment and recovery potential which might be beneficial for clinical,
translational, and research settings. We have implemented and tested a fully
automatic method for stroke lesion segmentation which was developed using eight
different 2D-model architectures trained via transfer learning (TL) and mixed
data approaches. Additionally, the final prediction was made using a novel
ensemble method involving stacking and agreement window. Our novel method was
evaluated in a novel in-house dataset containing 22 T1w brain MR images, which
were challenging from various perspectives, mostly because they included T1w
MR images from the subacute stroke phase (which typically shows less well-defined
T1 lesions) and the chronic stroke phase (which typically shows well-defined T1 lesions).
Cross-validation results indicate that our new method can efficiently and
automatically segment lesions fast and with high accuracy compared to ground
truth. In addition to segmentation, we provide lesion volume and weighted
lesion load of relevant brain systems based on the lesions' overlap with a
canonical structural motor system that stretches from the cortical motor region
to the lowest end of the brain stem. | Sovesh Mohapatra, Advait Gosai, Anant Shinde, Aleksei Rutkovskii, Sirisha Nouduri, Gottfried Schlaug | 2023-06-20T17:42:30Z | http://arxiv.org/abs/2306.11714v1 | # Meta-Analysis of Transfer Learning for Segmentation of Brain Lesions
###### Abstract
A major challenge in stroke research and stroke recovery predictions is the determination of a stroke lesion's extent and its impact on relevant brain systems. Manual segmentation of stroke lesions from 3D magnetic resonance (MR) imaging volumes, the current gold standard, is not only very time-consuming, but its accuracy also highly depends on the operator's experience. As a result, there is a need for a fully automated segmentation method that can efficiently and objectively measure lesion extent and the impact of each lesion to predict impairment and recovery potential, which might be beneficial for clinical, translational, and research settings. We have implemented and tested a fully automatic method for stroke lesion segmentation which was developed using eight different 2D-model architectures trained via transfer learning (TL) and mixed data approaches. Additionally, the final prediction was made using a novel ensemble method involving stacking and an agreement window. Our novel method was evaluated on a novel in-house dataset containing 22 T1w brain MR images, which were challenging from various perspectives, mostly because they included T1w MR images from the subacute stroke phase (which typically shows less well-defined T1 lesions) and the chronic stroke phase (which typically shows well-defined T1 lesions). Cross-validation results indicate that our new method can efficiently and automatically segment lesions fast and with high accuracy compared to ground truth. In addition to segmentation, we provide lesion volume and weighted lesion load of relevant brain systems based on the lesions' overlap with a canonical structural motor system that stretches from the cortical motor region to the lowest end of the brain stem. Such a combination of an automatically determined lesion with its impact on a relevant brain system provides a fast and objective, user-experience-independent functional understanding of a lesion's extent and impact.
**Keywords:** Automatic lesion segmentation, deep learning, transfer learning, lesion load, brain stroke
## 1 Introduction
Strokes are a leading cause of long-term disability and mortality, impacting millions of individuals worldwide each year [1]. In the United States alone, around 795,000 people experience
either a new or a recurrent stroke each year, with approximately 610,000 being first-time occurrences and 185,000 being recurrent attacks [2, 3]. The identification and segmentation of a stroke lesion plays an important role in accurate diagnosis, etiology, prognosis, and treatment options [4]. With extensive and long-time training, clinical and translational professionals can define the extent of a lesion through visual inspection; however, accurate manual segmentation and determining the impact of a lesion on relevant and eloquent regions of the brain is time-consuming, expensive, and can be error-prone due to the expertise necessary for the visual analysis of images and to keep the inter-observer variability to a minimum [5]. Thus, developing user-independent methods for rapid determination of any lesion, in particular lesions in both the subacute and chronic phases of a stroke, becomes an important endeavor in clinical and research settings, allows the stratification of patients into clinical trials, and enhances our understanding of a lesion's impact on a patient's functional impairment and their stroke outcome potential [6, 7, 8].
Transfer learning (TL) uses pre-trained models to extract significant information and features from one domain and apply them to a related but distinct separate domain. In the context of medical imaging, TL has emerged as an effective approach to deal with issues including the lack of labeled data and the need for robust feature extraction [9]. Researchers can initiate their work with models that have already been pre-trained on extensive, general-purpose datasets, possessing a feature set capable of identifying and capturing significant attributes. The models can then be fine-tuned for the particular medical imaging needs at hand or the particular imaging dataset requiring segmentation [10, 11]. Intermediate task training (ImTT), a subset of TL, involves training a model on a series of related tasks before fine-tuning on the target task [12, 13]. Recent studies have demonstrated the efficacy of TL and ImTT in various medical imaging tasks, including lesion segmentation [14], tumor classification [11], and tissue compartments identification [15].
To explore the potential of TL in 2D based deep learning models, we conducted a comprehensive meta-analysis focusing on the index lesion segmentation in stroke patients that had imaging studies obtained in the subacute and chronic phase after an index stroke event had occurred. The emphasis on the index lesion in our paper was done with the understanding that there are other abnormal regions in a brain with a stroke that might be due to chronic small vessel disease related to the presence of long-standing vascular risk factors (e.g., smoking, high blood pressure, hyperlipidemia, diabetes, etc) or due to secondary degeneration of white matter tracts induced by the index lesion and more visible in the subacute and chronic stroke phase. Our study compared the performance of models utilizing ImTT with those using mixed data. Furthermore, we explored the effectiveness of ensembling fine-tuned models trained on mixed data by incorporating a binary overlap (stacking) and a novel window approach. Our study also emphasized the fact that through these techniques, 2D models can achieve comparable accuracy to hand-drawn lesion masks while requiring fewer computational resources.
## 2 Methodology
### Selection of MRI data
We examined 602 anonymized MR brain images and their corresponding hand-drawn lesion maps that resulted from a consensus of multiple experienced raters, curated from the Anatomical Tracings of Lesions After Stroke (ATLAS) dataset [16] (547 high-resolution T1w MR brain images) and our anonymized and de-identified in-house dataset (55 high-resolution T1w MR brain images, either 1X1X1 acquisition voxel size or resampled to 1X1X1 voxel size). This encompassed a wide range of lesions from acute, subacute, and chronic stroke patients (as defined by [16]) in various brain regions, as defined in Section 2.3.
### Preprocessing of MRI data
Several preprocessing measures were conducted to prepare the MRI data for further analysis. These steps were necessary to standardize the data and enhance the performance of our deep learning models. Firstly, all MRI images were resampled to a consistent size of 256x256x128 voxels to ensure uniformity across the dataset. The skull, meninges, and ventricles were then removed/excluded from the images. Subsequently, the images and their corresponding masks were sliced in the axial plane to generate appropriate input to our 2D models. To enhance the robustness of our model, various
image augmentation techniques were applied to generate multiple variants of the original images. This approach increases the dataset size and introduces variability, which ultimately improves the model's ability to generalize and perform well on unseen data. The augmentation process was carried out using a custom Python function leveraging the ImageDataGenerator class from the Keras library [17]. Finally, all images and masks were normalized with the mask values being binarized.
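A minimal sketch of the slicing, normalization, and augmentation steps just described is given below. It assumes the volumes have already been resampled to 256x256x128 and skull-stripped; the augmentation parameters are illustrative placeholders rather than the values used in the study, whose custom augmentation function is described above.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def slice_and_normalize(volume, mask):
    """Slice a (256, 256, 128) volume and its lesion mask along the axial plane,
    normalize image intensities to [0, 1], and binarize the mask values."""
    images, masks = [], []
    for z in range(volume.shape[-1]):
        img = volume[..., z].astype(np.float32)
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        images.append(img[..., None])                            # add channel axis
        masks.append((mask[..., z] > 0).astype(np.float32)[..., None])
    return np.stack(images), np.stack(masks)

# Illustrative augmentation settings (placeholders; the study's exact values are in the SI).
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.05,
                             height_shift_range=0.05,
                             horizontal_flip=True)

# Intended usage: the same seed keeps image and mask transforms in sync.
# imgs, msks = slice_and_normalize(volume, mask)
# img_batches = datagen.flow(imgs, batch_size=16, seed=42)
# msk_batches = datagen.flow(msks, batch_size=16, seed=42)
```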
### Splitting the whole brain into multiple regions
Based on our comprehensive understanding of anatomy, major vascular systems, and clinical knowledge regarding a stroke lesion's impact of particular brain systems (the motor system in our dataset), we divided the whole brain into four different _"super-regions"_, which were gross-anatomically defined. Dividing the brain into these major _super-regions_ allowed us to identify which model architectures learned and captured different unique complexities associated with each region. Super Region 1 (R1) consists of the inferior frontal region and sub-cortical deep gray matter regions of the brain (e.g., basal ganglia, thalamus, insular cortex)
Super Region 2 (R2) consists of the cerebellum and three parts of the brainstem (medulla, pons, and midbrain regions).
Super Region 3 (R3) consists of the superior part of the frontal lobe and the entire occipital lobe and parietal lobe.
Super Region 4 (R4) consists of the temporal lobe, limbic lobe (e.g., cingulate gyrus), and fronto-mesial part of the brain.
All the sub-regions in each of the _super-regions_ were selected from the Talairach atlas [18, 19] as shown in Figure 1. The sub-regions were added together and multiplied by distinct intensity values using the FSL software. As shown in Section 3.2, the models' performance was improved by this approach when ensemble methods were applied.
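The same construction can be sketched programmatically with nibabel; the Talairach label IDs and file names below are placeholders (the actual sub-region lists follow the atlas look-up table), and in the study the equivalent add-and-scale operations were carried out with FSL.

```python
import numpy as np
import nibabel as nib

# Placeholder mapping from super-regions to Talairach label IDs.
SUPER_REGIONS = {
    1: [10, 11, 12],   # inferior frontal + deep gray matter labels (hypothetical IDs)
    2: [20, 21, 22],   # cerebellum + brainstem labels (hypothetical IDs)
    3: [30, 31],       # superior frontal, occipital, parietal labels (hypothetical IDs)
    4: [40, 41],       # temporal, limbic, fronto-mesial labels (hypothetical IDs)
}

def build_super_region_volume(atlas_path, out_path):
    """Write one volume in which every super-region carries a distinct intensity."""
    atlas = nib.load(atlas_path)
    labels = atlas.get_fdata()
    combined = np.zeros(labels.shape, dtype=np.float32)
    for region_id, label_ids in SUPER_REGIONS.items():
        combined[np.isin(labels, label_ids)] = region_id
    nib.save(nib.Nifti1Image(combined, atlas.affine), out_path)

# build_super_region_volume("talairach_labels.nii.gz", "super_regions.nii.gz")  # assumed file names
```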
### Model training
We conducted an extensive study by training and evaluating eight distinct model architectures to enhance the analysis. These architectures encompass a wide variety of approaches, including U-Net, U-Net++, Residual U-Net (ResUNet), Residual Network (ResNet), VGG U-Net (VGGUNet), V-Net, Fully Convolutional DenseNet (FC DenseNet), and Attention U-Net (Att UNet). All models were trained on a high-performance computer equipped with 128 gigabytes of RAM, an NVIDIA GeForce RTX 3080 GPU with 12 gigabytes of memory, and an AMD Threadripper Pro 5955WX Processor, running on the Ubuntu 20.04 operating system. Detailed information about the hyper-parameters and model architectures can be found in the supporting information (SI) section of our manuscript.
Our meta-analysis consisted of primarily comparing two different model training and fine-tuning approaches: training and validation on mixed data and training using TL with ImTT. Our aim was to compare the efficacy of both approaches for the specific task of subacute/chronic stroke lesion segmentation.
Figure 1: MNI152 1mm brain with separate _super-regions_ marked in different signal intensities
#### 2.4.1 Mixed data
We trained, validated and tested the models on slices of all 547 MR images along with their corresponding hand-drawn lesion masks from the ATLAS v2.0 dataset and 33 of the image mask pairs from our in-house dataset. In this case, the training set was made up of 80% of all 2D slices of all MR image datasets, while the validation set (used for hyperparameter tuning) and testing set each contained 10%. Finally, the models were evaluated on the independent test set consisting of the remaining 22 MR brain images from the in-house dataset.
#### 2.4.2 Intermediate Task Training
We initially trained the models on slices of 447 MR images along with their corresponding hand-drawn lesion masks curated from the ATLAS v2.0 dataset. Next, we divided the remaining 100 MR images from the ATLAS into four groups based on the lesion locations, as depicted in Section 2.3. We then fine-tuned the models on the brains and corresponding hand-drawn lesion masks of specific _super-regions_, and further fine-tuned them on our target task, i.e., segmenting our in-house dataset, using 33 of its T1w MR images. These models too were evaluated on the independent test set as described previously. The suffix _FT indicates models trained using ImTT.
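The staged fine-tuning can be summarized by a small Keras helper of the following form; the model file names, array names, optimizer, learning rates, and epoch counts are illustrative assumptions, since the study's exact hyper-parameters are listed in the SI.

```python
import tensorflow as tf

def fine_tune(model_path, images, masks, lr=1e-4, epochs=20, out_path=None):
    """One fine-tuning stage: reload a previously trained segmentation model,
    continue training on the given 2D slices and masks, optionally save it."""
    model = tf.keras.models.load_model(model_path, compile=False)
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="binary_crossentropy")
    model.fit(images, masks, batch_size=16, epochs=epochs)
    if out_path:
        model.save(out_path)
    return model

# Intended usage (file and array names are placeholders):
# Stage 1 -- intermediate task: ATLAS slices restricted to one super-region.
# fine_tune("unet_atlas_pretrained.h5", region_imgs, region_msks,
#           lr=1e-4, out_path="unet_region_FT.h5")
# Stage 2 -- target task: the 33 in-house T1w training images, smaller learning rate.
# fine_tune("unet_region_FT.h5", inhouse_imgs, inhouse_msks,
#           lr=1e-5, out_path="unet_inhouse_FT.h5")
```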
### Ensembles of trained models
In addition to using two different training approaches, we examined different ensembles of these eight unique model architectures to evaluate whether a collective strategy would produce superior results compared to relying on a single model after applying the binary mask to the predicted lesion masks. We implemented two ensemble techniques as detailed below.
#### 2.5.1 Stacking
The stacking technique employed in this study utilized a binary overlap method. The final mask for each image was determined by considering the overlapping voxels resulting from stacking individual model predictions. To construct the stacks, we chose the four top-performing models per _super-region_, and analyzed all of their possible combinations to derive the best ensemble. The stacking method has been illustrated in Figure 2.
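In NumPy, the binary-overlap stacking amounts to an intersection of the binarized model outputs; the binarization threshold of 0.5 below is an assumption.

```python
import numpy as np

def stack_predictions(predictions, threshold=0.5):
    """Binary-overlap stacking: keep only the voxels predicted as lesion by every
    model in the ensemble. `predictions` is a list of probability maps of equal shape."""
    binary = [p >= threshold for p in predictions]
    return np.logical_and.reduce(binary).astype(np.uint8)

# Toy example with three random 3D prediction maps:
rng = np.random.default_rng(0)
preds = [rng.random((4, 4, 4)) for _ in range(3)]
print(stack_predictions(preds).sum(), "voxels survive the overlap")
```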
#### 2.5.2 Agreement Window
The agreement window ensemble method is a novel approach developed for this study that combined a window kernel with stacking. The 3D window kernel was designed to retain the union of predicted voxels if their stacking overlap was greater than a certain threshold; whereas it would discard all voxels in the window if the minimum overlap was not satisfied. This window was convolved on the chosen stack of individual model predictions to generate a final outcome. While binary stacking generally resulted in a smaller, less descriptive mask, this method allowed us to eliminate predicted noise from individual models, as well as retain a more accurate mask shape around the targeted index lesion area. Different values of the window size and overlap threshold were tested to identify the optimal parameters. The agreement window method is illustrated in Figure 2.
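A sketch of the agreement window under one reading of the description above: the volume is tiled with cubic windows, and inside each window the union of predicted voxels is kept only if the fraction of those voxels on which all models agree reaches the minimum overlap (a stride-1 sweep would be the other natural reading). The window size and thresholds below are placeholders; the empirically optimal values are reported in Table 2.

```python
import numpy as np

def agreement_window(predictions, window=3, min_overlap=0.5, threshold=0.5):
    """Agreement-window ensembling of a list of probability maps of equal shape."""
    binary = np.stack([p >= threshold for p in predictions])   # (n_models, X, Y, Z)
    union = np.any(binary, axis=0)
    agreement = np.all(binary, axis=0)
    out = np.zeros(union.shape, dtype=np.uint8)
    X, Y, Z = union.shape
    for x in range(0, X, window):
        for y in range(0, Y, window):
            for z in range(0, Z, window):
                u = union[x:x+window, y:y+window, z:z+window]
                a = agreement[x:x+window, y:y+window, z:z+window]
                n_union = u.sum()
                if n_union and a.sum() / n_union >= min_overlap:
                    out[x:x+window, y:y+window, z:z+window] = u
    return out
```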
### Evaluation metrics
We assessed our results using four distinct metrics: dice coefficient, Jaccard Index (also known as IoU), precision, and recall to gain insight into the performance of the models and ensemble methods. Detailed definitions of each of the chosen evaluation metrics and their relevance can be found in the SI.
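For reference, the four metrics can be computed from a predicted and a ground-truth binary mask as follows (standard definitions; the small `eps` only guards against empty masks).

```python
import numpy as np

def segmentation_metrics(pred, truth, eps=1e-8):
    """Dice coefficient, Jaccard index (IoU), precision, and recall for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, iou, precision, recall
```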
### Evaluating lesion impact on motor regions
We employed a systematic approach to assess the impact of brain lesions in stroke patients on relevant or eloquent canonical brain structures, such as the corticospinal tract (CST), connecting the motor cortex with alpha motor neurons in the spinal cord. The canonical CST should be understood as a surrogate structural marker of the extent, intersubject variability as well as anatomical variability of a relevant motor system [20, 21]. Based on the CST-lesion load value for either side
of the brain [20], we classified the impacts into three categories based on increasing lesion load values: Small, Medium, and Large.
* Large: Indicates a high lesion load, corresponding to a severe neurological impairment.
* Medium: Indicates a medium lesion load, corresponding to a moderate neurological impairment.
* Small: Indicates a small lesion load, corresponding to a mild neurological impairment.
This evaluation parameter not only helps in understanding the anatomical impacts of lesions on a relevant brain system but also examines the reliability of the model predictions as detailed in Section 3.3.
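A simplified sketch of how lesion volume and a CST-weighted lesion load can be read off a predicted mask and a canonical CST map in the same space is shown below. The weighting and the Small/Medium/Large cut-offs used in the study follow ref. [20] and are not reproduced here, so the file names and cut-off values are placeholders.

```python
import numpy as np
import nibabel as nib

def lesion_volume_ml(lesion_img):
    """Lesion volume in millilitres: voxel count times voxel volume (mm^3 -> ml)."""
    data = lesion_img.get_fdata() > 0
    voxel_mm3 = float(np.prod(lesion_img.header.get_zooms()[:3]))
    return data.sum() * voxel_mm3 / 1000.0

def cst_lesion_load(lesion_img, cst_img):
    """Simple surrogate for the weighted lesion load: sum of the canonical CST map
    over lesioned voxels (both images assumed to be in the same space)."""
    lesion = lesion_img.get_fdata() > 0
    return float(cst_img.get_fdata()[lesion].sum())

def severity(load, small_cutoff=1.0, large_cutoff=5.0):
    """Map a lesion-load value to Small / Medium / Large (cut-offs are placeholders)."""
    if load < small_cutoff:
        return "Small"
    return "Medium" if load < large_cutoff else "Large"

# lesion = nib.load("predicted_lesion.nii.gz")   # assumed file names
# cst = nib.load("canonical_CST.nii.gz")
# print(lesion_volume_ml(lesion), cst_lesion_load(lesion, cst))
```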
Figure 2: Comprehensive workflow – visualizing the end-to-end process with model training and ensembling approaches
## 3 Results
### Performance assessment of 2D-based DL models
Based on our evaluation of the models using an independent set of 22 T1w MR Sequences from our in-house dataset and the chosen evaluation metrics, we observed that the ImTT training approach is more effective than the mixed-data approach, particularly when considering the entire brain. Additionally, the evaluation of predictions from a single model revealed that the Att-UNet_FT model outperforms all other models in terms of the DSC, IoU and ER LL metrics while the ResNet_FT model has the lowest error rate in predicting the Lesion Volume as shown in Table 1.
### Performance assessment of model prediction in _super-region-wise_ and ensemble methods
In comparison to performing a whole-brain analysis, we identified four anatomically distinct brain _super-regions_ and evaluated the performance of individual models and ensemble methods in calculating critical lesion information - lesion volume and lesion load.
Following the labeling of stroke locations in our training data with their corresponding brain _super-regions_, we selected the top 4 performing models for each _super-region_ and applied the two ensemble methods as described in Section 2. In our analysis, we combined _super-regions_ 1 and 4, as we observed that all lesions present in _super-region_ 1 also overlapped with _super-region_ 4.
Table 2 presents the _super-region-wise_ individual model and ensemble performances. It is observed that the agreement window ensemble method consistently produces lower error rates than the stacking method for all _super-regions_, while only being outperformed by the Att UNet_FT model for strokes present in _super-region_ 3. Specifically, when evaluated based on lesion load, the agreement window predictions are significantly improved and have a lower average error rate across all _super-regions_, as opposed to the average error rates from individual models and binary stacking.
Table 3 presents a comprehensive comparison of the top-performing model(s) for all of the different whole-brain and _super-region-wise_ ensemble methods described in this paper. The _super-region-specific_ predictions of the top performing models from each _super-region_ (as shown in Table 2) were aggregated to generate a final _super-region-wise_ lesion prediction for a target brain.
Examining the results, we notice that _super-region-wise_ ensembles surpass the performance of whole-brain predictions by individual models and even whole-brain ensembles. Furthermore, the effectiveness of the agreement window method is seen by its superior performance in both the whole-brain and _super-region-wise_ approaches, with the _super-region-wise_ agreement window method generating the best lesion volume and lesion load predictions.
\begin{table}
\begin{tabular}{c c c c c c c c c c c c} \hline \hline & & \multicolumn{3}{c}{**Mixed Data**} & \multicolumn{6}{c}{**Intermediate**} & **Task Training** \\
**Model** & DSC & IoU & Prec & Rec & ER LV & ER LL & DSC & IoU & Prec & Rec & ER LV & ER LL \\ \hline Att UNet & 0.647 & 0.497 & **0.754** & 0.584 & 64.65 & 23.07 & **0.701** & **0.552** & 0.656 & **0.789** & **35.74** & **22.91** \\ FC DenseNet & 0.580 & 0.442 & 0.600 & 0.602 & 75.30 & 44.64 & **0.715** & **0.564** & **0.711** & **0.738** & **41.43** & **27.70** \\ ResNet & 0.623 & 0.467 & **0.772** & 0.545 & 87.87 & 46.53 & **0.661** & **0.523** & 0.590 & **0.807** & **38.34** & **30.09** \\ ResUNet & 0.622 & 0.434 & 0.601 & 0.646 & 99.33 & 46.22 & **0.654** & **0.514** & **0.631** & **0.748** & **72.18** & **33.58** \\ UNet & 0.602 & 0.455 & **0.609** & 0.679 & 89.00 & 36.29 & **0.607** & **0.461** & 0.572 & **0.708** & **81.70** & **35.24** \\ UNet++ & **0.686** & **0.597** & 0.653 & **0.695** & **30.12** & **23.28** & 0.633 & 0.512 & **0.655** & 0.629 & 67.83 & 25.35 \\ VGG UNet & 0.570 & 0.414 & **0.628** & 0.583 & 74.56 & 41.16 & **0.615** & **0.473** & 0.590 & **0.702** & **60.22** & **32.63** \\ VNet & **0.711** & **0.569** & 0.697 & **0.773** & 62.60 & **24.16** & 0.700 & 0.552 & **0.755** & 0.685 & 49.00 & 25.28 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of performance of models segmenting the lesions in the whole brains of independent validation set. (DSC = Dice Similarity Coefficient, Prec = Precision, Rec = Recall, ER LV = % Error Rate of Lesion Volume, ER LL = % Error Rate of Lesion Load)
### Performance assessment of final predictions using lesion load
Figure 3 illustrates the comparison between the three lesion categories of hand-drawn lesion masks and predicted lesion masks, based on their respective CST lesion load values. Out of 22 predictions, 18 exhibit category levels within the same range, and most instances have a significantly low error rate. The overall error rate for the models used is presented in Table 2.
Figure 3 clearly shows three instances of outliers or inaccurate predictions in the final outcomes. Across all three cases, the model overestimated the CST lesion load values, incorrectly moving one from the small to the medium range and two from the medium to the large range. This trend signifies an overestimation of the lesion by the model, which can be viewed more favorably than an underestimation, as the overestimation ensures that true lesions are not left out.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Super Region(s)** & **Method** & **Top Model(s)** & **ER LV** & **ER LL** \\ \hline \multirow{8}{*}{1 \& 4} & \multicolumn{2}{c}{Att UNet\_FT} & 139.97 & 18.41 \\ & \multicolumn{2}{c}{} & FC DenseNet\_FT & 188.51 & 15.04 \\ & \multicolumn{2}{c}{} & ResNet\_FT & 51.16 & 17.40 \\ & \multicolumn{2}{c}{} & UNet++ & 189.52 & 14.90 \\ & \multicolumn{2}{c}{} & FC DenseNet\_FT, UNet++ & 43.37 & 14.76 \\ & \multicolumn{2}{c}{} & FC DenseNet\_FT, ResNet\_FT & 46.52 & 15.92 \\ & \multicolumn{2}{c}{} & FC DenseNet\_FT, UNet++ (2, 0.75) & 38.37 & **7.73** \\ & \multicolumn{2}{c}{} & FC DenseNet\_FT, UNet++ (2, 0.75) & **33.45** & 8.42 \\ \hline \multirow{8}{*}{2} & \multirow{8}{*}{Individual} & \multicolumn{2}{c}{Att UNet} & 99.03 & 13.83 \\ & \multicolumn{2}{c}{} & ResNet\_FT & 51.48 & 20.46 \\ & \multicolumn{2}{c}{} & ResUNet & 82.83 & 16.33 \\ & \multicolumn{2}{c}{} & VNet & 50.91 & 15.79 \\ & \multicolumn{2}{c}{} & Att UNet, ResUNet & 29.95 & 20.83 \\ & \multicolumn{2}{c}{} & Att UNet, VNet & 28.25 & 20.04 \\ & \multicolumn{2}{c}{} & Att UNet, VNet (3, 0.5) & 25.81 & 12.56 \\ & \multicolumn{2}{c}{} & Att UNet, VNet (4, 0.5) & **26.62** & **13.89** \\ \hline \multirow{8}{*}{3} & \multicolumn{2}{c}{} & Att UNet & 16.15 & 18.14 \\ & \multicolumn{2}{c}{} & Att UNet\_FT & **11.78** & **11.09** \\ & \multicolumn{2}{c}{} & UNet++ & 14.52 & 16.21 \\ & \multicolumn{2}{c}{} & VNet & 27.65 & 16.10 \\ & \multicolumn{2}{c}{} & Att UNet, VNet & 13.92 & 20.64 \\ & \multicolumn{2}{c}{} & UNet++, VNet & 11.86 & 15.12 \\ & \multicolumn{2}{c}{} & UNet++, VNet (2, 0.75) & 11.72 & 14.65 \\ & \multicolumn{2}{c}{} & UNet++, VNet (3, 0.5) & 10.89 & 15.01 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of ER LV and ER LL across different models’ _super-region-wise_ predictions and ensemble approaches. Top 4 models per _super region(s)_ were chosen; all combinations of which were then ensembled, out of which the two best combinations are reported. The optimal (window size, overlap threshold) parameters found empirically for the agreement window approach are denoted next to the respective model names.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
**Scope** & **Method** & **Top Model(s)** & **DSC** & **ER LV** & **ER LL** \\ \hline Whole-Brain & Mixed Data & UNet++ & 0.686 & 30.12 & 23.28 \\ Whole-Brain & ImTT & Att UNet\_FT & 0.701 & 35.74 & 22.91 \\ Whole-Brain & Stack & Att UNet\_FT, UNet++ & 0.720 & 41.34 & 26.93 \\ Whole-Brain & AW & Att UNet\_FT, UNet++ (3, 0.5) & 0.726 & 41.25 & 22.11 \\ & \multicolumn{2}{c}{} & [R1, R4]; FC DenseNet\_FT, UNet++ & & & \\ Super-Region-wise & Stack & [R2]: AttUNet, VNet & 0.719 & 37.87 & 23.44 \\ & \multicolumn{2}{c}{} & [R3]: AttUNet\_FT & & & \\ Super-Region-wise & AW & [R1, R4]: FC DenseNet\_FT, UNet++ (3, 0.75) & & & \\ Super-Region-wise & AW & [R2]: AttUNet, VNet (3, 0.5) & & **0.736** & **25.58** & **15.47** \\ & \multicolumn{2}{c}{} & [R3]: AttUNet\_FT & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of top models and their ensembles based on all of the techniques presented in the study. (AW = Agreement Window)
### Visual comparison
Figure 4 presents a comparative analysis of prediction results from two distinct training approaches for models, as discussed in Section 2.4. **A, B, C** shows three separate predictions generated by three different model architectures on brain MR images in the independent validation set. It is evident that the mixed data approach leads to some false positives in the models' predictions, which are effectively eliminated when employing the ImTT training approach.
Figure 5 presents a comparative analysis of the prediction results from various approaches applied to the independent validation set. For a more effective comparison, the figure displays predictions corresponding to lesions in two separate _super regions_, along with their hand-drawn ground truth lesion masks. In this instance, the lesions are present in _super-regions_\(2\) and \(4\). In accordance with the Table 2, the models which are ensembled are Att UNet, VNet and FC DenseNet_FT, UNet++ for _super-regions_\(2\) and \(4\) respectively. The agreement window has a dimension of 2X2X2 pixels and a minimum overlap threshold of 75% for the _super-region_\(4\) ensemble, compared to a dimension of 3X3X3 pixels with a minimum overlap threshold of 50% for the _super-region_\(2\) ensemble.
It can be observed that the agreement window ensemble methods tend to marginally overpredict the lesion area. However, it effectively predicts the targeted lesion area and exhibits almost no false positives in areas outside the lesion's location compared to other approaches.
## 4 Discussion and Conclusion
### Interpretability of results
Our study not only offers valuable insights into the comparative performance of ensemble methods when used in conjunction with TL or mixed data training approaches but also provides a basis of comparison with other recent works using the ATLAS dataset [22, 23, 24, 25]. The primary goal of our research was to demonstrate that 2D-based models, when combined with TL and ensemble techniques, can achieve fast, user-experience independent, accuracy levels comparable to hand-drawn or ground truth lesion masks.
In comparison to recent works by [22, 23, 24], where the ATLAS dataset is utilized for model training, the achieved Dice Similarity Coefficient (DSC) on the validation set are 0.723, 0.592, and 0.650 respectively. In our study using the ATLAS dataset, our best performing approach yields a DSC
Figure 3: Comparison between hand-drawn and model-predicted lesion masks, including severity levels on motor region based on lesion load value. The x-axis are the individual subjects aligned according to their ground truth and model predicted CST-lesion load (from left to right in ascending order). The circles in **B** denote outliers or inaccurate predictions made by the model.
of 0.736 on the independent dataset, demonstrating the effectiveness of our method in segmenting lesions in T1w MR sequences. Furthermore, our study provides insights into lesion volume and lesion load of relevant brain structures, contributing to a more comprehensive understanding of each lesion's impact.
As we observe in Table 2 and 3, the improvement seen using the _super-region-wise_ prediction can be attributed by the adaptability, reduced complexity, and noise reduction capabilities of specific model architectures, leading to more effective segmentation in particular _super-regions_ for some models. Furthermore, the empirically derived parameters ensure that our ensemble methods are tailored to the main aim of our current approach, namely predicting the index stroke lesion.
However, our individual models and ensemble methods have also been able to detect more widespread small vessel disease lesions in the stacking method, as seen in the highlighted circles in Figures 4A and 5C. Upon further study, a tailored adjustment of parameters in the combined stacking and agreement window algorithm would allow us to detect small vessel disease (SVD) lesions, in particular SVD lesions leading to T1-hypointensities [26], as well as the degeneration of long-range tracts as a secondary effect of the index lesion, which can also lead to a decrease in T1 signal; this signal decrease is seen in the anatomical course of the descending corticospinal tract (CST) [27, 28].
### Advantages and limitations in ensemble methods
Three important innovations were made in our study. Along with applying TL, we implemented two ensemble methods: the stacking method and the stacking method combined with the window algorithm. A primary advantage of employing the stacking method lies in its ability to effectively reduce a significant number of false positives that frequently occur when relying solely on a single model's predictions. This improvement in accuracy can be attributed to the integration of predictions from multiple models, which consequently increases the likelihood of identifying true index
Figure 4: Comparison of hand-drawn lesion maps and lesion map predictions from two distinct model training approaches. **A, B, C** display predictions from ResNet, FC DenseNet, and ResUNet (trained with mixed data) alongside ResNet_FT, FC DenseNet_FT, and ResUNet_FT (trained using ImTT) respectively.
lesion areas. Furthermore, the integration of the window algorithm with stacking provides an additional advantage by not only considering the overlapping predictions but also detecting edges of the lesions that might have been excluded otherwise.
However, ensemble methods also come with some limitations. The stacking method tends to underpredict lesions in some cases due to overlapping, which may lead to missed detections and a potentially incomplete assessment of the targeted lesion area. Conversely, the combined stacking and window algorithm tends to overpredict the lesions by approximately 10% in other cases. While this overprediction can result in an inflated estimation of lesion presence, we accept this trade-off in light of the overall enhancement in model performance. Furthermore, the overprediction might be due to an increased sensitivity of the combined stacking and agreement window algorithm in detecting secondary lesions (e.g., degeneration of the corticospinal tract distal to an index lesion - which might also lead to a decrease in signal in T1w images) as well as small vessel disease lesions in areas of the brain that are unrelated to the index stroke.
### Clinical implications and potential applications
Our findings have considerable implications for various applications in clinical practice and in translational research, particularly in improving the accuracy and speed of diagnosis, prognosis, and treatment planning for acute, subacute and chronic stroke patients. By adopting these methods, we can improve the detection and characterization of lesions, leading to more informed decision-making, better patient outcome predictions, as well as fast and accurate stratification of stroke patients based on lesion load data. One critical aspect of the management and treatment of stroke patients is an understanding of the impact of a lesion and correlating it to the patients' functional impairments and to their possible gains in standard rehabilitation therapy as well as in experimental therapies. Our approach allows for the rapid and precise calculation of lesion volume and lesion load of relevant systems in the brain, e.g., the CST as a surrogate structure of the motor system, enabling clinicians to evaluate the severity of the stroke, predict the likelihood of recovery as well as determine post hoc the failure of recovery, develop targeted rehabilitation strategies to
Figure 5: Comparison of hand-drawn lesion maps and predictions from distinct models and ensemble approaches. **A** represents the actual T1w MR scan and ground truth. **B** represents the predictions made by different models, and their super-region-wise ensembles. **C** represents a zoomed out view of the final super-region-wise ensemble and the subsequent final prediction superimposed on the actual T1w MR scan
reduce the effects of stroke-related disabilities, and provide a stratification tool for enrollment in various clinical trials.
### Future work
Future research work will concentrate on automating lesion segmentation and rapidly measuring the impact of a lesion on relevant brain systems utilizing machine learning approaches. Furthermore, exploring personalized treatment strategies based on individual lesion characteristics, patient demographics (e.g., biological age versus calendar age), small vessel disease lesion load, surrogate markers of brain health, and genetic factors could result in more targeted and efficient therapeutic interventions. Longitudinal studies as well as multimodal MR sequences could be pursued for fine-grain analysis of lesion progression over time.
### Conclusion
Our study underscores the potential and relevance of applying TL to fine-tune large pre-trained 2D-based models on specific, targeted data. This approach facilitates precise segmentation of lesions in both subacute and chronic stroke patients.
The models we built for this study take into consideration unique aspects of the task at hand, focusing on lesion load and lesion volume.
A comparative analysis of different 2D model architectures was also conducted, providing valuable insights into their respective strengths and weaknesses. Our findings elucidate how variations in model architectures can influence the learning patterns of the models in different regions of the brain. This suggests that different models, due to their architectural differences, can learn differently from the specific features of various brain regions.
In essence, our research not only substantiates the effectiveness of 2D-based models when fine-tuned with Transfer Learning and used in conjunction with ensemble methods, but also emphasizes the importance of choosing the appropriate model architecture for particular brain regions and task-specific criteria. Our work, therefore, offers meaningful contributions to the task of lesion segmentation in stroke patients and may pave the way for further improvements in stroke diagnosis and prognosis.
### Supplement information
All the tables and definitions mentioned in the manuscript are available in the supplement information. The link here is [https://shorturl.at/nqIJK](https://shorturl.at/nqIJK).
### Acknowledgement
GS and AS were partly supported by a grant from NIMH (Brain-Initiative) (7R01MH111874-05); the data analysis and computing expenses were supported by an in-kind financial contribution from Brainify, LLC.
### Author contributions
Conceptualization: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug
Data curation: Anant Shinde, Sirisha Nouduri, Gottfried Schlaug
Formal analysis: Sovesh Mohapatra, Advait Gosai, Aleksei Rutkovskii
Funding acquisition: Anant Shinde, Gottfried Schlaug
Investigation: Sovesh Mohapatra, Advait Gosai, Aleksei Rutkovskii
Methodology: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug
Project administration: Sovesh Mohapatra, Gottfried Schlaug
Resources: Gottfried Schlaug
Supervision: Gottfried Schlaug
Clinical Validation: Sirisha Nouduri, Anant Shinde, Gottfried Schlaug
Writing - original draft: Sovesh Mohapatra, Gottfried Schlaug
Writing - review and editing: Sovesh Mohapatra, Advait Gosai, Gottfried Schlaug |
2309.01382 | A symmetry perspective of the Riemann zeros | We study the relationship between the zeros of the Riemann zeta function and
physical systems exhibiting supersymmetry, $PT$ symmetry and $SU(2)$ group
symmetry. Our findings demonstrate that unbroken supersymmetry is associated
with the presence of non-trivial zeros of the zeta function. However, in other
cases, supersymmetry is spontaneously broken and the ground state energy of the
system is not zero. Moreover, we have established the manifestation of PT
symmetry invariance within our supersymmetric system. In addition, our findings
provide insights into a $SU(2)$ symmetry that arises within these systems, with
the Hilbert space having a two-level structure. | Pushpa Kalauni, Prasanta K. Panigrahi | 2023-09-04T06:20:09Z | http://arxiv.org/abs/2309.01382v1 | # A symmetry perspective of the Riemann zeros
###### Abstract
We study the relationship between the zeros of the Riemann zeta function and physical systems exhibiting supersymmetry, \(PT\) symmetry and \(SU(2)\) group symmetry. Our findings demonstrate that unbroken supersymmetry is associated with the presence of non-trivial zeros of the zeta function. However, in other cases, supersymmetry is spontaneously broken and the ground state energy of the system is not zero. Moreover, we have established the manifestation of PT symmetry invariance within our supersymmetric system. In addition, our findings provide insights into a \(SU(2)\) symmetry that arises within these systems, with the Hilbert space having a two-level structure.
## I Introduction
Supersymmetric quantum mechanics (SUSY QM) [1; 2; 3] is a theoretical framework that combines ideas from quantum mechanics and supersymmetry, a symmetry that relates particles with different spin values. One of the interesting features of SUSY QM is that its ground state energy always vanishes, a consequence of the supersymmetry of the theory.
Recently, there has been a fascinating development in the study of SUSY QM, which connects it to the Riemann Hypothesis [4], a famous problem in mathematics that deals with the distribution of prime numbers [5]. Specifically, it has been shown that the non-trivial zeros of the Riemann zeta function can be described as the ground state energy of certain SUSY QM models.
In some recent studies, the relationship between the Riemann zeta function and various aspects of Physics has been explored [6; 7; 8; 9; 10; 11]. In another study, the Riemann zeta function relates to scattering amplitudes in quantum field theory [12]. Additionally, the Riemann zeta function and its relevance to second-order supersymmetric quantum mechanics have been discussed [13].
The Riemann zeta function [14] is defined for complex values of \(s\) in the following form as
\[\zeta(s)= \frac{1}{(1-2^{1-s})}\sum_{n=1}^{\infty}(-1)^{n+1}n^{-s},\ \Re(s)>0. \tag{1}\]
The Riemann zeta function can also be defined in the complex plane by the contour integral [14]
\[\zeta(s)=\frac{\Gamma(1-s)}{2\pi i}\int_{C}\frac{t^{s-1}}{e^{-t}-1}dt, \tag{2}\]
where the contour of integration encloses the negative \(t\)-axis, looping from \(t=-\infty-i0\) to \(t=-\infty+i0\) enclosing the point \(t=0\).
The Riemann zeta function possesses both trivial and non-trivial zeros. Trivial zeros of the zeta function occur when \(s\) is negative even integer. These zeros can be obtained from the functional relation of the Riemann zeta function, which is given by
\[\zeta(s)=2^{s}\pi^{s-1}\sin(\frac{\pi s}{2})\Gamma(1-s)\zeta(1-s),\,s<1. \tag{3}\]
Non-trivial zeros are associated with the Riemann hypothesis, which asserts that all non-trivial zeros of the Riemann zeta function \(\zeta(s)\) lie on the critical line, the line in the complex plane where the real part of \(s\) is equal to \(1/2\). To incorporate trivial zeros into the supersymmetric model, a modified inner product was introduced, and the Hilbert space of the supersymmetric system was defined by implementing appropriate boundary conditions. This was discussed in more detail in a study by Kalauni et al. [15].
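As a quick numerical illustration (not part of the analysis below), the functional relation (3) and the trivial zeros can be checked with arbitrary-precision arithmetic, for instance with the mpmath library; the test point used here is arbitrary.

```python
from mpmath import mp, mpc, zeta, gamma, sin, pi

mp.dps = 30  # working precision

def zeta_via_functional_eq(s):
    """Right-hand side of the functional relation (3)."""
    return 2**s * pi**(s - 1) * sin(pi * s / 2) * gamma(1 - s) * zeta(1 - s)

s = mpc(0.3, 4.7)                                  # arbitrary test point with Re(s) < 1
print(abs(zeta(s) - zeta_via_functional_eq(s)))    # ~1e-30: the relation holds
print([abs(zeta(-2 * n)) for n in range(1, 4)])    # trivial zeros at s = -2, -4, -6
```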
In [15], we define a supersymmetric model in which the eigenvalue of the system arises in the form of \(\zeta(s)\zeta(1-s)\) (where \(s=\sigma+i\omega\), \(\sigma\) and \(\omega\) are real), which is the product of the Riemann zeta function evaluated at complex arguments \((\sigma+i\omega)\) and \((1-\sigma-i\omega)\). It was demonstrated that the states of the supersymmetric system under consideration form an orthonormal set of functions, when the Hilbert space is defined for a finite interval of real line, specifically in \(L^{2}(1,a)\) with parameters \(\sigma=1/2\) and \(a=e^{2\pi/\omega}\). As a result, it was
shown that the ground state energy vanishes only when \(\sigma=1/2\). However, it is natural to ask whether there are any additional indications which suggest that the ground state energy vanishes only when \(\sigma=1/2\). The aim of this paper is to gain a more comprehensive understanding of the supersymmetric system under consideration by exploring different symmetries within the system. To achieve this, we employ various methods, including analyzing the Witten index, examining \(PT\) symmetry, and investigating \(SU(2)\) symmetry present in the system.
In the next section, we provide a review of the SUSY QM model, which yields the Riemann zeta function as an eigenvalue of the system. In the third section, the Witten index is used to examine the unbroken and broken supersymmetry of the system. It is shown that for \(\sigma=1/2\), it gives real eigenvalues of the system while for other values of \(\sigma\), the energy of the system becomes complex, indicating a possible connection with \(PT\) symmetry [16; 17].
\(PT\) symmetry is a concept in quantum mechanics which refers to the behavior of physical systems that are invariant under the combined operations of parity (\(P\)) and time reversal (\(T\)). One of the interesting aspects of \(PT\) symmetry is that it can lead to the existence of complex eigenvalues and non-Hermitian operators in quantum mechanics. In the fourth section, we investigate the \(PT\) symmetry in our supersymmetric system and establish that it remains unbroken only if a specific condition holds. However, for other cases, PT symmetry is broken. In the fifth section, we show that this Hilbert space of the system forms a \(SU(2)\) symmetry (function-dependent) which is present in the system because of the presence of the trivial and the non-trivial zeros of the zeta function.
These results provide new insights into the underlying algebraic structures and symmetries of the system. This gives the motivation for comprehending the interrelations between the Riemann zeros and diverse symmetries, namely supersymmetry, \(PT\) symmetry, and \(SU(2)\) symmetry.
## II Supersymmetric system and Riemann zeta function
To begin, we review our supersymmetric model [15], which yields the Riemann zeta function as an eigenvalue. This supersymmetric model is defined by introducing lowering and raising operators (denoted by \(A\) and \(A^{\dagger}\), respectively), which is given as
\[A =x^{-i\omega}\Omega,\] \[A^{\dagger} =\Omega^{\dagger}x^{i\omega}, \tag{4}\]
where
\[\Omega=\frac{\Gamma(x\frac{d}{dx}+1)}{2\pi i}\int\limits_{C}\frac{t^{-x\frac{d}{dx}-1}}{e^{-t}-1}dt, \tag{5}\]
and \(\omega\) is real and contour \(C\) is as given in Eq. (2). When the operator \(\Omega\) (defined in Eq. (5)) acts on the monomial \(x^{-s}\), it produces the eigenvalues in terms of the Riemann zeta function as [18]
\[\Omega x^{-s} =\left[\frac{\Gamma(x\frac{d}{dx}+1)}{2\pi i}\int\limits_{C}\frac{t^{-x\frac{d}{dx}-1}}{e^{-t}-1}dt\right]x^{-s},\] \[=\left[\frac{\Gamma(1-s)}{2\pi i}\int\limits_{C}\frac{t^{s-1}}{e^{-t}-1}dt\right]x^{-s},\] \[=\zeta(s)\,x^{-s}. \tag{6}\]
Using these operators \(\Omega\) and \(\Omega^{\dagger}\), it is possible to construct a set of eigenstates for the Hamiltonian of the system, which correspond to different energy levels of the system. These eigenstates can be used to compute the spectrum of the system and determine its ground state energy.
The Hamiltonian of the supersymmetric system can be written in terms of a \(2\times 2\) matrix as
\[H= \left[\begin{array}{cc}H_{-}&0\\ 0&H_{+}\end{array}\right], \tag{7}\]
where
\[H_{-} =A^{\dagger}A=\Omega^{\dagger}\Omega,\] \[H_{+} =AA^{\dagger}=x^{-i\omega}\Omega\Omega^{\dagger}x^{i\omega}, \tag{8}\]
are supersymmetric partner Hamiltonians.
We define the wavefunction as \(x^{-s}\) with \(s=\sigma+i\omega\) (where \(\sigma\) and \(\omega\) are real). It gives eigenvalues of the system in terms of the Riemann zeta function as
\[H_{-}x^{-\sigma-i\omega}=\zeta(\sigma+i\omega)\zeta(1-\sigma-i\omega)x^{- \sigma-i\omega}. \tag{9}\]
Eq. (9) shows that \(H_{-}\) has real eigenvalues only if \(\sigma=1/2\). This means that when \(\sigma\neq 1/2\), then the eigenvalues of \(H_{-}\) will be complex numbers.
In our specific model [5; 15], the ground state energy of the system is described in terms of the Riemann zeta function. Our analysis indicates that the unbroken
supersymmetry present in the system results as a vanishing ground state energy of the system. We elaborate on the concept of unbroken supersymmetry and its relation to the ground state energy in the subsequent section by utilizing the Witten index.
## III Unbroken and broken supersymmetry
The Witten index is a powerful tool for studying supersymmetric theories. One of the most important applications of the Witten index is in determining whether or not supersymmetry is broken in a given theory.
To calculate the Witten index for a given system, we count the number of bosonic and fermionic states with zero energy, denoted as \(n_{B}^{E=0}\) and \(n_{F}^{E=0}\), respectively. These counts are done under three distinct conditions, which Witten identified in his paper [1]:
1. If \(n_{B}^{E=0}-n_{F}^{E=0}\neq 0\), then supersymmetry is unbroken.
2. If \(n_{B}^{E=0}=n_{F}^{E=0}=0\), supersymmetry is spontaneously broken.
3. If \(n_{B}^{E=0}\) and \(n_{F}^{E=0}\) are equal but non-zero, supersymmetry is unbroken.
We start by defining the wavefunction as \(x^{-s}\) with \(s=\sigma+i\omega\) (where \(\sigma\) and \(\omega\) are real) and for applying Witten index condition, we check that
\[H_{-}x^{-s} = \zeta(s)\zeta(1-s)x^{-s};\] \[H_{+}x^{s-1-i\omega} = \zeta(s)\zeta(1-s)x^{s-1-i\omega}, \tag{10}\]
which shows that when the parameter \(s\) is fixed, there exists a bosonic state denoted as \(x^{-s}\) and a corresponding fermionic state denoted as \(x^{s-1-i\omega}\), where both states contribute the same amount of energy. However, it is important to note that the bosonic and fermionic states are normalized and form a Hilbert space if \(\text{Re}(s)=1/2\). As we see that if \(\text{Re}(s)=1/2\), \(\text{Im}(s)=\omega\), and \(\zeta(1/2+i\omega)=0\), we find a bosonic state \(x^{-1/2+i\omega}\) and a corresponding fermionic state \(x^{-1/2-i\omega}\), both of which contribute zero energy to the ground state. This fulfills the third condition of the Witten index, where both \(n_{B}^{E=0}\) and \(n_{F}^{E=0}\) are equal to 1, resulting in zero difference and indicating unbroken symmetry. However, if \(\text{Re}(s)=1/2\) and \(\omega\) does not correspond to the imaginary part of the non-trivial zeros of the zeta function, then supersymmetry remains broken due to Witten's second condition.
For values of Re(s) other than 1/2, no solutions exist within the Hilbert space, resulting in both \(n_{B}^{E=0}\) and \(n_{F}^{E=0}\) being zero, which indicates symmetry breaking. This implies that when Re(s)\(\neq\)1/2, then the eigenvalues of \(H_{-}\) are complex numbers.
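This dichotomy is easy to illustrate numerically: at a non-trivial zero \(s=1/2+i\omega\) the eigenvalue \(\zeta(s)\zeta(1-s)\) vanishes, consistent with unbroken supersymmetry, while off the critical line it is generically a non-zero complex number. A minimal mpmath check (illustrative only, not part of the argument):

```python
from mpmath import mp, mpc, zeta, zetazero

mp.dps = 25

s0 = zetazero(1)                      # first non-trivial zero, s = 1/2 + i*14.1347...
print(abs(zeta(s0) * zeta(1 - s0)))   # ground-state energy ~ 0 (unbroken supersymmetry)

s1 = mpc(0.3, s0.imag)                # same omega, but Re(s) != 1/2
print(zeta(s1) * zeta(1 - s1))        # non-zero complex eigenvalue (broken supersymmetry)
```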
This peculiarity of complex eigenvalues can be linked to the \(PT\) symmetry of the system. In the following section, we elaborate on the \(PT\) symmetry of the system and its relation to the eigenvalues of the Hamiltonian.
## IV \(Pt\) unbroken and broken symmetry
In [15], it has been demonstrated that the \(n^{th}\) states of the system are normalized and constitute a complete set of functions in the interval \([1,a]\) of square-integrable functions \(L^{2}\). The value of \(a\) is related to non-trivial zeros of the zeta function, specifically, if \(\zeta(\frac{1}{2}+i\omega)=0\) where \(\omega=2\pi/\log a\). Additionally, these functions are also orthonormal in the interval \([-a,-1]\) of square-integrable functions \(L^{2}\).
In order to define \(PT\) symmetry, it is necessary to have a Hilbert space where the operations \(x\rightarrow-x\) and \(p\to p\) can be utilized. It indicates that we can use \(PT\) symmetry for this particular type of function, which may have important implications for the properties and behavior of the system.
To incorporate both positive and negative values of \(x\), we take the direct sum of these two Hilbert spaces denoted as \(L^{2}[-a,-1]\oplus L^{2}[1,a]\). Therefore, we define wave functions in the following form
\[\psi_{1}(x)=|x|^{-\sigma-i\omega},\qquad\psi_{2}(x)=\text{sgn}(x)|x|^{-\sigma-i\omega}. \tag{11}\]
If the Hamiltonian \(H_{-}\) is invariant under \(PT\) symmetry, it satisfies,
\[[PT,H_{-}]\psi_{1}(x)=0, \tag{12}\]
It follows that
\[\zeta(\sigma+i\omega)\zeta(1-\sigma-i\omega)-\zeta(\sigma-i\omega)\zeta(1- \sigma+i\omega)=0. \tag{13}\]
Similarly, we can check \(PT\) symmetry for \(\psi_{2}(x)\) as
\[[PT,H_{-}]\psi_{2}(x)=0, \tag{14}\]
which is true if Eq. (13) is satisfied.
An important insight derived from this connection is that \(PT\) symmetry in the system remains invariant in two cases: i) \(\mathrm{Re}(s)=\nicefrac{1}{2}\) and ii) \(\mathrm{Im}(s)=0\) (where \(s=\sigma+i\omega\)).
This finding implies that \(PT\) symmetry, similar to supersymmetry, remains unchanged when \(\mathrm{Re}(s)=\nicefrac{1}{2}\). However, for values of \(s\) other than those specified, the \(PT\) symmetry of the system is broken.
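A short numerical check of Eq. (13) (illustrative, again using mpmath) confirms that the condition holds identically for \(\mathrm{Re}(s)=1/2\) and for \(\mathrm{Im}(s)=0\), and fails otherwise; the values of \(\sigma\) and \(\omega\) below are arbitrary test points.

```python
from mpmath import mp, mpc, zeta

mp.dps = 25

def pt_defect(sigma, omega):
    """Left-hand side of Eq. (13); it vanishes when PT symmetry is unbroken."""
    s, sc = mpc(sigma, omega), mpc(sigma, -omega)
    return zeta(s) * zeta(1 - s) - zeta(sc) * zeta(1 - sc)

print(abs(pt_defect(0.5, 7.3)))   # ~0: Re(s) = 1/2, arbitrary omega
print(abs(pt_defect(0.3, 0.0)))   # ~0: Im(s) = 0
print(abs(pt_defect(0.3, 7.3)))   # non-zero: PT symmetry broken
```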
This system also possesses an underlying \(SU(2)\) algebraic structure, like a quasi-exactly solvable system [19]. In the following section, we demonstrate this wavefunction-dependent \(SU(2)\) symmetry, which arises subject to the same constraint outlined in Eq. (13).
## V \(Su(2)\) symmetry
\(SU(2)\) is the Lie group of special unitary transformations in two dimensions. Its generators are the angular momentum operators \(J_{-}\), \(J_{0}\), and \(J_{+}\), which satisfy the following commutation relations
\[[J_{+},J_{-}]=2J_{0},\] \[[J_{0},J_{\pm}]=\pm J_{\pm}. \tag{15}\]
To establish the link between the Riemann zeros and \(SU(2)\) symmetry, we begin by defining \(J_{-}\) and \(J_{+}\) using the lowering and raising operators of our supersymmetric system, with a normalizing factor of \(1/\sqrt{\zeta(k)\zeta(1-k)}\):
\[J_{-}=\frac{A}{\sqrt{\zeta(k)\zeta(1-k)}}= \frac{1}{\sqrt{\zeta(k)\zeta(1-k)}}x^{-i\omega}\Omega,\] \[J_{+}=\frac{A^{\dagger}}{\sqrt{\zeta(k)\zeta(1-k)}}= \frac{1}{\sqrt{\zeta(k)\zeta(1-k)}}\Omega^{\dagger}x^{i\omega}, \tag{16}\]
and we choose the values \(k=1/2\) for the non-trivial zeros and \(k=-2N-i\omega\) for the case of trivial zeros.
The commutation relation between \(J_{+}\) and \(J_{-}\) leads to,
\[[J_{+},J_{-}]=\frac{(\Omega^{\dagger}\Omega-x^{-i\omega}\Omega\Omega^{\dagger} x^{i\omega})}{\zeta(k)\zeta(1-k)}=2J_{0}, \tag{17}\]
where
\[J_{0}=\frac{(\Omega^{\dagger}\Omega-x^{-i\omega}\Omega^{\dagger}\Omega x^{i \omega})}{2\zeta(k)\zeta(1-k)}. \tag{18}\]
We define \(J_{x}\) and \(J_{y}\) in the following form,
\[J_{x}=\frac{\Omega^{\dagger}x^{i\omega}+x^{-i\omega}\Omega}{2 \sqrt{\zeta(k)\zeta(1-k)}},\] \[J_{y}=\frac{\Omega^{\dagger}x^{i\omega}-x^{-i\omega}\Omega}{2i \sqrt{\zeta(k)\zeta(1-k)}}. \tag{19}\]
The action of the generators \(J_{x}\), \(J_{y}\), and \(J_{0}\) on a function \(x^{-s}\) yields,
\[J_{x}x^{-s}=\frac{\zeta(1-s+i\omega)x^{-s+i\omega}+\zeta(s)x^{- s-i\omega}}{2\sqrt{\zeta(k)\zeta(1-k)}},\] \[J_{y}x^{-s}=\frac{\zeta(1-s+i\omega)x^{-s+i\omega}-\zeta(s)x^{- s-i\omega}}{2i\sqrt{\zeta(k)\zeta(1-k)}},\] \[J_{0}x^{-s}=\left[\frac{\zeta(s)\zeta(1-s)-\zeta(s-i\omega)\zeta (1-s+i\omega)}{2\zeta(k)\zeta(1-k)}\right]x^{-s}. \tag{20}\]
For \(SU(2)\), the Casimir operator is given by,
\[J^{2}=J_{x}^{2}+J_{y}^{2}+J_{0}^{2}, \tag{21}\]
that commutes with each of these generators \(J_{x}\), \(J_{y}\) and \(J_{0}\).
The Casimir operator acts on the function \(x^{-s}\) as
\[J^{2}x^{-s}= \left[\frac{\zeta(s-i\omega)\zeta(1-s+i\omega)+\zeta(s)\zeta(1-s )}{2\zeta(k)\zeta(1-k)}\right]x^{-s}\] \[+\left[\frac{\zeta(s-i\omega)\zeta(1-s+i\omega)-\zeta(s)\zeta(1- s)}{2\zeta(k)\zeta(1-k)}\right]^{2}x^{-s}, \tag{22}\]
where \(s=\sigma+i\omega\). The structures of the commutators and the Casimir operator on this Hilbert space are indicative of a Hilbert-space-dependent structure [20].
We will now demonstrate how the \(SU(2)\) algebra is related to the existence of both non-trivial and trivial zeros in zeta functions, possessing a Hilbert-space-dependent spin \(1/2\) structure.
### Non-trivial Riemann zeros
We define a \(SU(2)\) eigenstate as
\[|\sigma,m\rangle=\frac{1}{\sqrt{\log a}}x^{-\sigma+i(m-\frac{1}{2})\omega}, \tag{23}\]
where \(a=e^{2\pi/\omega}\). For a unitary irreducible representation with spin quantum number \(j\), the magnetic quantum number can take \(2j+1\) values. In the present case, the Hilbert space has a two-level \(\sigma=1/2\) representation. These two states can be represented as \(|\sigma,m\rangle=|1/2,1/2\rangle\) and \(|1/2,-1/2\rangle\). When \(\sigma=1/2\), these states form a discrete, orthonormal, and complete basis on a finite interval of the real line [15], and the orthonormality condition for this basis is given by
\[\langle 1/2,m|1/2,m^{\prime}\rangle=\delta_{mm^{\prime}}. \tag{24}\]
We assume that \(\omega\) is the imaginary part of non-trivial zeros of the zeta function and define
\[\zeta(\frac{1}{2}+i\omega)=\zeta(\frac{1}{2}-i\omega)=0. \tag{25}\]
Using \(k=1/2\) and Eq. (25), we can write Eq. (20) as
\[J_{x}|1/2,1/2\rangle = \frac{1}{2}|1/2,-1/2\rangle,\] \[J_{y}|1/2,1/2\rangle = -\frac{1}{2i}|1/2,-1/2\rangle,\] \[J_{0}|1/2,1/2\rangle = \frac{1}{2}|1/2,1/2\rangle, \tag{26}\]
which provides the eigenvalues of \(J^{2}\) and \(J_{0}\) as
\[J^{2}|1/2,\pm 1/2\rangle = \frac{3}{4}|1/2,\pm 1/2\rangle,\] \[J_{0}|1/2,\pm 1/2\rangle = \pm\frac{1}{2}|1/2,\pm 1/2\rangle. \tag{27}\]
From Eqs. (16-27), we can see that the operators \(J_{+}\), \(J_{-}\), and \(J_{0}\) satisfy the well-known \(SU(2)\) algebraic commutation relations
\[[J_{+},J_{-}]|1/2,1/2\rangle = (2J_{0})|1/2,1/2\rangle,\] \[[J_{0},J_{-}]|1/2,1/2\rangle = -J_{-}|1/2,1/2\rangle,\] \[[J_{0},J_{+}]|1/2,-1/2\rangle = J_{+}|1/2,-1/2\rangle. \tag{28}\]
It is noteworthy that the \(SU(2)\) algebra in this system depends on the zeta function, and the zeta function must satisfy the condition given in Eq. (25) in order for this algebra to hold. This constraint implies that \(\sigma\) must be \(1/2\), indicating that it applies to the non-trivial zeros of the zeta function. We have explained the significance of the non-trivial zeros of the Riemann zeta function for demonstrating the \(SU(2)\) algebra. Next, we examine how the trivial zeros help us understand the \(SU(2)\) algebra in our system.
### Trivial Riemann zeros
It is well known that the Riemann zeta function \(\zeta(\sigma+i\omega)\) possesses trivial zeros when \(\sigma=-2N\) and \(\omega=0\). This can be understood through the functional relation of the zeta function, which is given in Eq. (3).
In this case, we define the state
\[\psi_{\sigma,m}=\frac{1}{\sqrt{\log a}}x^{-\sigma+i(\frac{1}{2}-m)\omega}. \tag{29}\]
These functions do not form an orthonormal set under the Dirac inner product unless \(\sigma=1/2\), which is the case for the non-trivial zeros. For other values of \(\sigma\), we introduce the modified inner product [15; 21], under which they form an orthonormal set of functions.
Using \(\sigma=-2N\), \(k=-2N-i\omega\), \(m=1/2\) and using Eq. (29), one can write Eq. (20) as,
\[J_{x}\psi_{-2N,1/2} = \frac{\zeta(1+2N+i\omega)}{2\sqrt{\zeta(-2N-i\omega)\zeta(1+2N+ i\omega)}}\psi_{-2N,-1/2},\] \[J_{y}\psi_{-2N,1/2} = \frac{\zeta(1+2N+i\omega)}{2i\sqrt{\zeta(-2N-i\omega)\zeta(1+2N+ i\omega)}}\psi_{-2N,-1/2},\] \[J_{0}\psi_{-2N,1/2} = -\frac{1}{2}\psi_{-2N,1/2}. \tag{30}\]
It leads to
\[J^{2}=(J_{x}^{2}+J_{y}^{2}+J_{0}^{2})=\frac{3}{4}, \tag{31}\]
which also implies that \(j=\frac{1}{2}\) and \(m\) varies from \(-1/2\) to \(1/2\).
We see that to establish the function-dependent \(SU(2)\) algebra, it is essential to have either non-trivial or trivial zeros of the zeta function. Without the presence of either of these types of Riemann zeros, our model would not exhibit the algebraic structure of \(SU(2)\). This indicates that the existence of either type of zero plays a critical role in the formation of the \(SU(2)\) algebra here, which is an appealing outcome.
## VI Conclusion
We have demonstrated that various symmetries are linked to both the Riemann zeros and the supersymmetric system underlying it. Supersymmetry naturally leads to the utilization of the Witten index for the characterization of ground state energy. Interestingly, it is shown that SUSY is unbroken for \(\text{Re}(s)=\sigma=1/2\). The appearance of PT symmetry and the function-dependent \(SU(2)\) symmetries are deeply connected to the properties of the Hilbert space, which can be characterized by the Riemann zeros. Our results show that SUSY remains unbroken only when \(\sigma=1/2\), which corresponds to the non-trivial zeros of the zeta function. We have also demonstrated that the \(PT\) symmetry remains unbroken in two cases: i) \(\text{Re}(s)=\nicefrac{{1}}{{2}}\) and ii) \(\text{Im}(s)=0\) (where \(s=\sigma+i\omega\)), while in other cases the \(PT\) symmetry is broken. We find an underlying \(SU(2)\) algebraic structure, where the algebra is realized in the Hilbert space with suitable constraints. Intriguingly, only the spin-half representation is realized. The Hilbert space-dependent \(SU(2)\) symmetry and the presence of only the spin-half unitary irreducible representation are indicative of a deeper structure, needing further study.
Our findings provide novel perspectives on the Riemann zeros and their association with various symmetries. These symmetries shed light on the fundamental connections between the Riemann hypothesis and physics, and highlight the potential for new insights and discoveries at the intersection of these two fields.
###### Acknowledgements.
We acknowledge Prof. Kimball A. Milton for the useful discussions. This work is financially supported by the DST, Govt. of India under the Women Scientist A, Ref. No. DST/WOS-A/PM-64/2019. We are thankful to the referees for providing their valuable feedback.
|
2310.15294 | Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot
Filling | Recently slot filling has witnessed great development thanks to deep learning
and the availability of large-scale annotated data. However, it poses a
critical challenge to handle a novel domain whose samples are never seen during
training. The recognition performance might be greatly degraded due to severe
domain shifts. Most prior works deal with this problem in a two-pass pipeline
manner based on metric learning. In practice, these dominant pipeline models
may be limited in computational efficiency and generalization capacity because
of non-parallel inference and context-free discrete label embeddings. To this
end, we re-examine the typical metric-based methods, and propose a new adaptive
end-to-end metric learning scheme for the challenging zero-shot slot filling.
Considering simplicity, efficiency and generalizability, we present a
cascade-style joint learning framework coupled with context-aware soft label
representations and slot-level contrastive representation learning to mitigate
the data and label shift problems effectively. Extensive experiments on public
benchmarks demonstrate the superiority of the proposed approach over a series
of competitive baselines. | Yuanjun Shi, Linzhi Wu, Minglai Shao | 2023-10-23T19:01:16Z | http://arxiv.org/abs/2310.15294v1 | # Adaptive End-to-End Metric Learning for Zero-Shot Cross-Domain Slot Filling
###### Abstract
Recently slot filling has witnessed great development thanks to deep learning and the availability of large-scale annotated data. However, it poses a critical challenge to handle a novel domain whose samples are never seen during training. The recognition performance might be greatly degraded due to severe domain shifts. Most prior works deal with this problem in a two-pass pipeline manner based on metric learning. In practice, these dominant pipeline models may be limited in computational efficiency and generalization capacity because of non-parallel inference and context-free discrete label embeddings. To this end, we re-examine the typical metric-based methods, and propose a new adaptive end-to-end metric learning scheme for the challenging zero-shot slot filling. Considering simplicity, efficiency and generalizability, we present a cascade-style joint learning framework coupled with context-aware soft label representations and slot-level contrastive representation learning to mitigate the data and label shift problems effectively. Extensive experiments on public benchmarks demonstrate the superiority of the proposed approach over a series of competitive baselines.1
Footnote 1: The source code is available at [https://github.com/Switchsyj/AdEd2M-XSF](https://github.com/Switchsyj/AdEd2M-XSF).
## 1 Introduction
Slot filling, as an essential component widely exploited in task-oriented conversational systems, has attracted increasing attention recently Zhang and Wang (2016); Goo et al. (2018); Gangadharaiah and Narayanaswamy (2019). It aims to identify a specific type (_e.g._, artist and playlist) for each slot entity from a given user utterance. Owing to the rapid development of deep neural networks and with help from large-scale annotated data, research on slot filling has made great progress with considerable performance improvement Qin et al. (2019); Wu et al. (2020); Qin et al. (2020, 2021).
Despite the remarkable accomplishments, there are at least two potential challenges in realistic application scenarios. First is the data scarcity problem in specific target domains (_e.g._, _Healthcare_ and _E-commerce_). The manually-annotated training data in these domains is probably unavailable, and even the unlabeled training data might be hard to acquire Jia et al. (2019); Liu et al. (2020). As a result, the performance of slot filling models may drop significantly due to extreme data distribution shifts. The second is the existence of label shifts (as shown in the example in Figure 1). The target domain may contain novel slot types unseen in the source-domain label space Liu et al. (2018); Shah et al. (2019); Liu et al. (2020); Wang et al. (2021), namely there is a mismatch between different domain label sets. This makes it difficult to apply the source models to completely unseen target domains that are unobservable during the training process.
Zero-shot domain generalization has been shown to be a feasible solution to bridge the gap of domain shifts with no access to data from the target domain. Recent dominating advances focus on the two-step pipeline fashion to learn the zero-shot model using the metric learning paradigms Shah et al. (2019); Liu et al. (2020); He et al. (2020); Wang et al. (2021); Siddique et al. (2021). Nevertheless, besides inefficient inference resulted from non-parallelization,
Figure 1: Examples from SNIPS dataset. Apart from data distribution shifts, the target domain contains novel slot types that are unseen in the source domain label space (_e.g._, _Album_ and _Service_). Moreover, the slot entities tend to embody domain-specific nature in contrast to the counterpart contexts.
the generalization capability of these models may be limited due to lack of knowledge sharing between sub-modules, and context-independent discrete static label embeddings. Although the alternative question-answering (QA) based methods Du et al. (2021); Yu et al. (2021); Liu et al. (2022) are able to achieve impressive results, they need to manually design and construct the questions/queries, essentially introducing detailed descriptive information about the slot types.
In this work, we revisit the metric-based zero-shot cross-domain slot filling under challenging domain (both data and label) shifts. We propose an adaptive end-to-end metric learning scheme to improve the efficiency and effectiveness of the zero-shot model in favor of practical applications. For one thing, we provide a cascade-style joint learning architecture well coupled with the slot boundary module and type matching module, allowing for knowledge sharing among the sub-modules and higher computational efficiency. Moreover, the soft label embeddings are adaptively learnt by capturing the correlation between slot labels and the utterance. For another, since slot terms with the same type tend to have semantically similar contexts, we propose a slot-level contrastive learning scheme to enhance discriminative slot representations across different domain contexts. Finally, to verify the effectiveness of the proposed method, we carry out extensive experiments on different benchmark datasets. The empirical studies show the superiority of our method, which achieves effective performance gains compared to several competitive baseline methods.
Overall, the main contributions can be summarized as follows: (1) Compared with existing metric-based methods, we propose a more efficient and effective end-to-end scheme for zero-shot slot filling, and show our soft label embeddings perform much better than previous commonly-used static label representations. (2) We investigate the slot-level contrastive learning to effectively improve generalization capacity for zero-shot slot filling. (3) By extensive experiments, we demonstrate the benefits of our model in comparison to the existing metric-based methods, and provide an insightful quantitative and qualitative analysis.
## 2 Methodology
In this section, we first declare the problem to be addressed about zero-shot slot filling, and then elaborate our solution to this problem.
### Problem Statement
Suppose we have the source domain \(\mathcal{D}_{\mathcal{S}}=\{(\mathbf{x}_{i}^{\mathcal{S}},\mathbf{y}_{i}^{ \mathcal{S}})\}_{i=1}^{N_{\mathcal{S}}}\) with \(N_{\mathcal{S}}\) labeled samples from distribution \(P^{\mathcal{S}}\), and the (testing) target domain \(\mathcal{D_{\mathcal{T}}}=\{(y_{j}^{\mathcal{T}})\}_{j=1}^{C}\) with \(C\) slot types from target distribution \(P^{\mathcal{T}}\). We define \(\Omega_{\mathcal{S}}\) as the label set of source domain \(\mathcal{D_{\mathcal{S}}}\), and \(\Omega_{\mathcal{T}}\) as the label set of target domain \(\mathcal{D_{\mathcal{T}}}\). \(\Omega_{sh}=\Omega_{\mathcal{S}}\cap\Omega_{\mathcal{T}}\) denotes the common slot label set shared by \(\mathcal{D_{\mathcal{S}}}\) and \(\mathcal{D_{\mathcal{T}}}\). In the zero-shot scenario, the label sets between different domains may be mismatching, thus \(\Omega_{sh}\subseteq\Omega_{\mathcal{S}}\) and \(P^{\mathcal{S}}\neq P^{\mathcal{T}}\). The goal is to learn a robust and generalizable zero-shot slot filling model that can be well adapted to novel domains with unknown testing distributions.
### Overall Framework
In order to deal with variable slot types within an unknown domain, we discard the standard sequence labeling paradigm by cross-labeling (_e.g._, B-playlist, I-playlist). Instead, we adopt a cascade-style architecture coupled with the slot boundary module and typing module under a joint learning framework. The boundary module is used to detect whether the tokens in an utterance are slot terms or not by the CRF-based labeling method with BIO schema, while the typing module is used to match the most likely type for the corresponding slot term using the metric-based method. Since pre-trained models are beneficial for learning general representations, we adopt the pre-trained BERT Devlin et al. (2019) as our backbone encoder2. Figure 2 shows the overall framework, which is composed of several key components as follows:
Footnote 2: Notice that we assume the BERT model is used as our encoder, but our method can also be integrated with other model architectures (_e.g._, RoBERTa Liu et al. (2019)).
**Context-aware Label Embedding**: Let \(\mathbf{c}=[c_{1},\cdots,c_{|\Omega_{\mathcal{S}}|}]\)\((c_{i}\in\Omega_{\mathcal{S}})\) denote a slot label sequence consisting of all the elements of \(\Omega_{\mathcal{S}}\). Given an input utterance sequence \(\mathbf{x}=[x_{1},\cdots,x_{n}]\) of \(n\) tokens with the corresponding ground-truth boundary label sequence \(\mathbf{y}^{bd}=[y_{1}^{bd},\cdots,y_{n}^{bd}]\)\((y_{i}^{bd}\in\{\mathtt{B},\mathtt{I},\mathtt{O}\})\) and slot label sequence \(\mathbf{y}^{sl}=[y_{1}^{sl},\cdots,y_{n}^{sl}]\)\((y_{i}^{sl}\in\Omega_{\mathcal{S}})\), the slot label sequence acts as a prefix of the utterance, which is then encoded by BERT3:
Footnote 3: Considering the slot label sequence is not a natural language sentence linguistically, we remove the [SEP] token used to concatenate sentence pairs in BERT.
\[[\mathbf{r}_{label};\mathbf{r}_{utter}]=\mathrm{BERT}([\mathbf{c};\mathbf{x}]), \tag{1}\]
where \(\mathbf{r}_{label}\) and \(\mathbf{r}_{utter}\) denote the fused contextual representations of the label and utterance sequence, respectively.
For each slot type, the slot label matrix is obtained by averaging over the representations of the slot label tokens. Unlike the conventional discrete and static label embeddings Liu et al. (2020); Siddique et al. (2021); Ma et al. (2022) that capture the semantics of each textual label separately, we attempt to build the label-utterance correlation, and the adaptive interaction between the slot labels and utterance tokens encourages the model to learn the context-aware soft label embeddings dynamically, which will be exploited as the supervision information for the metric learning.
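As a concrete illustration of this joint encoding, the sketch below shows one way to prepend a slot-label sequence to an utterance, run both through BERT, and split the output into label and utterance representations; following the footnote, no [SEP] is inserted between the label names. The example slot names, the sub-token averaging, and the special-token handling are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

slot_labels = ["playlist", "artist", "music item"]   # example source-domain slot names
utterance = "add this song to my summer playlist"

# Build a single input [CLS] c_1 ... c_K x_1 ... x_n [SEP]; no [SEP] between label names.
label_ids, label_spans = [], []
for name in slot_labels:
    piece = tokenizer(name, add_special_tokens=False)["input_ids"]
    label_spans.append((len(label_ids), len(label_ids) + len(piece)))
    label_ids.extend(piece)
utt_ids = tokenizer(utterance, add_special_tokens=False)["input_ids"]
input_ids = torch.tensor([[tokenizer.cls_token_id] + label_ids + utt_ids
                          + [tokenizer.sep_token_id]])

hidden = encoder(input_ids=input_ids).last_hidden_state[0]     # (seq_len, 768)
offset = 1                                                      # skip [CLS]
# r_label: one vector per slot label, averaged over its sub-tokens.
r_label = torch.stack([hidden[offset + a: offset + b].mean(dim=0) for a, b in label_spans])
r_utter = hidden[offset + len(label_ids): -1]                   # token-level utterance states
```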
**Slot Boundary Detection**: To determine the slot terms, we obtain the contextualized latent representations of the utterance through a single-layer BiLSTM,
\[\mathbf{h}_{utter}=\mathrm{BiLSTM}(\mathbf{r}_{utter}). \tag{2}\]
Then, a CRF layer is applied to the slot boundary decoding, aiming to model the boundary label dependency. The negative log-likelihood objective function can be formulated as follows:
\[\begin{split}&\mathbf{e}=\mathrm{Linear}(\mathbf{h}_{utter}),\\ & score(\mathbf{x},\mathbf{y})=\sum_{i=1}^{n}(\mathbf{T}_{ \mathbf{y}_{i-1},\mathbf{y}_{i}}+\mathbf{e}_{i}[\mathbf{y}_{i}]),\\ &\mathcal{L}_{bdy}=-\log p(\mathbf{y}^{bd}|\mathbf{x})\\ &\qquad\qquad=-\log\frac{\exp(score(\mathbf{x},\mathbf{y}^{bd}) )}{\sum_{\mathbf{y}^{\prime}\in\mathcal{Y}_{\mathbf{x}}}\exp(score(\mathbf{x},\mathbf{y}^{\prime}))},\end{split} \tag{3}\]
where \(\mathbf{e}\in\mathbb{R}^{n\times 3}\) denotes the three-way emission vectors containing boundary information, \(\mathbf{T}\) is the 3\(\times\)3 learnable label transition matrix, and \(\mathcal{Y}_{\mathbf{x}}\) is the set of all possible boundary label sequences of utterance \(\mathbf{x}\). At inference time, we employ the Viterbi algorithm Viterbi (1967) to find the best boundary label sequence.
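The boundary branch of Eqs. (2)-(3) can be sketched roughly as follows; the tensor shapes and the use of the third-party pytorch-crf package are assumptions made for illustration, not the authors' released implementation.

```python
import torch.nn as nn
from torchcrf import CRF   # third-party package: pip install pytorch-crf

class BoundaryDetector(nn.Module):
    """Single-layer BiLSTM followed by a CRF over {B, I, O} boundary tags."""
    def __init__(self, hidden_dim=768, lstm_dim=256, num_tags=3):
        super().__init__()
        self.bilstm = nn.LSTM(hidden_dim, lstm_dim, batch_first=True, bidirectional=True)
        self.emission = nn.Linear(2 * lstm_dim, num_tags)     # emission vectors e
        self.crf = CRF(num_tags, batch_first=True)            # learns the transition matrix T

    def forward(self, r_utter, boundary_tags=None, mask=None):
        h, _ = self.bilstm(r_utter)                           # Eq. (2)
        e = self.emission(h)
        if boundary_tags is not None:                         # training: Eq. (3), -log p(y^bd | x)
            return e, -self.crf(e, boundary_tags, mask=mask, reduction="mean")
        return e, self.crf.decode(e, mask=mask)               # inference: Viterbi decoding
```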
**Metric-based Slot Typing**: Although the slot boundary module can select the slot terms from an utterance, it fails to learn discriminative slot entities. Thus, we design a typing module to achieve this in parallel by semantic similarity matching between slot labels and utterance tokens.
Concretely, we take advantage of the above boundary information to locate the slot entity tokens of the utterance. We specially exploit the soft-weighting boundary embedding vectors for enabling differentiable joint training, which are combined with the contextual utterance representations
Figure 2: Illustration of the overall framework. Figure (a) shows the cascade-style joint learning architecture coupled with two core components: _Metric-based Slot Typing_ and _Slot Boundary Detection_. Figure (b) shows the slot-level contrastive learning module used only during training. The slot entity tokens with the same type are positive pairs (_i.e._ the blue lines) while those with different types are negative ones (_i.e._ the red lines).
to obtain the boundary-enhanced representations:
\[\begin{split}&\mathbf{r}_{bound}=\mathrm{softmax}(\mathbf{e})\cdot \mathbf{E}_{b},\\ &\mathbf{u}=\mathrm{Linear}(\mathrm{Concat}(\mathbf{r}_{utter}, \mathbf{r}_{bound})),\end{split} \tag{4}\]
where \(\mathbf{E}_{b}\in\mathbb{R}^{3\times d_{b}}\) is a look-up table to store trainable boundary embeddings, and \(d_{b}\) indicates the embedding dimension. Meanwhile, the label embeddings are calculated by a bottleneck module consisting of an up-projection layer and a down-projection layer with a GELU (Hendrycks and Gimpel, 2016) nonlinearity:
\[\mathbf{v}=\mathrm{Linear}_{up}(\mathrm{GELU}(\mathrm{Linear}_{dw}(\mathbf{r }_{label}))). \tag{5}\]
Furthermore, we leverage token-wise similarity matching between L2-normalized utterance representations and label embeddings. Since the slot entities are our major concern for predicting types, we ignore the non-entity tokens by mask provided by the boundary gold labels, resulting in the slot typing loss function defined as follows:
\[\begin{split}&\mathcal{L}_{typ}=-\sum_{i=1}^{n}\mathds{1}_{[y_{i}^{bd}\neq\mathtt{O}]}\log\frac{\exp(\langle\mathbf{u}_{i},\mathrm{sg}(\mathbf{v}_{i^{*}})\rangle)}{\sum_{j=1}^{|\Omega_{\mathcal{S}}|}\exp(\langle\mathbf{u}_{i},\mathrm{sg}(\mathbf{v}_{j})\rangle)}\\ &-\sum_{i=1}^{n}\mathds{1}_{[y_{i}^{bd}\neq\mathtt{O}]}\log\frac{\exp(\langle\mathrm{sg}(\mathbf{u}_{i}),\mathbf{v}_{i^{*}}\rangle)}{\sum_{j=1}^{|\Omega_{\mathcal{S}}|}\exp(\langle\mathrm{sg}(\mathbf{u}_{i}),\mathbf{v}_{j}\rangle)},\end{split} \tag{6}\]
where \(\langle\cdot,\cdot\rangle\) measures the cosine similarity of two embeddings, \(\mathrm{sg}(\cdot)\) stands for the stop-gradient operation that does not affect the forward computation, \(i^{*}\) indicates the index corresponding to the gold slot label \(y_{i}^{sl}\), and \(\mathds{1}_{[y_{i}^{bd}\neq 0]}\in\{0,1\}\) is an indicator function, evaluating to 1 if \(y_{i}^{bd}\) is a non-O tag. Eq. 6 makes sure the label embeddings act as the supervision information (the _1st_ term) and meanwhile are progressively updated (the _2nd_ term).
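For readability, a compact single-utterance sketch of the typing branch in Eqs. (4)-(6) is given below; the dimensions, the use of .detach() for the stop-gradient, and the assumption that tag id 0 encodes O are illustrative choices, not the authors' exact code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlotTyping(nn.Module):
    def __init__(self, hidden_dim=768, boundary_dim=10, bottleneck=256, num_tags=3):
        super().__init__()
        self.boundary_emb = nn.Parameter(torch.randn(num_tags, boundary_dim))  # E_b
        self.fuse = nn.Linear(hidden_dim + boundary_dim, hidden_dim)
        self.down = nn.Linear(hidden_dim, bottleneck)      # bottleneck for label embeddings
        self.up = nn.Linear(bottleneck, hidden_dim)

    def forward(self, r_utter, r_label, emissions, slot_targets, boundary_tags):
        # Eq. (4): soft-weighted boundary embeddings fused with the utterance states.
        r_bound = torch.softmax(emissions, dim=-1) @ self.boundary_emb      # (n, d_b)
        u = F.normalize(self.fuse(torch.cat([r_utter, r_bound], dim=-1)), dim=-1)
        # Eq. (5): label embeddings via down-/up-projection with a GELU nonlinearity.
        v = F.normalize(self.up(F.gelu(self.down(r_label))), dim=-1)        # (K, hidden)
        # Eq. (6): symmetric cross-entropy over cosine similarities, slot tokens only.
        # slot_targets holds gold slot-label indices (O tokens may hold any valid index; they are masked).
        is_slot = (boundary_tags != 0).float()               # assumes tag id 0 is "O"
        loss_uv = F.cross_entropy(u @ v.detach().T, slot_targets, reduction="none")
        loss_vu = F.cross_entropy(u.detach() @ v.T, slot_targets, reduction="none")
        return ((loss_uv + loss_vu) * is_slot).sum()
```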
**Slot-level Contrastive Learning**: A recent line of work has investigated instance-level contrastive learning by template regularization (Shah et al., 2019; Liu et al., 2020; He et al., 2020; Wang et al., 2021). As slots with the same type tend to have semantically similar contexts, inspired by Das et al. (2022), we propose to use slot-level contrastive learning to facilitate discriminative slot representations that may contribute to adaptation robustness.4
Footnote 4: Different from Das et al. (2022), we do not use the Gaussian embeddings produced by learnt Gaussian distribution parameters. There are two main reasons: one is to ensure the stable convergence of training, and the other is that the token representations may not follow normal distribution.
More specifically, we define a supervised contrastive objective by decreasing the similarities between different types of slot entities while increasing the similarities between the same ones. We just pay attention to the slot entities by masking out the parts with O boundary labels. Then, we gather _in-batch_ positive pairs \(\mathcal{P}^{+}\) with the same slot type and negative pairs \(\mathcal{P}^{-}\) with different ones:
\[\begin{split}&\mathbf{s}=\mathrm{ReLU}(\mathrm{Linear}(\mathbf{r}_{utter})),\\ &\mathcal{P}^{+}=\{(\mathbf{s}^{i},\mathbf{s}^{j})|y_{i}^{sl}=y_{j }^{sl},i\neq j\},\\ &\mathcal{P}^{-}=\{(\mathbf{s}^{i},\mathbf{s}^{j})|y_{i}^{sl}\neq y _{j}^{sl},i\neq j\},\end{split} \tag{7}\]
where \(\mathbf{s}\) denotes the projected point embeddings, and all example pairs are extracted from a mini-batch. Furthermore, we adapt the NT-Xent loss (Chen et al., 2020) to achieve the slot-level discrimination, and the contrastive learning loss function can be formulated as:
\[\mathcal{L}_{ctr}=-\log\frac{\frac{1}{|\mathcal{P}^{+}|}\sum_{(\mathbf{s}^{i}, \mathbf{s}^{j})\in\mathcal{P}^{+}}\mathrm{exp}(d(\mathbf{s}^{i},\mathbf{s}^{j })/\tau)}{\sum_{(\mathbf{s}^{i},\mathbf{s}^{j})\in\mathcal{P}}\mathrm{exp}(d( \mathbf{s}^{i},\mathbf{s}^{j})/\tau)}, \tag{8}\]
where \(\mathcal{P}\) denotes \(\mathcal{P}^{+}\cup\mathcal{P}^{-}\), \(d(\cdot,\cdot)\) denotes the distance metric function (_e.g._, cosine similarity distance), and \(\tau\) is a temperature parameter. We will investigate different kinds of metric functions in the following experiment section.
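A minimal sketch of this objective, assuming cosine similarity as the distance metric \(d(\cdot,\cdot)\) and operating on the slot-entity tokens already gathered from a mini-batch, could look as follows (variable names are illustrative).

```python
import torch
import torch.nn.functional as F

def slot_contrastive_loss(slot_reps, slot_types, tau=0.5):
    """NT-Xent-style slot-level loss of Eq. (8).

    slot_reps:  (m, d) projected embeddings s of the m non-O tokens gathered in a batch
    slot_types: (m,)   gold slot-type ids of those tokens
    """
    sim = F.cosine_similarity(slot_reps.unsqueeze(1), slot_reps.unsqueeze(0), dim=-1) / tau
    off_diag = ~torch.eye(len(slot_types), dtype=torch.bool, device=slot_reps.device)
    pos = (slot_types.unsqueeze(0) == slot_types.unsqueeze(1)) & off_diag   # positive pairs P+
    numerator = (sim.exp() * pos).sum() / pos.sum().clamp(min=1)            # mean over P+
    denominator = (sim.exp() * off_diag).sum()                              # sum over P = P+ and P-
    return -torch.log(numerator / denominator)
```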
### Training and Inference
During training, our overall framework is optimized end-to-end with mini-batches. The final training objective is to minimize the sum of all the loss functions:
\[\mathcal{L}=\mathcal{L}_{bdy}+\mathcal{L}_{typ}+\mathcal{L}_{ctr}, \tag{9}\]
where each part has been defined in the previous subsections. During inference, we have the slot type set of the target domain samples, and the testing slot labels constitute the label sequence, which is then concatenated with the utterance as the model input. The CRF decoder predicts the slot boundaries of the utterance, and the predicted slot type corresponds to the type with the highest-matching score. We take the non-O-labeled tokens as slot terms while the O-labeled tokens as the context.
## 3 Experiments
### Datasets and Settings
To evaluate the proposed method, we conduct the experiments on the SNIPS dataset for zero-
shot settings (Coucke et al., 2018), which contains 39 slot types across seven different domains: AddToPlaylist (ATP), BookRestaurant (BR), GetWeather (GW), PlayMusic (PM), RateBook (RB), SearchCreativeWork (SCW) and SearchScreeningEvent (SSE). Following previous studies (Liu et al., 2020; Siddique et al., 2021), we choose one of these domains as the target domain never used for training, and the remaining six domains are combined to form the source domain. Then, we split 500 samples in the target domain as the development set and the remainder are used for the test set. Moreover, we consider the case where the label space of the source and target domains are exactly the same, namely the zero-resource setting (Liu et al., 2020) based on named entity recognition (NER) task. We train our model on the CoNLL-2003 (Sang and Meulder, 2003) dataset and evaluate on the CBS SciTech News dataset (Jia et al., 2019).
### Baselines
We compare our method with the following competitive baselines using the pre-trained BERT as encoder: (1) **Coach**. Liu et al. (2020) propose a two-step pipeline matching framework assisted by template regularization; (2) **PCLC**. Wang et al. (2021) propose a prototypical contrastive learning method with label confusion; (3) **LEONA**. Siddique et al. (2021) propose to integrate linguistic knowledge (e.g., external NER and POS-tagging cues) into the basic framework.
Although not our focused baselines, we also compare against the advanced generative baselines (Li et al., 2023) with T5-Large and QA-based methods (Du et al., 2021; Liu et al., 2022) that require manual efforts to convert slot type descriptions into sentential queries/questions, and process by means of the machine reading comprehension (MRC) architecture (Li et al., 2020).
### Implementation Details
We use the pre-trained uncased BERTBASE model5 as the backbone encoder. The dimension of the boundary embedding is set to 10. We use a 0.1 dropout ratio for slot filling and 0.5 for NER. For the contrastive learning module, we use the cosine metric function and select the optimal temperature \(\tau\) from 0.1 to 1. During training, the AdamW (Loshchilov and Hutter, 2019) optimizer with a mini-batch size 32 is applied to update all trainable parameters, and the initial learning rate is set to 2e-5 for BERT and 1e-3 for other modules. All the models are trained on NVIDIA GeForce RTX 3090Ti GPUs for up to 30 epochs. The averaged F1-score over five runs is used to evaluate the performance. The best-performing model on the development set is used for testing.
Footnote 5: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased)
### Zero-Shot Slot Filling
As shown in Table 1, our method achieves more promising performance than previously proposed metric-based methods on various target domains, with an average improvement of about 5% compared with the strong baseline LEONA. We attribute it to the fact that our proposed joint learning model makes full use of the sub-modules, and the context-aware soft label embeddings provide better prototype representations. Moreover, we also observe that the slot-level contrastive learning plays an important role in improving adaptation performance. Our model with _Slot-CL_ obtains consistent performance gains over almost all the target domains except for the SSE domain. We suspect that it may result from slot entity confusion. For example, for slot entities "_cinema_" and
\begin{table}
\begin{tabular}{l|c c c c c c c|c} \hline \multirow{2}{*}{**Domain Model** (**Src–Tgt, Unseen Rate) \(\rightarrow\) (**48\(\rightarrow\)5,40\%**) & **39\(\rightarrow\)14,57\%** & **44\(\rightarrow\)9,44\%** & **44\(\rightarrow\)9,55\%** & **46\(\rightarrow\)7,71\%** & **52\(\rightarrow\)2,0\%** & **46\(\rightarrow\)7,57\%** & **Avg.** \\ \hline CoNLLBERT (Liu et al., 2020) & 50.28 & 31.87 & 52.30 & 31.75 & 23.33 & 70.76 & 29.33 & 41.37 \\ PCLExpert (Wang et al., 2021) & 30.38 & 20.89 & 32.99 & 25.55 & 20.76 & 62.40 & 13.82 & 29.54 \\ LEONABERT (Siddique et al., 2021) & 51.23 & 46.68 & 68.72 & 43.20 & 25.23 & 47.01 & 27.99 & 44.01 \\ QASEFECT (Dü et al., 2021) & 59.29 & 43.13 & 59.02 & 33.62 & 33.34 & 59.90 & 22.83 & 44.45 \\ SUMCN (Liu et al., 2021) & 63.21 & 60.11 & 65.23 & 50.16 & 32.78 & 55.17 & 30.67 & 51.77 \\ GZPT\({}_{\text{T5-Laps}}\) (Li et al., 2023) & 59.83 & 61.23 & 62.58 & 62.73 & 45.88 & 71.30 & 48.26 & 58.82 \\ \hline Ours (w/o Slot-CL) & 61.13 & 41.67 & 71.47 & 34.77 & 30.75 & 68.81 & 34.64 & 49.03 \\ Ours & 61.13 & 42.35 & 69.87 & 36.24 & 33.25 & 70.81 & 34.06 & **49.67** \\ \hline \end{tabular}
\end{table}
Table 1: F1-scores of zero-shot slot filling across different domains. Slot-CL denotes the slot-level contrastive learning. We show the number of labels in the source and target domains. The unseen rate refers to the proportion of non-overlapped source-target domain labels in the target label set. \(\dagger\) denotes the QA-based methods that introduce manually-designed query for each slot label. \(\ddagger\) denotes the generative method along with prompts for each slot label.
"_theatre_" from SSE, they are usually annotated with object_location_type, but "_cinemas_" in "_caribbean cinemas_" and "_theatres_" in "_star theatres_" are annotated with location_name, which is prone to be misled by the contrastive objective. Additionally, without introducing extra manual prior knowledge, our method achieves very competitive performance compared with the QA-based baselines.
### Zero-Resource NER
In particular, we examine our method in the zero-resource NER setting. As presented in Table 2, our method is also adaptable to this scenario, and exceeds or matches the performance of previous competitive baselines. Meanwhile, the slot-level contrastive learning can yield effective performance improvements.
### Ablation Study and Analysis
In order to better understand our method, we further present some quantitative and qualitative analyses that provide insights into why our method works and where future work could potentially improve it.
**Inference Speed**: One advantage of our framework is the efficient inference process benefiting from the well-parallelized design. We evaluate the speed by running the model for one epoch on the BookRestaurant test data with the batch size set to 32. Results in Table 3 show that our method achieves \(\times\)13.89 and \(\times\)7.06 speedups compared with the advanced metric-based method (i.e., LEONA) and the QA-based method (i.e., SLMRC), respectively. This can be attributed to our batch-wise decoding in parallel. On the one hand, previous metric-based methods use a two-pass pipeline decoding process and instance-wise slot type prediction. On the other hand, the QA-based methods require introducing different queries for all candidate slot labels for each utterance, increasing the decoding latency of a single example.
**Label-Utterance Interaction**: Here we examine how our model benefits from the label-utterance interaction. As presented in Table 4, the performance of our model drops significantly when eliminating the interaction from different aspects, justifying our design. Compared to the other degraded interaction strategies, the utterance-to-label interaction helps learn the context-aware label embeddings, namely the utterance provides the context cues for the slot labels. Furthermore, we notice that the interaction between slot labels also matters. When we only let each slot label attend to itself and the utterance, we observe a performance drop, probably due to the loss of discriminative information among different slot labels.
**Effect of Context-aware Label Embedding**: We study the effect of different types of label embeddings. Figure 3 shows the comparison results. We can see that the proposed context-aware soft label embedding outperforms other purely discrete or decoupled embeddings, including discrete BERT, decoupled BERT or GloVe (Pennington et al.,
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Interaction Strategy** & **F1** \\ \hline Ours (w/o Slot-CL) & **49.03** \\ w/o Label \(\rightarrow\) Utterance & 45.42 \\ w/o Utterance \(\rightarrow\) Label & 45.24 \\ w/o Label \(\leftrightarrow\) Utterance & 47.25 \\ w/o Label \(\leftrightarrow\) Label & 45.24 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparisons of different label-utterance interaction strategies for slot filling.
\begin{table}
\begin{tabular}{c|c} \hline \hline
**Model** & **F1** \\ \hline Liu et al. (2020) & 69.53 \\ Jia et al. (2019) & 73.59 \\ Devlin et al. (2019) & 74.23 \\ Jia and Zhang (2020) & 75.19 \\ Wu et al. (2022) & 75.06 \\ \hline Ours (w/o Slot-CL) & 74.41 \\ Ours & **75.29** \\ \hline \hline \end{tabular}
\end{table}
Table 2: NER results on the target domain (i.e., SciTech News).
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Model** & **Time Cost** (s) & **Speedup** \\ \hline Coach\({}_{\text{BERT}}\) & 70.98 & 17.14\(\times\) \\ PCLC\({}_{\text{BERT}}\) & 76.61 & 18.50\(\times\) \\ LEONA & 57.49 & 13.89\(\times\) \\ SLMRC & 29.21 & 7.06\(\times\) \\ \hline Ours & 4.14 & 1.00\(\times\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of inference efficiency. Speedup denotes the ratio of time taken by slot prediction part of different models to run one epoch on the BookRestaurant with batch size 32.
2014) embeddings. Interestingly, when fine-tuning, we find that \(\text{BERT}_{\text{dis}}\) works slightly better than \(\text{BERT}_{\text{dec}}\), as it might be harmful to tune soft label embeddings without utterance contexts. Furthermore, we observe a significant improvement of our model when incorporating the GloVe static vectors, suggesting that richer label semantics can make a positive difference. Meanwhile, the discrete or decoupled label embeddings without fine-tuning may yield better results.
**Metric Loss for Contrastive Learning**: Here we explore several typical distance metric functions (including Cosine, MSE, Smooth L1, and KL-divergence) for the slot-level contrastive objective, and we also consider the influence of the temperature \(\tau\). Figure 4 reveals that the temperature value directly affects the final performance. It also shows better results overall at around \(\tau=0.5\) for each metric function we consider. We select the cosine similarity function as our distance metric, due to its relatively good performance.
**Cross-Dataset Setting**: Considering that slot labels and utterances may vary significantly across different datasets, we further evaluate the proposed method under the cross-dataset scenario, a more challenging setting. Here we introduce another popular slot filling dataset, ATIS Liu et al. (2019). It is used as the target (source) domain data while SNIPS serves as the source (target) domain data6, as shown in Table 7. The results confirm that our method still works well in this challenging setting.
Footnote 6: We ignore the evaluation on the SGD (Rastogi et al., 2020), which is a fairly large-scale dataset with extremely unbalanced label distributions.
**Visualization**: Figure 5 shows the visualization of normalized slot entity representations before similarity matching using the t-SNE dimensionality reduction algorithm (van der Maaten and Hinton, 2008). Clearly, our method obtains better-gathered clusters when introducing the slot-level contrastive learning, facilitating discriminative entity representations.
## 4 Related Work
**Zero-shot Slot Filling**: In recent years, zero-shot slot filling has received increasing attention. A dominating line of research is the metric-learning method, where the core idea is to learn a prototype representation for each category and classify test data based on their similarities with prototypes Snell et al. (2017). For slot filling, the semantic embeddings of textual slot descriptions usually serve as the prototype representations Bapna et al. (2017); Lee and Jha (2019); Zhu et al. (2020). Shah et al. (2019) utilize both the slot description and a few examples of slot values to learn semantic representations of slots. Furthermore, various two-pass pipeline schemes are proposed by separating the slot filling task into two steps along with template regularization Liu et al. (2020), adversarial training He et al. (2020), contrastive learning Wang et al. (2021), and linguistic prior knowledge Siddique et al. (2021). However, these mostly utilize context-free discrete label embeddings, and the two-pass fashion has potential limitations due to a lack of knowledge sharing between sub-modules as well as inefficient inference. These limitations motivate us to exploit context-aware label representations under an end-to-end joint learning framework.
Another line of research is the QA-based methods that borrow from question-answering systems, relying on manually well-designed queries. Du et al. (2021) use a set of slot-to-question generation strategies and pre-train on numerous synthetic QA pairs. Yu et al. (2021) and Liu et al. (2022) apply the MRC framework Li et al. (2020) to overcome the domain shift problem. Heo et al. (2022) modify the MRC framework into sequence-labeling style by using each slot label as query. Li et al. (2023) introduce a generative framework using each slot label as prompt. In our work, we mainly focus on the metric-based method without intentionally introducing external knowledge with manual efforts.
**Contrastive Learning**: The key idea is to learn discriminative feature representations by contrasting positive pairs against negative pairs. Namely, those with similar semantic meanings are pushed towards each other in the embedding space while those with different semantic meanings are pulled apart from each other. Yan et al. (2021) and Gao et al. (2021) explore instance-level self-supervised contrastive learning where sample pairs are constructed by data augmentation. Khosla et al. (2020) further explore the supervised setting by contrasting the set of all instances from the same class against those from the other classes. Das et al. (2022) present a token-level supervised contrastive learning solution to deal with the few-shot NER task by means of Gaussian embeddings.
Previous studies for slot filling mainly focus on
\begin{table}
\begin{tabular}{l|c|c} \hline \hline
**Src\(\rightarrow\)Tgt** & **SNIPS\(\rightarrow\)ATIS** & **ATIS\(\rightarrow\)SNIPS** \\ \hline Coach & 14.35 & 9.76 \\ LEONA & 20.80 & 14.36 \\ Ours & 27.01 & 16.61 \\ \hline \hline \end{tabular}
\end{table}
Table 7: F1-scores in the cross-dataset setting.
Figure 5: t-SNE visualization of the normalized representations of different slot entities drawn from the BookRestaurant domain, which contains many unseen slot labels.
instance-level contrastive learning, which may be sub-optimal for a fine-grained sequence labeling task. Inspired by supervised contrastive learning, we leverage a slot-level contrastive learning scheme for zero-shot slot filling to learn the discriminative representations for domain adaptation. For all existing slot entities within a mini-batch, we regard those with the same type as the positive example pairs and those with different type as negative ones.
## 5 Conclusion
In this paper, we tackle the problem of generalized zero-shot slot filling with the proposed end-to-end metric learning based scheme. We propose a cascade-style multi-task learning framework to efficiently detect the slot entity from a target domain utterance. The context-aware soft label embeddings are shown to be superior to the widely-used discrete ones. Regarding domain adaptation robustness, we propose a slot-level contrastive learning scheme to facilitate the discriminative representations of slot entities. Extensive experiments across various domain datasets demonstrate the effectiveness of the proposed approach when handling unseen target domains. Our investigation also confirms that semantically richer label representations can help further boost the recognition performance, which motivates us to further explore external-knowledge-enhanced soft label embeddings for advancing the metric-based method.
## Limitations
Although our work makes further progress in challenging zero-shot slot filling, it is subject to several potential limitations. Firstly, since the slot label sequence is used as the prefix of the utterance, this directly results in a long input sequence. Secondly, our method may be negatively affected by severe label ambiguity. There are some slot entities with rather similar semantics, leading to wrong slot type predictions. For example, given "_book a manadonese restaurant_", the slot entity type of "_manadonese_" is actually cuisine, but it is easily identified as country. One major reason is that some utterances are relatively short and lack sufficient contextual cues. Thirdly, the recognition performance of metric-based methods may remain difficult to exceed that of advanced QA-based or generative methods due to the fact that the latter manually introduce detailed slot label descriptions via well-designed queries or prompts.
|
2305.11244 | A Parameter-Efficient Learning Approach to Arabic Dialect Identification
with Pre-Trained General-Purpose Speech Model | In this work, we explore Parameter-Efficient-Learning (PEL) techniques to
repurpose a General-Purpose-Speech (GSM) model for Arabic dialect
identification (ADI). Specifically, we investigate different setups to
incorporate trainable features into a multi-layer encoder-decoder GSM
formulation under frozen pre-trained settings. Our architecture includes
residual adapter and model reprogramming (input-prompting). We design a
token-level label mapping to condition the GSM for Arabic Dialect
Identification (ADI). This is challenging due to the high variation in
vocabulary and pronunciation among the numerous regional dialects. We achieve
new state-of-the-art accuracy on the ADI-17 dataset by vanilla fine-tuning. We
further reduce the training budgets with the PEL method, which performs within
1.86% accuracy to fine-tuning using only 2.5% of (extra) network trainable
parameters. Our study demonstrates how to identify Arabic dialects using a
small dataset and limited computation with open source code and pre-trained
models. | Srijith Radhakrishnan, Chao-Han Huck Yang, Sumeer Ahmad Khan, Narsis A. Kiani, David Gomez-Cabrero, Jesper N. Tegner | 2023-05-18T18:15:53Z | http://arxiv.org/abs/2305.11244v2 | A Parameter-Efficient Learning Approach to Arabic Dialect Identification with Pre-Trained General-Purpose Speech Model
###### Abstract
In this work, we explore Parameter-Efficient-Learning (PEL) techniques to repurpose a General-Purpose-Speech (GSM) model for Arabic dialect identification (ADI). Specifically, we investigate different setups to incorporate trainable features into a multi-layer encoder-decoder GSM formulation under frozen pre-trained settings. Our architecture includes residual adapter and model reprogramming (input-prompting). We design a token-level label mapping to condition the GSM for Arabic Dialect Identification (ADI). This is challenging due to the high variation in vocabulary and pronunciation among the numerous regional dialects. We achieve new state-of-the-art accuracy on the ADI-17 dataset by vanilla fine-tuning. We further reduce the training budgets with the PEL method, which performs within 1.86% accuracy to fine-tuning using only 2.5% of (extra) network trainable parameters. Our study demonstrates how to identify Arabic dialects using a small dataset and limited computation with open source code and pre-trained models.
Srijith Radhakrishnan\({}^{1,2,4}\), Chao-Han Huck Yang\({}^{1,3}\), Sumeer Ahmad Khan\({}^{1,4}\) Narsis A. Kiani\({}^{1}\), David Gomez-Cabrero\({}^{1}\), Jesper N. Tegner\({}^{1,4}\)\({}^{1}\)King Abdullah University of Science and Technology, Saudi Arabia
\({}^{2}\)Manipal Institute of Technology, India; \({}^{3}\)Georgia Institute of Technology, USA; \({}^{4}\)SDAIA-KAUST
Center of Excellence in Data Science and Artificial Intelligence, Thuwal 23952, Saudi Arabia
{srijith.radhakrishnan,sumeer.khan,jesper.tegner}@kaust.edu.sa,[email protected]
**Index Terms**: Parameter-Efficient Learning, Dialect Identification, Arabic Dialect
## 1 Introduction
Dialect identification [1, 2] (DI) amounts to identifying similar dialects belonging to the same language family. It is a specific case of language identification [3] (LID) task. However, DI is more challenging than LID owing to the fact that dialects share similar acoustic and linguistic characteristics compared to different languages. Very minute differences in pronunciation [4] and accent are used as cues to identify dialects. Moreover, DI does not share the advantage of publicly available speech recognition models pre-trained on large speech data corpora for network initialization. Despite these challenges, DI remains relatively unexplored compared to LID.
In this study, we leverage upon a recent open-access and general-purpose speech recognition architecture, Whisper [5], pre-trained on a large speech corpus from OpenAI, to address DI in resource-constrained and data-limited conditions. We use Parameter-Efficient Learning [6, 7] (PEL) to adapt a large pre-trained model by training small additive modules embedded into the frozen pre-trained model. By doing so, we require less training time and computing resources to fine-tune the model for DI. Figure 1 is a schematic of the proposed parameter-efficient learning framework.
We choose to perform DI in Arabic owing to its substantial regional variations and the widespread use of Arabic as an official language in over 22 countries [8]. Notably, significant differences exist between the standard written form, referred to as Modern Standard Arabic, and the local colloquial dialects spoken in each region. Interestingly, not all dialects are mutually intelligible.
In this study, we present several contributions: (1) Firstly, we introduce the novel use of Parameter-Efficient-Learning (PEL) for this task, marking the first application of this approach to Arabic dialect identification [8]. (2) We investigate _different designs_ to incorporate trainable features into a multi-layer encoder-decoder frozen model. (3) We achieve new state-of-the-art accuracy on the **official** testing and development sets of ADI-17 [9] dataset using only 30.95% of the training data. (4) Lastly, we demonstrate that our PEL method achieves equivalent performance to full fine-tuning using only **2.5%** of (extra) network parameters.
## 2 Related work
### Existing works on Dialect prediction
Several works exist in applying Natural Language Processing to the Arabic text due to sufficient open-source Arabic textual data from multiple sources, such as Newspaper articles [11], and Twitter [12]. Unfortunately, minimal open-source Arabic speech data is available, as reported in [13]. Most existing Arabic dialect identification methods use machine learning and deep learning methods. [14] used phonetic and lexical features obtained from a speech recognition system combined with a multi-class support vector machine [15] to identify Arabic dialects. [16] used a Siamese neural network along with i-vector post-processing to learn similarities and dissimilarities among Arabic dialects.
### Parameter-Efficient Learning
Large pre-trained models have been very successful at various tasks in natural language processing. Parameter-efficient learning is a research direction that aims to reduce the computational cost by adapting large pre-trained models to downstream tasks by updating only a subset of (extra) parameters. In this section, we introduce several state-of-the-art parameter-efficient learn
Figure 1: Overview of proposed parameter-efficient learning framework for Arabic dialect identification building upon parameter-efficient learning [7] and label mapping [10].
ing methods.
**Residual Adapters** : Residual Adapters are small trainable blocks inserted between the layers of a transformer architecture [17, 18, 19, 20]. They down-project the latent dimension from the previous layer and apply a nonlinear activation function, followed by an up-projection. A residual connection surrounds the adapter layer. This setup encourages parameter sharing between the frozen components and localizes all the weight updates to the adapter modules.
**Neural Reprogramming**: Neural reprogramming can be used to repurpose a frozen pre-trained model to out-of-domain prediction tasks by adding trainable parameters to the input of the pre-trained model [10, 21]. The frozen model is followed by a label-mapping strategy to map the source labels to the out-of-domain target labels. The trainable input noise aligns the latent distribution of the target domain to be more similar to that of the source domain, using the pre-existing decision boundaries of the frozen model. Neural reprogramming works well when the input size of the target data is comparably smaller than that of the source data, as demonstrated in [10, 22, 6].
**BitFit**: BitFit refers to BIas-Term FIne-Tuning, which only updates the bias-terms and the task-specific classification layer of the frozen pre-trained model [23].
**Others**: Other state-of-the-art parameter-efficient learning techniques include LoRA [24], in which trainable low-rank matrices are embedded inside the transformer attention layers to approximate parameter updates, and speech prompting methods [25, 26], an approach that shares a similar motivation with input reprogramming, such as trainable inputs and label mapping.
## 3 Method
We evaluate two fine-tuning strategies. First, vanilla fine-tuning, in which we fine-tune different components of the network parameters. Next, parameter-efficient fine-tuning, in which we train (extra) modules added to the network architecture and investigate their performance.
### Vanilla Fine-tuning
To incorporate the new dialect classes into the network architecture, we append the new classes to the pre-existent multilingual tokenizer and modify the token embedding matrix \(W_{e}\in\mathbb{R}^{t\times n}\) to \(W_{e}^{{}^{\prime}}\in\mathbb{R}^{(t+d)\times n}\). Here \(t\) and \(n\) are the dimensions of the tokenizer and network, respectively, and \(d\) denotes the number of dialects. The weights of \(W_{e}\) are then copied with random padding to \(W_{e}^{{}^{\prime}}\) for initialization. A cross-entropy loss is used to fine-tune different components of the network.
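As a rough illustration of this class-extension step (variable names, the initialization scale, and the attribute path in the usage comment are assumptions, not the released code), the embedding matrix can be enlarged as follows.

```python
import torch
import torch.nn as nn

def extend_token_embedding(old_emb: nn.Embedding, num_dialects: int) -> nn.Embedding:
    """Grow W_e in R^{t x n} into W_e' in R^{(t+d) x n}, copying the original weights."""
    t, n = old_emb.weight.shape
    new_emb = nn.Embedding(t + num_dialects, n)
    with torch.no_grad():
        new_emb.weight[:t] = old_emb.weight                 # copy the pre-trained rows
        new_emb.weight[t:].normal_(mean=0.0, std=0.02)      # random padding for the d dialect tokens
    return new_emb

# Usage (attribute name assumed): after appending the 17 dialect tokens to the tokenizer,
# model.decoder.token_embedding = extend_token_embedding(model.decoder.token_embedding, 17)
```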
### Input Reprogramming
We begin our experiments in parameter-efficient fine-tuning with input reprogramming. If \(x\) is the input to the pre-trained frozen model \(W_{\Theta}\left(.\right)\) parameterized by \(\Theta\), the frozen model predicts the dialect \(\hat{y}\) as \(W_{\Theta}\left(x\right)\rightarrow\hat{y}\). Input reprogramming aims to add trainable noise \(x_{t}\) to the input \(x\). In our application, the input to the frozen pre-trained model is the log-Mel spectrogram computed from 30 seconds of Arabic speech. We add trainable parameters of the same dimensions as \(x\) to minimize the prediction error between \(\hat{y}\) and the true label while not updating \(\Theta\), the parameters of the model: \(W_{\Theta}\left(x+x_{t}\right)\rightarrow\hat{y}\).
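The sketch below illustrates this setup; the 80 x 3000 log-Mel shape matches Whisper's 30-second input, while the wrapper class, zero initialization, and forwarding of extra arguments are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ReprogrammedInput(nn.Module):
    """Wraps a frozen model and learns only an additive input perturbation x_t."""
    def __init__(self, frozen_model, n_mels=80, n_frames=3000):
        super().__init__()
        self.frozen_model = frozen_model
        for p in self.frozen_model.parameters():
            p.requires_grad = False                               # Theta stays fixed
        self.delta = nn.Parameter(torch.zeros(n_mels, n_frames))  # trainable noise x_t

    def forward(self, mel, *args):
        return self.frozen_model(mel + self.delta, *args)         # W_Theta(x + x_t) -> y_hat
```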
### Latent Space Efficient Learning
We insert small trainable modules called adapters between the encoder layers of our model as shown in Fig 2. These adapter layers contain a linear down projection of the latent input dimensions \(n\) to a bottleneck dimension \(b\) using \(W_{dp}\in\mathbb{R}^{n\times b}\). We apply \(g\left(.\right)\), the GELU [27] activation function on the bottleneck dimension followed by an up projection with \(W_{up}\in\mathbb{R}^{b\times n}\). Residual connections are applied around the adapter layer as represented in Equation 1. We compare the performance of multiple bottleneck dimensions \(b\), set to \(\frac{n}{2},\frac{n}{4}\), and \(\frac{n}{8}\) against multiple sample sizes in our experiments.
\[x\gets x+g\left(x.W_{dp}\right)W_{up} \tag{1}\]
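Equation (1) corresponds to a small bottleneck module along the lines of the sketch below; the near-identity initialization of the up-projection is an assumption, and the module would be inserted between the frozen encoder blocks.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualAdapter(nn.Module):
    """x <- x + GELU(x W_dp) W_up  (Eq. 1), with a residual connection around the bottleneck."""
    def __init__(self, n=512, b=256):          # n: encoder latent dim, b: bottleneck dim
        super().__init__()
        self.down = nn.Linear(n, b)            # W_dp
        self.up = nn.Linear(b, n)              # W_up
        nn.init.zeros_(self.up.weight)         # start close to the identity mapping (assumed)
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))
```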
### Token Mapping in Whisper
We utilize many-to-one hard-label mappings to map the source labels of the model to our target dialect classes, motivated by [10]. Specifically, we randomly assign unique language tokens of the model to each dialect, sum over their logits, and apply the softmax function to calculate the respective dialect probabilities. Performance degraded significantly when we attempted to implement a trainable label-mapping setup.
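The mapping itself reduces to summing the frozen model's logits over the language tokens assigned to each dialect, roughly as in the sketch below; the tensor layout is an assumption and the token ids in the usage comment are placeholders.

```python
import torch

def dialect_probabilities(token_logits, dialect_to_token_ids):
    """Many-to-one hard-label mapping from source-vocabulary logits to dialect probabilities.

    token_logits:         (batch, vocab_size) logits produced by the frozen model
    dialect_to_token_ids: list of lists; entry d holds the language-token ids assigned to dialect d
    """
    dialect_logits = torch.stack(
        [token_logits[:, ids].sum(dim=-1) for ids in dialect_to_token_ids], dim=-1)
    return dialect_logits.softmax(dim=-1)       # (batch, num_dialects)

# e.g. probs = dialect_probabilities(logits, [[101, 102], [103], ...])  # placeholder token ids
```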
## 4 Experimental Setup
In this section, we describe our experimental setup and present results for Arabic dialect identification on the ADI-17 dataset.
### General Purpose Speech Model
We use WhisperBase [5], an encoder-decoder architecture, as the underlying pretrained model for our experiments. It is a multi-task general-purpose speech model trained on 680,000 hours of multilingual data from 99 languages, ensuring rich representational knowledge. The input to the model is the log-magnitude Mel spectrogram representation computed from 30 seconds of an audio clip. We pad the audio clip with zeros if the duration is less than 30 seconds. More details can be found at [5]. For training, we use the Adam optimizer for all tasks. We explore learning rates of \(1\mathrm{e}{-2}\), \(1\mathrm{e}{-3}\), \(1\mathrm{e}{-4}\), and utilize the optimal learning rate with a linear learning rate scheduler to train for 50 epochs until convergence. We trained our models using 4 V100 GPUs with a batch size of 64 and 0.1 weight decay. Our implementation and pre-trained weights are open source at [https://github.com/Srijith-rkr/KAUST-Whisper-Adapter](https://github.com/Srijith-rkr/KAUST-Whisper-Adapter).
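For reference, the 30-second log-Mel inputs described above can be produced with the helper functions of the open-source whisper package, roughly as sketched below; the file path is a placeholder.

```python
import whisper   # pip install openai-whisper

model = whisper.load_model("base")                          # Whisper-Base backbone
audio = whisper.load_audio("example_clip.wav")              # placeholder path to an Arabic clip
audio = whisper.pad_or_trim(audio)                          # zero-pad or crop to 30 seconds
mel = whisper.log_mel_spectrogram(audio).to(model.device)   # 80 x 3000 log-Mel input features
```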
### Dataset
We evaluate our findings on the ADI-17 dataset [9] released as part of the MGB5 challenge. The dataset contains audio data for 17 Arabic dialects collected from YouTube. The training set contains 3033 hours of audio data, while the dev and test sets contain 58 hours of audio data. The training set of ADI-17 is unbalanced in terms of the number of utterances and hours of data. For example, the Iraq (IRQ) dialect has 815.8 hours of training data, whereas the Jordan (JOR) dialect contains only 25.9 hours of training data. The audio clips have varying lengths, between 1 second and 26 minutes. To deal with class imbalance during training, we over-sample random windows of 30 seconds of audio data from minority-class clips that are longer than 20 seconds. The dataset is further divided into three subcategories in terms of duration: short duration (\(<\) 5s), medium duration (5-20s), and long duration (\(>\) 20s). We perform our experiments using the long-duration (\(>\) 20s) subsection of the dataset, since we only use a fraction of the dataset to fine-tune the different transfer learning methods and evaluate the effect of data volume on performance. The sample sizes and their durations are listed in Table 2.
Figure 2: Illustration of the transformer architecture embedded with adapter layers
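The window-based over-sampling used for class balancing above can be sketched as follows; the sampling rate and helper names are assumptions, not values from the released code.
```python
# Sketch: draw random 30-second windows (with replacement) from long clips of
# minority dialects to balance the classes during training.
import random

SAMPLE_RATE = 16000               # assumed sampling rate
WINDOW = 30 * SAMPLE_RATE         # 30-second window in samples

def sample_window(waveform: list) -> list:
    """One random 30 s window; shorter clips are zero-padded."""
    if len(waveform) <= WINDOW:
        return list(waveform) + [0.0] * (WINDOW - len(waveform))
    start = random.randrange(len(waveform) - WINDOW)
    return list(waveform[start:start + WINDOW])

def oversample(clips: list, n_target: int) -> list:
    """Return n_target windows drawn with replacement from the given clips."""
    return [sample_window(random.choice(clips)) for _ in range(n_target)]
```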
### Baseline
The baseline reported by ADI-17 [9] and the performance of other methods on ADI-17 with utterances over 20 seconds are reported in Table 1. ADI-17 [9] utilized a convolutional neural network-based (CNN) architecture with a softmax output. The authors of [29] used a fusion of a transformer-based architecture and a CNN with downsampling, and [28] used a supervised clustering-based algorithm with triplet loss. The results in the table show that all three methods perform well in terms of accuracy. However, there are notable differences in the number of parameters used by each method, with ADI-17 [9] utilizing the fewest parameters at 13.1M and supervised clustering [28] requiring over 50M parameters.
### Performance Studies
WhisperBase is an encoder-decoder-based transformer [30] architecture with a total of \(70\)M trainable parameters. We fine-tuned the encoder and decoder separately to examine the impact of the architecture on DI, and we evaluate the same setup for bias-only fine-tuning. The performance of fine-tuning with \(10,000\) samples per class and the number of fine-tuned parameters are reported in Table 1. Fine-tuning only the encoder achieves a new state-of-the-art accuracy of 95.01% on ADI-17 while using only 30.95% of the training set. Fine-tuning the decoder alone yields results comparable to fine-tuning the entire model, but fine-tuning only the encoder achieves better results. Similarly, fine-tuning the encoder bias terms performs significantly better than fine-tuning the decoder bias terms. Input reprogramming does not perform well compared to bias-only fine-tuning, although they share a similar number of parameters.
WhisperBase has \(512\) latent dimensions from its transformer encoder. Thus, we train our residual adapter layers with bottleneck dimensions of \(64\), \(128\), and \(256\) to examine the impact of the bottleneck dimensions on performance.
We also tested (1) adapters with bridge connections, (2) adapters with dense connections [31], (3) adapters without residual connections, (4) adapters with self-attention backbone, (4a) with Gelu activation, (4b) with residual connections. We observed that the setup in Section 3.3 had the best performance. The adapter approach with \(256\) latent dimensions has the same performance as fully fine-tuning the model, despite it updating only 2.5% of the network parameters. This is advantageous since a separate copy of all model parameters does not have to be created for each downstream task of Whisper. Instead, only the adapter weights need to be stored for each downstream task.
The performance comparison of fine-tuning methods on the development set against the number of samples per class is displayed in Figure 3. Here we observe that although encoder fine-tuning, decoder fine-tuning, and full fine-tuning achieve similar performance against \(10,000\) samples per class, encoder fine-tuning performs significantly better with a lower number of samples. We conjecture that this is because only the encoder of WhisperBase processes the audio input. The decoder receives the output of the encoder. A similar trend is observed with bias-only fine-tuning. Furthermore, the adapter methods with \(64\), \(128\), and \(256\) latent dimensions display comparable performance, with minor improvements in performance as the bottleneck dimensions increase. Notably, the adapter methods outperform decoder fine-tuning with fewer samples.
\begin{table}
\begin{tabular}{c c c c c c} \hline
**Method** & **Trainable Para. (\(\downarrow\))** & **Trainable Ratio (\%)** & **Dev. Acc. (\(\uparrow\))** & **Test Acc. (\(\uparrow\))** & **Utility Score(\(\uparrow\))** \\ \hline Frozen WhisperBase[5] & 0 & 0\% & 2.52\% & 4.44\% & - \\ \hline Full fine-tuning & 71.8M & 100\% & 95.55\% & 93.34\% & 11.88 \\ Encoder fine-tuning & 18.9M & 26.32\% & **96.39**\% & **95.01**\% & 13.06 \\ Decoder fine-tuning & 52M & 72.42\% & 95.07\% & 93.75\% & 12.15 \\ \hline BitFit & 75.8K & 0.10\% & 59.01\% & 57.68\% & 11.82 \\ Encoder BitFit & 32.3K & 0.04\% & 44.59\% & 41.83\% & 9.28 \\ Decoder BitFit & 43.5K & 0.06\% & 36.78\% & 39.42\% & 8.50 \\ \hline Input Reprogramming & 240K & 0.33\% & 27.04\% & 27.91\% & 5.19 \\ Adapters-64 & 642K & 0.89\% & 92.79\% & 89.47\% & **15.41** \\ Adapters-128 & 1M & 1.39\% & 93.99\% & 91.50\% & 15.25 \\ Adapters-256 & 1.8M & 2.50\% & **95.55**\% & **93.15**\% & 14.89 \\ \hline \hline ADI-17 [9] & 13.1M & - & 93.70\% & 90.4\% & 13.17 \\ CNN+Transformer [29] & 64.3M & - & 94.04\% & 93.06 & 11.92 \\ Supervised clustering [28] & \(>\)50M & - & 95.09\% & 94.35\% & \(\sim\)12.25 \\ \hline \end{tabular}
\end{table}
Table 1: An overview of the model performance of fine-tuning and parameter-efficient learning on utterances over \(20\) seconds, as in the standard ADI-17 setup. The development and test accuracy were evaluated in the official dev and test settings and outperform three previous state-of-the-art results [9, 28, 29]. Note that directly using frozen pre-trained Whisper with label mapping (first result row) achieves only 2.52% / 4.44% dev/test accuracy on the dialect identification task, which motivates the need for efficient model designs.
\begin{table}
\begin{tabular}{c c c} \hline
**\#samples per class** & **\#hours per class** & **fraction of ADI-17** \\ \hline
0.5k & 4.1 & 2.33\% \\
1k & 8.2 & 4.60\% \\
2k & 15.5 & 8.69\% \\
5k & 32.8 & 18.40\% \\
10k & 53.1 & 30.95\% \\ \hline \end{tabular}
\end{table}
Table 2: Dataset sample statistics
### Utility Discussion and Efficient Module Selection
In addition to accuracy, we also report Utility scores as defined in Equation (2) to compare the efficiency against performance. The results of this analysis are presented in Table 1.
\[Utility\ score\left(i\right)=\frac{Acc\_test\left(i\right)}{\log\left(\text{No.\ of trainable parameters}\right)} \tag{2}\]
The results in Table 1 show that adapters demonstrate the highest _Utility_ scores among all methods, as they perform well with a smaller number of trainable parameters. Specifically, adapter-\(64\) (ninth row) achieves the best performance-versus-efficiency tradeoff, indicating that this method may be the most efficient approach for the task.
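For reference, the score can be computed as below; a base-10 logarithm of the trainable-parameter count reproduces the values in Table 1 (e.g., \(89.47/\log_{10}(642\mathrm{K})\approx 15.4\)).
```python
# Sketch: Utility score of Equation (2) - test accuracy (in %) divided by the
# base-10 log of the number of trainable parameters.
import math

def utility_score(test_acc_percent: float, n_trainable: int) -> float:
    return test_acc_percent / math.log10(n_trainable)
```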
### Neural Saliency Analysis on Acoustic Features
As a preliminary study of parameter-efficient learning for Arabic dialect identification, we aim to provide attribute-based interpretations built on the frozen pre-trained Whisper model. Neural saliency methods [32, 33] provide interpretable intuitions behind black-box networks by analyzing the weight distribution over hidden neurons. Saliency methods such as Grad-CAM [33] can be used to identify input regions that influence the class prediction probabilities. In this work, we employ a perturbation-based saliency map [32] for feature-level interpretations. This algorithm masks parts of the input to determine the regions responsible for the classifier decision, in order to better understand the patterns the model uses to predict Arabic dialects. The results of our analysis are presented in Fig 4. Fig 4a represents the log-Mel spectrogram of 30 seconds of Qatari dialectal speech. The mask of adapter-256 in Fig 4b emphasizes the parts of the input the model focuses on to make predictions; these regions are highlighted in red. The differences between the masks of the adapter, the frozen model, and the fine-tuned encoder model are plotted in Fig 4c and 4d. The bluer plot in Fig 4d compared to Fig 4c indicates that adapter-256 and the fine-tuned encoder model observe similar input regions, whereas adapter-256 and frozen Whisper differ more.
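The perturbation-based analysis can be approximated by a generic occlusion procedure such as the sketch below; this illustrates the idea rather than the exact algorithm of [32], and the patch size is arbitrary.
```python
# Sketch: occlude one time-frequency patch of the log-Mel spectrogram at a time
# and record the drop in the predicted dialect probability; larger drops mark
# more salient regions.
import numpy as np

def occlusion_saliency(predict_prob, mel: np.ndarray, target: int, patch=(16, 50)) -> np.ndarray:
    """predict_prob maps an (n_mels, n_frames) array to a vector of class probabilities."""
    base = predict_prob(mel)[target]
    saliency = np.zeros_like(mel)
    ph, pw = patch
    for i in range(0, mel.shape[0], ph):
        for j in range(0, mel.shape[1], pw):
            masked = mel.copy()
            masked[i:i + ph, j:j + pw] = mel.min()            # occlude the patch
            saliency[i:i + ph, j:j + pw] = base - predict_prob(masked)[target]
    return saliency
```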
## 5 Conclusion
This paper presents a parameter-efficient approach to Arabic dialect identification, as one of the first studies of this low-resource application. The proposed method utilizes frozen pre-trained speech recognition models by incorporating trainable parameters at the input and latent-space levels. By using only 30.95% of the ADI-17 training data and 2.5% of the model parameters, the proposed method achieves state-of-the-art accuracy on the ADI-17 benchmark, comparable to the performance of full fine-tuning. These results showcase the effectiveness of our efficient learning approach. Our study suggests that this learning paradigm can be extended to predict various under-resourced dialects with limited data. Further research on this approach may enable dialect prediction and other tasks for low-resource and long-tail data [34, 35], which deserve more study.
Figure 4: (a) Log-Mel spectrogram of the audio input, (b) Mask of the adapter-256 model, (c) Difference between the masks of adapter-256 and frozen model and (d) Difference between the masks of adapter-256 and fine-tuned encoder model.
Figure 3: Performance comparison of fine-tuning methods using the development set against the number of samples per class. |
2304.06411 | Meta-Auxiliary Learning for Adaptive Human Pose Prediction | Predicting high-fidelity future human poses, from a historically observed
sequence, is decisive for intelligent robots to interact with humans. Deep
end-to-end learning approaches, which typically train a generic pre-trained
model on external datasets and then directly apply it to all test samples,
emerge as the dominant solution to solve this issue. Despite encouraging
progress, they remain non-optimal, as the unique properties (e.g., motion
style, rhythm) of a specific sequence cannot be adapted. More generally, at
test-time, once encountering unseen motion categories (out-of-distribution),
the predicted poses tend to be unreliable. Motivated by this observation, we
propose a novel test-time adaptation framework that leverages two
self-supervised auxiliary tasks to help the primary forecasting network adapt
to the test sequence. In the testing phase, our model can adjust the model
parameters by several gradient updates to improve the generation quality.
However, due to catastrophic forgetting, both auxiliary tasks typically tend to
the low ability to automatically present the desired positive incentives for
the final prediction performance. For this reason, we also propose a
meta-auxiliary learning scheme for better adaptation. In terms of general
setup, our approach obtains higher accuracy, and under two new experimental
designs for out-of-distribution data (unseen subjects and categories), achieves
significant improvements. | Qiongjie Cui, Huaijiang Sun, Jianfeng Lu, Bin Li, Weiqing Li | 2023-04-13T11:17:09Z | http://arxiv.org/abs/2304.06411v1 | # Meta-Auxiliary Learning for Adaptive Human Pose Prediction
###### Abstract
Predicting high-fidelity future human poses, from a historically observed sequence, is decisive for intelligent robots to interact with humans. Deep end-to-end learning approaches, which typically train a generic pre-trained model on external datasets and then directly apply it to all test samples, emerge as the dominant solution to solve this issue. Despite encouraging progress, they remain non-optimal, as the unique properties (_e.g._, motion style, rhythm) of a specific sequence cannot be adapted. More generally, at test-time, once encountering unseen motion categories (out-of-distribution), the predicted poses tend to be unreliable. Motivated by this observation, we propose a novel test-time adaptation framework that leverages two self-supervised auxiliary tasks to help the primary forecasting network adapt to the test sequence. In the testing phase, our model can adjust the model parameters by several gradient updates to improve the generation quality. However, due to catastrophic forgetting, both auxiliary tasks typically have only a limited ability to automatically provide the desired positive incentives for the final prediction performance. For this reason, we also propose a meta-auxiliary learning scheme for better adaptation. In terms of the general setup, our approach obtains higher accuracy, and under two new experimental designs for out-of-distribution data (unseen subjects and categories), achieves significant improvements.
## Introduction
Human pose forecasting, accurately predicting how a person will move in the near future, is a fundamental task in computer vision, which has enormous potential in machine intelligence, and human-robot interaction Gui et al. (2018); Wang et al. (2021); Liu et al. (2021); Piergiovanni et al. (2020); Martinez-Gonzalez et al. (2021); Sofianos et al. (2021).
Over the past few years, extensive literature has sprung up exploring this fascinating topic, with deep-learning based end-to-end approaches proving increasingly popular Li et al. (2020); Gui et al. (2018); Li et al. (2018, 2021). Researchers are prone to train on external large-scale datasets Ionescu et al. (2014) to achieve a generic pre-trained model, which is then indiscriminately applied to all test sequences with the same set of network weights in the inference stage Jain et al. (2016); Wei Mao (2021); Dang et al. (2021). These approaches have extensively investigated this issue from various perspectives, emerging as the mainstream solutions.
While the empirical results are encouraging, they are not optimal. In real-world applications, the inherent defects of existing DNN-based models Ruiz et al. (2018); Gopalakrishnan et al. (2019); Barsoum et al. (2018); Kundu et al. (2019); Aliakbarian et al. (2021) cannot be overlooked, the major one being that the features learned from external datasets can hardly cover the unique attributes of a specific sequence, including motion style and rhythm, as well as the height and physical proportions of the actors. More generally, for diverse human motion, large-scale datasets can hardly cover all categories, which means that, at test time, unseen action categories are frequently encountered Ulyanov et al. (2018); Shin et al. (2021); Hao et al. (2021). In this case, the model tends to focus on the dominant distribution seen during training, while failing to take into account the unique patterns of new action categories (out-of-distribution); therefore, unreliable results may be yielded. This inability to adapt to the internal properties of a given sequence hinders the realistic application of predictive algorithms (Yuan et al., 2020; Mao et al., 2020; Shu et al., 2021).
Figure 1: Comparison of a classic deep end-to-end model (top) with our approach (bottom) at test-time. Given an observed sequence \(\mathbf{X}_{1:T}\), typical approaches indiscriminately utilize the pre-trained model obtained from large-scale datasets to generate the prediction \(\tilde{\mathbf{Y}}_{1:\Delta T}\), which is sub-optimal, as the internal information within the specific sample is ignored. By contrast, in the test phase, our model learns to adapt to the unique properties of the test sample via several meta-learning steps. Here, the black poses denote the prediction, and the underlying orange ones are the GT.
To solve this, we propose a novel Test-Time Adaptation (TTA) approach. Concretely, our model falls into auxiliary learning, where the network consists of one primary task and two self-supervised auxiliary ones. The primary task (Pri.) focuses on mapping historical observations to the predicted poses. The auxiliary task-1 (Aux.1) is a simple binary classifier that distinguishes whether the input sequence is a scrambled counterpart of the observation. For the auxiliary task-2 (Aux.2), some joints of the observed sequence are randomly removed to construct a corrupted sequence, and Aux.2 aims to repair these missing joints. The Pri., Aux.1, and Aux.2 share most of the parameters, and are jointly trained to obtain a base model. Then, in the testing phase, Aux.1 and Aux.2 act as a regularization to further update the shared weights and enhance the generalization to specific sequences.
To solve it, we propose a novel Test-Time Adaptation (TTA) approach. Concretely, our model falls into auxiliary learning, where the network consists of one primary task and two self-supervised auxiliary ones. The primary task (Pri.) focuses on mapping historical observations to the predicted poses. The auxiliary task-1 (Aux.1) is a simple binary classifier to distinguish whether the input sequence is a scrambled counterpart of the observation. As a contrast, some joints of the observed sequence are randomly removed to construct the corrupted sequence, and then Aux.2 aims to repair these missing joints. The Pri., Aux.1, and Aux.2 share most of the parameters, and are jointly trained to achieve a base model. Then, in the testing phase, Aux.1 and Aux.2 behave as a regularization to further update the shared weights to enhance the generalization for specific sequences.
Intuitively, the auxiliary task provides rich semantic cues to fine-tune the model parameters (Chi et al., 2021; Varsavsky et al., 2020). However, empirical results show that, a rough update of the base model may lead to the criticized _negative transfer_, as the invalid message may be exchanged (Xiao et al., 2018; Vafaeikia et al., 2020). To solve it, we design a Gate Sharing Unit (GSU), which learns to control the relative intensity of message transmission among tasks in both training and testing, to pass the favorable information, while hinder the redundant or even incorrect ones.
Even so, there is a legacy problem: how to ensure that the Pri. branch obtains better adapted parameters to ensure the forecasting performance of specific sequences. For this purpose, inspired by MXML (Liu et al., 2019; Chi et al., 2021), we integrate meta-learning into auxiliary learning to form meta-auxiliary learning. Our meta-objective is to optimize the whole network via meta-auxiliary learning so that the Pri. branch can better adapt to test sequences. Note that we call the pair composed of the observed and the future poses the 'task' in the meta-learning nomenclature. Moreover, for each observed sequence, the adapted parameters are different, and its specific motion patterns can be generalized.
Methodologically, to capture the spatio-temporal pattern of skeleton data, we introduce two virtual relay nodes into the sparse transformer, to form the Spatial Sparse-Relay Transformer (SS-RT) and Temporal Sparse-Relay Transformer (TS-RT) (Child et al., 2019; Aksan et al., 2021; Cai et al., 2020). The relay nodes are capable of receiving information from all human joints along with spatial and temporal aspects, to extract the global spatio-temporal correlations. With the sparse transformer and relay-nodes update, the newly designed SS-RT and TS-RT explicitly consider the human topology and temporal smoothness of motion sequences, as well as long-term correlations in space and time.
Our contributions are multifaceted: (1) We develop a test-time adaptation approach that leverages meta-auxiliary learning to enable fast and effective adaptation to the specific information within test sequences. (2) Both motion repairing and binary classification are introduced as our self-auxiliary tasks, which are exploited to automatically optimize the pre-trained model, without any extra manual labeling. (3) To avoid the negative transfer across multi-tasks, the GSUs are designed to allow valid information to be passed easily among tasks, while preventing useless one. (4) On two widely-used benchmarks, our model achieves state-of-the-art performance, and under out-of-distribution data, outperforms the existing methods by a large margin. To our knowledge, this is the first attempt to improve the prediction quality for unseen categories and subjects in the real world.
## Related Work
### Human Motion Forecasting
Deep end-to-end learning approaches have dominated this issue, with the attraction of providing high flexibility and exceptional performance (Martinez et al., 2017; Corona et al., 2020; Cui et al., 2020; Li et al., 2020). Researchers typically regard human motion forecasting as a special seq2seq generation problem, and propose a variety of RNN variants to extract the temporal pattern of 3D skeleton sequences (Tang et al., 2018; Gui et al., 2018; Chiu et al., 2019), which have yielded promising results. Despite this, due to the error accumulation and the failure of accessing the topological relationship, the predicted frame tends to converge to an unexpected and static pose (Fragkiadaki et al., 2015; Jain et al., 2016; Gopalakrishnan et al., 2019; Martinez et al., 2017).
Nowadays, various GNN-based models are being developed to extract the semantic connectivity of the 3D skeleton sequence, with promising results (Mao et al., 2019; Cui et al., 2020; Li et al., 2020, 2020; Dang et al., 2021; Ma et al., 2022; Zhong et al., 2022). However, GCNs are capable only of gathering information from the local neighbor joints, and have a limited capacity to capture long-term relationships.
Currently, researchers attempt to exploit the Transformer to achieve the long-range correlation, whereas, it fails to consider the meaningful topology and temporal smoothness of motion sequences, and brings more computational cost (Mao et al., 2020; Aksan et al., 2021; Guo et al., 2022). In contrast, our approach, which includes a sparse transformer and virtual relay nodes, allows us to explicitly focus on the meaningful local structure and temporal continuity while still extracting long-term correlations.
In real applications, the above approaches retain a significant limitation, _i.e._, they cannot adapt to the specific properties of test sequences. This work aims to solve this.
**Test-time Adaptation.** To improve the generalization for diverse distributions, the test-time adaptation (TTA) scheme is recently proposed (Chi et al., 2021; Varsavsky et al., 2020; Hu et al., 2021; Shin et al., 2021). Typically, deep learning algorithms are trained on external datasets to produce a general model, and before making decisions, TTA resorts to auxiliary tasks to neatly fine-tune the weights according to the internal knowledge of test samples. Due to the utilization of both external and internal information, superior outcomes are achieved (He et al., 2021; Hao et al., 2021).
However, existing test-time adaptation techniques face a key challenge: the auxiliary task may send inaccurate or even incorrect messages to the primary task (Vafaeikia et al., 2020; Cui et al., 2021; Xiao et al., 2018). To address this, we design a simple but effective gated sharing unit (GSU) that adaptively releases the important context while blocking the rest.
**Meta-learning.** Our work is related to meta-learning (learning to learn), particularly the model-agnostic version (MAML), which allows the pre-trained model to be adjusted to perform fast adaptation to individual samples. Liu et al. (2022) uses MAML for multi-domain single image dehazing, with the meta-objective of learning consistency across the losses of different tasks. Along with MAML, Liu et al. (2019) presents the meta-auxiliary learning (MXML) framework, which generates labels for additional auxiliary tasks. Inspired by MXML, Chi et al. (2021) also achieves a fast adaptation to improve the performance of the primary deblurring operation for unseen images. Our approach, which draws inspiration from these publications in part, involves the following two changes: we design two auxiliary tasks to identify more effective semantics, and our auxiliary tasks are self-supervised, enabling automatic inference.
## Proposed Approach
Suppose that \(\mathbf{X}_{1:T}=[\mathbf{X}_{1},\mathbf{X}_{2},...,\mathbf{X}_{T}]\) is an observed sequence over horizon \(T\), where each \(\mathbf{X}_{t}=[\mathbf{j}_{1},\mathbf{j}_{2},...,\mathbf{j}_{N}]\in\mathbb{R}^{N\times D}\) records the 3D coordinates of \(N\) human joints in a frame. Current DNN-based models typically train a direct mapping from the observation to the future sequence, \(\mathcal{M}:\mathbf{X}_{1:T}\rightarrow\mathbf{Y}_{1:\Delta T}\), with \(\mathbf{Y}_{1:\Delta T}=\{\mathbf{Y}_{1},\mathbf{Y}_{2},...,\mathbf{Y}_{\Delta T}\}\).
In contrast, our approach incorporates the following developments. (1) Two self-auxiliary tasks are introduced, sharing the majority of model weights and allowing collaborative training alongside the primary forecasting one. Additionally, both Aux. tasks are connected to the Pri. branch, and therefore the effective semantic clues can be provided as a high-order regularization. (2) To avoid negative transfer across tasks, we build the GSU to prevent the passage of erroneous/incorrect messages. (3) We first train on large-scale datasets to achieve a base model, and the ultimate goal is to further optimize it at test-time, to automatically adapt to the sample-specific properties, and then yield more realistic predicted results \(\tilde{\mathbf{Y}}_{1:\Delta T}=\{\tilde{\mathbf{Y}}_{1},\tilde{\mathbf{Y}}_{2 },...,\tilde{\mathbf{Y}}_{\Delta T}\}\). (4) In practical studies, the naive updates might not bring desired improvements. To solve it, a meta-auxiliary learning framework is proposed, which learns the better-adapted parameters for the effective test-time adaptation of a specific sequence.
### Network Architecture
The network architecture consists of one primary branch and two self-supervised auxiliary ones, as seen in Fig.2. For convenience, the following uses subscripts to indicate the spatial indexes, and superscripts for temporal indexes.
**Primary Branch.** The Pri. is intended to predict future motions, where its backbone comprises SS-RT and ST-RT to extract the spatio-temporal correlation of motion sequences.
_Spatial Sparse-Relay Transformer (SS-RT)_ is implemented to capture the spatial correlation. In contrast to the vanilla version Vaswani et al. (2017), we use the spatial sparse transformer (SST) to explicitly consider the skeletal structure Child et al. (2019). Moreover, we attach a virtual spatial-relay vertex, which utilizes a separate transformer, called spatial-relay transformer (SRT), to directly aggregate the global information in a frame, and distribute it to each one to consider the long-term correlation.
Let \(\mathbf{c}^{t}=\{\mathbf{c}^{t}_{1},\mathbf{c}^{t}_{2},...,\mathbf{c}^{t}_{N}\}\in\mathbb{R}^{N\times C_{in}}\) be the feature at the \(t\)-th frame, and \(\mathbf{c}^{t}_{r}\) be the spatial-relay vertex. For each node \(\mathbf{c}^{t}_{i}\), we use 3 linear transformations to generate a query \(\mathbf{q}_{i}\in\mathbb{R}^{d}\), a key \(\mathbf{k}_{i}\in\mathbb{R}^{d}\) and a value \(\mathbf{v}_{i}\in\mathbb{R}^{d}\). The SST is used to consider the natural connectivity of the human skeleton:
\[\mathbf{c}^{\prime t}_{i}=\sum softmax(\frac{\mathbf{q}_{i}\cdot\mathbf{k}_{j}}{\sqrt{d}})\mathbf{v}_{j},\quad j\in\{i,\mathcal{N}_{i},r\}, \tag{1}\]
where \(\mathcal{N}_{i}\) is the neighbors of \(i\)th joint, and \(r\) stands for the label of the spatial-relay vertex. Then, the meaningful inductive bias of the human skeleton is expressly considered.
Figure 2: **Illustration of our approach. It involves a primary (Pri.) task and two self-supervised auxiliary (Aux.) ones, sharing most of the parameters w.r.t \(\psi_{Sh.}\), except for the task-specific components w.r.t \(\{\psi_{Pri.},\psi_{Aux.1},\psi_{Aux.2}\}\). The Pri. task is concerned with mapping historical observations to the expected prediction. The objective of Aux.1 is to provide the correct label of the scrambled sequence, and Aux.2 is to repair the missing joints in the corrupted sequence, where both the scrambled and corrupted sequences are derived from the observation. With the proposed GSU, the valid contexts can be exchanged, while the invalid or incorrect ones are blocked. \(\sigma\), \(\delta\) denote the sigmoid and LeakyReLU functions, respectively. \(\otimes\) is the element-wise product and \(\oplus\) is addition. The last observed frame is regarded as the seed pose (red rectangle).**
In addition, we make use of the SRT to capture the long-term spatial correlation:
\[\mathbf{c}^{\prime t}_{r}=\sum softmax(\frac{\mathbf{q}_{r}\cdot\mathbf{k}_{j}}{\sqrt{d}})\mathbf{v}_{j},\quad j\in\{r\}\cup\{j:1\leq j\leq N\}. \tag{2}\]
By stacking the SST and SRT, our SS-RT is formed, which is capable of extracting the intrinsic connections of human joints, and meanwhile, capturing the long-term spatial correlation at the intra-frame level. The resulting output of SS-RT can be formalized as: \(\mathbf{c}^{\prime t}=\{\mathbf{c}^{\prime t}_{1},\mathbf{c}^{\prime t}_{2},...,\mathbf{c}^{\prime t}_{N}\}\in\mathbb{R}^{N\times C_{out}}\).
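A condensed single-head sketch of this masked attention is given below; the boolean mask encodes the allowed pattern (each joint attends to itself, its skeletal neighbours and the relay vertex, while the relay vertex attends to all joints), and the class name is ours.
```python
# Sketch: spatial sparse-relay attention of Eqs. (1)-(2) as masked scaled
# dot-product attention over the N joints plus one relay vertex.
import torch
import torch.nn as nn

class SpatialSparseRelayAttention(nn.Module):
    def __init__(self, c_in: int, d: int = 64):
        super().__init__()
        self.q = nn.Linear(c_in, d)
        self.k = nn.Linear(c_in, d)
        self.v = nn.Linear(c_in, d)
        self.scale = d ** 0.5

    def forward(self, x: torch.Tensor, allowed: torch.Tensor) -> torch.Tensor:
        # x: (N + 1, c_in) joint features with the relay vertex as the last row
        # allowed: (N + 1, N + 1) boolean mask, True where attention is permitted
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = (q @ k.t()) / self.scale
        scores = scores.masked_fill(~allowed, float("-inf"))  # sparsity pattern
        return scores.softmax(dim=-1) @ v
```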
_Temporal Sparse-Relay Transformer (TS-RT)_ consists of a temporal sparse transformer (TST) for extracting the local inter-frame smoothness, and a temporal-relay transformer (TRT) for long-term temporal dependency. Let \(\mathbf{c}_{v}=\{\mathbf{c}^{\prime 1}_{v},\mathbf{c}^{\prime 2}_{v},...,\mathbf{c}^{\prime T}_{v}\}\in\mathbb{R}^{T\times C_{in}}\) be the input hidden state, for \(v\in N\), with \(\mathbf{c}^{\prime i}_{v}\in\mathbb{R}^{C_{in}}\), and let \(\mathbf{c}^{\prime r}_{v}\) be the feature of the temporal-relay node; 3 linear transformations are exploited to produce \(\mathbf{q}_{i}\in\mathbb{R}^{d}\), \(\mathbf{k}_{i}\in\mathbb{R}^{d}\) and \(\mathbf{v}_{i}\in\mathbb{R}^{d}\). The TST is defined as:
\[\mathbf{c}^{\prime i}_{v}=\sum softmax(\frac{\mathbf{q}^{i}\cdot\mathbf{k}^{j}}{d})\mathbf{v}^{j},\quad j\in\{i,i-1,i+1,r\}. \tag{3}\]
Then, the temporal-relay node is updated with the TRT:
\[\mathbf{c}^{\prime r}_{v}=\sum softmax(\frac{\mathbf{q}^{r}\cdot\mathbf{k}^{j}}{d})\mathbf{v}^{j},\quad j\in\{r\}\cup\{j:1\leq j\leq T\}. \tag{4}\]
The TST and TRT are stacked to create the TS-RT, where the output feature is \(\mathbf{c}^{\prime}_{v}=\{\mathbf{c}^{\prime 1}_{v},\mathbf{c}^{\prime 2}_{v},...,\mathbf{c}^{ \prime T}_{v}\}\in\mathbb{R}^{T\times C_{out}}\).
With the TST and TRT, the TS-RT enables the consideration of both local and global temporal correlation, which is crucial for human motion prediction. In both SS-RT and TS-RT, we set \(d=64\), and in keeping with recent progress [21, 20], we exploit \(H=8\) independent heads to stabilize the training.
Finally, as illustrated in Fig.2, the Pri. branch is composed of 9 shared blocks and a task-specific one, each of which is formed by a SS-RT and a TS-RT. _The detailed illustrations of SS-RT and TS-RT refer to the supplementary material._
Following the recent works [23, 24], the combination of the \(L_{2}\) distance between predicted and ground-truth joint positions and a bone-length loss is exploited as the loss \(\mathcal{L}_{Pri.}\) of the Pri. branch.
### Joint Training and Meta-auxiliary Learning
**Joint Training.** In our model, we have introduced a primary branch and two auxiliary ones. Since our model involves multiple branches, it can be directly trained, much as the multi-task learning solutions Cui et al. (2021); Li et al. (2020); Chi et al. (2021). The overall objective is:
\[\mathcal{L}=\mathcal{L}_{Pri.}+\mathcal{L}_{Aux.1}+\mathcal{L}_{ Aux.2}. \tag{10}\]
Once the training is complete on the external dataset, the pre-trained model, w.r.t. \(\mathcal{M}_{\psi}\), is attained, which is regarded as the initialization of the meta-auxiliary learning.
**Meta-auxiliary Learning.** Due to the failure of exploiting the internal properties of test samples, the pre-trained model learned from Eq.10 has a low ability to adapt to the unseen data. We solve this problem by using the proposed meta-auxiliary learning to obtain the optimal parameter that is conducive to adapting to a given motion sample.
For ease of exposition, we decompose the model parameters \(\psi\) into the shared weights \(\psi_{Sh.}\) and the task-specific ones \(\{\psi_{Pri.},\psi_{Aux.1},\psi_{Aux.2}\}\) of each branch. To enable the model parameters to be customized according to the unique distribution of test samples, we propose to use meta-auxiliary learning to create the adapted parameters. Concretely, inspired by Liu et al. (2019); Chi et al. (2021), our meta-auxiliary learning intends to learn a consistency between the parameters of our Aux. branches and the Pri. task, to ensure that the auxiliary tasks improve the performance of the Pri. task. In the inner loop of the meta-training phase, several gradient updates of the auxiliary losses are used to update the whole network parameter \(\psi\), thereby performing effective adaptation on a specific sample. Given a training pair \((\mathbf{X}_{1:T}^{(k)},\mathbf{\tilde{X}}_{1:T}^{(k)})\), concerning the corrupted and repaired sequence, and \((p^{(k)},\tilde{p}^{(k)})\), w.r.t. the correct label of the disordered counterpart and the predicted one, we obtain:
\[\tilde{\psi}^{(k)}\!\!\leftarrow\!\psi\!-\!\alpha\nabla_{\!\psi}\left[\mathcal{ L}_{Aux.1}(p^{(k)};\!\tilde{p}^{(k)})\!+\!\mathcal{L}_{Aux.2}(\mathbf{X}_{1:T}^{ (k)},\tilde{\mathbf{X}}_{1:T}^{(k)})\right] \tag{11}\]
where \(\tilde{\psi}^{(k)}=\{\tilde{\psi}_{Sh.}^{(k)},\tilde{\psi}_{Pri.}^{(k)},\tilde{\psi}_{Aux.1}^{(k)},\tilde{\psi}_{Aux.2}^{(k)}\}\) is the adapted parameter tailored to the specific observation, and \(\alpha\) is the learning rate of the adaptation procedure. We note that both Aux. branches share the same gradient descent direction and are optimized concurrently to ensure a synergistic effect for the adaptation.
Our approach strives to maximize the performance of the Pri. forecasting branch by adjusting the model parameters through the self-supervised auxiliary tasks. For this purpose, our meta-objective is formally denoted as:
\[\min_{\psi_{Sh.},\psi_{Pri.}}\sum_{k=1}^{K}\mathcal{L}_{Pri.}\left(\mathbf{Y}_{ 1:\Delta T}^{(k)},\tilde{\mathbf{Y}}_{1:\Delta T}^{(k)};\tilde{\psi}_{Sh.}^{ (k)},\tilde{\psi}_{Pri.}^{(k)}\right). \tag{12}\]
Here, \(\mathcal{L}_{Pri.}\) is computed using the pair \((\mathbf{X}_{1:T}^{(k)},\mathbf{Y}_{1:\Delta T}^{(k)})\), while the optimization is over \(\psi=\{\psi_{Sh.},\psi_{Pri.},\psi_{Aux.1},\psi_{Aux.2}\}\) to achieve the updated parameter of the Pri. task. Eq.12 can be minimized using gradient descent algorithms:
\[\psi\leftarrow\psi-\beta\sum_{k=1}^{K}\nabla_{\psi}\mathcal{L}_{ Pri.}\left(\mathbf{Y}_{1:\Delta T}^{(k)},\tilde{\mathbf{Y}}_{1:\Delta T}^{(k)}; \tilde{\psi}_{Sh.}^{(k)},\tilde{\psi}_{Pri.}^{(k)}\right), \tag{13}\]
where \(\beta\) is the meta-learning rate. The overall meta-auxiliary learning procedure is conducted in Algorithm.1, in which the parameters of the Pri. task are updated in the outer loop, and the auxiliary parameters are updated in the inner loop. Regarding the testing phase, for a specific sequence, Eq.11 is directly used to obtain the adapted parameters \(\psi\), and then \(\{\psi_{Sh.},\psi_{Pri.}\}\) is used to improve the generalization capability of the primary forecasting task.
```
Require: learning rates \(\alpha\), \(\beta\); pre-trained parameter \(\psi=\{\psi_{Sh.},\psi_{Pri.},\psi_{Aux.1},\psi_{Aux.2}\}\)
Ensure: meta-auxiliary learned parameter
1: initialize the model with the pre-trained parameter \(\psi\)
2: while not converged do
3:   sample a training batch \(\{\mathbf{X}_{1:T}^{(k)},\mathbf{Y}_{1:\Delta T}^{(k)}\}_{k=1}^{K}\);
4:   for each k do
5:     evaluate the auxiliary losses \(\mathcal{L}_{Aux.1},\mathcal{L}_{Aux.2}\);
6:     update the adapted parameter: \(\tilde{\psi}^{(k)}=\psi-\alpha\nabla_{\psi}[\mathcal{L}_{Aux.1}(p^{(k)},\tilde{p}^{(k)})+\mathcal{L}_{Aux.2}(\mathbf{X}_{1:T}^{(k)},\tilde{\mathbf{X}}_{1:T}^{(k)})]\)
7:   end for
8:   evaluate the primary task and update: \(\psi\leftarrow\psi-\beta\sum_{k=1}^{K}\nabla_{\psi}\mathcal{L}_{Pri.}(\mathbf{Y}_{1:\Delta T}^{(k)},\tilde{\mathbf{Y}}_{1:\Delta T}^{(k)};\tilde{\psi}_{Sh.}^{(k)},\tilde{\psi}_{Pri.}^{(k)})\)
9: end while
```
**Algorithm 1** Meta-Auxiliary Training
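A minimal PyTorch-style sketch of one step of Algorithm 1 is given below, using the `higher` library for the differentiable inner update; the loss interfaces are placeholders for the quantities defined above, and this is a simplification rather than the actual implementation.
```python
# Sketch: inner update on the auxiliary losses (Eq. 11) followed by the outer
# meta-update of psi with the primary loss (Eq. 13).
import torch
import higher

def meta_auxiliary_step(model, samples, aux_loss, pri_loss, alpha, meta_opt):
    meta_opt.zero_grad()
    inner_opt = torch.optim.SGD(model.parameters(), lr=alpha)
    for sample in samples:                                   # inner loop over k
        with higher.innerloop_ctx(model, inner_opt,
                                  copy_initial_weights=False) as (fmodel, diffopt):
            diffopt.step(aux_loss(fmodel, sample))           # adapted parameters
            pri_loss(fmodel, sample).backward()              # accumulate meta-gradient on psi
    meta_opt.step()                                          # outer update of psi
```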
### Implementation Details
As shown in Fig.2, our model includes a Pri. and two Aux. branches. The shared part consists of \(9\) residual blocks, created by combining the outputs of SS-RT and TS-RT, with channel sizes \(C_{in}=C_{out}=512\). In addition, the task-specific portions of the Pri. and Aux.2 are an additional block that maps the feature back to the original dimension. By contrast, Aux.1 is a binary classifier, whose separate part comprises a flatten layer and 4 FC layers with channel numbers \(256,128,64,1\). Aux.1 takes a scrambled-order counterpart of the observation as input, while for Aux.2, we randomly remove 20% of the joints from the observation. To reduce the complexity, in a specific layer, the GSU is shared across the spatio-temporal features. Note that the feature of the last layer of the Pri. branch is directly connected to Aux.2 and, after passing through a flatten layer, to Aux.1, so that meta-auxiliary learning can update all parameters of the Pri. branch. We follow the current multi-task learning framework and exploit the Adam optimizer to train our network, where the learning rate is initialized to \(0.001\), with a \(0.98\) decay every \(2\) epochs. The mini-batch size is \(16\). At test-time adaptation, we fix the learning rates \(\alpha=\beta=2\times 10^{-5}\), and 6 gradient descents of Eq.11 are performed. Finally, the fine-tuned parameters are acquired, allowing the internal properties of a specific sequence to be adapted to and a better prediction to be achieved, as shown in Fig.1. Our code will be publicly available.
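At inference, the adaptation step can be sketched as follows; the auxiliary-loss interface is a placeholder, while the learning rate and the number of steps follow the values given above.
```python
# Sketch: test-time adaptation - a few gradient steps on the self-supervised
# auxiliary losses for the given observation, then prediction with the adapted
# copy of the network.
import copy
import torch

def adapt_and_predict(model, observation, aux_loss, n_steps: int = 6, lr: float = 2e-5):
    adapted = copy.deepcopy(model)                   # per-sequence adapted weights
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        aux_loss(adapted, observation).backward()    # Aux.1 + Aux.2 on this sequence
        opt.step()
    with torch.no_grad():
        return adapted(observation)                  # prediction with adapted weights
```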
## Experiments
### Preliminaries
**Dataset-1: H3.6M**Ionescu et al. (2014) involves 15 action categories performed by 7 professional human subjects (_S-1_, _S-5_, _S-6_, _S-7_, _S-8_, _S-9_, _S-11_). Each pose is represented as a
17-joint skeleton (\(N=17\)), and the sequences are downsampled to achieve 25 fps [11, 12].
**Dataset-2:** We also select 8 action categories from **CMU MoCap**. The pre-processing solution is consistent with the H3.6M dataset. For both H3.6M and CMU MoCap, the proposed model is implemented where the length of the observed sequence is equal to the prediction (\(T=\Delta T=25\)).
**Baselines.** To assess the effectiveness of the proposed approach, the following 5 state-of-the-art (SoTA) methods are selected as our baselines, including LTD [11], DMGNN [12], MSR [13], ST-Tr [14], and PGBIG [12]. LTD resorts to GCN to analyze the motion sequence in the frequency domain. DMGNN suggests using GCNs to encode the human topology and RNNs for decoding. MSR expands the multi-scale variant of the LTD approach. ST-Tr exploits the spatial-temporal transformer for human motion prediction. PGBIG is a recently introduced algorithm to generate a virtual initial guess to increase the prediction accuracy.
**Metric.** We test our model using the Mean Per Joint Position Error (MPJPE) in millimeters, in accordance with earlier work [12, 13].
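Concretely, the metric reduces to the following; averaging over joints and frames is assumed, with inputs taken to be in millimetres.
```python
# Sketch: Mean Per Joint Position Error (MPJPE) in millimetres.
import torch

def mpjpe(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """pred, gt: (..., N_joints, 3) joint coordinates in millimetres."""
    return (pred - gt).norm(dim=-1).mean()
```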
**Experimental Setups.** We use the following setups to analyze our model, as stated in Table 1. **(i)** testing on _S-5_, while training on (_S-1_, _S-6_, _S-7_, _S-8_, _S-9_, _S-11_), for the general predictive ability, the same as in the prior methods [11, 12, 13]; to verify the performance on out-of-distribution data, the following new strategies are exploited: **(ii)** testing on _S_x_, training on the actions of the other subjects, for the adaptability to unseen subjects; **(iii)**/**(iv)** testing on _C_x_, training on the actions of the other categories, for the adaptability to unseen action categories (on H3.6M and CMU MoCap, respectively). The prefix \(S\) indicates the _subject_, and \(C\) denotes the _category_. For fairness, we also apply the training/testing divisions in Table 1, but keep the hyperparameters unchanged, to re-train the baselines.
### Comparison with State-of-the-arts on H3.6M
**General predictive ability.** The existing predictors are normally tested on the actions of _S-5_ and trained on the other subjects. However, our key observation is that the motion patterns of different individuals tend to be distinct; therefore, this distribution shift deteriorates the performance of deep pre-trained models. As a comparison, at test-time, our model can be further optimized by meta-auxiliary learning to achieve a better result. Consistent with the previous work [11, 12], we first use the **setup-(i)** (in Table 1) to evaluate the general predictive ability of our model. Table 2 reports the comparison on 3 representative activities. We observe that our result tends to be better in almost all scenarios, which reveals that the dynamic characteristics of _S-5_ are potentially distinct from those of other subjects, and our model can adapt to them.
**Predictive ability on unseen subjects.** Intuitively, due to unique height and body proportion, even for the same category, the motion properties (_e.g._, styles or rhythms) of different subjects are potentially inconsistent. To further investigate the adaptation ability of different subjects, the experimental **setup-(ii)** is used. Concretely, we fine-tune the base model under the test actions of a specific subject-x (_S_x_), where the base model is learned from the others. Table 3 provides the average MPJPE of the end predicted pose (1000ms) of different unseen subjects. From the results, we observe that our model produces better predictions against the baseline models. It implies that the dynamic characteristics of different humans indeed involve distinct motion attributes. Moreover, our approach exploits the external large dataset, and meanwhile, can be tailored based on the internal information of test sequences via meta-auxiliary learning, to consistently yield a superior result for unseen subjects.
**Predictive ability on unseen categories.** Due to their diversity and uncertainty, human actions involve innumerable categories. Typically, the training dataset falls short of covering all action types. In practical applications, existing deep end-to-end algorithms face a major challenge: once an unseen category is encountered at test-time, their performance tends to decline sharply. However, our model is able to further optimize the base model learned from large datasets, to adapt to the unique attributes of a new action category. To verify this, we exploit the experimental **setup-(iii)**. Specifically, our approach and the baselines are evaluated on each specific category _C_x_ respectively, while the training is conducted on the remaining ones.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Datasets & testing & training & purpose \\ \hline \multirow{3}{*}{H3.6M} & _(i) S-5_ & _S-1_, _S-6_\(\sim\)_S-9_, _S-11_ & general predictive ability \\ \cline{2-4} & _(ii) S_x_ & other subjects & predictive ability on unseen subjects \\ \cline{2-4} & _(iii) C_x_ & other categories & predictive ability on unseen categories \\ \hline CMU MoCap & _(iv) C_x_ & other categories & predictive ability on unseen categories \\ \hline \end{tabular}
\end{table}
Table 1: Experimental setups. As in the typical approaches, the **setup-(i)** is to evaluate the general predictive ability, while the **setup-(ii)(iii)(iv)** are newly designed to investigate the adaptability to out-of-distribution data.
\begin{table}
\begin{tabular}{|c|c c c c c|c c c|} \hline & \multicolumn{3}{c|}{walking} & \multicolumn{3}{c|}{eating} & \multicolumn{3}{c|}{smoking} \\ ms & 80 & 160 & 320 & 400 & 1000 & 80 & 160 & 320 & 400 & 1000 \\ \hline LTD & 12.23 & 20.39 & 38.46 & 59.8 & 8.4 & 16.93 & 33.20 & 40.7 & 77.8 & 79.6 & 12.6 & 13.9 & 38.9 & 72.6 \\ DMGNN & 17.30 & 37.54 & 66.62 & 59.8 & 10.1 & 24.3 & 43.9 & 86.7 & 9.0 & 17.6 & 32.1 & 40.3 & 72.2 \\ ST-Tr & 18.57 & 47.60 & 16.73 & 103.2 & 22.2 & 24.50 & 47.7 & 8.4 & 9.4 & 17.9 & 38.4 & 28.9 & 79.6 \\ MSR & 12.22 & 27.86 & 45.2 & 63.0 & 8.4 & 17.3 & 13.3 & 40.4 & 77.1 & 8.0 & 16.3 & 33.1 & 38.2 & 71.6 \\ PGBIG & **10.2** & 19.8 & 34.5 & 40.3 & 56.4 & **7.0** & **15.1** & 30.6 & 38.1 & 76.0 & **6.6** & **14.1** & 28.2 & 34.7 & 69.5 \\ \hline Ours & 10.8 & **18.9** & **33.2** & **38.1** & **52.3** & 8.8 & **15.4** & **28.5** & **36.7** & **71.6** & **6.6** & **13.5** & **26.7** & **32.0** & **67.5** \\ \hline \end{tabular}
\end{table}
Table 2: MPJPE comparisons on 3 activities from the H3.6M dataset, where the experimental design follows the conventions of predictive algorithms (_S-5_ is used for testing while the other subjects are used for training). The best result is displayed in bold, while the second is underlined. We observe that our model achieves overall better results. It reveals that the dynamic characteristics of _S-5_ are slightly different from those of other subjects, and our approach is able to adapt to them.
\begin{table}
\begin{tabular}{|c|c c c c c|} \hline Unseen Subjects & \(S\)1 & \(S\)6 & \(S\)7 & \(S\)8 & \(S\)9 & \(S\)11 \\ \hline LTD & 115.4 & 132.8 & 133.7 & 120.1 & 123.8 & 124.3 \\ DMGNN & 122.5 & 139.3 & 131.0 & 125.2 & 134.7 & 120.2 \\ ST-Tr & 133.6 & 147.5 & 134.2 & 128.0 & 140.2 & 124.5 \\ MSR & 115.7 & 131.0 & 123.1 & 116.5 & 118.8 & 116.2 \\ PGBIG & 113.2 & 127.3 & 124.4 & 118.3 & 114.6 & 112.0 \\ \hline Ours & **107.0** & **123.2** & **118.7** & **113.5** & **109.7** & **110.2** \\ \hline \end{tabular}
\end{table}
Table 3: Average MPJPE of a total of 15 activities at the end predicted pose (1000ms), evaluated on each unseen subject.
From Fig.4, we observe that our model brings superior results in all scenarios for such out-of-distribution data of unprecedented categories. It evidences that our model is indeed capable of adapting to the characteristics of unseen action categories.
Also, Fig.5 illustrates two qualitative comparisons between the proposed model and the SoTA PGBIG [14], for the _greeting_ activity of the unseen subject-11 (_S-11_) and the unseen action category (_C_phoning_).
**Results on CMU MoCap.** We also evaluate the predictive ability on unseen categories from the CMU MoCap dataset using **setup-(iv)**. The results in Table 4 show that our model substantially outperforms the baselines.
**Progressive results.** At test-time, with several gradient updates, our model learns to adapt to the internal properties of the test sequence. To better explain this, we show the progressive results by unfolding the TTA procedure after each gradient descent. The inference is run on _C_smoking_, and the training is run on the other categories.
\begin{table}
\begin{tabular}{|c|c c c c c c c|} \hline & & & & & & & \\ \hline LTD & 109.0 & 75.3 & 121.2 & 142.4 & 65.7 & 115.3 & 49.0 & 83.1 \\ DMGNN & 145.6 & 72.7 & 130.3 & 163.1 & 73.4 & 121.9 & 52.1 & 89.8 \\ ST-Tr & 150.2 & 77.1 & 131.0 & 153.2 & 76.8 & 130.4 & 60.3 & 95.6 \\ MSR & 101.1 & 66.6 & 117.4 & 138.6 & 56.3 & 110.4 & 45.2 & 74.9 \\ PCIG & 94.9 & 61.9 & 113.0 & 134.8 & 57.1 & 105.1 & 41.4 & 74.7 \\ \hline Ours & **87.4** & **53.1** & **97.3** & **120.4** & **46.2** & **93.2** & **35.8** & **62.5** \\ \hline \end{tabular}
\end{table}
Table 4: Average MPJPE of the end predicted pose (1000ms) of each unseen category _C_x_ from the CMU MoCap dataset.
Figure 4: Comparison of each unseen action category _C_x_ from the H3.6M dataset. We observe that, at test-time, our approach is able to be fine-tuned for a specific category _C_x_ to adapt to its internal properties, thus achieving the higher prediction accuracy.
Figure 5: Qualitative comparison on the greeting activity of unseen subjects _S-11_ (top) and unseen category _C_phoning_ (bottom). In each sub-figure, the first row is the SoTA PGBIG [14], followed by our result, where the blue pose refers to the prediction, and the underlying red is the GT. The green rectangles indicate the contrasting parts. We observe that, our predicted poses are closer to the GT, as it is tailored according to the specific sequence.
Figure 6: Results of the unfolding TTA process with the different number of gradient descents \(I=\{0,2,4,6\}\) on _C_smoking_. With the iteration, the coarse results tend to be close to the GT.
Fig.6 presents the 3 channels (_i.e._, x, y, z axes) of these intermediate results as heat maps, with more red denoting larger and more blue smaller values. We see that as the iterations proceed, the result gradually gets closer to the GT.
### Ablation Studies
Here, the following ablation experiments are conducted. We adapt our approach to each action category _C_x_ and take the average as the result, as in **setup-(iii)**.
**_w/_ GSUs _vs._ _w/o_ GSUs.** Intuitively, the GSU facilitates the transfer of useful information. This is confirmed in Table 5.
**Impact of Aux. branches.** Both the Aux.1 and Aux.2 branches act as complements to the Pri. task. To verify the effectiveness of the Aux. branches, we analyze the effect on the Pri. task when only one of Aux.1 and Aux.2 is retained. As shown in Table 6 (left), when Aux.1 and Aux.2 are introduced concurrently, a better result is achieved.
**Number of gradient descents.** Here, we study the impact of the maximum number of gradient updates \(I=\{0,5,6,7\}\) at test-time adaptation. From Table 6 (right), we observe that, overall, a larger \(I\) yields smaller errors. When \(I=5\), the best result is yielded, and larger values bring no further benefit.
## Conclusion
In this work, we have introduced a test-time adaptation model for human motion forecasting. At test-time, it resorts to meta-auxiliary learning to ensure that the updates of both auxiliary tasks bring better adaptation and higher performance to the primary task on specific sequences. Extensive experiments show that our model consistently outperforms the SoTA approaches on unseen subjects and categories, revealing that it is able to adapt to the dynamic characteristics of out-of-distribution data in the real world.
## Acknowledgements
This work was supported in part by the National Natural Science Foundation of China (62176125), in part by the Jiangsu Funding Program for Excellent Postdoctoral Talent (2022ZB269), in part by the Natural Science Foundation of Jiangsu Province (BK20220939), and in part by the China Postdoctoral Science Foundation (2022M721629).
|
2303.13905 | A note on the renormalization group approach to the Central Limit
Theorem | Two proofs of the Central Limit Theorem using a renormalization group
approach are presented. The first proof is conducted under a third moment
assumption and shows that a suitable renormalization group map is a contraction
over the space of probability measures with a third moment. The second proof
uses Lyapunov stability and works under a second moment condition. These are by
far not the most optimal proofs of the CLT, and the main interest of the proofs
are their existence, the CLT being the simplest case in which a renormalization
group argument should apply. None of the tools used in this note are new.
Similar proofs are known amongst expert in limit theorems, but explicit
references are not so easy to come by for non-experts in the field. | Sébastien Ott | 2023-03-24T10:39:26Z | http://arxiv.org/abs/2303.13905v3 | # A note on the renormalization group approach to the central limit theorem
###### Abstract.
A proof of the Central Limit Theorem using a renormalization group approach is presented. The proof is conducted under a third moment assumption and shows that a suitable renormalization group map is a contraction over the space of probability measures with a third moment. This is by far not the most optimal proof of the CLT, and the main interest of the proof is its existence, the CLT being the simplest case in which a renormalization group argument should apply. None of the tools used in this note are new. Similar proofs are known amongst experts in limit theorems, but explicit references are not so easy to come by for non-experts in the field.
## A word from the author
I am neither a renormalization group expert, nor a limit theorem expert. So it is perfectly possible that the results described in this note appeared somewhere else 1, or that I missed relevant references to the "Renormalization group approach to CLT" story. In both cases, I would be extremely grateful to receive pointers towards the relevant references. After the appearance of (the first version of) this note on the arXiv, I received several emails with references and/or comments. The updated version is motivated by the content of those emails; I hope that the enhanced bibliography (which is still far from being exhaustive) will help other people interested in these questions as much as it helped me. Many thanks to all the people who took the time to send me information!
Footnote 1: Indeed, after the second version of this note appeared on the arXiv, I received a reference to a paper, [11], containing the proof presented in Section 1. It is historically interesting to note that they introduce (and use) the Fourier based metrics: the paper is from 1984, while the Fourier based metrics were believed to have been introduced in [8], more than 10 years later.
## Introduction/bibliographical review
This note is about providing a simple renormalization group style proof of the classical Central Limit Theorem. The CLT is likely to be the most well known feat of probability theory, so I do not intend to say anything new about it. The purpose of this work is more to perform a "sanity check" for the renormalization group approach: if one can not get the method to work in the simplest instance it should apply to, one has little chances of success in much more involved situations. This note is not the first result about the renormalization road to the CLT, and one can distinguish two general approaches: a "Lyapunov" method, consisting of solving the fixed point equation and, more or less explicitly, finding a suitable Lyapunov function for the discrete dynamical system \(\mu_{n+1}=T\mu_{n}\) (where \(T\) is the renormalization map); and a "Banach" method, consisting in finding a subset of the set of probability measures \(\Omega\), and a distance \(d\) on \(\Omega\) such that 1) \((\Omega,d)\) is complete 2) the renormalization operation is a contraction on \((\Omega,d)\). On the one side, the "Banach" method yields more quantitative results and _one_
_does not need to solve the fixed point equation_ to prove convergence towards something. On the other side, the "Lyapunov" method is more flexible and allows for weaker conditions. Section 1 contains an example of what I called a "Banach" approach, while Section 2 presents a "Lyapunov" approach.
### "Lyapunov" method
On the "perturbative side" (perturbation around the fixed point), one can find the articles [13, 4, 16] which deal with the transform of equation (4) (and its natural generalizations) that have as fixed points stable laws. Suitable neighbourhood of the fixed points are then shown to contract to the fixed point. Reviews/introductory texts (mostly focused on the Gaussian fixed point) can be found in [20, pages 131-132], [14], [15, Section 10.3]. A non-rigorous discussion on that problem can be found in [1]. The same set of ideas has been used to prove CLT for dependent fields, as this is not the main topic of this note, I will only mention a few early references: [3] deal with hierarchical spin models, their method influenced several of the papers mentioned previously; [9, 7] contain reviews of CLT for the "magnetization" of (weakly) dependent lattice random fields (the second also provides a general discussion about renormalization and CLT).
Another route has also been studied: the entropy can be used as a Lyapunov function for the dynamical system underlying the renormalization process; see [17] for the CLT using entropy, [5] for a "Lyapunov function" proof, and [12] for a book on the topic.
### "Banach" method
A Banach fixed point argument can be found in [19] (see the slides [18] for a discussion of the application to the CLT). The proof there is very similar to the one of Section 1: the use of an ideal metric (the Zolotarev metric, introduced in [21, 22], used in [18], versus the Fourier based metric here) to prove contraction properties of the renormalization map.
### Present note
This note contains an example of each method. As said above, the approaches taken here are not new: the renormalization group map is the one of [13] and is probably older; the contraction principle of Section 1 is included in the paper [10]: equations (6), (7) are morally the contraction used here (which is a property of certain ideal metrics, see Section 3). The argument presented in [19, 18] is the same as the one of Section 1 but with a different metric (see Section 3).
The argument of Section 2 is a simplified version of an argument that was communicated to me by Jiwoon Park. It provides an example of the greater flexibility of the "Lyapunov" method, and applies under only a second moment condition.
### General comment
It is worth noting that a contraction principle underlies most approaches to the CLT, but the latter is usually formulated as follows: one has a sequence of operators, \((T_{n})_{n\geq 1}\) (which act by \(n\)-fold convolution and rescaling by \(\sqrt{n}\)), having the normal distribution as a fixed point, and having a "law-dependent contraction constant" going to \(0\) with \(n\), rather than iterating a fixed transform with a uniform (over probability distributions) contraction constant.
Moreover, the proof of Section 2 can be seen as a slightly convoluted re-writing of the usual proof of the CLT through pointwise convergence of characteristic functions; I nevertheless find it a nice example of global asymptotic stability of the fixed point of the renormalization process. The argument of Section 1 is more quantitative but more restrictive.
## 1. Renormalization road to CLT: a "Banach" result
As nothing is really new in the content of this text, I will work under a third moment assumption. The latter can be relaxed to a \(2+\epsilon\) moment assumption: the distance used (denoted \(\mathrm{d}_{3}\) in the text) is a particular case of the Fourier-based distances introduced in [8]. Replacing this distance by its \(\mathrm{d}_{2+\epsilon}\) version handles the extension (\(3\) is chosen for aesthetic reasons and to slightly simplify appendix A). A review on these metrics and their applications can be found in [6].
### Framework
Probability measures on \(\mathbb{R}\) will be denoted \(\nu,\mu\), and the expectation under \(\nu\) will be denoted \(E_{\nu}\). Inside expected values, \(X\) will denote a random variable of the relevant law, and \((X,Y)\) a random vector of law \(\nu\otimes\mu\) when this case is considered. Let \(\mathcal{P}_{r}=\mathcal{P}_{r}(\mathbb{R})\) be the set of probability measures on \(\mathbb{R}\) with finite \(r\)th absolute moment. Denote
\[\mathcal{Q}_{3}:=\{\nu\in\mathcal{P}_{3}:\ E_{\nu}(X)=0,\ E_{\nu}(X^{2})=1\},\]
the set of centred, reduced probability measures with a third absolute moment. Equip \(\mathcal{Q}_{3}\) with the Fourier-based distance
\[\mathrm{d}_{3}(\nu,\mu)=\sup_{\xi\in\mathbb{R}^{*}}\frac{\big{|}\varphi_{\nu} (\xi)-\varphi_{\mu}(\xi)\big{|}}{|\xi|^{3}} \tag{1}\]
where \(\mathbb{R}^{*}=\mathbb{R}\setminus\{0\}\), and
\[\varphi_{\nu}(\xi)=E_{\nu}\big{(}\mathrm{e}^{\mathrm{i}X\xi}\big{)}, \tag{2}\]
is the characteristic function of \(\nu\) ([6] defines it with a \(-\) sign in the exponential).
**Lemma 1.1**.: \(\mathrm{d}_{3}\) _is a finite distance on \(\mathcal{Q}_{3}\), and convergence in \(\mathrm{d}_{3}\) implies weak convergence._
This result can be imported from [8, 6], but a proof is included in Appendix A. The goal will be to study convergence towards a normal distribution \(\mathcal{N}(0,1)\). Denote \(\gamma\) the normal law:
\[d\gamma(x)=\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}dx. \tag{3}\]
One obviously has \(\gamma\in\mathcal{Q}_{3}\).
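As a concrete illustration of (1) (a standard example, added here only for orientation): let \(\nu\) be the symmetric Bernoulli law on \(\{-1,+1\}\), so that \(\nu\in\mathcal{Q}_{3}\) with \(E_{\nu}(X^{3})=0\) and \(\varphi_{\nu}(\xi)=\cos\xi\). Then

\[\mathrm{d}_{3}(\nu,\gamma)=\sup_{\xi\in\mathbb{R}^{*}}\frac{\big{|}\cos\xi-\mathrm{e}^{-\xi^{2}/2}\big{|}}{|\xi|^{3}}<\infty\,,\]

since both characteristic functions equal \(1-\xi^{2}/2+O(\xi^{4})\) near \(0\) (so the ratio vanishes as \(\xi\to 0\)), while the numerator is at most \(2\) for \(|\xi|\geq 1\).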
**Remark 1.1**.: _It is worth mentioning that \((\mathcal{Q}_{3},\mathrm{d}_{3})\) is not expected to be a complete metric space, but asking for \(3+\epsilon\) moments rather than \(3\) guarantees completeness (see [6, Proposition 2.7] and the follow-up discussion)._
### Renormalization transform
Denote \(\nu*\mu\) the convolution of \(\nu\) and \(\mu\) (the law of \(X+Y\) where \(X\sim\nu,Y\sim\mu\) are independent). Also denote \([\nu]_{\lambda}\) the law of \(\lambda X\), where \(X\sim\nu\). One then considers the following renormalization transformation on probability measures: \(T\nu\) is the law of \(2^{-1/2}(X+Y)\) where \(X,Y\) are independent random variables of law \(\nu\):
\[E_{T\nu}(f)=E_{\nu\otimes\nu}\Big{(}f\big{(}(X+Y)/\sqrt{2}\big{)}\Big{)}. \tag{4}\]
In words: \(T\) maps \(\nu\) to the renormalized convolution of \(\nu\) with itself (\(T\nu=[\nu^{*2}]_{2^{-1/2}}\)). Taking the sum is the "coarse graining" part of a renormalization step and dividing by \(\sqrt{2}\) is the "rescaling" part.
**Lemma 1.2**.: _If \(\nu\in\mathcal{Q}_{3}\), then \(T\nu\in\mathcal{Q}_{3}\)._
Proof.: From the definition, one has that if \(\nu\) has a second moment, and \(E_{\nu}(X)=0\), then \(E_{T\nu}(X)=0\), and \(E_{T\nu}(X^{2})=E_{\nu}(X^{2})\). Then,
\[E_{T\nu}(|X|^{3})=\frac{1}{2^{3/2}}E_{\nu\otimes\nu}(|X+Y|^{3})\leq\frac{16}{2^ {3/2}}E_{\nu}(|X|^{3})<\infty.\]
The goal of this note is to study the CLT, so the main interest of this transformation is
\[T\gamma=\gamma, \tag{5}\]
which is a standard consequence of the stability of the Gaussian distribution.
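In terms of characteristic functions this is a one-line check: for independent \(X,Y\sim\gamma\),

\[\varphi_{T\gamma}(\xi)=E\big{(}\mathrm{e}^{\mathrm{i}\xi(X+Y)/\sqrt{2}}\big{)}=\varphi_{\gamma}(\xi/\sqrt{2})^{2}=\big{(}\mathrm{e}^{-\xi^{2}/4}\big{)}^{2}=\mathrm{e}^{-\xi^{2}/2}=\varphi_{\gamma}(\xi)\,.\]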
The link with the CLT is as follows: if \(X_{1},X_{2},\ldots\) form an i.i.d. sequence of law \(\nu\in\mathcal{Q}_{3}\), then \(T^{n}\nu\) is the law of
\[\frac{1}{2^{n/2}}\sum_{k=1}^{2^{n}}X_{k}\equiv\frac{1}{\sqrt{N}}\sum_{k=1}^{N }X_{k},\]
where \(N=2^{n}\).
### Contraction and CLT
**Theorem 1.3**.: _The application \(T\) is a contraction on \((\mathcal{Q}_{3},\mathrm{d}_{3})\) with contraction constant \(\leq 2^{-1/2}\). In particular (by (5)),_
\[\mathrm{d}_{3}(T^{n}\nu,\gamma)\leq 2^{-n/2}\mathrm{d}_{3}(\nu,\gamma). \tag{6}\]
Proof.: The proof is almost trivial. Let \(\mu,\nu\in\mathcal{Q}_{3}\). First, notice that
\[\varphi_{T\nu}(\xi)=E_{\nu\otimes\nu}(e^{\mathrm{i}\xi(X+Y)/\sqrt{2}})=\varphi _{\nu}(\xi/\sqrt{2})^{2}. \tag{7}\]
Then, writing the definition, one has
\[\mathrm{d}_{3}(T\nu,T\mu) =\sup_{\xi\in\mathbb{R}^{*}}\frac{\left|\varphi_{\nu}(\xi/\sqrt{ 2})^{2}-\varphi_{\mu}(\xi/\sqrt{2})^{2}\right|}{2^{3/2}|\xi/\sqrt{2}|^{3}}\] \[=2^{-3/2}\sup_{\xi\in\mathbb{R}^{*}}\frac{\left|\varphi_{\nu}( \xi)^{2}-\varphi_{\mu}(\xi)^{2}\right|}{|\xi|^{3}}\] \[=2^{-3/2}\sup_{\xi\in\mathbb{R}^{*}}\frac{\left|\varphi_{\nu}( \xi)-\varphi_{\mu}(\xi)\right|}{|\xi|^{3}}\big{|}\varphi_{\nu}(\xi)+\varphi_{ \mu}(\xi)\big{|}\] \[\leq 2^{-1/2}\sup_{\xi\in\mathbb{R}^{*}}\frac{\left|\varphi_{\nu}( \xi)-\varphi_{\mu}(\xi)\right|}{|\xi|^{3}}=2^{-1/2}\mathrm{d}_{3}(\nu,\mu),\]
as \(\left\|\varphi_{a}\right\|_{\infty}=1\) for any probability measure \(a\).
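The contraction can also be observed numerically. The following minimal sketch (not part of the argument) approximates \(\mathrm{d}_{3}(T^{n}\nu,\gamma)\) on a finite grid of frequencies for the symmetric Bernoulli law \(\nu\) (so \(\varphi_{\nu}=\cos\)), using the identity \(\varphi_{T^{n}\nu}(\xi)=\varphi_{\nu}(\xi/2^{n/2})^{2^{n}}\) obtained by iterating (7); the grid maximum only lower-bounds the true supremum, so the numbers are illustrative rather than rigorous.

```python
# Grid approximation of d_3(T^n nu, gamma) for nu = symmetric Bernoulli (phi_nu = cos).
# Illustrative only: the supremum over xi is replaced by a maximum over a finite grid.
import numpy as np

xi = np.linspace(1e-3, 30.0, 200_000)      # positive frequencies suffice by symmetry
phi_gamma = np.exp(-xi**2 / 2)

def d3_to_gamma(n):
    phi_n = np.cos(xi / 2**(n / 2)) ** (2**n)   # characteristic function of T^n nu
    return np.max(np.abs(phi_n - phi_gamma) / xi**3)

dist = [d3_to_gamma(n) for n in range(9)]
ratios = [dist[n + 1] / dist[n] for n in range(8)]
print(dist)     # decreasing sequence
print(ratios)   # stays below 2**(-1/2) ~ 0.707, consistent with Theorem 1.3;
                # for this symmetric law the observed decay is in fact faster
```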
**Corollary 1.4** (Central Limit Theorem).: _Let \(\nu\in\mathcal{Q}_{3}\). Let \(X_{1},X_{2},\ldots\) be an i.i.d. sequence of law \(\nu\). Then,_
\[\frac{1}{\sqrt{N}}\sum_{k=1}^{N}X_{k}\xrightarrow{N\to\infty}\mathcal{N}(0,1), \tag{8}\]
_where the convergence is in \(\mathrm{d}_{3}\) distance (and therefore also in law)._
The claim along the sequence \(N=1,2,4,8,\ldots\) (or any geometric sequence) follows directly from Theorem 1.3, Lemma 1.2, and the fact that \(\mathrm{d}_{3}\) is a _finite_ distance on \(\mathcal{Q}_{3}\) (by Lemma 1.1). Extending this to arbitrary sequences can be done in several ways, which shall not be detailed here. It is worth stressing that extending the result to arbitrary sequences is never simpler than using the stability of the Gaussian and the scaling and convolution properties of \(\mathrm{d}_{3}\) to obtain the CLT directly. Indeed,
\[\begin{split}\mathrm{d}_{3}(\nu_{1}*\nu_{2},\mu_{1}*\mu_{2})&\leq\mathrm{d}_{3}(\nu_{1},\mu_{1})+\mathrm{d}_{3}(\nu_{2},\mu_{2}),\\ \mathrm{d}_{3}([\nu]_{\lambda},[\mu]_{\lambda})&\leq\lambda^{3}\mathrm{d}_{3}(\nu,\mu),\end{split} \tag{9}\]
where \([\nu]_{\lambda}\) is the law of \(\lambda X,X\sim\nu\). The first point follows from \(|ab-cd|\leq|a-c||b|+|b-d||c|\), and the definition. Therefore, by the properties of the Gaussian,
\[\mathrm{d}_{3}([\nu^{*n}]_{n^{-1/2}},\gamma)=\mathrm{d}_{3}([\nu^{*n}]_{n^{-1/ 2}},[\gamma^{*n}]_{n^{-1/2}})\leq\frac{1}{n^{3/2}}\mathrm{d}_{3}(\nu^{*n}, \gamma^{*n})\leq\frac{\mathrm{d}_{3}(\nu,\gamma)}{n^{1/2}}.\]
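For completeness, the scaling bound in (9) is in fact an equality and follows directly from \(\varphi_{[\nu]_{\lambda}}(\xi)=\varphi_{\nu}(\lambda\xi)\): for \(\lambda>0\),

\[\mathrm{d}_{3}([\nu]_{\lambda},[\mu]_{\lambda})=\sup_{\xi\in\mathbb{R}^{*}}\frac{\big{|}\varphi_{\nu}(\lambda\xi)-\varphi_{\mu}(\lambda\xi)\big{|}}{|\xi|^{3}}=\lambda^{3}\sup_{u\in\mathbb{R}^{*}}\frac{\big{|}\varphi_{\nu}(u)-\varphi_{\mu}(u)\big{|}}{|u|^{3}}=\lambda^{3}\,\mathrm{d}_{3}(\nu,\mu)\,,\]

so, in the terminology of Section 3, \(\mathrm{d}_{3}\) is \(3\)-ideal.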
## 2. A "Lyapunov proof" under a second moment condition
As said in the introduction, the following proof is a simplified version of an argument that has been sent to me by Jiwoon Park. It is presented here with his kind permission.
Let
\[\mathcal{Q}_{2}=\{\nu\in\mathcal{P}_{2}:\ E_{\nu}(X)=0,\ E_{\nu}( X^{2})=1\},\] \[\mathrm{d}_{2}(\nu,\mu)=\sup_{\xi\in\mathbb{R}^{*}}\frac{|\varphi_ {\nu}(\xi)-\varphi_{\mu}(\xi)|}{\xi^{2}}.\]
Then, \(\mathrm{d}_{2}\) is a (finite) distance on \(\mathcal{Q}_{2}\) (which metrizes weak convergence). The problem with \(\mathrm{d}_{2}\) is that it is only 2-ideal (see Section 3), so the argument of the previous section cannot be used with this metric (but it still gives continuity of \(T\)). Yet one can still use it as a Lyapunov function.
**Theorem 2.1**.: _Let \(V:\mathcal{Q}_{2}\to\mathbb{R}_{+}\) be defined by \(V(\nu)=\mathrm{d}_{2}(\nu,\gamma)\). Then, \(V\) satisfies_
1. \(V(\gamma)=0\)_, and_ \(V(\nu)>0\) _for any_ \(\nu\in\mathcal{Q}_{2}\setminus\{\gamma\}\)_._
2. \(V\) _is continuous on_ \((\mathcal{Q}_{2},\mathrm{d}_{2})\)_._
3. \(V\) _has bounded level sets._
4. \(V(T\nu)<V(\nu)\) _for every_ \(\nu\in\mathcal{Q}_{2}\setminus\{\gamma\}\)_._
Proof.: Items 1, 2, and 3 are obvious as \(\mathrm{d}_{2}\) is a distance that metrizes weak convergence. Only item 4 requires some care. Let \(\nu\neq\gamma\). Then, using \(\varphi_{T\nu}(\xi)=\varphi_{\nu}(\xi/\sqrt{2})^{2}\) (see (7)) and rescaling the variable as in the proof of Theorem 1.3,
\[V(T\nu)=\sup_{\xi\in\mathbb{R}^{*}}\frac{|\varphi_{\nu}(\xi)-\varphi_{\gamma}( \xi)|}{\xi^{2}}\frac{|\varphi_{\nu}(\xi)+\varphi_{\gamma}(\xi)|}{2}\leq\sup_{ \xi\in\mathbb{R}^{*}}\frac{|\varphi_{\nu}(\xi)-\varphi_{\gamma}(\xi)|}{\xi^{2} }\frac{1+e^{-\xi^{2}/2}}{2}.\]
Now, by second order Taylor expansion,
\[\varphi_{*}(\xi)=1-\frac{\xi^{2}}{2}+\xi^{2}h_{*}(\xi)\]
with \(h_{*}(\xi)\xrightarrow{\xi\to 0}0\). Let \(a=V(\nu)=\mathrm{d}_{2}(\nu,\gamma)>0\), and \(b>0\) be such that \(\sup_{|\xi|<b}\max(|h_{\nu}(\xi)|,|h_{\gamma}(\xi)|)<\frac{a}{4}\). Then, the last term of the last display is less than
or equal to
\[\max\Big{(}\sup_{|\xi|\geq b}\frac{|\varphi_{\nu}(\xi)-\varphi_{\gamma} (\xi)|}{\xi^{2}}\frac{1+e^{-b^{2}/2}}{2},\sup_{\xi\in(-b,b)\setminus\{0\}}|h_{ \nu}(\xi)-h_{\gamma}(\xi)|\Big{)}\leq\\ \leq\max\Big{(}\frac{1+e^{-b^{2}/2}}{2}a,\frac{a}{2}\Big{)}<a,\]
proving the last item.
The immediate consequence is
**Corollary 2.2**.: _Let \(\nu\in\mathcal{Q}_{2}\). Let \(X_{1},X_{2},\ldots\) be an i.i.d. sequence of law \(\nu\). Then,_
\[\frac{1}{2^{n/2}}\sum_{k=1}^{2^{n}}X_{k}\xrightarrow{n\to\infty}\mathcal{N}(0,1). \tag{10}\]
Proof.: The proof is the standard proof that global asymptotic stability is implied by the existence of a suitable Lyapunov function. It is included for the reader's convenience. Let \(V(\nu)=\mathrm{d}_{2}(\nu,\gamma)\). The claim is equivalent to \(V(T^{n}\nu)\to 0\) as \(n\to\infty\) for any \(\nu\in\mathcal{Q}_{2}\). Fix \(\nu\in\mathcal{Q}_{2}\). Define
\[D=\{\mu\in\mathcal{Q}_{2}:\ 0\leq V(\mu)\leq V(\nu)\}.\]
By items 2 and 3 of Theorem 2.1, \(D\) is compact. Moreover, as \(V(T\mu)\leq V(\mu)\), \(\nu_{n}\equiv T^{n}\nu\in D\) for all \(n\geq 0\). The sequence \(V(\nu_{n})\) is then a bounded decreasing sequence; denote by \(a\geq 0\) its limit. By continuity of \(V\), every cluster point, \(\nu_{*}\), of \((\nu_{n})_{n}\) has \(V(\nu_{*})=a\). Now, if \((\nu_{n_{k}})_{k}\) is a converging subsequence of \((\nu_{n})_{n}\), so is \((\nu_{n_{k}+1})_{k}\) (by continuity of \(T\)). Moreover, the limit of \((\nu_{n_{k}+1})_{k}\) is \(T\nu_{*}\) where \(\nu_{*}\) is the limit of \((\nu_{n_{k}})_{k}\). So, \(a=V(T\nu_{*})\leq V(\nu_{*})=a\) with equality only if \(\nu_{*}=\gamma\) (by Theorem 2.1, item 4). Since equality does hold, \(\nu_{*}=\gamma\), and therefore \(a=V(\gamma)=0\), which proves the claim.
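As a numerical companion to this proof (a minimal sketch, not needed for the argument), one can track the Lyapunov function along the orbit for an explicit law, here the standardized Laplace distribution, whose characteristic function \(\varphi_{\nu}(\xi)=(1+\xi^{2}/2)^{-1}\) makes \(\varphi_{T^{n}\nu}\) available in closed form; as before, the supremum in \(\mathrm{d}_{2}\) is approximated by a maximum over a finite grid.

```python
# Grid approximation of V(T^n nu) = d_2(T^n nu, gamma) for nu = standardized Laplace law.
# Illustrative only: the supremum over xi is replaced by a maximum over a finite grid.
import numpy as np

xi = np.linspace(1e-3, 30.0, 200_000)
phi_gamma = np.exp(-xi**2 / 2)

def V(n):
    phi_n = (1.0 + (xi / 2**(n / 2))**2 / 2) ** (-(2**n))   # characteristic function of T^n nu
    return np.max(np.abs(phi_n - phi_gamma) / xi**2)

print([V(n) for n in range(9)])   # strictly decreasing, as item 4 of Theorem 2.1 guarantees
```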
## 3. Additional remarks
### About the metric
The metrics \(\mathrm{d}_{3},\mathrm{d}_{2}\) (and their generalizations \(\mathrm{d}_{s}\), see [10]) as well as the Zolotarev metric mentioned in the introduction and used in [19, 18], belong to a class of metrics on probability measures which all allow the same type of argument. These metrics satisfy
\[d(\nu*\eta,\mu*\eta)\leq d(\nu,\mu), \tag{11}\] \[d([\nu]_{\lambda},[\mu]_{\lambda})\leq\lambda^{s}d(\nu,\mu), \tag{12}\]
for some \(s>0\). Such a metric with equality in (12) is called _\(s\)-ideal_. Suppose that the metric is defined as a supremum, over a suitable class of test functions, of weighted expectation differences:
\[d(\nu,\mu)=\sup_{f\in F}w(f)\big{|}E_{\nu}(f)-E_{\mu}(f)\big{|}, \tag{13}\]
such a metric is apparently said to have a _\(\zeta\)-structure_. Suppose that one can construct a metric \(d\) satisfying (11), (12) with \(s>2\), and (13). Further suppose that convergence in the topology induced by \(d\) implies weak convergence. Then one can perform the
same argument, the key being
\[d(\nu*\nu,\mu*\mu) =\sup_{f\in F}w(f)\big{|}E_{\nu\otimes\nu}(f(X+Y))-E_{\mu\otimes\mu} (f(X+Y))\big{|}\] \[\leq\sup_{f\in F}w(f)\big{|}E_{\nu\otimes\nu}(f(X+Y))-E_{\nu\otimes \mu}(f(X+Y))\big{|}\] \[\quad+\big{|}E_{\nu\otimes\mu}(f(X+Y))-E_{\mu\otimes\mu}(f(X+Y)) \big{|}\] \[\leq 2d(\nu,\mu).\]
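For the Fourier-based metrics used in this note, (11) is checked in one line, since \(\varphi_{\nu*\eta}=\varphi_{\nu}\varphi_{\eta}\) and \(\|\varphi_{\eta}\|_{\infty}\leq 1\):

\[\mathrm{d}_{s}(\nu*\eta,\mu*\eta)=\sup_{\xi\in\mathbb{R}^{*}}\frac{|\varphi_{\eta}(\xi)|\,\big{|}\varphi_{\nu}(\xi)-\varphi_{\mu}(\xi)\big{|}}{|\xi|^{s}}\leq\mathrm{d}_{s}(\nu,\mu)\,.\]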
It is also clear that the same type of argument as in Section 1 with \(s=2\) will not work so easily, so a different idea is required. Limitations of the method are discussed in [19]. One can also wonder if a "Banach" proof can even exist. An argument going in this direction is [2]: if \(T:\mathcal{X}\to\mathcal{X}\) is such that \(T,T^{2},T^{3},\ldots\) have a unique fixed point, one can construct a metric on \(\mathcal{X}\) such that \(T\) is a contraction (with arbitrary contraction constant). But the argument is non-constructive, uses the axiom of choice, and the metric has no chance of metrizing the weak topology. I would tend to think that the general claim is wrong but that the following weaker claim could be true: one can find a sequence \(\mathcal{Q}_{2}^{(1)}\subset\mathcal{Q}_{2}^{(2)}\subset\ldots\) converging to \(\mathcal{Q}_{2}\) (i.e.: \(\nu\in\mathcal{Q}_{2}\) implies there is \(n\geq 1\) such that \(\nu\in\mathcal{Q}_{2}^{(n)}\)) and a sequence of metrics \(d_{1},d_{2},\ldots\) on those spaces such that \(T\mathcal{Q}_{2}^{(n)}\subset\mathcal{Q}_{2}^{(n)}\) and \(T\) is a contraction on \((\mathcal{Q}_{2}^{(n)},d_{n})\) (and \((\mathcal{Q}_{2}^{(n)},d_{n})\) is complete, and \(d_{n}\) metrizes weak convergence on \(\mathcal{Q}_{2}^{(n)}\)...). Any comment on this question would be welcome!
## Acknowledgements
Thanks to Nicolas Curien for suggesting that I contact Ralph Neininger, and to Ralph Neininger for pointers to [19, 18], and comments on the general picture. Also thanks to Jiwoon Park for sending me the argument which led to Section 2. Finally, thanks to all the people who sent me information and comments after the first version went online.
The author is supported by the Swiss NSF grant 200021_182237 and is a member of the NCCR SwissMAP.
## Appendix A Fourier based distance
This section contains a proof of Lemma 1.1. The proof is by no means new; it is included for the reader's convenience.
First, note that \(\mathrm{d}_{3}\) is a distance: symmetry is obvious, and separation follows from the fact that two probability measures are the same if and only if they have the same characteristic function. The triangle inequality follows from the triangle inequality for the absolute value, and from \(\sup(f+g)\leq\sup f+\sup g\).
Next, we show that \(\mathrm{d}_{3}\) is finite on \(\mathcal{Q}_{3}\times\mathcal{Q}_{3}\). Take \(\nu,\mu\in\mathcal{Q}_{3}\). As \(\nu,\mu\) have a third moment, \(\varphi_{\nu},\varphi_{\mu}\) are three times continuously differentiable and they admit a Taylor expansion at \(0\):
\[\varphi_{*}(\xi)=1-\frac{\xi^{2}}{2}-\mathrm{i}\frac{E_{*}(X^{3})\xi^{3}}{6}+ h_{*}(\xi)\xi^{3}, \tag{14}\]
with \(h_{*}(\xi)\xrightarrow{\xi\to 0}0\) bounded uniformly over \([-1,1]\) (as it is continuous over \(\mathbb{R}\)), \(*\in\{\nu,\mu\}\). So,
\[\mathrm{d}_{3}(\nu,\mu)\leq 6^{-1}\sup_{0<|\xi|<1}\big{|}-\mathrm{i}E_{\nu}(X^{3} )+6h_{\nu}(\xi)+\mathrm{i}E_{\mu}(X^{3})-6h_{\mu}(\xi)\big{|}+2<\infty.\]
Finally, convergence in \(\mathrm{d}_{3}\) implies pointwise convergence of characteristic functions, as well as continuity of the limit at \(0\), which is equivalent to weak convergence by Lévy's continuity theorem.
|
2307.10723 | Superconductivity Induced Ferromagnetism In The Presence of Spin-Orbit
Coupling | We investigate the behavior of magnetic impurities placed on the surface of
superconductor thin films with spin-orbit coupling. Our study reveals
long-range interactions between the impurities, which decay according to a
power law, mediated by the supercurrents. Importantly, these interactions
possess a ferromagnetic component when considering the influence of the
electromagnetic field, leading to the parallel alignment of the magnetic
moments in the case of two impurities. In a Bravais lattice of magnetic
impurities, superconductivity facilitates the establishment of ferromagnetic
order within specific parameter ranges. These findings challenge the
conventional understanding that ferromagnetism and superconductivity are
mutually exclusive phenomena. Our theoretical framework provides a plausible
explanation for the recently observed remanent flux in iron-based
superconductors, particularly Fe(Se,Te). | Yao Lu, I. V. Tokatly, F. Sebastian Bergeret | 2023-07-20T09:27:48Z | http://arxiv.org/abs/2307.10723v3 | # Superconductivity Induced Ferromagnetism In The Presence of Spin-Orbit Coupling
###### Abstract
We investigate the behavior of magnetic impurities placed on the surface of superconductor thin films with spin-orbit coupling. Our study reveals long-range interactions between the impurities, which decay according to a power law, mediated by the supercurrents. Importantly, these interactions possess a ferromagnetic component when considering the influence of the electromagnetic field, leading to the parallel alignment of the magnetic moments in the case of two impurities. In a Bravais lattice of magnetic impurities, superconductivity facilitates the establishment of ferromagnetic order within specific parameter ranges. These findings challenge the conventional understanding that ferromagnetism and superconductivity are mutually exclusive phenomena. Our theoretical framework provides a plausible explanation for the recently observed remanent flux and transport signature of ferromagnetism in iron-based superconductors, particularly Fe(Se,Te).
pacs: 74.20.-b, 74.25.-b, 74.25.-b _Introduction.-_ Ferromagnetic ordering is often seen as incompatible with conventional superconductivity due to the presence of an effective exchange field in ferromagnets. This exchange field has the effect of breaking up Cooper pairs, which are composed of electrons in a singlet state [1]. The coexistence of these two orders, however, does exist in hybrid superconductor/ferromagnet (S/F) structures[2; 3]. In these structures, the proximity effect plays a crucial role in enabling the coexistence of superconductivity and ferromagnetism. Singlet pairs can be transformed into triplet pairs through the exchange field of the F region. As a result, a local magnetic moment is produced, extending over distances of the order of superconducting coherence length, \(\xi_{s}\). This phenomenon is referred to as the magnetic or inverse proximity effect[4; 5]. The generated magnetic moment is oriented in the opposite direction to the magnetization of the ferromagnetic region. In the case of a small ferromagnetic island, this results in the screening of its magnetic moment[6]. If a second ferromagnetic region (F region) is positioned at a distance smaller than \(\xi_{s}\) from the first ferromagnet, the energetically favorable arrangement is an anti-parallel orientation of the magnetizations of the two F regions. This anti-parallel alignment serves as the basis for the FSF superconducting spin-valve[7; 8; 9; 10]. The studies on FSF structures with conventional superconductors indicate an anti-ferromagnetic coupling between the magnets, mediated by the inverse proximity effect. This coupling strength decreases exponentially with the distance between the ferromagnetic regions. This situation changes in thin superconducting films with spin-orbit coupling (SOC). The combination of the exchange field generated by a magnetic impurity, \(m_{1}\) in Fig. 1), and the SOC results in the spontaneous generation of anomalous currents through the spin-galvanic effect. In the case of a Rssba SOC they flow perpendicular to the magnetization (green arrows in Fig. 1) [11; 12]. These anomalous currents are spatially localized[13] over the coherence length from the impurity. Charge conservation implies the emergence of a phase gradient, ensuring \(\mathbf{\nabla}\cdot\mathbf{j}=0\) and the appearance of circulating currents [14] (black arrows in Fig. 1) If we assume that a magnetic moment \(\mathbf{m}_{1}\) points in the positive \(x\) direction (Fig. 1), then it generates a non-local circular supercurrent which flows in the negative \(y\) direction at the position of the magnetic impurity \(\mathbf{m}_{2}\) and to the positive \(y\) direction at the position of \(\mathbf{m}_{3}\). The orientations of \(\mathbf{m}_{2,3}\) are determined by minimization of the free energy: to reduce the kinetic energy of the superflow they will generate anoma
Figure 1: Schematic picture of magnetic impurities on top of a superconductor thin film with spin-orbit coupling. The red arrows represent the magnetization of the impurities. The green arrows are the exchange field-induced localized anomalous currents. The black loop represents the total current induced by the exchange field, phase gradient, and electromagnetic field.
lous currents that suppress the currents induced by \(\mathbf{m}_{1}\). Consequently, \(\mathbf{m}_{2}\) will point in the positive \(x\) direction and \(\mathbf{m}_{3}\) to the negative \(x\) direction. In other words, the supercurrent mediated magnetic interaction is ferromagnetic between \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) while it is antiferromagnetic for \(\mathbf{m}_{1}\) and \(\mathbf{m}_{3}\). Thus, in general, for two magnetic impurities, \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) the magnetic interaction will have the form:
\[F_{\bar{I}}=J_{\perp}\mathbf{m}_{1\perp}\mathbf{m}_{2\perp}-J_{\parallel}\mathbf{m}_{1 \parallel}\mathbf{m}_{2\parallel}\;, \tag{1}\]
where \(\perp\) (\(\parallel\)) denotes the component in the direction perpendicular (parallel) to \(\mathbf{r}=\mathbf{r}_{1}-\mathbf{r}_{2}\), and both \(J\perp\) and \(J_{\parallel}\) are positive. Previous studies on Rashba superconductors[14; 15], specifically regarding the current distribution around a magnetic impurity and the induced magnetic interaction have obtained an interaction resembling the 2D dipole-dipole interaction (DDI) form with \(J_{\perp}=J_{\parallel}\), which does not result in either a ferromagnetic or an anti-ferromagnetic ground state for two impurities. However, those studies have neglected the influence of the electromagnetic (EM) field. On the other hand, we know from Pearl's seminal work [16] that the EM field plays a crucial role in determining the current distribution in conventional superconducting thin films. This leads to the natural question of the effect of the EM field on the magnetic coupling between impurities in a superconductor with spin-orbit coupling (SOC).
In this work, we present a theory elucidating the impact of the electromagnetic field on the magnetic coupling between impurities in a superconductor with SOC. We demonstrate that the presence of EM fields alters drastically the spatial dependence of the couplings \(J_{\perp(\parallel)}\). We establish that the supercurrent-mediated magnetic interaction exhibits the form of a DDI that is generated by the so-called Keldysh potential [17; 18], and interpolates between the 2D and 3D DDI. It can also be viewed as a 2D DDI combined with a ferromagnetic interaction, leading to a ferromagnetic ground state for two impurities. Furthermore, we emphasize the crucial role of the electromagnetic field in a 2D impurity lattice. Without the electromagnetic field, the interaction energy density becomes unphysically divergent as the system size approaches infinity. However, when the electromagnetic field is taken into account, the energy density converges in the limit of large system size. In the remainder of this paper, we provide a detailed derivation of these results and discuss recent experimental results suggesting a superconducting-induced ferromagnetic order of impurities in Fe(SeTe) [19].
Theory.-We first consider magnetic impurities on top of a two-dimensional superconducting system with SOC, which can, for example, be a thin film on a substrate or superconductivity induced at the surface of a topological insulator. We assume all the magnetic impurities are polarized in the in-plane directions. In these systems, the inversion symmetry is broken because of the presence of the intrinsic polar vector - the normal \(\hat{z}\) to the transport plane. Additionally, the exchange field \(\mathbf{h}\) induced by the magnetic impurity locally breaks the time-reversal symmetry. The breaking of these two symmetries allows for the existence of a spontaneous current, known as the anomalous current, given by[20; 21]\(\mathbf{j}=-e^{2}D\mathbf{a}\), where \(D\) is the 2D superfluid weight and \(\mathbf{a}\) is the effective gauge field \(\mathbf{a}=\frac{1}{e}\mathbf{T}\mathbf{h}\times\hat{z}\). Here \(\Gamma\) is a constant that depends on the details of the system. For example, in a superconductor with strong Rashba SOC \(\Gamma=\alpha/v_{F}^{2}\) (see section 4 of supplementary material [22]), where \(\alpha\) is the SOC strength; or \(\Gamma=1/v_{F}\) in a Dirac material[23; 24].
The total free energy change due to the supercurrent is given by
\[F=\int d^{3}\mathbf{r}\frac{1}{8}D\left[\mathbf{\nabla}\phi-2e\mathbf{A}-2e\mathbf{a}\right]^{ 2}\delta(z)+\frac{1}{2\mu_{0}}\mathbf{B}^{2}\;. \tag{2}\]
The first term on the right-hand side is the free energy of the superconductor \(F_{SC}\) and the second term is the electromagnetic field contribution \(F_{EM}\), where \(\mu_{0}\) is the magnetic constant, \(e\) is the electron charge, \(\mathbf{A}\) is the vector potential of the electromagnetic field \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\). We assume the superconductor film is located at \(z=0\). Note that \(\mathbf{A}\) lives in 3 dimensions while \(\mathbf{\nabla}\phi\) and \(\mathbf{a}\) are defined in the 2D superconductor. Taking the derivative of \(F_{SC}\) with respect to \(-\mathbf{A}\), one obtains the 2D supercurrent flowing in the plane of the superconductor,
\[\mathbf{j}=\frac{1}{2}eD\left[\mathbf{\nabla}\phi-2e\mathbf{A}-2e\mathbf{a}\right]\delta(z). \tag{3}\]
Minimizing the free energy with respect to \(\phi\) gives the charge conservation \(\mathbf{\nabla}\cdot\mathbf{j}=0\). In the following, we choose the Coulomb gauge \(\mathbf{\nabla}\cdot\mathbf{A}=0\). This implies that the phase gradient cancels the longitudinal part \(\mathbf{a}_{l}\) of the effective gauge field, which is written in the momentum \(\mathbf{q}\) space as \(\mathbf{q}\phi=2e\mathbf{a}_{l}=2e\mathbf{q}(\mathbf{q}\cdot\mathbf{a})/q^{2}\). The current of Eq. (3) is then fully determined by the transverse component \(\mathbf{a}_{t}=\mathbf{a}-\mathbf{a}_{l}\), that is, \(\mathbf{j}=-e^{2}D(\mathbf{A}+\mathbf{a}_{t})\).
By minimizing the total free energy with respect to the vector potential \(\frac{\partial F}{\partial\mathbf{A}}=0\), we obtain the Maxwell equation \(\mu_{0}\mathbf{j}=\mathbf{\nabla}\times\mathbf{B}\), which in the Coulomb gauge reads,
\[\mathbf{\nabla}^{2}\mathbf{A}=e^{2}\mu_{0}D\left[\mathbf{A}+\mathbf{a}_{t}\right]\delta(z). \tag{4}\]
The solution of this equation for a given distribution of the effective gauge field \(\mathbf{a}(\mathbf{r})\) determines the induced vector potential and the charge current. At the solution point, by substituting the Maxwell equation back into Eq. (S1), we get the change of free energy due to supercurrents generated by the external exchange field, (see section 1 of the supplementary material [22])
\[F=-\frac{1}{2}\int d^{2}\mathbf{r}\,\mathbf{j}\mathbf{a}. \tag{5}\]
Remarkably, the free energy is determined by the supercurrent in the regions in which \(\mathbf{a}(\mathbf{r})\) is finite, _i.e._ the regions where the exchange field \(\mathbf{h}(\mathbf{r})\) is finite.
We now consider a set of magnetic regions (impurities) located in the superconducting plane at the points \(\mathbf{r}_{i}\), such that the distance between the regions is much larger than their size. In this case, the distribution of the induced supercurrents almost everywhere as well as the change of the free energy become independent on the size/shape of the impurities and the corresponding exchange field can be approximated as \(\mathbf{h}(\mathbf{r})=\sum_{i}J_{0}\mathbf{m}_{i}\delta(\mathbf{r}-\mathbf{r}_{i})\). Here \(\mathbf{m}_{i}\) is the total magnetization (spin) of the ith impurity, and \(J_{0}\) is the electron-impurity exchange coupling.
By solving the Maxwell-London equation (4) and inserting the supercurrent \(\mathbf{j}=-e^{2}D(\mathbf{A}+\mathbf{a}_{t})\) into Eq. (5) we can identify the part of the free energy responsible for the supercurrent-induced long-range magnetic interaction (see section 2 of supplementary material [22]):
\[F_{I}=-\frac{Z}{2}\sum_{i\neq i}(\mathbf{m}_{i}\cdot\nabla)(\mathbf{m}_{j}\cdot\nabla) V(r_{ij})=\frac{1}{2}\sum_{i\neq j}F_{I}^{ij}, \tag{6}\]
where \(Z=DJ_{0}^{2}\Gamma^{2}/2\pi\), \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) is the distance between magnetic impurities, and \(V(r)\) is the dimensionless Keldysh potential,
\[V(r)=\frac{\pi}{2}\left[H_{0}\left(\frac{r}{r_{0}}\right)-Y_{0}\left(\frac{r }{r_{0}}\right)\right]. \tag{7}\]
Here \(H_{0}\) is the Struve function, \(Y_{0}\) is the second kind Bessel function, and \(r_{0}\) is the Pearl length \(r_{0}=2/e^{2}D\mu_{0}\)[16].
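A compact way to see where (7) comes from (a standard Pearl-type screening computation, sketched here with the conventions above rather than taken from the supplementary material [22]): Fourier transforming (4) in the film plane and using the decaying solution \(\hat{\mathbf{A}}(\mathbf{q},z)=\hat{\mathbf{A}}(\mathbf{q},0)\mathrm{e}^{-q|z|}\) away from the film, the jump condition at \(z=0\) gives

\[\hat{\mathbf{A}}(\mathbf{q},0)=-\frac{\hat{\mathbf{a}}_{t}(\mathbf{q})}{1+qr_{0}}\,,\qquad\hat{\mathbf{j}}(\mathbf{q})=-e^{2}D\,\frac{qr_{0}}{1+qr_{0}}\,\hat{\mathbf{a}}_{t}(\mathbf{q})\,,\]

with \(q=|\mathbf{q}|\). Inserting this screened current into Eq. (5) for point-like impurities produces pair terms governed by \(2\pi r_{0}/[q(1+qr_{0})]\), which is the two-dimensional Fourier transform of \(V\) in Eq. (7); this is how the Keldysh form arises.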
The appearance of the Keldysh potential is quite remarkable. Usually, it describes the electrostatic potential of a point charge confined to a polarizable insulating plane [17; 18]. It interpolates between the 2D (\(V\sim-\ln r\)) and the 3D (\(V\sim 1/r\)) forms of the Coulomb potential with the crossover scale given by \(r_{0}\) in Eq. (7). Here it plays a similar role by interpolating between the 2D and 3D form of the induced DDI between the impurity spins.
Working out the derivatives in the pairwise part \(F_{I}^{ij}\) of the interaction energy Eq. (6) we find,
\[F_{I}^{ij}=J_{\perp}(r_{ij})\mathbf{m}_{i\perp}\mathbf{m}_{j\perp}-J_{\parallel}(r_{ ij})\mathbf{m}_{i\parallel}\mathbf{m}_{j\parallel}\;, \tag{8}\]
with
\[J_{\perp}(r)=-Z\frac{1}{r}\frac{dV}{dr},\quad J_{\parallel}(r)=Z\frac{d^{2}V} {dr^{2}}. \tag{9}\]
To gain insight into the \(r\)-dependence, it is instructive to use a highly accurate representation of \(V(r)\) in terms of elementary functions [18], which yields
\[J_{\perp}=Z\frac{r_{0}}{r^{2}(r+r_{0})},\quad J_{\parallel}=Z\frac{r_{0}(2r+ r_{0})}{r^{2}(r+r_{0})^{2}}. \tag{10}\]
The interaction as a function of the Pearl length is shown in Fig. 2a. The important general property is that for any finite \(r_{0}\), one obtains \(J_{\parallel}>J_{\perp}\).
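As a quick numerical cross-check (a minimal sketch, not taken from the paper or its supplement), one can evaluate the exact couplings of Eq. (9) by differentiating the Keldysh potential of Eq. (7) numerically and compare them with the closed forms of Eq. (10); lengths are measured in units of the Pearl length and \(Z\) is set to one.

```python
# Compare Eq. (9), via numerical derivatives of the Keldysh potential of Eq. (7),
# with the approximate closed forms of Eq. (10). Units: r in units of r0, Z = 1.
import numpy as np
from scipy.special import struve, y0

Z, r0 = 1.0, 1.0

def V(r):                      # dimensionless Keldysh potential, Eq. (7)
    x = r / r0
    return 0.5 * np.pi * (struve(0, x) - y0(x))

r = np.linspace(0.05, 5.0, 400)
h = 1e-3
dV = (V(r + h) - V(r - h)) / (2 * h)                 # dV/dr
d2V = (V(r + h) - 2 * V(r) + V(r - h)) / h**2        # d^2V/dr^2

J_perp_exact, J_par_exact = -Z * dV / r, Z * d2V                       # Eq. (9)
J_perp_appr = Z * r0 / (r**2 * (r + r0))                               # Eq. (10)
J_par_appr = Z * r0 * (2 * r + r0) / (r**2 * (r + r0)**2)

print(np.max(np.abs(J_perp_exact / J_perp_appr - 1)))   # relative deviation of the approximation
print(np.max(np.abs(J_par_exact / J_par_appr - 1)))
print(np.all(J_par_exact > J_perp_exact))               # J_parallel > J_perp for finite r0
```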
Let us analyze the case of two impurities. For convenience, we write the interaction as
\[F_{I}=\frac{J_{\perp}+J_{\parallel}}{2}\left(m_{1\perp}m_{2\perp}-m_{1 \parallel}m_{2\parallel}\right)-\frac{J_{\parallel}-J_{\perp}}{2}\mathbf{m}_{1} \cdot\mathbf{m}_{2}. \tag{11}\]
The first term on the right-hand side alone does not generate a difference in the free energies of the ferromagnetic and anti-ferromagnetic states but rather leads to degenerate ground states with \(\theta_{1}=-\theta_{2}\), where \(\theta_{1}\) and \(\theta_{2}\) are the angles between the impurities magnetic moments and \(\mathbf{r}\) as shown in Fig. 2b. The second term, with the form of isotropic ferromagnetic interaction, breaks the ground state degeneracy, resulting in a ferromagnetic ground state. This ferromagnetic interaction stems from the \(\mathbf{B}^{2}\) term in the free energy, which was ignored in previous works [14; 15]. In superconductors with large Rashba SOC \(\Gamma\) and \(D\) are given by \(\Gamma=\alpha/v_{F}^{2}\), and \(D=\frac{1}{2}\pi v_{F}^{2}N_{0}T\sum_{n}\frac{\Lambda^{2}}{(\omega_{n}^{2}+ \Delta^{2})^{3/2}}\) (see section 3 of supplementary material [22]). In the limit where the EM field
Figure 2: a) Superconductivity-induced magnetic interaction as a function of Pearl length. The red and blue colors denote \(J_{\parallel}\) and \(J_{\perp}\), respectively. The circles are the exact results calculated from Eq. (9) and the lines are the approximate values obtained from Eq. (10). b) Ground states of two magnetic impurities with and without the EM field. c) Distribution of the supercurrent induced by the two magnetic impurities. The distance between the two impurities is \(r=r_{0}/2\).
can be neglected \(\mu_{0}\to 0\) and \(r_{0}\to\infty\), we get \(J_{\parallel}=J_{\perp}\) and recover the result of Ref. [15].
The supercurrent \(\mathbf{j}(\mathbf{r})\) induced by a magnetic impurity with magnetization \(\mathbf{m}\) at the origin takes the form [22]
\[\mathbf{j}(\mathbf{m},\mathbf{r})=\frac{1}{\Gamma J_{0}}\left[J_{\parallel}\mathbf{m}\times \hat{z}-(J_{\parallel}+J_{\perp})(\mathbf{m}\times\hat{z}\cdot\hat{r})\hat{r}\right], \tag{12}\]
where \(\hat{r}\equiv\mathbf{r}/|\mathbf{r}|\). Due to the linearity of the problem, the total current induced by all impurities is given by sum \(\mathbf{j}_{\rm tot}=\sum_{i}\mathbf{j}(\mathbf{m}_{i},\mathbf{r}-\mathbf{r}_{i})\). The current distribution for two impurities is shown in Fig. 2c. Experimentally, the supercurrent distribution can be determined by measuring the local current-induced magnetic field[25] The magnetic field \(\mathbf{B}=\mathbf{\nabla}\times\mathbf{A}\) induced by one impurity at the origin, and evaluated at \(z=0\) reads,
\[\mathbf{B}(\mathbf{m},\mathbf{r})=\frac{\Gamma J_{0}}{4\pi}\left[\frac{2}{(r+r_{0})^{3}}+ \frac{2r+r_{0}}{r^{2}(r+r_{0})^{2}}\right](\mathbf{m}\cdot\hat{r})\hat{z}. \tag{13}\]
The total magnetic field at \(z=0\) is then \(\mathbf{B}_{\rm tot}=\sum_{i}\mathbf{B}(\mathbf{m}_{i},\mathbf{r}-\mathbf{r}_{i})\).
_Superconductivity induced ferromagnetism in a 2D Bravais lattice.-_ Next, we consider an infinite Bravais lattice of magnetic impurities, as shown in Fig. 3a. Below we assume \(d_{1}=d_{2}\) and the shape of the lattice is controlled by the angle \(\theta\). It is a triangular lattice when \(\theta=\pi/3\) and a square lattice when \(\theta=\pi/2\). Here we concentrate on the case where \(\theta\) is between \(\pi/3\) and \(\pi/2\). In this system, the interaction energy density is given by \(E=\frac{1}{2V}\sum_{i\neq j}F_{I}^{ij}\), where \(V\) is the area of the 2D lattice. We notice that the inclusion of the EM field is crucial for computing energy. Otherwise, the interaction \(F_{I}^{ij}\) scales as \(1/r^{2}\) and the energy density unphysically diverges in the thermodynamics limit. With the screening effect of the EM field, according to Eq. (10), \(F_{I}^{ij}\) crosses over from the 2D to 3D DDI and decays as \(1/r^{3}\) at \(r\to\infty\), resulting in convergent and extensive energy.
The 3D DDI induced ordered state in a 2D electric dipole lattice has been studied in [26]. It has been shown that a ferroelectric state forms in a triangular lattice, while a square lattice favors a layered anti-ferroelectric state. Since the superconductivity-induced magnetic interaction has a form between 2D and 3D DDI, we expect similar ground states in our model. By minimizing the energy we find that at \(T=0\) the ground state can be either a ferromagnetic state, Fig. 3a, induced by the net ferromagnetic interaction \(\frac{J_{\parallel}-J_{\perp}}{2}\), or layered antiferromagnetic state, Fig. 3b, due to the 2D DDI \(\frac{J_{\parallel}+J_{\perp}}{2}\). At finite temperatures, the order parameter \(\Delta(T)\) needs to be determined self-consistently. The superfluid weight \(D\) and the Pearl length \(r_{0}\) are calculated using \(\Delta(T)\). The corresponding temperature dependence of \(J_{\parallel}\) and \(J_{\perp}\) is shown in Fig. 3c. With increasing temperature, the superfluid weight is decreased, leading to the suppression of the magnetic interaction. In addition, the relative value of the net ferromagnetic interaction \(\frac{J_{\parallel}-J_{\perp}}{2}\) compared with the 2D DDI \(\frac{J_{\parallel}+J_{\perp}}{2}\) becomes smaller at higher temperatures, suggesting a transition to an antiferromagnetic phase at some finite temperature. The phase diagram obtained numerically is shown in Fig. 3d. At \(T=0\), the triangular lattice (\(\theta=\pi/3\)) has a ferromagnetic ground state, while the square lattice is antiferromagnetic. By setting \(\mu_{0}\to 0\), we find that the ground state is always an antiferromagnetic state without the magnetic field.
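To make the energetic comparison behind Fig. 3 concrete, the following minimal sketch (illustrative parameter values, not the Fe(Se,Te) numbers quoted below, and a finite summation cutoff instead of a true thermodynamic limit) evaluates the interaction energy per impurity of the two candidate states of Fig. 3a-b from Eqs. (8) and (10).

```python
# Energy per impurity of the ferromagnetic and layered antiferromagnetic states of
# Fig. 3a-b, from the pairwise interaction of Eqs. (8) and (10). Illustrative sketch:
# lengths in units of r0, Z = |m| = 1, and the lattice sum is truncated at a finite shell.
import numpy as np

Z, r0, d, theta = 1.0, 1.0, 0.02, np.pi / 3          # triangular lattice, d = r0/50
a1 = d * np.array([1.0, 0.0])
a2 = d * np.array([np.cos(theta), np.sin(theta)])

def J_perp(r): return Z * r0 / (r**2 * (r + r0))
def J_par(r):  return Z * r0 * (2 * r + r0) / (r**2 * (r + r0)**2)

def energy_per_site(sign, m_hat, cutoff=60):
    # sum of F_I^{0j}/2 over lattice sites j != 0 within the cutoff shell
    E = 0.0
    for n1 in range(-cutoff, cutoff + 1):
        for n2 in range(-cutoff, cutoff + 1):
            if n1 == 0 and n2 == 0:
                continue
            rvec = n1 * a1 + n2 * a2
            r = np.linalg.norm(rvec)
            rhat = rvec / r
            m0, mj = m_hat, sign(n1, n2) * m_hat
            mpar0, mparj = m0 @ rhat, mj @ rhat       # components parallel to r
            mperp = (m0 - mpar0 * rhat) @ (mj - mparj * rhat)
            E += 0.5 * (J_perp(r) * mperp - J_par(r) * mpar0 * mparj)
    return E

m_hat = np.array([1.0, 0.0])                              # in-plane moment direction
E_fm = energy_per_site(lambda n1, n2: +1, m_hat)          # all moments parallel, Fig. 3a
E_af = energy_per_site(lambda n1, n2: (-1)**n2, m_hat)    # rows of alternating sign, Fig. 3b
print(E_fm, E_af)   # the lower of the two indicates the favoured state for these parameters
```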
_Conclusion.-_ To conclude, we have demonstrated that superconductivity in materials with SOC can induce long-range magnetic interaction. The effects of London-Pearl screening are crucial for proving this result. The exact analytic solution shows that the induced magnetic interaction has a form of 2D DDI combined with a ferromagnetic coupling. We also demonstrate that in an oblique Bravais lattice of magnetic impurities, the superconductivity can induce a ferromagnetic state in certain parameter regimes. To the best of our knowledge, these results provide the first example of singlet superconductivity-induced ferromagnetic coupling.
In principle, our predictions can be tested in any superconductor that exhibits a sizable Rashba SOC or in a Dirac superconductor with magnetic impurities. Notably, recent experiments [27; 28; 29; 30; 19], provides compelling evidence of observing such a ferromagnetic state. Specifically, in Ref. [19], a hysteretic magnetization was observed in Fe(SeTe) with Fe impurities. The magneti
Figure 3: a-b) Schematic picture of the ferromagnetic and the layered anti-ferromagnetic states of a Bravais magnetic impurity lattice. c) Temperature dependence of the interaction strength. The red line and blue line denote \(J_{\parallel}\) and \(J_{\perp}\), respectively in units of \(J_{\perp}(T=0)\). The distance between the two impurities is \(r=r_{0}(T=0)/2\). d) Phase diagram for a 2D impurity lattice. The lattice constant here is \(d_{1}=d_{2}=r_{0}(T=0)/50\). When \(T\) is close to \(T_{c}\) (grey shaded region), the Pearl length becomes too long and there might be numerical uncertainties due to the finite size effect.
zation disappears at temperatures above the superconducting critical temperature, indicating that the ferromagnetism is induced by the superconductivity. This observation can be considered as evidence of supercurrent-mediated magnetic interaction in the presence of surface Dirac states. The realistic values for the parameters of the surface Dirac band in Fe(Te,Se) [19] are given by [31; 32]: \(v_{F}=0.216eV\AA\), \(E_{F}=4.5meV\), \(\Delta=1.5meV\) and \(\mathcal{J}_{0}=50meV\), \(|\mathbf{m}|=5\) and the distance between the nearest impurities is \(d=2nm\). Here \(\mathcal{J}_{0}\) is the exchange interaction defined in a lattice model. In our continuous model, the exchange interaction is \(J_{0}=\mathcal{J}_{0}\mathcal{A}\), where \(\mathcal{A}\) is the effective area of the impurity which is assumed to be the lattice constant square \(\mathcal{A}=(0.4nm)^{2}\). We obtain at \(T=0\) the supercurrent-mediated magnetic interaction is roughly \(2meV\), of the same order as the superconducting gap. Thus, the thermal fluctuation can be neglected for low enough temperatures \(T\ll\Delta\). Note that the effective magnetic interaction is proportional to \(\mathcal{A}^{2}\), so it can be easily enhanced by increasing the size of the magnetic impurity, for example using islands of a ferromagnetic insulator with a large size. According to Ref. [27; 30], the ferromagnetism dwells on the surface of the superconductor, supporting our theory that the SOC is crucial for the formation of the ferromagnetic state.
**Acknowledgements** Y.L. and F.S.B. acknowledge financial support from Spanish AEI through project PID2020-114252GB-I00 (SPIRIT), TED2021-130292B-C42, and the Basque Government through grant IT-1591-22 and IKUR strategy program. I.V.T. acknowledges support by Grupos Consolidad s UPV/EHU del Gobierno Vasco (Grant IT1453-22) and by the grant PID2020-112811GB-I00 funded by MCIN/AEI/10.13039/501100011033.
|
2304.09722 | Size-biased diffusion limits and the inclusion process | We consider the inclusion process on the complete graph with vanishing
diffusivity, which leads to condensation of particles in the thermodynamic
limit. Describing particle configurations in terms of size-biased and
appropriately scaled empirical measures of mass distribution, we establish
convergence in law of the inclusion process to a measure-valued Markov process
on the space of probability measures. In the case where the diffusivity
vanishes like the inverse of the system size, the derived scaling limit is
equivalent to the well known Poisson-Dirichlet diffusion, offering an
alternative viewpoint on these well-established dynamics. Moreover, our
approach covers all scaling regimes of the system parameters and yields a
natural extension of the Poisson-Dirichlet diffusion to infinite mutation rate.
We also discuss in detail connections to known results on related Fleming-Viot
processes. | Paul Chleboun, Simon Gabriel, Stefan Grosskinsky | 2023-04-19T15:08:40Z | http://arxiv.org/abs/2304.09722v3 | # Size-biased diffusion limits and the inclusion process
###### Abstract.
We consider the inclusion process on the complete graph with vanishing diffusivity, which leads to condensation of particles in the thermodynamic limit. Describing particle configurations in terms of size-biased and appropriately scaled empirical measures of mass distribution, we establish convergence in law of the inclusion process to a measure-valued Markov process on the space of probability measures. In the case where the diffusivity vanishes like the inverse of the system size, the derived scaling limit is equivalent to the well known Poisson-Dirichlet diffusion, offering an alternative viewpoint on these well-established dynamics. Moreover, our approach covers all scaling regimes of the system parameters and yields a natural extension of the Poisson-Dirichlet diffusion to infinite mutation rate. We also discuss in detail connections to known results on related Fleming-Viot processes.
Key words and phrases:Inclusion Process, Condensation, Poisson-Dirichlet, Infinitely-Many-Neutral-Alleles 2010 Mathematics Subject Classification: Primary: 60K35; Secondary: 82C22; 82C26 S. Gabriel thanks Tommaso Rosati for helpful discussions and pointing out the reference [10]. S. Gabriel is supported by the Warwick Mathematics Institute Centre for Doctoral Training, and acknowledges funding from the University of Warwick and EPSRC through grant EP/R513374/1.
Introduction
The study of the stochastic processes in the stochastic process is a fundamental problem in the theory of stochastic processes. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. 
The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the 
stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. 
The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. 
The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, and the stochastic process is a stochastic process, respectively. The stochastic process is a stochastic process, respectively.
We will often write \(\mu_{\#}\eta\) instead of \(\mu^{(\eta)}\) in order to avoid overloaded notation. The above mappings \(\mu^{(\cdot)}\) and \(\mu^{(\cdot)}_{L,N}\) do not preserve spatial information of particle configurations, but this information also does not enter the dynamics on the complete graph. Note that the map (4) was already mentioned in [1], however, only to prove denseness of the domain of functions considered there.
In order to describe the limiting dynamics, we consider the domain of functions
\[\mathcal{D}(\mathcal{L}_{\theta})=\text{sub-algebra of $C(E)$ generated by functions $\mu\mapsto\mu(h)$}\,,\ h\in C^{3}([0,1])\,. \tag{6}\]
The pre-generator of the corresponding superprocess acting on a function \(H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\in\mathcal{D}(\mathcal{L}_{\theta})\) then reads
\[\mathcal{L}_{\theta}H(\mu):= 2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\big{(}\mu(Bh_{k}Bh_{l}) -\mu(Bh_{k})\mu(Bh_{l})\big{)}\prod_{j\neq k,l}\mu(h_{j}) \tag{7}\] \[+\sum_{1\;\leqslant\;k\;\leqslant\;n}\mu(A_{\theta}h_{k})\prod_{ j\neq k}\mu(h_{j})\,,\]
with \(Bh(z):=h(z)+zh^{\prime}(z)=(zh(z))^{\prime}\). Here, the first part is usually referred to as interaction term and \(A_{\theta}\) denotes the single-particle operator of the form
\[A_{\theta}h(z):= (1-z)(Bh)^{\prime}(z)+\theta(Bh(0)-Bh(z)) \tag{8}\] \[= z(1-z)h^{\prime\prime}(z)+(2-z(2+\theta))h^{\prime}(z)+\theta(h( 0)-h(z))\,.\]
The operator \(Bh(z)\) should be thought of as a 'size-biased derivative', which appears due to our choice of embedding (2). For example, a single site containing a mass fraction \(z\in[0,1]\) will be represented by a point-mass \(z\delta_{z}\). Thus, a change in \(z\) will result in a change of both the amount of mass and its position.
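For instance, for monomials \(h(z)=z^{m-1}\), \(m\;\geqslant\;1\), one checks directly that
\[Bh(z)=\big{(}zh(z)\big{)}^{\prime}=(z^{m})^{\prime}=m\,z^{m-1}=m\,h(z)\,,\]
so \(B\) acts diagonally on monomials; this elementary identity will be used again in the proof of Proposition 1.3.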
Our first result identifies the process described by \(\mathcal{L}_{\theta}\) as the correct scaling limit of the inclusion process.
**Theorem 1.1**.: _Let \(\rho\in(0,\infty)\) and \(d=d(L)\) such that \(dL\to\theta\in[0,\infty)\). If \(\eta^{(L,N)}(0)\) is such that \(\mu_{\#}\eta^{(L,N)}(0)\stackrel{{ D}}{{\longrightarrow}}\mu_{0}\in E\), then_
\[\Big{(}\mu_{\#}\eta^{(L,N)}(t)\Big{)}_{t\;\geqslant\;0}\stackrel{{ D}}{{\longrightarrow}}(\mu_{t})_{t\;\geqslant\;0}\,,\quad\text{in $D([0,\infty),E)$}\,,\ \text{as $N/L\to\rho$}\,. \tag{9}\]
_Here \((\mu_{t})_{t\;\geqslant\;0}\) denotes the measure-valued process on \(E\) (3) generated by \(\mathcal{L}_{\theta}\), cf. (7), with initial value \(\mu_{0}\)._
In Proposition 2.4 we prove that the closure of \((\mathcal{L}_{\theta},\mathcal{D}(\mathcal{L}_{\theta}))\) is indeed the generator of a Feller process on the state space \(E\).
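As an informal numerical illustration of Theorem 1.1, the following Python sketch simulates the inclusion process on the complete graph, where a particle jumps from \(x\) to \(y\neq x\) at rate \(\eta_{x}(d+\eta_{y})\), cf. (1), and prints the largest rescaled clusters, i.e. the atoms of \(\mu_{\#}\eta\). The two-step sampling routine, the choice \(d=\theta/L\) and all parameter values are ours and purely illustrative.

```python
import numpy as np

def simulate_inclusion(L, N, theta, t_max, rng=None):
    """Gillespie-type simulation of the inclusion process on the complete graph:
    a particle jumps from x to y (x != y) at rate eta_x * (d + eta_y), with d = theta / L."""
    rng = np.random.default_rng(rng)
    d = theta / L
    eta = rng.multinomial(N, np.ones(L) / L).astype(float)   # initial configuration
    t = 0.0
    while t < t_max:
        out_rates = eta * (d * (L - 1) + N - eta)  # total jump rate out of each site
        total = out_rates.sum()
        if total == 0.0:                           # absorbing configuration (only possible if d = 0)
            break
        t += rng.exponential(1.0 / total)
        x = rng.choice(L, p=out_rates / total)     # departure site
        in_rates = d + eta
        in_rates[x] = 0.0
        y = rng.choice(L, p=in_rates / in_rates.sum())   # target site, chosen prop. to d + eta_y
        eta[x] -= 1.0
        eta[y] += 1.0
    return eta

L, N, theta = 100, 200, 1.0
eta = simulate_inclusion(L, N, theta, t_max=5.0, rng=0)
print(np.sort(eta / N)[::-1][:10])   # largest rescaled clusters, i.e. atoms of mu_# eta
```

For \(\theta\) of order one and moderate times one expects to see a handful of clusters of macroscopic size, in line with the Poisson-Dirichlet picture of the condensate.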
For the case \(\theta=\infty\), i.e. \(dL\) diverging, we expect clusters on the scale \(1/d\), cf. [1]. In this case we consider the embedding
\[\hat{\mu}^{(\cdot)}=\hat{\mu}^{(\cdot)}_{L,N}:\Omega_{L,N}\to \mathcal{M}_{1}(\mathbb{R}_{+})\quad\text{with}\quad\hat{\mu}^{(\eta)}_{L,N}:= \sum_{x=1}^{L}\frac{\eta_{x}}{N}\delta_{dL\frac{\eta_{x}}{N}}\,, \tag{10}\]
mapping particle configurations into the space of probability measures \(\mathcal{M}_{1}(\mathbb{R}_{+})\), again with the topology induced by weak convergence of measures. In contrast to \(\theta<\infty\), any measure in \(\mathcal{M}_{1}(\mathbb{R}_{+})\) can be approximated by particle configurations using (10), which is why we do not have to restrict ourselves to a strict subset of probability measures as above.
The lack of compactness of \(\mathbb{R}_{+}\) now allows for diverging rescaled masses of particle configurations in the thermodynamic limit, thus when \(dL\to\infty\), we expect the scaling limit to be a measure-valued process on \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\), with \(\overline{\mathbb{R}}_{+}=[0,\infty]\). We include \(\infty\) to describe mass on larger scales than \(1/d\). Indeed the correct limit turns out to be a process on \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) without interaction and single-particle operator
\[\hat{A}h(z):= (Bh)^{\prime}(z)+(Bh(0)-Bh(z)) \tag{11}\] \[= zh^{\prime\prime}(z)+(2-z)h^{\prime}(z)+(h(0)-h(z))\,,\]
acting on \(h\) in the domain
\[\mathcal{D}(\hat{A}):=\{h\,:\,h(\infty)=0\text{ and }h|_{\mathbb{R}_{+}}\in C _{c}^{3}(\mathbb{R}_{+})\}\cup\{\text{constant functions}\}\subset C(\overline{ \mathbb{R}}_{+})\,. \tag{12}\]
Slowing down the evolution of the inclusion process appropriately, we get the following result.
**Theorem 1.2**.: _Let \(\rho\in(0,\infty)\) and \(d=d(L)\to 0\) such that \(dL\to\infty\). If \(\hat{\mu}_{\#}\eta^{(L,N)}(0)\stackrel{{ D}}{{\longrightarrow}}\hat{ \mu}_{0}\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\), then_
\[\Big{(}\hat{\mu}_{\#}\eta^{(L,N)}\big{(}\tfrac{t}{dL}\big{)}\Big{)}_{t\; \geqslant\;0}\stackrel{{ D}}{{\longrightarrow}}(\hat{\mu}_{t})_{t \;\geqslant\;0}\,,\quad\text{in }D([0,\infty),\mathcal{M}_{1}(\overline{\mathbb{R}}_{+}))\,,\text{ as }N/L\to\rho\,. \tag{13}\]
_Here \((\hat{\mu}_{t})_{t\;\geqslant\;0}\) denotes the measure-valued process on \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) with initial value \(\hat{\mu}_{0}\) generated by_
\[\hat{\mathcal{L}}H(\mu)=\sum_{1\;\leqslant\;k\;\leqslant\;n}\mu(\hat{A}h_{k}) \prod_{\begin{subarray}{c}m=1\\ m\neq k\end{subarray}}^{n}\mu(h_{m})\quad\text{with }H(\mu)=\mu(h_{1})\cdots\mu(h_{n}),\ h_{k}\in \mathcal{D}(\hat{A}). \tag{14}\]
The operator \(\hat{\mathcal{L}}\) may be interpreted as a Fleming-Viot process without interaction. This is in contrast to the generator \(\mathcal{L}_{\theta}\), which does not have a Fleming-Viot interpretation, since the interaction term is not of the form
\[2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\big{(}\mu(h_{k}h_{l})-\mu(h_{k})\mu(h _{l})\big{)}\prod_{j\neq k,l}\mu(h_{j})\,.\]
We will see that the limiting dynamics are deterministic with absorbing state \(\hat{\mu}=\operatorname{Exp}(1)\in\mathcal{M}_{1}(\mathbb{R}_{+})\). In fact, the statement of Theorem 1.2 can be reformulated into a hydrodynamic limit, cf. Proposition 3.5. Moreover, if \(\hat{\mu}_{0}[\mathbb{R}_{+}]=1\), then \(\hat{\mu}_{t}[\mathbb{R}_{+}]=1\) for every \(t\;\geqslant\;0\), cf. Corollary 3.7, i.e. mass does not escape to larger scales.
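The special role of \(\mathrm{Exp}(1)\) can be checked directly on the level of the single-particle operator: for \(h\in\mathcal{D}(\hat{A})\) with \(h|_{\mathbb{R}_{+}}\in C_{c}^{3}(\mathbb{R}_{+})\), integrating by parts against the density \(e^{-z}\) and using \(\frac{d}{dz}(ze^{-z})=(1-z)e^{-z}\) gives
\[\int_{0}^{\infty}\big{(}zh^{\prime\prime}(z)+(2-z)h^{\prime}(z)\big{)}e^{-z}\,dz=\int_{0}^{\infty}h^{\prime}(z)e^{-z}\,dz=-h(0)+\int_{0}^{\infty}h(z)e^{-z}\,dz\,,\]
which exactly cancels the contribution \(\int_{0}^{\infty}(h(0)-h(z))e^{-z}\,dz\) of the resetting part, so that indeed \(\mathrm{Exp}(1)(\hat{A}h)=0\).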
The two theorems fully determine the dynamics of the inclusion process, with vanishing diffusivity, on complete graphs in the thermodynamic limit with density \(\rho\in(0,\infty)\). For a discussion of the boundary cases \(\rho\in\{0,\infty\}\), we refer to Section 4. For measure valued processes with generators (7) and (14) the evolution w.r.t. a simple test function (in the appropriate domain) is given by
\[d\mu_{t}(h)=\mu_{t}(A_{\theta}h)\,dt+dM_{t}^{(h)}\quad\text{and}\quad d\hat{ \mu}_{t}(h)=\hat{\mu}_{t}(\hat{A}h)\,dt\,, \tag{15}\]
respectively. Here \(t\mapsto M_{t}^{(h)}\) is a martingale with quadratic variation
\[\big{[}M^{(h)}\big{]}_{t}=\int_{0}^{t}\Big{(}\mu_{s}\big{(}(Bh)^{2}\big{)}-\mu _{s}(Bh)^{2}\Big{)}\,ds\,,\]
given by the interaction term in \(\mathcal{L}_{\theta}\) with \(n=2\) and \(h_{1}=h_{2}=h\). Since \(t\mapsto\hat{\mu}_{t}(h)\) solves a simple ODE without martingale part it is continuous for all \(h\in\mathcal{D}(\hat{A})\), so that the process \((\hat{\mu}_{t})_{t\;\geqslant\;0}\) has
continuous paths. Continuity of the process \((\mu_{t})_{t\,\geqslant\,0}\), and thus of the martingale \((M_{t}^{(h)})_{t\,\geqslant\,0}\), follows from the equivalence with the Poisson-Dirichlet diffusion, cf. Proposition 1.3.
Taking expectations of the first term in (15), we see that \(\bar{\mu}_{t}:=\mathbb{E}_{\mu_{0}}[\mu_{t}]\) satisfies
\[\frac{d}{dt}\bar{\mu}_{t}(h)=\bar{\mu}_{t}(A_{\theta}h)\,,\quad\text{for all }h\in C^{2}([0,1])\,,\]
which agrees with the time evolution of \((\mathbf{E}_{\mu_{0}}[h(Z(t))])_{t\geqslant 0}\) for a process \((Z(t))_{t\,\geqslant\,0}\) on \([0,1]\) with generator \(A_{\theta}\) and initial distribution \(\mu_{0}\). As a consequence, we have the following dualities
\[\mathbb{E}_{\mu_{0}}[\mu_{t}(h)]=\mathbf{E}_{\mu_{0}}[h(Z(t))]\quad\text{and} \quad\hat{\mu}_{t}(h)=\mathbf{E}_{\mu_{0}}[h(\hat{Z}(t))]\quad\forall t\, \geqslant\,0\,, \tag{16}\]
where \((\hat{Z}(t))_{t\,\geqslant\,0}\) is a process on \(\overline{\mathbb{R}}_{+}\) with generator \(\hat{A}\). In the latter case \(\hat{\mu}_{t}\) itself is deterministic for fixed initial condition \(\mu_{0}\) since it solves the ODE (15). Both processes \((Z(t))_{t\,\geqslant\,0}\) and \((\hat{Z}(t))_{t\,\geqslant\,0}\) are one-dimensional diffusions with resetting to \(0\), which will be used in Sections 2 and 3 to study properties of the measure-valued processes.
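As a simple illustration of (15) and (16), take \(h(z)=z\): then \(A_{\theta}h(z)=2-2(1+\theta)z\) by (8), so that
\[\frac{d}{dt}\,\mathbb{E}_{\mu_{0}}[\mu_{t}(h)]=2-2(1+\theta)\,\mathbb{E}_{\mu_{0}}[\mu_{t}(h)]\,,\]
and the expected size-biased mass relaxes exponentially fast to \(\frac{1}{1+\theta}\), which is the mean of the \(\mathrm{Beta}(1,\theta)\) distribution appearing in Lemma 2.8 below.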
#### 1.1.2. A size-biased viewpoint on the Poisson-Dirichlet diffusion
The process described in Theorem 1.1 is a measure-valued process which provides an alternative description of the infinitely-many-neutral-alleles diffusion model introduced by Ethier and Kurtz in their seminal work [1]. Note that the process is also commonly referred to as Poisson-Dirichlet diffusion, which we will use throughout the paper. The classical Poisson-Dirichlet diffusion is a Feller process on \(\overline{\nabla}\) with pre-generator1
Footnote 1: The original formulation of the pre-generator in [1] includes a multiplicative factor of \(\frac{1}{2}\) which we omitted here.
\[\mathcal{G}_{\theta}f=\sum_{i,j=1}^{\infty}p_{i}(\delta_{i,j}-p_{j})\partial_ {p_{i}p_{j}}^{2}f-\theta\sum_{i=1}^{\infty}p_{i}\partial_{p_{i}}f\,, \tag{17}\]
acting on functions in the domain
\[\mathcal{D}_{mon}(\mathcal{G}_{\theta}):=\text{sub-algebra of }C(\overline{\nabla})\text{ generated by }\ 1,\varphi_{2},\varphi_{3},\ldots\,, \tag{18}\]
where \(\varphi_{m}(p):=\sum_{i=1}^{\infty}p_{i}^{m}\) for \(m\,\geqslant\,2\). \(\mathcal{G}_{\theta}\) acts on such test functions with the convention that occurring sums on the r.h.s. of (17) are evaluated on \(\nabla\) and extended to \(\overline{\nabla}\) by continuity. The name Poisson-Dirichlet diffusion is adequate, since its unique invariant distribution is the _Poisson-Dirichlet distribution_\(\operatorname{PD}(\theta)\).
The Poisson-Dirichlet distribution is a one-parameter family of probability measures supported on \(\nabla:=\big{\{}p\in\overline{\nabla}\,:\,\sum_{i=1}^{\infty}p_{i}=1\big{\}} \subset\overline{\nabla}\). It was first introduced by Kingman [14] as a natural limit of Dirichlet distributions. However, there is a more intuitive construction of the Poisson-Dirichlet distribution using a stick-breaking procedure, see for example [13]. Later, the distribution was identified as the unique stationary measure of the split-merge dynamics [15, 16] and the Poisson-Dirichlet diffusion [1]. Despite it being introduced in the field of population genetics, the Poisson-Dirichlet distribution has since then also appeared in statistical mechanics [1, 2, 17] and recently in interacting particle systems [1, 2].
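For concreteness, the stick-breaking construction can be sketched numerically as follows (a minimal Python sketch; function names and parameter values are ours). The decreasingly ordered stick-breaking weights approximate a \(\mathrm{PD}(\theta)\) sample, and \(\sum_{i}X_{i}^{2}\), the mean of the size-biased empirical measure \(\sum_{i}X_{i}\delta_{X_{i}}\), should on average be close to \(\frac{1}{1+\theta}\), the mean of the size-biased marginal \(\mathrm{Beta}(1,\theta)\).

```python
import numpy as np

def sample_pd(theta, n_sticks=2000, rng=None):
    """Stick-breaking (GEM) weights: V_i iid Beta(1, theta), W_i = V_i * prod_{j<i} (1 - V_j);
    sorted decreasingly they approximate a PD(theta) sample (truncated after n_sticks sticks)."""
    rng = np.random.default_rng(rng)
    v = rng.beta(1.0, theta, size=n_sticks)
    w = v * np.cumprod(np.concatenate(([1.0], 1.0 - v[:-1])))
    return np.sort(w)[::-1]

theta, rng = 2.0, np.random.default_rng(0)
vals = [np.sum(sample_pd(theta, rng=rng) ** 2) for _ in range(500)]
print(np.mean(vals), 1.0 / (1.0 + theta))   # Monte Carlo mean vs. E[Beta(1, theta)]
```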
Naturally, one can consider the mapping of the Poisson-Dirichlet diffusion under \(\mu^{(\cdot)}\) which yields a process on \(E\subset\mathcal{M}_{1}([0,1])\). Indeed, this push-forward process agrees with the process generated by \(\mathcal{L}_{\theta}\).
**Proposition 1.3**.: _Let \((\mu_{t})_{t\;\geqslant\;0}\) be the measure-valued process generated by \(\mathcal{L}_{\theta}\) (7), then_
\[(\mu_{t})_{t\;\geqslant\;0}\stackrel{{ D}}{{=}}\left(\mu^{(X(t))} \right)_{t\;\geqslant\;0}\,, \tag{19}\]
_where \((X(t))_{t\;\geqslant\;0}\) denotes the corresponding Poisson-Dirichlet diffusion generated by \(\mathcal{G}_{\theta}\) (17). In particular, the following properties translate immediately from \((X(t))_{t\;\geqslant\;0}\) to \((\mu_{t})_{t\;\geqslant\;0}\):_
1. _The process_ \((\mu_{t})_{t\;\geqslant\;0}\) _has a unique stationary distribution, which is reversible. It is given by_ \(\mathbf{P}=\mu_{\#}\mathrm{PD}(\theta)\)_, i.e. the law of_ \[\mu^{(X)}=\sum_{i=1}^{\infty}X_{i}\delta_{X_{i}}\,,\quad X\sim\mathrm{PD}(\theta)\,.\] (20)
2. _The process_ \((\mu_{t})_{t\;\geqslant\;0}\) _has continuous sample paths in_ \(E\)_._
3. _For any initial value_ \(\mu_{0}\in E\)_, we have_ \[\mathbb{P}(\mu_{t}(\{0\})=0\ \forall t>0)=1\,.\] (21)
Together with Theorem 1.1, this yields the following corollary.
**Corollary 1.4**.: _Let \(\rho\in(0,\infty)\) and \(d=d(L)\) such that \(dL\to\theta\in[0,\infty)\). If \(\eta(0)^{(L,N)}\) is such that \(\frac{1}{N}\hat{\eta}(0)^{(L,N)}\stackrel{{ D}}{{\longrightarrow}}X(0) \in\overline{\nabla}\), then_
\[\frac{1}{N}\left(\hat{\eta}(t)^{(L,N)}\right)_{t\;\geqslant\;0}\stackrel{{ D}}{{\longrightarrow}}(X(t))_{t\;\geqslant\;0}\,,\quad\text{ as }N/L\to\rho\,. \tag{22}\]
_Here \((X(t))_{t\;\geqslant\;0}\) denotes the Poisson-Dirichlet diffusion on \(\overline{\nabla}\) with parameter \(\theta\) and initial value \(X(0)\), generated by \(\mathcal{G}_{\theta}\), cf. (17), and \(\hat{\eta}\) denotes the ordered particle configuration._
Because the thermodynamic limit is a joint limit as \(N,L\to\infty\), we are able to observe non-trivial dynamics for \(\theta=\infty\). The measure-valued process generated by \(\hat{\mathcal{L}}\) is the natural extension of the process generated by \(\mathcal{L}_{\theta}\) (and thus of the Poisson-Dirichlet diffusion generated by \(\mathcal{G}_{\theta}\)) when \(\theta\to\infty\). A first indication of this relationship can already be observed on the level of stationary distributions. From Proposition 1.3(i), we recall that the stationary distribution w.r.t. \(\mathcal{L}_{\theta}\) is given by the size-biased sample of \(\mathrm{PD}(\theta)\). Consider \(X^{(\theta)}\sim\mathrm{PD}(\theta)\) and sample an index \(I\in\mathbb{N}\) such that
\[I=i\quad\text{with probability }X^{(\theta)}_{i}\,, \tag{23}\]
i.e. we pick the index \(I\) with size-bias. It is well known [10, Theorem 2.7] that \(X^{(\theta)}_{I}\sim\mathrm{Beta}(1,\theta)\). Moreover, \(\mathrm{Exp}(1)\) is the absorbing state of the deterministic dynamics induced by \(\hat{\mathcal{L}}\), cf. Corollary 3.9. Now, the following connection between a Beta and an Exponential distribution holds:
\[\theta\,\mathrm{Beta}(1,\theta)\stackrel{{ D}}{{ \longrightarrow}}\mathrm{Exp}(1)\,,\quad\text{as }\theta\to\infty\,. \tag{24}\]
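Indeed, (24) follows directly from the distribution function of the Beta distribution: for \(B_{\theta}\sim\mathrm{Beta}(1,\theta)\) and fixed \(x\;\geqslant\;0\), \(\mathbb{P}(\theta B_{\theta}>x)=\big{(}1-\tfrac{x}{\theta}\big{)}^{\theta}\to e^{-x}\) as \(\theta\to\infty\).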
Hence, as \(\theta\to\infty\), the rescaled size-biased sample \(\theta\,X^{(\theta)}_{I}\) converges weakly to an \(\mathrm{Exp}(1)\) random variable.
This relationship can also be made sense of on the level of processes, summarised as follows:
We analyse the inclusion process \(\big{(}\mathfrak{L}_{L,N},\Omega_{L,N}\big{)}\) and consider the two cases \(\theta<\infty\) (Theorem 1.1) and \(\theta=\infty\) (Theorem 1.2), with appropriate embeddings of configurations in the space of probability measures. In the case \(\theta<\infty\) the limiting process is equivalent to the Poisson-Dirichlet diffusion generated by \(\mathcal{G}_{\theta}\) (Proposition 1.3). Furthermore, our size-biased approach allows for a meaningful limit when \(\theta=\infty\), identifying a natural extension for models with Poisson-Dirichlet diffusion limit, under appropriate rescaling of time and space. For this matter, we introduce the scaling operator \(S_{\theta}:E\to\mathcal{M}_{1}(\mathbb{R}_{+})\), which linearly scales measures on the unit interval to measures on the interval \([0,\theta]\), i.e.
\[S_{\theta}:\mu(dz)\mapsto\mu\big{(}d\tfrac{z}{\theta}\big{)}\,. \tag{25}\]
**Theorem 1.5**.: _Let \((\mu^{\theta}_{t})_{t\;\geqslant\;0}\) be the process generated by \(\mathcal{L}_{\theta}\) and \((\hat{\mu}_{t})_{t\;\geqslant\;0}\) be the process generated by \(\hat{\mathcal{L}}\). If \(S_{\theta}\mu^{\theta}_{0}\stackrel{{ D}}{{\longrightarrow}}\hat{ \mu}_{0}\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\), then_
\[\big{(}S_{\theta}\mu^{\theta}_{t/\theta}\big{)}_{t\;\geqslant\;0}\stackrel{{ D}}{{\longrightarrow}}(\hat{\mu}_{t})_{t\;\geqslant\;0}\,,\quad\text{ in }C([0,\infty),\mathcal{M}_{1}(\overline{\mathbb{R}}_{+}))\,,\quad\text{ as }\theta\to\infty\,, \tag{26}\]
_where we consider the topology induced by weak convergence on \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\)._
**Remark 1.6**.: _In the above theorem we saw that scaling of space (of order \(\theta\)) is necessary to observe a meaningful limit as \(\theta\to\infty\). Similarly, one could scale the Kingman simplex to \(\theta\overline{\nabla}\). However, in the limit we lack the property of distinguishing between separate scales. Consider for example \(\theta=n^{2}\), \(n\in\mathbb{N}\), and the sequence_
\[p^{(\theta)}=\tfrac{1}{2}\big{(}\underbrace{\tfrac{1}{\sqrt{\theta}},\ldots, \tfrac{1}{\sqrt{\theta}}}_{\sqrt{\theta}\text{ times}},\underbrace{\tfrac{1}{ \theta},\ldots,\tfrac{1}{\theta}}_{\theta\text{ times}},0,\ldots\big{)}\in \nabla\,. \tag{27}\]
_Then \(\theta\,p^{(\theta)}_{i}\to\infty\) for every \(i\in\mathbb{N}\). On the other hand,_
\[S_{\theta}\mu^{(p^{\theta})}=\tfrac{1}{2}\delta_{\frac{1}{2}}+\tfrac{1}{2} \delta_{\frac{1}{2}\sqrt{\theta}}\stackrel{{ D}}{{\longrightarrow}}\tfrac{1}{2} \delta_{\frac{1}{2}}+\tfrac{1}{2}\delta_{\infty}\in\mathcal{M}_{1}(\overline {\mathbb{R}}_{+})\,, \tag{28}\]
_which captures both the amount of diverging mass and information on scales \(\frac{1}{\theta}\). This highlights the fact that considering the space \(E\), instead of \(\overline{\nabla}\), is essential for a detailed analysis of the Poisson-Dirichlet diffusion in the boundary case \(\theta\to\infty\)._
### Comparison to the literature
After its introduction [10], the inclusion process has been subject to study as an interesting model of mass transport on its own [11]. In particular, in the context of condensation in stochastic particle systems it is a model of major interest. In short, a particle system exhibits condensation if a positive fraction of particles concentrates on sites with a vanishing volume fraction. Such sites with diverging occupation
(and their occupying particles) are called the _condensate_, the remaining particles and sites are said to be the _background_ or _bulk_ of the system. In [11], the existence of a non-trivial condensate was proven; we refer to the same reference for an exact definition of the condensation phenomenon. While the dynamics of the bulk for occupation numbers of order one is covered by general results on the propagation of chaos for particle systems [10], the scope of the present article is to determine the dynamics of the condensate. However, we want to stress that our analysis does not rely on previous results on condensation. In fact, the clustering of particles on diverging scales is an implicit consequence of the scaling limits, presented in Theorems 1.1 and 1.2.
For the inclusion process, the condensation phenomenon was first studied in [13], however for spatially inhomogeneous systems on a finite lattice with a diverging number of particles. In homogeneous systems, condensation is a consequence of increasing particle interactions relative to diffusion as \(d\to 0\). The finite lattice case has been subject to further studies in [13, 14, 15]. When considering a thermodynamic limit, i.e. diverging lattice size \(L\) and number of particles \(N\) with finite limiting density \(\rho\), condensation was studied heuristically in [10]. They considered a one-dimensional periodic lattice with totally asymmetric dynamics and vanishing diffusion rate s.t. \(\theta=0\). A modified model with stronger particle interactions, leading to instantaneous condensation even in one spatial dimension, has been considered in [16, 17].
On a rigorous level, the thermodynamic limit of stationary distributions has been treated in [11]. Under the assumption that \(d\to 0\) as \(L\to\infty\), we have the following cases:
* if \(dL\to 0\), then the condensate is given by a single cluster, and if in addition \(dL\log L\to 0\) this cluster is holding all the particles;
* if \(dL\to\theta\in(0,\infty)\), then the condensate concentrates on macroscopic scales and is distributed according to a Poisson-Dirichlet distribution \(\mathrm{PD}(\theta)\);
* on the other hand if \(dL\to\infty\), the condensate is located on mesoscopic scales and the clusters are independent, more precisely, \[d(\tilde{\eta}_{1},\dots,\tilde{\eta}_{n})\stackrel{{ D}}{{ \longrightarrow}}\mathrm{Exp}(\tfrac{1}{\rho})^{\otimes n}\,.\] (29)
Here \(\tilde{\eta}\) denotes a size-biased sample w.r.t. \(\eta\in\Omega_{L,N}\), i.e. \(\tilde{\eta}_{k}=\eta_{\sigma(k)}\), for some random permutation \(\sigma\) chosen iteratively as follows: first
* \(\sigma(1)=x\) with probability \(\tfrac{\eta_{x}}{N}\), \(x\in\{1,\dots,L\}\),
and for any following index \(k=1,\dots,L\)
* \(\sigma(k)=x\) with probability \(\tfrac{\eta_{x}}{N-\sum_{j=1}^{k-1}\eta_{\sigma(j)}}\), \(x\in\{1,\dots,L\}\setminus\{\sigma(1),\dots,\sigma(k-1)\}\).
We refer to [11, Definition 2] for details.
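A direct Python sketch of this size-biased sampling (the function name and the example configuration are ours):

```python
import numpy as np

def size_biased_permutation(eta, rng=None):
    """Reorder the configuration eta by the size-biased permutation sigma described above:
    sites are drawn without replacement, with probability proportional to their occupation."""
    rng = np.random.default_rng(rng)
    eta = np.asarray(eta, dtype=float)
    remaining = list(range(len(eta)))
    order = []
    while remaining:
        weights = eta[remaining]
        if weights.sum() == 0.0:          # only empty sites left; their relative order is immaterial
            order.extend(remaining)
            break
        k = rng.choice(len(remaining), p=weights / weights.sum())
        order.append(remaining.pop(k))
    return eta[order]

print(size_biased_permutation([0, 7, 1, 2, 0], rng=0))
```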
The above result holds for any irreducible and spatially homogeneous dynamics on diverging finite graphs, where the inclusion process has stationary product measures. Because condensation in homogeneous systems only occurs if \(d\to 0\), [11] almost entirely characterised the stationary condensates for the inclusion process, and in particular, the results hold for the complete graph dynamics we consider here. It was proven in [10] that perturbations of the transition rates (in the case \(\theta\ \leqslant\ 1\)) still give rise to a Poisson-Dirichlet distributed condensate.
Both Theorem 1.1 and Theorem 1.2 complete the picture of condensation behaviour of the inclusion process on complete graphs outside of stationarity. Moreover, our results link the
inclusion process dynamics directly with the Poisson-Dirichlet diffusion, when \(\theta<\infty\), which allows for enhanced understanding of the latter dynamics.
In their work [10], Ethier and Kurtz proved that the closure of \((\mathcal{G}_{\theta},\mathcal{D}_{mon}(\mathcal{G}_{\theta}))\), cf. (17), gives rise to a generator of a diffusion process on \(\overline{\nabla}\), which is the natural scaling limit of a finite-dimensional Wright-Fisher model when sending the number of individuals and types to infinity separately. The restriction to test functions in \(\mathcal{D}_{mon}(\mathcal{G}_{\theta})\) turns out to be convenient, but makes it difficult to understand the precise dynamics of the infinite dimensional process. In [10], also an enlarged domain of test-functions of the form
\[p\mapsto\sum_{i=1}^{\infty}h(p_{i})\,,\quad h\in C^{2}([0,1])\text{ with }h(0)=h^{\prime}(0)=0\,, \tag{30}\]
was considered. However, this does not improve the understanding of the dynamics on an intuitive level, which is particularly due to the fact that 'sums are evaluated on \(\nabla\) and extended to \(\overline{\nabla}\) by continuity'. In this paper, we instead propose to consider functions of the form
\[p\mapsto h(0)+\sum_{i=1}^{\infty}p_{i}(h(p_{i})-h(0))\,,\quad h\in C^{2}([0,1] )\,. \tag{31}\]
For a fixed \(p\in\overline{\nabla}\), the r.h.s. is the expectation w.r.t. the probability measure \(\mu^{(p)}\), recall (4). Moreover, note that the functions \(\varphi_{m}\) are of the form (31) with \(h(p)=p^{m-1}\).
The usual approach in the literature, when constructing the Poisson-Dirichlet diffusion, is to take the large \(L\)-limit of an \(L\)-dimensional diffusion model. Alternatively, discrete models have been considered, but then convergence to the \(L\)-dimensional diffusion model, when \(N\to\infty\), is proven first. See for example [10, 11, 12]. To the authors' best knowledge, the present article is the first to consider a thermodynamic limit, which takes both the size of the system and the number of particles to infinity at the same time while keeping the density approximately constant. This makes sense both from a physical and a population-genetics perspective. In particular, taking a joint limit allows for interesting dynamics in the case \(\theta=\infty\) which could otherwise not be considered, recall Theorem 1.5.
The Poisson-Dirichlet diffusion was treated previously as a measure-valued process in \(\mathcal{M}_{1}([0,1])\) in [11, 10], where it was considered as a Fleming-Viot process with mutation operator
\[A_{FV}h(u)=\theta\int_{0}^{1}[h(v)-h(u)]\,dv\,. \tag{32}\]
Here the elements in \([0,1]\) are interpreted as types, and uniform jumps at rate \(\theta\) in the mutation operator correspond to mutation events. These dynamics can be derived in the thermodynamic limit from the inclusion process on a complete graph with \(dL\to\theta\in(0,\infty)\), using the embedding
\[\eta\in\Omega_{L,N}\mapsto\sum_{x=1}^{L}\frac{\eta_{x}}{N}\delta_{\frac{x}{L}} \in\mathcal{M}([0,1])\,. \tag{33}\]
This describes the spatial distribution of mass on the rescaled lattice and is different from the approach in the present paper, where we ignore spatial information and only keep track of the mass distribution, cf. (4). On the other hand, our approach is more robust and allows for an extended analysis of the model for \(\theta=\infty\). The embedding (33) has been considered in [14] for the inclusion process on a complete graph of fixed size. They study convergence to equilibrium in the long-time limit and with diverging mass \(N\to\infty\). The derivation of a Fleming-Viot process
with mutation (32) in the thermodynamic limit is relatively straightforward if the inclusion process is formulated in terms of particle positions, which is presented briefly in Appendix B.
Applying our approach to other geometries may be possible for dense random graphs along the lines of [1], which have diverging degrees leading to a self-averaging effect similar to the complete graph. In general, spatial models are difficult to treat since the inclusion process after the embedding (4) is not Markovian. Consider for example nearest-neighbour dynamics on a regular lattice, then it is known that the random-walk and the inclusion part of the dynamics have two different time scales, see [1], and more sophisticated methods are necessary to treat this case.
### Outline of the paper
In Section 2 we show that \(\mathcal{L}_{\theta}\) generates a Feller process and prove Theorem 1.1. We make use of explicit approximations of the inclusion process generator and the Trotter-Kurtz approximation theorem. Moreover, we prove the equivalence of the Poisson-Dirichlet diffusion and our scaling limit in Section 2.2. Lastly, we discuss the advantages of considering size-biased dynamics in Section 2.3. In Section 3 we determine the scaling limit when \(\theta=\infty\), following a similar approach as in the case \(\theta<\infty\). We finish the section by proving the convergence \(\frac{1}{\theta}\mathcal{L}_{\theta}\to\hat{\mathcal{L}}\) stated in Theorem 1.5. Lastly, we discuss boundary cases \(\rho\in\{0,\infty\}\), fluctuations and open problems in Section 4.
## 2. Scaling limits in the case \(dL\to\theta<\infty\)
### The measure-valued process
In this section, we will prove that the measure-valued process generated by \(\mathcal{L}_{\theta}\) (7) is a Feller process on the state space \(E\) (3). Furthermore, we deduce weak convergence on path space for the inclusion process configurations embedded in the space of probability measures on the unit interval.
#### 2.1.1. Approximation of infinitesimal dynamics
The key result of this section is the following convergence statement on the level of pre-generators.
**Proposition 2.1**.: _Let \(\rho\in(0,\infty)\) and \(d=d(L)\) such that \(dL\to\theta\in[0,\infty)\). For every \(H\in\mathcal{D}(\mathcal{L}_{\theta})\), cf. (6), we have with \(\mathfrak{L}_{L,N}\) defined in (1)_
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\big{|}\mathfrak{L}_{L,N}H(\mu^{( \cdot)})(\eta)-(\mathcal{L}_{\theta}H)(\mu^{(\eta)})\big{|}=0\,. \tag{34}\]
We split the proof of Proposition 2.1 into two parts. First, we only consider test functions of elementary form \(H(\mu)=\mu(h)\), which corresponds to measuring a single observable \(h\in C^{3}([0,1])\). We then extend the convergence result to arbitrary test functions in the domain, which requires understanding correlations between several observables. As usual, it turns out that only pairwise correlations contribute to leading order.
**Lemma 2.2**.: _Let \(\rho\in(0,\infty)\) and \(d=d(L)\) such that \(dL\to\theta\in[0,\infty)\). Consider \(H\in\mathcal{D}(\mathcal{L}_{\theta})\) of the elementary form \(H(\mu)=\mu(h)\), for some \(h\in C^{3}([0,1])\). Then_
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\big{|}\mathfrak{L}_{L,N}H(\mu^{( \cdot)})(\eta)-\mu^{(\eta)}(A_{\theta}h)\big{|}=0\,, \tag{35}\]
_where \(A_{\theta}\) is the single-particle generator, introduced in (8)._
Proof.: Let \(h\in C^{3}([0,1])\) and define \(H(\mu):=\mu(h)\). For the sake of convenience we introduce the notation \(\tilde{h}(z):=z\,h(z)\). Thus,
\[H(\mu^{(\eta)})=H(\mu_{\#}\eta)=\mu^{(\eta)}(h)=\sum_{x=1}^{L}\frac{\eta_{x}}{N} h(\tfrac{\eta_{x}}{N})=\sum_{x=1}^{L}\tilde{h}(\tfrac{\eta_{x}}{N})\,, \tag{36}\]
which allows us to write
\[\begin{split} H(\mu_{\#}\eta^{x,y})-H(\mu_{\#}\eta)& =\tilde{h}(\tfrac{\eta_{y}+1}{N})-\tilde{h}(\tfrac{\eta_{y}}{N})+ \tilde{h}(\tfrac{\eta_{x}-1}{N})-\tilde{h}(\tfrac{\eta_{x}}{N})\\ &=\frac{1}{N}\tilde{h}^{\prime}(\tfrac{\eta_{y}}{N})+\frac{1}{2} \frac{1}{N^{2}}\tilde{h}^{\prime\prime}(\tfrac{\eta_{y}}{N})-\frac{1}{N} \tilde{h}^{\prime}(\tfrac{\eta_{x}}{N})\\ &\quad+\frac{1}{2}\frac{1}{N^{2}}\tilde{h}^{\prime\prime}(\tfrac {\eta_{x}}{N})+\frac{1}{6}\frac{1}{N^{3}}\tilde{h}^{\prime\prime\prime}(\xi)\,, \end{split} \tag{37}\]
using a second-order Taylor approximation of \(\tilde{h}\), with \(\xi\in[0,1]\). Therefore, we have uniformly over configurations \(\eta\)
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\sum_{\begin{subarray}{c }x,y\in\Lambda\\ x\neq y\end{subarray}}\eta_{x}(d+\eta_{y})\Big{[}\frac{1}{N}\tilde{h}^{\prime} (\tfrac{\eta_{y}}{N})+ \frac{1}{2}\frac{1}{N^{2}}\tilde{h}^{\prime\prime}(\tfrac{\eta_{y}}{N})\] \[-\frac{1}{N}\tilde{h}^{\prime}(\tfrac{\eta_{x}}{N})+\frac{1}{2} \frac{1}{N^{2}}\tilde{h}^{\prime\prime}(\tfrac{\eta_{x}}{N})\Big{]}+o(1)\,.\]
We split the sum into two parts, by analysing terms with coefficients \(d\,\eta_{x}\) and \(\eta_{x}\eta_{y}\) separately. We begin with the latter:
* The contribution of the inclusion rates \(\eta_{x}\eta_{y}\) reduces to \[\sum_{\begin{subarray}{c}x,y\in\Lambda\\ x\neq y\end{subarray}}\frac{\eta_{x}}{N}\frac{\eta_{y}}{N}\tilde{h}^{\prime\prime}(\tfrac{\eta_{x}}{N})=\sum_{x\in\Lambda}\frac{\eta_{x}}{N}\Big{(}1-\frac{\eta_{x}}{N}\Big{)}\tilde{h}^{\prime\prime}(\tfrac{\eta_{x}}{N})\,,\] (38) due to exact cancellation of the first-order terms \(\tilde{h}^{\prime}\).
* On the other hand, contributions of the random-walk dynamics induced by rates \(d\,\eta_{x}\) are given by \[d\sum_{\begin{subarray}{c}x,y\in\Lambda\\ x\neq y\end{subarray}}\frac{\eta_{x}}{N}\left[\tilde{h}^{\prime}(\tfrac{\eta_ {y}}{N})+\frac{1}{2}\frac{1}{N}\tilde{h}^{\prime\prime}(\tfrac{\eta_{y}}{N})- \tilde{h}^{\prime}(\tfrac{\eta_{x}}{N})+\frac{1}{2}\frac{1}{N}\tilde{h}^{ \prime\prime}(\tfrac{\eta_{x}}{N})\right]\] \[=dL\,\sum_{x\in\Lambda}\frac{\eta_{x}}{N}\left[\frac{1}{L}\sum_{y\neq x }\tilde{h}^{\prime}(\tfrac{\eta_{y}}{N})-\frac{L-1}{L}\tilde{h}^{\prime}( \tfrac{\eta_{x}}{N})\right]+o(1)\,,\] (39) because second-order terms \(\tilde{h}^{\prime\prime}\) vanish in the thermodynamic limit due to \[\left|\begin{subarray}{c}d\sum_{\begin{subarray}{c}x,y\in\Lambda\\ x\neq y\end{subarray}}\frac{\eta_{x}}{2N^{2}}\big{(}\tilde{h}^{\prime\prime}( \tfrac{\eta_{x}}{N})+\tilde{h}^{\prime\prime}(\tfrac{\eta_{y}}{N})\big{)}\\ \end{subarray}\right|\;\leqslant\;dL\,\sum_{x\in\Lambda}\frac{\eta_{x}}{N^{2}} \|\tilde{h}^{\prime\prime}\|_{\infty}\;\leqslant\;\frac{dL}{N}\|\tilde{h}^{ \prime\prime}\|_{\infty}\to 0\,.\] (40) Furthermore, we can absorb errors arising from replacing \(\frac{L-1}{L}\tilde{h}^{\prime}\) with \(\tilde{h}^{\prime}\), into \(o(1)\).
Now, combining (38) and (39) yields
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\sum_{x\in\Lambda}\frac{\eta_{x}}{N} \Big{(}1-\frac{\eta_{x}}{N}\Big{)}\tilde{h}^{\prime\prime}(\tfrac{\eta_{x}}{N})+ dL\left[\tilde{h}^{\prime}(0)-\sum_{x\in\Lambda}\frac{\eta_{x}}{N}\tilde{h}^{\prime}( \tfrac{\eta_{x}}{N})\right]+o(1)\,, \tag{41}\]
where we additionally used Lemma A.2 to conclude the uniform approximation \(\frac{1}{L}\sum_{y\in\Lambda\,,y\neq x}\tilde{h}^{\prime}(\tfrac{\eta_{y}}{N}) =\tilde{h}^{\prime}(0)+o(1)\). Rewriting (41) in terms of \(\mu^{(\eta)}\), we have
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\mu^{(\eta)}\big{(}z(1- z)h^{\prime\prime}(z)\] \[\qquad\qquad\qquad+2(1-z)h^{\prime}(z)+dL(h(0)-h(z)-zh^{\prime}(z ))\big{)}+o(1)\,, \tag{42}\]
where we used that \(\tilde{h}^{\prime}(z)=h(z)+zh^{\prime}(z)=Bh(z)\) and \(\tilde{h}^{\prime\prime}(z)=2h^{\prime}(z)+zh^{\prime\prime}(z)=(Bh)^{\prime} (z)\). Lastly, since \(\|Bh\|_{\infty}<\infty\) and \(dL\to\theta\), we indeed have
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\mu^{(\eta)}(A_{\theta}h)+o(1)\,, \tag{43}\]
uniformly over all \(\eta\in\Omega_{L,N}\). This concludes the proof.
**Remark 2.3**.: _Note that the equivalence to the Poisson-Dirichlet diffusion can already be observed in (41) when considering \(h\) to be of the form \(h(z)=z^{m-1}\), \(m\ \geqslant\ 2\). In this case \(H(\mu^{(\eta)})=\mu^{(\eta)}(h)=\sum_{x=1}^{L}\tilde{h}(\tfrac{\eta_{x}}{N})=\varphi_{m}(\tfrac{\eta}{N})\), cf. (18), and_
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)\simeq\mathcal{G}_{\theta}\varphi_{m}( \tfrac{\tilde{\eta}}{N})\,. \tag{44}\]
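As a quick numerical sanity check of Lemma 2.2, the following Python sketch evaluates the finite-size generator \(\mathfrak{L}_{L,N}\) applied to \(H(\mu)=\mu(h)\) with \(h(z)=z^{2}\) on a single configuration, and compares it with the limiting expression \(\mu^{(\eta)}(A_{\theta}h)\); the configuration, the observable and all parameter values are our (purely illustrative) choices.

```python
import numpy as np

rng = np.random.default_rng(1)
L, rho, theta = 200, 2.0, 1.0
N, d = int(rho * L), theta / L

# test configuration with one macroscopic cluster plus a dilute bulk (any eta in Omega_{L,N} works)
bulk = rng.multinomial(N - N // 3, np.ones(L - 1) / (L - 1))
eta = np.concatenate(([N // 3], bulk)).astype(float)

h, dh, d2h = (lambda z: z**2), (lambda z: 2 * z), (lambda z: 2.0 * np.ones_like(z))

def F(cfg):
    """H(mu^(cfg)) = sum_x (cfg_x / N) * h(cfg_x / N), i.e. mu^(cfg)(h)."""
    z = cfg / N
    return np.sum(z * h(z))

# finite-size generator applied to F, cf. (47): sum over ordered pairs x != y
gen, base = 0.0, F(eta)
for x in range(L):
    if eta[x] == 0:
        continue
    for y in range(L):
        if y == x:
            continue
        nu = eta.copy(); nu[x] -= 1.0; nu[y] += 1.0
        gen += eta[x] * (d + eta[y]) * (F(nu) - base)

# limiting expression mu^(eta)(A_theta h), with A_theta as in (8)
z = eta / N
A_h = z * (1 - z) * d2h(z) + (2 - z * (2 + theta)) * dh(z) + theta * (h(0.0) - h(z))
print(gen, np.sum(z * A_h))   # the two values should agree up to finite-size corrections
```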
After having proved the statement of Proposition 2.1 for specific functions, we can now proceed with the proof of the full statement.
Proof of Proposition 2.1.: Let \(H\in\mathcal{D}(\mathcal{L}_{\theta})\). Without loss of generality we may assume \(H\) has the form
\[H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\,,\quad h_{k}\in C^{3}([0,1])\,,\quad 1\ \leqslant\ k\ \leqslant\ n\,, \tag{45}\]
since linear combinations of such functions can be treated by linearity of the operators and the triangle inequality. Thus, considering \(\eta\in\Omega_{L,N}\) and the configuration after one particle jumped from \(x\) to \(y\), we have
\[H\left(\mu_{\#}\eta^{x,y}\right) =\prod_{k=1}^{n}\left(\mu_{\#}\eta^{x,y}\right)(h_{k})\] \[=\prod_{k=1}^{n}\left[\widetilde{h}_{k}(\tfrac{\eta_{y}+1}{N})- \widetilde{h}_{k}(\tfrac{\eta_{y}}{N})+\widetilde{h}_{k}(\tfrac{\eta_{x}-1}{N })-\widetilde{h}_{k}(\tfrac{\eta_{x}}{N})+\mu^{(\eta)}(h_{k})\right]\,.\]
Now, expanding the product yields
\[H\left(\mu_{\#}\eta^{x,y}\right) =H\left(\mu_{\#}\eta\right)+\sum_{k=1}^{n}\left[\widetilde{h}_{k }(\tfrac{\eta_{y}+1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{y}}{N})+\widetilde{h }_{k}(\tfrac{\eta_{x}-1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{x}}{N})\right] \prod_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\mu^{(\eta)}(h_{l})\] \[\qquad+2\sum_{1\ \leqslant\ k<l\ \leqslant\ n}\left[\widetilde{h}_{k}( \tfrac{\eta_{y}+1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{y}}{N})+\widetilde{h}_{k }(\tfrac{\eta_{x}-1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{x}}{N})\right] \tag{46}\] \[\qquad\qquad\times\left[\widetilde{h}_{l}(\tfrac{\eta_{y}+1}{N}) -\widetilde{h}_{l}(\tfrac{\eta_{y}}{N})+\widetilde{h}_{l}(\tfrac{\eta_{x}-1}{ N})-\widetilde{h}_{l}(\tfrac{\eta_{x}}{N})\right]\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\mu^{(\eta)}(h_{j})+r(\eta)\,,\]
with \(r\) denoting the remainder. This expansion allows us to split
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}(d+\eta_{y})\left[H(\mu_{\#}\eta^{x,y})-H(\mu_{ \#}\eta)\right] \tag{47}\]
into three parts:
* First, we make use of Lemma 2.2 which yields \[\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}(d+\eta_{y})\sum_{k=1}^{n}[\widetilde{h}_{k} (\tfrac{\eta_{y}+1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{y}}{N})+\widetilde{h}_{ k}(\tfrac{\eta_{x}-1}{N})-\widetilde{h}_{k}(\tfrac{\eta_{x}}{N})]\prod_{ \begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\mu^{(\eta)}(h_{l})\] \[=\sum_{k=1}^{n}\mathfrak{L}_{L,N}\big{(}\mu^{(\cdot)}(h_{k})\big{)}( \eta)\prod_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\mu^{(\eta)}(h_{l})=\sum_{k=1}^{n}\mu^{(\eta)}(A_{ \theta}h_{k})\prod_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\mu^{(\eta)}(h_{l})+o(1)\,.\]
* Next, we prove that the remainder \(r\) has no contribution. More precisely, for any non-negative sequence \(a_{N}\) satisfying \(N^{2}\,a_{N}\to 0\), i.e. \(a_{N}\) lies in \(o(\tfrac{1}{N^{2}})\), we have \[a_{N}\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}(d+\eta_{y})\;\leqslant\;a_{N}\,(dL+N)N\to 0\,.\] (48) This includes, in particular, the remainder \(r(\eta)\) because each summand lies in \(o(\tfrac{1}{N^{3}})\), recall that each square bracket in (46) vanishes uniformly like \(N^{-1}\), cf. (37).
* Lastly, we derive the interaction part where two observables are affected by the transition of a particle. Again, we perform a Taylor approximation for each of the two square brackets appearing in (46). Due to (48), together with (37), it suffices to consider only products of first-order terms \(\tilde{h}^{\prime}\). Therefore, we are left with \[\frac{2}{N^{2}}\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\sum_{ \begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}(d+\eta_{y})\big{[}\tilde{h}^{\prime}_{k}( \tfrac{\eta_{y}}{N})-\tilde{h}^{\prime}_{k}(\tfrac{\eta_{x}}{N})\big{]}\big{[} \tilde{h}^{\prime}_{l}(\tfrac{\eta_{y}}{N})-\tilde{h}^{\prime}_{l}(\tfrac{ \eta_{x}}{N})\big{]}\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\mu^{(\eta)}(h_{j})+o(1)\,.\] For the same reason we include the random-walk interactions coming from \(d\,\eta_{x}\) in \(o(1)\), and finally arrive at \[2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\frac{\eta_{x}}{N}\frac{\eta_{y}}{N}\big{[}\tilde{h} ^{\prime}_{k}(\tfrac{\eta_{y}}{N})-\tilde{h}^{\prime}_{k}(\tfrac{\eta_{x}}{N}) \big{]}\big{[}\tilde{h}^{\prime}_{l}(\tfrac{\eta_{y}}{N})-\tilde{h}^{\prime}_{ l}(\tfrac{\eta_{x}}{N})\big{]}\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\mu^{(\eta)}(h_{j})+o(1)\] \[=2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\big{(}\mu^{(\eta)}(\tilde{h }^{\prime}_{k}\tilde{h}^{\prime}_{l})-\mu^{(\eta)}(\tilde{h}^{\prime}_{k})\mu^ {(\eta)}(\tilde{h}^{\prime}_{l})\big{)}\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\mu^{(\eta)}(h_{j})+o(1)\,,\] where we expanded the product of square brackets and added the (non-contributing) diagonal \(x=y\), before writing the expression in terms of \(\mu^{(\eta)}\). Also, recall that \(\tilde{h}^{\prime}=Bh\).
Overall, combining the three bullets above, we derive
\[\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\mathcal{L}_{\theta}H(\mu^{(\eta)})+o( 1)\,, \tag{49}\]
uniformly in \(\eta\in\Omega_{L,N}\). This finishes the proof.
#### 2.1.2. Convergence to the measure-valued process
The measure-valued process takes values in the space \(E=\mu^{(\nabla)}\subset\mathcal{M}_{1}([0,1])\), cf. (3). Due to Lemma A.1, \(E\) itself is closed and thus compact w.r.t. the topology induced by weak convergence of probability measures. Because this topology coincides with the subspace topology, the Hausdorff property of \(E\) is inherited from \(\mathcal{M}_{1}([0,1])\).
In this section, we show that the dynamics described by \(\mathcal{L}_{\theta}\) give rise to a Feller process and prove Theorem 1.1, which states that the process arises naturally as the scaling limit of the inclusion process.
**Proposition 2.4**.: _For \(\theta\in[0,\infty)\) the linear operator \((\mathcal{L}_{\theta},\mathcal{D}(\mathcal{L}_{\theta}))\) is closable and its closure generates a Feller process on the state space \(E\subset\mathcal{M}_{1}([0,1])\)._
The proof follows along the lines of [1, Theorem 2.5] where they proved existence of the Poisson-Dirichlet diffusion.
Proof.: Throughout the proof we will make use of the sub-domain
\[\mathcal{D}_{mon}(\mathcal{L}_{\theta}):= \Big{\{}\text{sub-algebra of }C(E)\text{ generated by functions}\] \[\mu\mapsto\mu(h)\text{ with }h(z)=z^{m}\,,m\in\mathbb{N}_{0}\Big{\}} \subset\mathcal{D}(\mathcal{L}_{\theta})\,. \tag{50}\]
First note that, due to the Stone-Weierstrass theorem, \(\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) (and therefore \(\mathcal{D}(\mathcal{L}_{\theta})\)) is dense in \(C(E)\) since it separates points: consider \(\mu,\sigma\in E\) such that \(\mu\neq\sigma\), then \(\mu(z^{m})\neq\sigma(z^{m})\) for some \(m\in\mathbb{N}\) since otherwise all moments, and hence \(\mu\) and \(\sigma\), agree.
Next, dissipativity of \((\mathcal{L}_{\theta},\mathcal{D}(\mathcal{L}_{\theta}))\) follows from \((\mathfrak{L}_{L,N})_{L,N}\), since for any \(H\in\mathcal{D}(\mathcal{L}_{\theta})\) we have
\[\|(\lambda-\mathfrak{L}_{L,N})H(\mu^{(\cdot)})\|_{\Omega_{L,N}, \infty}\;\geqslant\;\lambda\|H(\mu^{(\cdot)})\|_{\Omega_{L,N},\infty}\quad \forall\lambda>0. \tag{51}\]
The left hand side is upper bounded by
\[\|(\lambda-\mathcal{L}_{\theta})H\|_{E,\infty}+\|\mathcal{L}_{ \theta}H(\mu^{(\cdot)})-\mathfrak{L}_{L,N}H(\mu^{(\cdot)})\|_{\Omega_{L,N}, \infty}\,, \tag{52}\]
with the second term vanishing due to Proposition 2.1. On the other hand, using Lemma A.4, we have
\[\sup_{\eta\in\Omega_{L,N}}|H(\mu^{(\eta)})|\to\sup_{p\in\overline{ \nabla}}|H(\mu^{(p)})|=\|H\|_{E,\infty}\,. \tag{53}\]
In the remainder of the proof, we first conclude that \(\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) is a core for \(\mathcal{L}_{\theta}\), using the fact that \(\mathcal{L}_{\theta}\) is triangulisable. The full statement then follows immediately by an extension argument. For that purpose, we define subspaces
\[D_{n}(\mathcal{L}_{\theta}):=\{H\in\mathcal{D}_{mon}(\mathcal{L}_{\theta})\, :\,deg(H)\;\leqslant\;n\}, \tag{54}\]
where \(deg(H)=m_{1}+\cdots+m_{k}\) if \(H\) is of the form \(\mu(z^{m_{1}})\cdots\mu(z^{m_{k}})\), \(m_{j}\in\mathbb{N}\) for \(1\;\leqslant\;j\;\leqslant\;k\). When \(H\) is given by linear combinations of such products, the degree denotes the maximum degree of the products. Note that \(\big{(}D_{n}(\mathcal{L}_{\theta})\big{)}_{n\;\geqslant\;1}\) defines an increasing sequence with limit \(\mathcal{D}_{mon}(\mathcal{L}_{\theta})\). It is only left to show that \(\mathcal{L}_{\theta}\) maps elements of \(D_{n}(\mathcal{L}_{\theta})\) back into itself. This is, however, immediate since both parts of the generator \(\mathcal{L}_{\theta}\) (7) map polynomials of a certain degree back into polynomials of the same degree. Hence, using [1, Proposition 1.3.5], we conclude that \((\mathcal{L}_{\theta},\mathcal{D}_{mon}(\mathcal{L}_{\theta}))\) is indeed closable and gives rise to a strongly continuous contraction semigroup \((T_{t})_{t\;\geqslant\;0}\) on \(C(E)\).
Now, we can easily verify that also \(\mathcal{D}(\mathcal{L}_{\theta})\) is a core, using [1, Proposition 1.3.1], since
\[\mathcal{R}(\lambda-\mathcal{L}_{\theta}|_{\mathcal{D}(\mathcal{L}_{\theta})}) \supset\mathcal{R}(\lambda-\mathcal{L}_{\theta}|_{\mathcal{D}_{mon}( \mathcal{L}_{\theta})})\,, \tag{55}\]
is dense for some \(\lambda>0\). Since generators are maximal dissipative, the closures w.r.t. both cores must agree, cf. [1, Proposition 1.4.1], and hence give rise to the same semigroup \((T_{t})_{t\;\geqslant\;0}\). It remains to show that the semigroup is positive and conservative, in particular that \(E\) is invariant under the dynamics \(\mathcal{L}_{\theta}\). In order to see this, we apply Trotter's theorem, see e.g. [1, Theorem 1.6.1], by which Proposition 2.1 implies
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\big{|}\mathfrak{T}_{t}^{(L,N)}H(\mu ^{(\cdot)})(\eta)-(T_{t}H)(\mu^{(\eta)})\big{|}=0\,,\quad\forall H\in C(E)\,,t \;\geqslant\;0\,, \tag{56}\]
where \(\mathfrak{T}^{(L,N)}\) denotes the semigroup generated by \(\mathfrak{L}_{L,N}\). Now, both positivity and conservation follow from \((\mathfrak{T}^{(L,N)})_{L,N}\). This concludes the proof.
**Remark 2.5**.: _It is natural to ask why one should go through the inconveniences of extending the core from \(\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) to \(\mathcal{D}(\mathcal{L}_{\theta})\). However, we will see in the next subsection that the extended core allows for a better interpretation of the underlying dynamics in the Poisson-Dirichlet diffusion._
The proof of our first main result, namely, convergence of the inclusion process (when embedded in the space of probability measures) to the measure-valued process characterised by \(\mathcal{L}_{\theta}\), is now an immediate consequence of a classical convergence theorem.
Proof of Theorem 1.1.: We apply [1, Theorem 4.2.11] together with (56), which immediately yields the desired convergence result.
Finally, we can use Theorem 1.1 to prove convergence of the inclusion process to the Poisson-Dirichlet diffusion.
Proof of Corollary 1.4.: Every function \(\varphi_{m}\) can be written in terms of an expectation
\[p\mapsto\mu^{(p)}(h_{m})\,,\]
with \(h_{m}(z):=z^{m-1}\). Thus, we have
\[\big{(}\varphi_{m}\big{(}\tfrac{1}{N}\hat{\eta}^{(L,N)}(t)\big{)}\big{)}_{t\; \geqslant\;0}=\big{(}(\mu_{\#}\eta^{(L,N)}(t))(h_{m})\big{)}_{t\;\geqslant\;0 }\stackrel{{ D}}{{\longrightarrow}}(\mu_{t}(h_{m}))_{t\; \geqslant\;0}\,, \tag{57}\]
using Theorem 1.1. This convergence can be extended to arbitrary elements in \(\mathcal{D}_{mon}(\mathcal{G}_{\theta})\), in particular such sequences are tight. Thus, by [1, Theorem 3.9.1], the sequence \(\big{(}\big{(}\tfrac{1}{N}\hat{\eta}^{(L,N)}(t)\big{)}_{t\;\geqslant\;0} \big{)}_{L,N}\) is tight and has subsequential limits. As convergence of finite dimensional marginals follows from (57), we conclude the statement together with Proposition 1.3, which is proved in the next subsection.
### Equivalence of the measure-valued process with PD-diffusion
In this section we prove Proposition 1.3 and investigate the equivalence of the measure-valued process generated by \(\mathcal{L}_{\theta}\) (7) and the Poisson-Dirichlet diffusion on the simplex \(\overline{\nabla}\), generated by \(\mathcal{G}_{\theta}\) (17). We already saw in the proof of Lemma 2.2, cf. (41), the similarity of dynamics of \(\mathcal{L}_{\theta}\) and \(\mathcal{G}_{\theta}\). Indeed, a simple calculation shows that the two can be linked: Using the embedding (4) we get for all \(p\in\overline{\nabla}\) and \(H(\mu)=\mu(h)\), with \(h\in\mathcal{D}(A)\),
\[\mathcal{L}_{\theta}H(\mu^{(p)})= \mu^{(p)}(A_{\theta}h)=\big{(}1-\|p\|_{1}\big{)}A_{\theta}h(0)\] \[+\sum_{i=1}^{\infty}p_{i}\Big{(}p_{i}(1-p_{i})h^{\prime\prime}(p_ {i})+\big{(}2(1-p_{i})-\theta p_{i}\big{)}h^{\prime}(p_{i})+\theta(h(0)-h(p_{ i}))\Big{)}\,.\]
Defining now \(f(p):=\mu^{(p)}(h)\), we have
\[\mathcal{L}_{\theta}H(\mu^{(p)})=2h^{\prime}(0)(1-\|p\|_{1})+\mathcal{G}_{\theta} f(p)\,, \tag{58}\]
where we used that \(\partial_{p_{i}}f(p)=-h(0)+p_{i}h^{\prime}(p_{i})+h(p_{i})\) and \(\partial_{p_{i}p_{j}}f(p)=\delta_{ij}\big{(}2h^{\prime}(p_{i})+p_{i}h^{\prime\prime}(p_{i})\big{)}\), with \(\mathcal{G}_{\theta}\) as defined in (17).
**Remark 2.6**.: _In [1], the authors extended the domain of \(\mathcal{G}_{\theta}\) from \(\mathcal{D}(\mathcal{G}_{\theta})\) to the sub-algebra of \(C(\overline{\nabla})\) generated by functions of the form \(p\mapsto\sum_{i=1}^{\infty}g(p_{i})\), with \(g\in C^{2}([0,1])\) such that \(g(0)=g^{\prime}(0)=0\). This yields a similar expression as (58), cf. [1, 10]. However, the expression again only made sense with the convention that sums are evaluated on \(\nabla\) and extended by continuity, in which case the first summand in (58) disappears._
Proof of Proposition 1.3.: In order to show the equivalence of the two processes, it suffices to restrict ourselves to the domains generated by monomials as defined in (18) and (50). Every function \(H\in\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) can be mapped to \(f_{H}\in\mathcal{D}_{mon}(\mathcal{G}_{\theta})\) (and vice versa). Let \(H\in\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) be of the form \(H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\), where \(h_{k}(z):=z^{m_{k}-1}\); then \(f_{H}=\varphi_{m_{1}}\cdots\varphi_{m_{n}}\), where we recall \(\varphi_{m}(p)=\sum_{i=1}^{\infty}p_{i}^{m}\). Then
\[(\mathcal{L}_{\theta}H)(\mu^{(p)}) =2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}m_{k}m_{l}\big{(}\mu^{(p) }(h_{k}h_{l})-\mu^{(p)}(h_{k})\mu^{(p)}(h_{l})\big{)}\prod_{j\neq k,l}\mu^{(p) }(h_{j}) \tag{59}\] \[\qquad+\sum_{1\;\leqslant\;k\;\leqslant\;n}\mu^{(p)}(A_{\theta}h _{k})\prod_{j\neq k}\mu^{(p)}(h_{j})\,,\]
where we used that \(Bh_{k}=m_{k}\,h_{k}\). Rewriting the r.h.s. in terms of \(\varphi\)'s, we have
\[(\mathcal{L}_{\theta}H)(\mu^{(\cdot)}) =2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}m_{k}m_{l}\big{(}\varphi _{m_{k}+m_{l}-1}-\varphi_{m_{k}}\varphi_{m_{l}}\big{)}\prod_{j\neq k,l}\varphi _{m_{j}}\] \[\qquad+\sum_{1\;\leqslant\;k\;\leqslant\;n}\mathcal{G}_{\theta} \varphi_{m_{k}}\prod_{j\neq k}\varphi_{m_{j}}\,,\]
where we used \(\mu^{(p)}(A_{\theta}h_{k})=\mathcal{G}_{\theta}\varphi_{m_{k}}\) from (58). Thus, \((\mathcal{L}_{\theta}H)(\mu^{(\cdot)})\) agrees with \(\mathcal{G}_{\theta}f_{H}\) on \(\mathcal{D}_{mon}(\mathcal{G}_{\theta})\), cf. [1, 10]. Let \((X(t))_{t\;\geqslant\;0}\) be the Poisson-Dirichlet diffusion, then for every \(H\in\mathcal{D}_{mon}(\mathcal{L}_{\theta})\)
\[H(\mu^{(X(t))})-\int_{0}^{t}\mathcal{L}_{\theta}H(\mu^{(X_{s})})\,ds=f_{H}(X(t ))-\int_{0}^{t}\mathcal{G}_{\theta}f_{H}(X_{s})\,ds \tag{60}\]
defines a martingale in \(t\). Thus, \((\mu^{(X(t))})_{t\;\geqslant\;0}\) solves the martingale problem for \((\mathcal{L}_{\theta},\mathcal{D}_{mon}(\mathcal{L}_{\theta}))\).
Now, it is almost immediate that properties (i) - (iii) in Proposition 1.3 hold for the measure valued process \((\mu_{t})_{t\;\geqslant\;0}\). First, let \(G,H\in\mathcal{D}_{mon}(\mathcal{L}_{\theta})\) and choose corresponding \(f_{G},f_{H}\in\mathcal{D}_{mon}(\mathcal{G}_{\theta})\) as above. We know that \((\mathcal{L}_{\theta}G)(\mu^{(p)})=\mathcal{G}_{\theta}f_{G}(p)\). Writing \(\nu=\mathrm{PD}(\theta)\) for simplicity, we have \(\mathbf{P}=\mu_{\#}\mathrm{PD}(\theta)\) with
\[\mathbf{P}(H\mathcal{L}_{\theta}G)=\nu\big{(}H(\mu^{(p)})(\mathcal{L}_{\theta}G )(\mu^{(p)})\big{)}=\nu(f_{H}\mathcal{G}_{\theta}f_{G})\,. \tag{61}\]
It is known that \(\mathrm{PD}(\theta)\) is the unique invariant distribution w.r.t. \(\mathcal{G}_{\theta}\) [1, Theorem 4.3], and that it is also reversible. Together with the above display, this yields \(\mathbf{P}(H\mathcal{L}_{\theta}G)=\mathbf{P}(G\mathcal{L}_{\theta}H)\).
Continuity of the trajectories in (ii) follows from the diffusion property of \((X(t)(\omega))_{t\;\geqslant\;0}\) and continuity of the map \(\mu^{(\cdot)}\), together with the fact \((\mu^{(X(t))})_{t\;\geqslant\;0}\stackrel{{ d}}{{=}}(\mu_{t})_{t \;\geqslant\;0}\).
Lastly, (iii) is a consequence of
\[\mathbb{P}\left[\mu_{t}(\{0\})=0\quad\forall t>0\right]=\mathbb{P}\left[\mu^{(X(t ))}(\{0\})=0\quad\forall t>0\right]=\mathbb{P}\left[X(t)\in\nabla\quad\forall t>0 \right]=1\,,\]
where we used [1, Theorem 2.6] in the last step.
**Remark 2.7**.: _Naturally, we could have proven convergence of the inclusion process to the Poisson-Dirichlet diffusion directly and then defined the measure-valued dynamics using the embedding via \(\mu^{(\cdot)}\). This would have slightly shortened the exposition in the present section, since it would not have been necessary to verify existence of the limiting dynamics. We refrained from doing so for the sake of a better understanding of the underlying dynamics in the measure-valued process, in particular on the extended domain \(\mathcal{D}(\mathcal{L}_{\theta})\)._
Recall the generator of the single-particle dynamic (8)
\[A_{\theta}h(z):=z(1-z)h^{\prime\prime}(z)+(2(1-z)-\theta z)h^{\prime}(z)+ \theta(h(0)-h(z))\,,\quad h\in C^{2}([0,1])\,,\]
which characterises a Feller process on the unit interval. The process evolves according to a diffusion with an additional renewal mechanism due to jumps to zero.
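A crude Euler sketch of this diffusion with resetting (time step, boundary clipping and all parameters are ad hoc choices of ours, and we use the convention that a generator \(a(z)h^{\prime\prime}+b(z)h^{\prime}\) corresponds to the SDE \(dZ=b\,dt+\sqrt{2a}\,dW\)) can be used to probe the stationary behaviour described in Lemma 2.8 below:

```python
import numpy as np

def simulate_single_particle(theta=2.0, t_max=200.0, dt=1e-3, rng=None):
    """Euler scheme for the process generated by A_theta, cf. (8):
    drift 2 - (2 + theta) z, diffusion coefficient z(1 - z), plus resetting to 0 at rate theta."""
    rng = np.random.default_rng(rng)
    n = int(t_max / dt)
    z, path = 0.5, np.empty(n)
    for i in range(n):
        if rng.random() < theta * dt:                  # resetting event
            z = 0.0
        else:
            z += (2 - (2 + theta) * z) * dt \
                 + np.sqrt(max(2 * z * (1 - z), 0.0) * dt) * rng.standard_normal()
            z = min(max(z, 0.0), 1.0)                  # keep the (crude) scheme inside [0, 1]
        path[i] = z
    return path

theta = 2.0
path = simulate_single_particle(theta, rng=0)
# the long-run empirical mean should be roughly E[Beta(1, theta)] = 1 / (1 + theta)
print(path[len(path) // 2:].mean(), 1.0 / (1.0 + theta))
```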
**Lemma 2.8**.: _The Beta distribution \(\mathrm{Beta}(1,\theta)\) is the unique invariant distribution with respect to \(A_{\theta}\)._
Proof.: For \(\theta=0\), we interpret the degenerate distribution \(\mathrm{Beta}(1,0)\) as the Dirac point mass \(\delta_{1}\). The statement is then clear since \(A_{\theta}h(1)=0\). Hence, in this case with \(\theta=0\) the point mass is even reversible.
Now, let \(\theta>0\) and consider \(H(\mu):=\mu(h)\), then by Proposition 1.3(i)
\[0=\mathbf{P}(\mathcal{L}_{\theta}H)=\int\mu(A_{\theta}h)\,\mathbf{P}(d\mu)= \mathbb{E}\Big{[}\sum_{i=1}^{\infty}X_{i}A_{\theta}h(X_{i})\Big{]}=\mathbb{E} [A_{\theta}h(\tilde{X}_{1})]\,, \tag{62}\]
where \(X\sim\mathrm{PD}(\theta)\). It is well known that the first size-biased marginal \(\tilde{X}_{1}\) is \(\mathrm{Beta}(1,\theta)\)-distributed [1, Theorem 2.7].
Uniqueness of the Beta distribution is due to Harris recurrence of the process, see e.g. [13]. For the case \(\theta>0\), the resetting mechanism guarantees that the process returns to zero infinitely often almost surely. On the other hand for \(\theta=0\), \(A_{\theta}\) agrees with a Jacobi diffusion, see (65) below. The corresponding process runs into the absorbing state \(z=1\) in finite time, independent of the initial condition.
Due to the jumps to zero, one does not expect that \(\mathrm{Beta}(1,\theta)\), \(\theta>0\), is reversible w.r.t. \(A_{\theta}\). Indeed, this can be verified easily by considering the example \(h(x)=x\) and \(g(x)=x^{2}\), in which case
\[\mathrm{Beta}(1,\theta)(gA_{\theta}h)=-\frac{8\theta\Gamma(\theta+1)}{\Gamma(\theta+4)}\neq-\frac{6\theta\Gamma(\theta+1)}{\Gamma(\theta+4)}=\mathrm{Beta}(1,\theta)(hA_{\theta}g)\,,\quad\forall\theta>0\,. \tag{63}\]
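The computation behind (63) can be cross-checked numerically; in the following sketch (our code, with \(\theta=2\) chosen arbitrarily) both pairings are evaluated by quadrature and seen to differ:

```python
from scipy.integrate import quad

theta = 2.0
h, dh, d2h = (lambda z: z),    (lambda z: 1.0), (lambda z: 0.0)
g, dg, d2g = (lambda z: z**2), (lambda z: 2*z), (lambda z: 2.0)

def A(f, df, d2f):
    """A_theta applied to f, cf. (8)."""
    return lambda z: z*(1-z)*d2f(z) + (2 - z*(2+theta))*df(z) + theta*(f(0.0) - f(z))

beta_density = lambda z: theta * (1 - z)**(theta - 1)     # Beta(1, theta) density on [0, 1]

lhs, _ = quad(lambda z: g(z) * A(h, dh, d2h)(z) * beta_density(z), 0.0, 1.0)
rhs, _ = quad(lambda z: h(z) * A(g, dg, d2g)(z) * beta_density(z), 0.0, 1.0)
print(lhs, rhs)   # the two pairings differ, so Beta(1, theta) is not reversible for A_theta
```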
### The advantage of a size-biased evolution
Throughout the previous sections, we have seen two viewpoints of the same dynamics. The classical Poisson-Dirichlet diffusion considers a ranked configuration space. However, this obscures the dynamics on microscopic scales, which results e.g. in defining the r.h.s. of the generator \(\mathcal{G}_{\theta}\) (17) to be 'evaluated on \(\nabla\) and extended to \(\overline{\nabla}\) by continuity'. Alternatively, one can consider unordered dynamics, i.e. observing the evolution from a fixed position, or with a size-biased viewpoint. The Poisson-Dirichlet diffusion concentrates immediately on configurations consisting of macroscopic-sized fragments, which can
only concentrate on a vanishing fraction of the volume. Hence, an unordered state space can only describe the dynamics up to the time when the mass present at the observed positions disappears. The goal of this section is to emphasise that a size-biased viewpoint allows for both a complete description of the macroscopic dynamics and the observation of its interaction with the microscopic scale.
#### 2.3.1. Time evolution on fixed sites
First, we look at arbitrary finite positions and observe the evolution of masses on them. As the inclusion process is spatially homogeneous, we may choose for simplicity \(\eta\mapsto(\eta_{1},\ldots,\eta_{n})\).
We start by only considering the evolution on the first site. Performing similar approximations as in Section 2, we can see for an arbitrary function \(h\in C^{3}([0,1])\)
\[\mathfrak{L}_{L,N}h(\tfrac{(\cdot)_{1}}{N})(\eta)=A_{Jac(\theta)}h(\tfrac{ \eta_{1}}{N})+o(1)\,, \tag{64}\]
where
\[A_{Jac(\theta)}h(z):=z(1-z)h^{\prime\prime}(z)-\theta z\,h^{\prime}(z)\,. \tag{65}\]
In fact, we can see \(A_{Jac(\theta)}\) emerging in (41) when fixing a position \(x\). The operator \(A_{Jac(\theta)}\) is the generator of a Jacobi-diffusion, cf. [13], and describes the evolution of a single chunk of mass located at a given position.
To describe the evolution on the first \(n\) positions we introduce for \(i=0,1,\ldots,n\)
\[\xi_{i}=\xi_{i}(\eta):=\begin{cases}\eta_{i}&\text{if }1\;\leqslant\;i\; \leqslant\;n\\ N-\sum_{j=1}^{n}\xi_{j}&\text{if }i=0\,,\end{cases} \tag{66}\]
where \(\xi_{0}\) is the remaining mass in the system outside sites \(1,\ldots,n\). Thus, the rescaled vector \(\tfrac{1}{N}\xi\) lies in \(\Delta_{n+1}:=\{p\in[0,1]^{n+1}\,:\,\sum_{i=0}^{n}p_{i}=1\}\). Again, by approximation of the generators, one can show that
\[\tfrac{1}{N}(\xi(t))_{t\;\geqslant\;0}\overset{D}{\longrightarrow}\mathrm{WF} _{n+1}(\theta,0,\ldots,0)\,. \tag{67}\]
Here, \(\mathrm{WF}_{n+1}(\theta,0,\ldots,0)\) denotes the Wright-Fisher diffusion on \(\Delta_{n+1}\) which is characterised by the generator
\[A_{WF_{n+1}(\theta,\mathbf{0})}h(z_{0,\ldots,n}) =\sum_{i,j=0}^{n}z_{i}(\delta_{i,j}-z_{j})\partial_{z_{i}z_{j}}^{ 2}h(z_{0,\ldots,n}) \tag{68}\] \[\qquad+\theta\sum_{i=1}^{n}z_{i}(\partial_{z_{0}}h-\partial_{z_{ i}}h)(z_{0,\ldots,n})\,,\]
acting on those \(h\) that have an extension to \(\mathbb{R}^{n+1}\) which is twice continuously differentiable.
However, we can already see for a single observable that the Jacobi-diffusion has an absorbing state at \(z=0\), which it will run into in finite time almost surely [14, Section 7.10]. Similarly, the Wright-Fisher diffusion will be absorbed at \((1,0,\ldots 0)\in\Delta_{n+1}\), after which the process does not capture the dynamics of the infinite-dimensional process anymore as all the mass has moved away from the first \(n\) sites.
For the Poisson-Dirichlet diffusion, the relationship to the Jacobi and Wright-Fisher diffusion has been studied in greater generality in the two-parameter setting [13]. They use a Fleming-Viot construction of the process, cf. (33). Because they start from the Poisson-Dirichlet diffusion on \(\overline{\nabla}\), there is no underlying graph structure and instead of placing mass at a fixed
position, they choose a uniform random variable on \([0,1]\) which determines the position of the point mass.
Moreover, it is interesting to note that the boundary behaviour of the Jacobi diffusion agrees with that of the PD-diffusion. More precisely, in the case of the Jacobi diffusion the state \(1\) can be reached if and only if \(\theta<1\), see e.g. [16, Theorem 4.1] or [14, Section 7.10]. Similarly, the PD-diffusion \((X(t))_{t\;\geqslant\;0}\) hits the finite dimensional sub-simplices \(\nabla\cap\{\sum_{i=1}^{n}p_{i}=1\}\), for any \(n\;\geqslant\;1\), if and only if \(\theta<1\)[15].
#### 2.3.2. Duality and size-biased time evolution
We recall from Dynkin's formula, i.e. taking expectations of the first term in (15),
\[\frac{d}{dt}\mathbb{E}_{\mu_{0}}\big{[}\mu_{t}(h)\big{]}=\mathbb{E}_{\mu_{0}} \big{[}\mu_{t}(A_{\theta}h)\big{]}\,,\quad\forall h\in\mathcal{D}(A)\,,\]
which implies the duality (16)
\[\mathbb{E}_{\mu_{0}}[\mu_{t}(h)]=\mathbf{E}_{\mu_{0}}[h(Z(t))]\,,\quad\forall h \in\mathcal{D}(A)\,, \tag{69}\]
where \((Z(t))_{t\;\geqslant\;0}\) is a process on \([0,1]\) with generator \(A_{\theta}\), cf. (8), and initial distribution \(\mu_{0}\). The identity can be extended to all \(h\in C([0,1])\) by standard arguments, see e.g. [1, Section 6]. In analogy to known duality properties of the microscopic particle system [1, 2], this can, for example, be used to obtain closed evolution equations for moments. Due to size-biasing, \(h(z)=z\) describes the expected second moment of the mass distribution and we get
\[\frac{d}{dt}\mathbb{E}_{\mu_{0}}[\mu_{t}(z)]=\mathbf{E}_{\mu_{0}}[A_{\theta}Z( t)]=\mathbf{E}_{\mu_{0}}\Big{[}2\big{(}1-(1+\theta)Z_{t}\big{)}\Big{]}=2-2(1+ \theta)\mathbb{E}_{\mu_{0}}[\mu_{t}(z)]\.\]
This has an exponential solution which converges to the stationary point \(\frac{1}{1+\theta}\), the expected second moment of the GEM(\(\theta\)) distribution. Dualities of this form were previously considered in [13] and [10], see also [1, Section 6] for a summary. In Proposition 3.4 we will see that for \(dL\to\infty\) dualities of the form (69) extend directly to nonlinear test functions \(H(\mu)\), due to the absence of an interaction part in the generator \(\hat{\mathcal{L}}\), cf. (14). In the present case one can still construct higher dimensional dual processes, but their rates depend explicitly on an a priori chosen test function and there is no generic simple choice since the interaction part in \(\mathcal{L}_{\theta}\) (7) depends on \(Bh\) rather than \(h\).
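For concreteness, the moment ODE above can be integrated explicitly (a direct computation, recorded here only for convenience): writing \(m(t):=\mathbb{E}_{\mu_{0}}[\mu_{t}(z)]\),

\[m(t)=\frac{1}{1+\theta}+\Big{(}m(0)-\frac{1}{1+\theta}\Big{)}e^{-2(1+\theta)t}\,,\]

so the expected second moment of the mass distribution relaxes to its stationary value \(\frac{1}{1+\theta}\) at exponential rate \(2(1+\theta)\).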
We stress once more the difference in point of view: whereas in previous works, see [1] and references therein, the dual particles encode the position of clusters on the underlying lattice, in our size-biased approach the state of dual particles characterises the fragmentation of mass in a given configuration/partition. This allows for observing the dynamics of macroscopic cluster size distributions, while tracking only a finite number of dual particles.
The duality in (69) is also interesting from a computational point of view, as it allows one to continuously track the expected behaviour of the infinite-dimensional process using only a finite-dimensional diffusion, without running into any absorbing states, as is the case when observing a fixed set of lattice sites. A simple example is the second moment of cluster sizes in the Poisson-Dirichlet diffusion at time \(t\), which is given by \(\mathbf{E}_{\mu_{0}}[Z(t)]\) as mentioned above.
## 3. The diffusion limit in the case \(dL\to\infty\)
The case of \(dL\to\infty\) may be considered as an interpretation of the Poisson-Dirichlet diffusion with infinite mutation rate \(\theta=\infty\). Clearly, this corresponds to an infinite drift towards zero in
the single particle operator \(A_{\theta}\), cf. (8). Thus, in order to see non-trivial dynamics, we have to rescale time appropriately. Recalling (42), we see that
\[\frac{1}{dL}\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta)=\mu^{(\eta)}\Big{(}(h(0)-h( z)-zh^{\prime}(z))\Big{)}+o(1)\,, \tag{70}\]
where \(H(\mu)=\mu(h)\), \(h\in C^{3}([0,1])\). The time-change also eradicates the interaction term in the corresponding limiting measure-valued process. We are left with a process that pushes mass (deterministically) from the interval \((0,1]\) onto zero. Hence, mass will not accumulate on the macroscopic scale, instead we need to consider an appropriate mesoscopic scale to see the actual dynamics of the fast mixing mechanism.
In [10] it was proven that, at stationarity, mass accumulates on the mesoscopic scale of order \(d^{-1}\), when \(\rho\in(0,\infty)\), cf. (29). Thus the embedding of particle configurations into \(\mathcal{M}_{1}(\mathbb{R}_{+})\) via (10) with \(\hat{\mu}^{(\eta)}=\sum_{x=1}^{L}\frac{\eta_{x}}{N}\delta_{\frac{dL}{N}}\eta_ {x}\) is an appropriate a-priori choice.2 In order to take particle configurations with mass lying on larger scales than \(N/(dL)\) into account, we will consider probability measures \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) on the one-point compactification, instead of restricting ourselves to the positive real line. We equip \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) with the topology induced by weak convergence, thus, \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) is compact.
Footnote 2: For the case \(\rho\in(0,\infty)\), also the choice \(\delta_{d\eta_{x}}\) is appropriate and leads to a \(\rho\) dependent limit. However, for the boundary cases \(\rho\in\{0,\infty\}\) the given choice turns out to be the correct one.
### Deriving the diffusion limit
Once more we rely on the Trotter-Kurtz approximation to conclude the scaling limit in Theorem 1.2. We follow the same steps as in Section 2, carried out below for completeness.
**Proposition 3.1**.: _Let \(H\in\mathcal{D}(\hat{\mathcal{L}})\), then_
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\left|\frac{1}{dL}\mathfrak{L}_{L, N}H(\hat{\mu}^{(\cdot)})(\eta)-\hat{\mathcal{L}}H(\hat{\mu}^{(\eta)})\right|=0\,. \tag{71}\]
Below, we will show that the interaction term of the limiting measure-valued process indeed vanishes. First, we only consider test functions of the form \(\mu\mapsto\mu(h)\).
**Lemma 3.2**.: _Let \(\rho\in(0,\infty)\) and \(H(\mu)=\mu(h)\), with \(h\in\mathcal{D}(\hat{A})\) (12). Then_
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\left|\frac{1}{dL}\mathfrak{L}_{L, N}H(\hat{\mu}^{(\cdot)})(\eta)-\hat{\mu}^{(\eta)}(\hat{A}h)\right|=0\,, \tag{72}\]
_where \(\hat{A}\) is the single-particle generator defined in (11)._
Proof.: Without loss of generality, we assume that \(h\in C^{3}_{c}(\mathbb{R}_{+})\). For simplicity of notation we will write \(p_{x}=dL\frac{\eta_{x}}{N}\). Following the same steps as in the proof of Lemma 2.2, we have
\[\frac{1}{dL}\mathfrak{L}_{L,N}H(\hat{\mu}^{(\cdot)})(\eta) =\frac{1}{2N^{2}}\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}\eta_{y}\big{(}\widetilde{h}^{\prime\prime}( p_{x})+\widetilde{h}^{\prime\prime}(p_{y})\big{)} \tag{73}\] \[\qquad+\frac{1}{L\,N}\sum_{\begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}\big{(}\widetilde{h}^{\prime}(p_{y})- \widetilde{h}^{\prime}(p_{x})\big{)}+o(1)\,,\]
where we gained an additional factor \((dL)^{-1}\) by rewriting \(\frac{\eta_{x}}{N}h(d\eta_{x})=\frac{1}{dL}\tilde{h}(p_{x})\). Here we used again the fact that second-order terms in the second sum have a vanishing contribution and
first-order terms in the first sum cancel exactly, cf. proof of Lemma 2.2. Display (73) can be written as
\[\frac{1}{dL}\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta) =\sum_{x\in\Lambda}\frac{\eta_{x}}{N}\Big{(}1-\frac{p_{x}}{dL} \Big{)}\tilde{h}^{\prime\prime}(p_{x}) \tag{74}\] \[\qquad+\left[\frac{1}{L}\sum_{y\in\Lambda}\tilde{h}^{\prime}(p_{ y})-\sum_{x\in\Lambda}\frac{\eta_{x}}{N}\tilde{h}^{\prime}(p_{x})\right]+o(1)\,.\]
Using again Lemma A.2, we have \(\frac{1}{L}\sum_{y\in\Lambda}\tilde{h}^{\prime}(p_{y})=\tilde{h}^{\prime}(0)+ o(1)\). Hence,
\[\frac{1}{dL}\mathfrak{L}_{L,N}H(\mu^{(\cdot)})(\eta) =\hat{\mu}^{(\eta)}\left(\tilde{h}^{\prime\prime}(p)+[\tilde{h}^{ \prime}(0)-\tilde{h}^{\prime}(p)]\right)+o(1)=\hat{\mu}^{(\eta)}\big{(}\hat{A} h\big{)}+o(1)\,, \tag{75}\]
where we additionally used the fact that \(\tilde{h}^{\prime\prime}\) is bounded and of compact support, thus, \(\frac{p}{dL}\tilde{h}^{\prime\prime}(p)\) vanishes in the thermodynamic limit because \(dL\to\infty\).
**Remark 3.3**.: _Considering the state space \(\mathcal{M}_{\,\leqslant\,1}(\mathbb{R}_{+})\) instead of \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\), (74) suggests that \(\hat{A}\) should act via_
\[H\mapsto\big{(}\mu\mapsto\mu(\hat{A}h)+(1-\mu(1))h(0)\big{)}\,, \tag{76}\]
_on functions \(H:\mathcal{M}_{\,\leqslant\,1}(\mathbb{R}_{+})\mapsto\mathbb{R}\) of the form \(H(\mu)=\mu(h)\). The extra term takes into account the transfer of mass from larger scales, i.e. above \(N/(dL)\), which is pushed onto microscopic scales, cf. Corollary 3.7. However, \(\mu\mapsto\mu(1)\) is not a continuous function on \(\mathcal{M}_{\,\leqslant\,1}(\mathbb{R}_{+})\). Instead the mass transport from larger scales is implicit in the generator \(\hat{\mathcal{L}}\), as we will see below._
Proof of Proposition 3.1.: It suffices to consider functions \(H\in\mathcal{D}(\hat{\mathcal{L}})\) of the form \(H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\), with \(h_{k}\in C_{c}^{3}(\mathbb{R})\). The interaction term of the operator \(\hat{\mathcal{L}}\) is again given by the second-order term of the following expansion
\[H(\hat{\mu}_{\#}\eta^{x,y}) =H(\hat{\mu}^{(\eta)})\] \[\qquad+\frac{1}{dL}\sum_{k=1}^{n}[\widetilde{h}_{k}(p_{y}+\tfrac {dL}{N})-\widetilde{h}_{k}(p_{y})+\widetilde{h}_{k}(p_{x}-\tfrac{dL}{N})- \widetilde{h}_{k}(p_{x})]\prod_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\hat{\mu}^{(\eta)}(h_{l})\] \[\qquad+\frac{1}{(dL)^{2}}\sum_{1\,\leqslant\,k<l\,\leqslant\,n}[ \widetilde{h}_{k}(p_{y}+\tfrac{dL}{N})-\widetilde{h}_{k}(p_{y})+\widetilde{h}_ {k}(p_{x}-\tfrac{dL}{N})-\widetilde{h}_{k}(p_{x})] \tag{77}\] \[\qquad\qquad\times[\widetilde{h}_{l}(p_{y}+\tfrac{dL}{N})- \widetilde{h}_{l}(p_{y})+\widetilde{h}_{l}(p_{x}-\tfrac{dL}{N})-\widetilde{h}_ {l}(p_{x})]\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\hat{\mu}^{(\eta)}(h_{j})\] \[\qquad+\frac{1}{(dL)^{3}}r(\eta)\,.\]
In the first order term each summand can be treated individually using Lemma 3.2, it only remains to check that both second-order term and remainder have no contribution.
Using a first-order Taylor expansion yields the following bound
\[\left|\widetilde{h}(p_{y}+\tfrac{dL}{N})-\widetilde{h}(p_{y})+ \widetilde{h}(p_{x}-\tfrac{dL}{N})-\widetilde{h}(p_{x})\right|\] \[\qquad=\tfrac{dL}{N}\left|\tilde{h}^{\prime}(p_{y})-\tilde{h}^{ \prime}(p_{x})+\frac{1}{2}\tfrac{dL}{N}(\tilde{h}^{\prime\prime}(\xi_{y})+ \tilde{h}^{\prime\prime}(\xi_{x}))\right|\] \[\qquad\leqslant\ 2\tfrac{dL}{N}(\|\tilde{h}^{\prime}\|_{\infty}+\| \tilde{h}^{\prime\prime}\|_{\infty}),\]
where \(\xi_{x},\xi_{y}\in[0,dN]\) are the intermediate points of the associated remainder terms. Hence, the second-order term in (77), after applying \(\tfrac{1}{dL}\mathfrak{L}_{L,N}\), is upper bounded (up to a constant) by
\[\frac{1}{dL}\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\frac{1}{N^{2}}\sum_{ \begin{subarray}{c}x,y=1\\ x\neq y\end{subarray}}^{L}\eta_{x}(d+\eta_{y})(\|h^{\prime}_{k}\|_{\infty}+\|h ^{\prime\prime}_{k}\|_{\infty})(\|h^{\prime}_{l}\|_{\infty}+\|h^{\prime\prime }_{l}\|_{\infty})\prod_{\begin{subarray}{c}j=1\\ j\neq k,l\end{subarray}}^{n}\|h_{j}\|_{\infty}\,,\]
which vanishes as \(dL\to\infty\). For the same reason, higher order terms in the expansion (77) have no contribution either.
The closure of \((\hat{A},\mathcal{D}(\hat{A}))\) generates a Feller semigroup on \(\overline{\mathbb{R}}_{+}\), thus, the closure of \((\hat{\mathcal{L}},\mathcal{D}(\hat{\mathcal{L}}))\) generates a Fleming-Viot process on the compact space \(\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\), with trajectories in \(C([0,\infty),\mathcal{M}_{1}(\overline{\mathbb{R}}_{+}))\), in the absence of interaction [1, Theorem 2.3]. We now have everything at hand to prove our second main result, Theorem 1.2.
Proof of Theorem 1.2.: Once more, we apply [1, Theorem 4.2.11] together with Proposition 3.1, which immediately concludes the desired convergence (13) in \(D([0,\infty),\mathcal{M}_{1}(\bar{\mathbb{R}}_{+}))\). Note that any fixed initial condition \(\mu\in\mathcal{M}_{1}(\bar{\mathbb{R}}_{+})\) can be approximated by particle configurations using the embedding \(\hat{\mu}^{(\cdot)}\), cf. Lemma A.5. This completes the proof.
### Duality and the hydrodynamic limit
The absence of interaction in \(\hat{\mathcal{L}}\) leads to a deterministic evolution of \((\hat{\mu}_{t}(h))_{t\;\geqslant\;0}\), \(h\in\mathcal{D}(\hat{A})\). This is a consequence of Dynkin's formula as derived in (15), since the process solves the ODE \(d\hat{\mu}_{t}(h)=\hat{\mu}_{t}(\hat{A}h)\,dt\). Hence, the evolution of \((\hat{\mu}_{t}(h))_{t\;\geqslant\;0}\) can be described by a single particle evolving according to the process generated by \(\hat{A}\), averaged over its initial condition \(\hat{\mu}_{0}\). See also the duality mentioned in (16) which we extend in the next result. Unlike in the case of \(dL\to\theta<\infty\), we can fully characterise the semigroup of \(\hat{\mathcal{L}}\) by considering only the evolution w.r.t. the single particle generator \(\hat{A}\).
**Proposition 3.4**.: _Let \(g\in C(\overline{\mathbb{R}}_{+}^{n})\) and define \(G(\mu):=\mu^{\otimes n}(g)\), then for any \(\hat{\mu}_{0}\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\)_
\[G\left(\hat{\mu}_{t}\right)=\mathbf{E}_{\hat{\mu}_{0}^{\otimes n}}\left[g( \hat{Z}(t))\right]\,,\quad\forall t\;\geqslant\;0\,, \tag{78}\]
_where \((\hat{\mu}_{t})_{t\geqslant 0}\) evolves w.r.t. \(\hat{\mathcal{L}}\) and \((\hat{Z}(t))_{t\;\geqslant\;0}\) is the process consisting of \(n\) independent copies generated by the single-particle generator \(\hat{A}\), cf. (11). In particular, for \(n=1\) we have \(\hat{\mu}_{t}=Law(\hat{Z}(t))\) whenever the initial conditions agree in the sense that \(\hat{Z}(0)\sim\hat{\mu}_{0}\)._
Proof.: Following precisely the same steps as in [1, Section 6] using the resolvent operator, one can conclude the duality
\[\mathbb{E}_{\hat{\mu}_{0}}\left[G\left(\hat{\mu}_{t}\right)\right]=\mathbf{E}_ {\hat{\mu}_{0}^{\otimes n}}\left[g(\hat{Z}(t))\right]\,,\quad\forall t\; \geqslant\;0\,, \tag{79}\]
with \(g\in C(\overline{\mathbb{R}}_{+}^{n})\) and \(G(\mu):=\mu^{\otimes n}(g)\). This is essentially a direct consequence of the absence of an interaction term in the generator \(\hat{\mathcal{L}}\), cf. (14), implying for \(H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\), \(h_{i}\in\mathcal{D}(\hat{A})\),
\[\hat{\mathcal{L}}H(\mu)=\sum_{1\;\leqslant\;k\;\leqslant\;n}\mu(\hat{A}h_{k}) \prod_{\begin{subarray}{c}l=1\\ l\neq k\end{subarray}}^{n}\mu(h_{l})\,.\]
Now, let us consider the case \(n=1\) for which the identity reads
\[\hat{\mu}_{t}(h)=\mathbb{E}_{\hat{\mu}_{0}}\left[\hat{\mu}_{t}(h)\right]= \mathbf{E}_{\hat{\mu}_{0}}\left[h(\hat{Z}(t))\right]\,,\quad\forall t\; \geqslant\;0\,,\;h\in C(\overline{\mathbb{R}}_{+})\,, \tag{80}\]
where we additionally used the fact that \(\hat{\mu}_{t}(h)\) is deterministic, cf. (15). In particular, (80) implies that \(\hat{\mu}_{t}=Law(\hat{Z}(t))\) and the measure-valued evolution \((\hat{\mu}_{t})_{t\geqslant 0}\) is indeed deterministic. Hence, the expected value on the left-hand side of (79) has no effect and can be dropped.
The duality result in Proposition 3.4, and equivalently Theorem 1.2, can be interpreted in the sense of a hydrodynamic limit.
**Proposition 3.5** (Hydrodynamic limit).: _Consider the process \((\hat{\mu}_{t})_{t\;\geqslant\;0}\) generated by \(\hat{\mathcal{L}}\) with initial data \(\hat{\mu}_{0}\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\). Then for every \(t>0\), \(\hat{\mu}_{t}\) has a Lebesgue-density \(f(t,\cdot)\) on \(\mathbb{R}_{+}\). The evolution of the density \((f(t,\cdot))_{t>0}\) solves_
\[\begin{cases}\partial_{t}f(t,z)=z\,\partial_{z}^{2}f(t,z)+z\,\partial_{z}f(t,z )\\ \lim_{z\to 0}f(t,z)=1\end{cases} \tag{81}\]
_with \(\lim_{t\to 0}\int_{0}^{\infty}h(z)f(t,z)\,dz=\hat{\mu}_{0}(h)\) for every \(h\in C_{c}(\mathbb{R}_{+})\)._
**Remark 3.6**.: _The diffusion part of the generator \(\hat{A}\), given by_
\[\hat{A}_{D}h(z):=zh^{\prime\prime}(z)+(2-z)h^{\prime}(z)\,, \tag{82}\]
_generates the so called Cox-Ingersoll-Ross model [11], which is a well studied diffusion process in mathematical finance and population genetics._
Proof.: Consider first the case of an initial condition that has no atom at infinity, i.e. \(\|f_{0}\|_{L^{1}(\mathbb{R}_{+})}=1\), and let \(z_{0}\in[0,\infty)\). The Cox-Ingersoll-Ross model generated by \(\hat{A}_{D}\) is known to have a density \(g(t,\cdot|z_{0})\) for any positive time and initial data [11, Display below (3.2)]. In fact, it is explicitly known, and for \(z_{0}=0\) it evaluates to
\[g(t,z|0)=\frac{z}{(2\ell_{t})^{2}}e^{-z(2\ell_{t})^{-1}}\quad\text{with}\quad \ell_{t}:=\tfrac{1}{2}(1-e^{-t})\,. \tag{83}\]
Furthermore, for any \(t>0\) we have \(g_{t}(\cdot|z_{0})\big{|}_{(0,\infty)}\in C^{\infty}((0,\infty))\)[11, Proposition 3.2]. The resetting mechanism is given by a Poisson jump process, thus, [1, Theorem 1] guarantees that also the process \((\hat{Z}(t))_{t\;\geqslant\;0}\) generated by \(\hat{A}\) has a density which is given by
\[f(t,z|z_{0})=e^{-t}g(t,z|z_{0})+\int_{0}^{t}e^{-s}g(s,z|0)\,ds\,. \tag{84}\]
We note that \(f(t,\cdot|z_{0})\) inherits the regularity properties of \(g(t,\cdot|z_{0})\) on \((0,\infty)\). This follows from the change of variable \(r=z/(2\ell_{s})\) with \(\frac{dr}{ds}=-\frac{z}{(2\ell_{s})^{2}}e^{-s}\), which yields for every \(z>0\)
\[f(t,z|z_{0})=e^{-t}g(t,z|z_{0})+\int_{\frac{z}{2\ell_{t}}}^{\infty}e^{-r}\,dr=e ^{-t}g(t,z|z_{0})+e^{-z(2\ell_{t})^{-1}}\,. \tag{85}\]
Thus, \(f(t,\cdot|z_{0})\in C^{\infty}((0,\infty))\).
Now, it is only left to verify that \((f(t,\cdot))_{t\;\geqslant\;0}\) indeed solves the given PDE. Using integration by parts, we see that for any \(h\in C^{2}_{c}(\mathbb{R}_{+})\), we have
\[\mu_{t}(\hat{A}h)=\int_{0}^{\infty}f(t,z)\,\hat{A}h(z)\,dz=\int_{0}^{\infty} \hat{A}^{*}f(t,z)\,h(z)\,dz \tag{86}\]
with the adjoint action defined as
\[\hat{A}^{*}f(z):=zf^{\prime\prime}(z)+zf^{\prime}(z)+\delta_{0}(z)\big{(}1-f(z) \big{)}\,. \tag{87}\]
Hence, the density \(f=\big{(}f(t,\cdot)\big{|}_{(0,\infty)}\big{)}_{t\;\geqslant\;0}\) in (85) solves the PDE
\[\partial_{t}f=\hat{A}^{*}f\,,\quad f(0,\cdot)=\delta_{z_{0}}\,. \tag{88}\]
It is easy to see from (85) that \(\lim_{z\to 0}f(t,z)=1\) for any \(t>0\) since \(g(t,0|z_{0})=0\) for all \(z_{0}\;\geqslant\;0\), thus, the boundary term in (87) vanishes and we are left with the PDE in the statement.
Consider now the case of \(f_{0}=\delta_{\infty}\). The only way \((\hat{Z}(t))_{t\;\geqslant\;0}\) can escape infinity is via the resetting mechanism. Thus, (85) still applies with \(g(t,z|z_{0})\) replaced by \(\delta_{\infty}\), since \(\mathbb{P}(\hat{Z}(t)=\infty)\) is equal to the probability that the process has not jumped yet. One can check that also \(f(\cdot|\infty)\) solves the given PDE on \((0,\infty)\) with the correct boundary condition. The result for arbitrary initial conditions now follows by integrating the densities w.r.t. \(\mu_{0}\) and applying the Leibniz rule.
Given the explicit form of the density (85), we can read off the evolution of the mass process \((\mu_{t}[\mathbb{R}_{+}])_{t\;\geqslant\;0}\).
**Corollary 3.7**.: _We have_
\[\hat{\mu}_{t}[\mathbb{R}_{+}]=1-(1-\hat{\mu}_{0}[\mathbb{R}_{+}])e^{-t}\,. \tag{89}\]
Proof.: We integrate the PDE from Proposition 3.5 in space, which yields
\[\partial_{t}\|f(t,\cdot)\|_{L^{1}(\mathbb{R}_{+})}=\int_{0}^{\infty}\big{(}z \,\partial_{z}^{2}f(t,z)+z\,\partial_{z}f(t,z)\big{)}\,dz\,. \tag{90}\]
The right-hand side simplifies to the differential equation
\[d\alpha_{t}=(1-\alpha_{t})\,dt\,,\quad\alpha_{0}=\mu_{0}[\mathbb{R}_{+}]\,, \tag{91}\]
using integration by parts; its solution is given by (89).
Furthermore, we summarise invariance and exponential ergodicity of \((\hat{\mu}_{t})_{t\;\geqslant\;0}\) in the following lemma:
**Lemma 3.8**.: _The process \((\hat{Z}(t))_{t\;\geqslant\;0}\) satisfies the following properties:_
1. _The exponential distribution_ \(\operatorname{Exp}(1)\) _is the unique invariant probability measure._
2. _We have_ \[\Big{\|}\text{Law}(\hat{Z}(t))-\operatorname{Exp}(1)\Big{\|}_{TV}\;\leqslant \;e^{-t}\,,\quad\forall t\;\geqslant\;0\,.\] (92)
Proof.: The fact that the exponential distribution is invariant can be proven explicitly using integration by parts, but it also follows directly from [1, Corollary 1]. The exponential ergodicity is a consequence of [1, Theorem 2]. The same result yields that the process is Harris recurrent; thus, the exponential distribution is the unique invariant distribution.
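Lemma 3.8 can also be illustrated numerically, in the spirit of Figure 1. The following minimal sketch (for illustration only; step size and horizon are arbitrary) uses a crude Euler scheme for the Cox-Ingersoll-Ross part \(\hat{A}_{D}\), i.e. the SDE \(d\hat{Z}=(2-\hat{Z})\,dt+\sqrt{2\hat{Z}}\,dW\), combined with resets to zero at rate one, and compares empirical moments with those of \(\operatorname{Exp}(1)\).

```python
import numpy as np

rng = np.random.default_rng(2)
dt, n_steps, n_paths = 1e-3, 50_000, 500
z = np.full(n_paths, 25 / 32)                  # deterministic initial condition (illustrative)
for _ in range(n_steps):
    # Euler step for the Cox-Ingersoll-Ross part of A-hat
    z = z + (2.0 - z) * dt + np.sqrt(np.clip(2.0 * z, 0.0, None) * dt) * rng.standard_normal(n_paths)
    z = np.maximum(z, 0.0)
    z[rng.random(n_paths) < dt] = 0.0          # resetting to zero at rate one
# Exp(1) has mean 1 and second moment 2
print(z.mean(), (z ** 2).mean())
```

Up to discretisation and Monte Carlo error, the printed values should be close to \(1\) and \(2\).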
Lastly, it is an easy consequence that the point mass on the exponential distribution is an absorbing point for the measure-valued process.
**Corollary 3.9**.: \(\hat{\mathbf{P}}=\delta_{\mathrm{Exp}(1)}\) _is reversible w.r.t. the dynamics induced by \(\hat{\mathcal{L}}\)._
Proof.: Let \(H\in\mathcal{D}(\hat{\mathcal{L}})\) be of the form \(H(\mu)=\mu(h_{1})\cdots\mu(h_{n})\). For simplicity we write \(\nu:=\mathrm{Exp}(1)\in\mathcal{M}_{1}(\mathbb{R}_{+})\). Then
\[\hat{\mathcal{L}}H(\nu)=\sum_{j=1}^{n}\nu(\hat{A}h_{j})\prod_{l\neq j}\nu(h_{l })=0\,, \tag{93}\]
as \(\nu\) is invariant w.r.t. \(\hat{A}\), cf. Lemma 3.8. Thus, \(\hat{\mathbf{P}}(F\hat{\mathcal{L}}H)=0=\hat{\mathbf{P}}(H\hat{\mathcal{L}}F)\).
### A natural extension of the Poisson-Dirichlet diffusion
In this section we prove Theorem 1.5 which states that \(\mathcal{L}_{\theta}\) (which is equivalent to the Poisson-Dirichlet diffusion) has a limit as \(\theta\to\infty\) under appropriate rescaling.
Proof of Theorem 1.5.: The statement is once more a conclusion of the Trotter-Kurtz approximation. We state the essential steps for completeness. Let \(H\in\mathcal{D}(\hat{\mathcal{L}})\) be of the form \(\mu\mapsto\mu(h_{1})\cdots\mu(h_{n})\), \(h_{k}\in\mathcal{D}(\hat{A})\), and define \(H_{\theta}\in\mathcal{D}(\mathcal{L}_{\theta})\) by
\[E\ni\mu\mapsto\mu(h_{1}(\theta\,\cdot))\cdots\mu(h_{n}(\theta\,\cdot))\,, \tag{94}\]
where we interpreted the \(h_{k}(\theta\,\cdot)\)'s to be elements of \(C^{3}([0,1])\). We have
\[\hat{A}h_{k}(\theta z)-\frac{1}{\theta}A_{\theta}h_{k}(\theta\,\cdot)(z)=z \big{(}\theta z\,h_{k}^{\prime\prime}(\theta z)+2\,h_{k}^{\prime}(\theta z) \big{)}\,. \tag{95}\]
If \(h_{k}\) is a constant function, the r.h.s. vanishes. On the other hand, if \(h_{k}\in C^{3}_{c}(\mathbb{R}_{+})\), we can write
\[\sup_{\mu\in E}\big{|}\mu\big{(}(\hat{A}h_{k})(\theta\cdot)\big{)} -\tfrac{1}{\theta}\mu\big{(}A_{\theta}h_{k}(\theta\,\cdot)\big{)}\big{|} \leqslant C_{h_{k}}\,\sup_{\mu\in E}\mu\big{(}Z\,\mathds{1}_{ \theta Z\in\mathrm{supp}(h_{k})}\big{)}\] \[= C_{h_{k}}\,\sup_{z\in[0,1]}\{z\,\mathds{1}_{\theta z\in\mathrm{ supp}(h_{k})}\}\,,\]
where \(Z\sim\mu\) and \(C_{h_{k}}\) is a finite constant, depending on \(h_{k}\). The right hand side vanishes as \(\theta\to\infty\). Along the same lines, we can show that the interaction term of \(\frac{1}{\theta}\mathcal{L}_{\theta}\) disappears. Overall we conclude

\[\lim_{\theta\to\infty}\sup_{\mu\in E}\big{|}\tfrac{1}{\theta}\mathcal{L}_{ \theta}H_{\theta}(\mu)-(\hat{\mathcal{L}}H)(S_{\theta}\mu)\big{|}=0\,. \tag{96}\]

Again, the convergence of generators suffices to conclude weak convergence on the process level.

Figure 1. Simulations for both the inclusion process (\(N=L=1024\), \(d=L^{-1/2}=\frac{1}{32}\)) and the jump diffusion generated by \(\hat{A}\) agree in accordance with Proposition 3.4. The black graph shows the density profile of the embedded inclusion process (10) (1000 samples), whereas the grey histogram represents the density of the jump diffusion (10000 samples). Both profiles converge rapidly to the unit exponential density (green line), cf. Lemma 3.8. We considered an initial condition \(\hat{\mu}_{0}=\delta_{z}\), \(z\simeq\frac{25}{32}\).
## 4. Discussion and outlook
We conclude this paper with a discussion of boundary cases in the setting considered and outline future directions as well as work in progress.
Throughout the paper we assumed that \(\rho=\lim_{N,L\to\infty}N/L\in(0,\infty)\). However, the derived scaling limits do not depend on the actual value of \(\rho\), as we study the distribution of mass after renormalising by \(N\). As long as \(N,L\to\infty\), our results extend to the boundary cases \(\rho\in\{0,\infty\}\), up to a regime around \(\rho=0\) in the case \(dL\to\infty\). In this regime we see an interesting transition of the clustering behaviour, cf. Lemma 4.1 below.
First consider \(\theta<\infty\) and \(d\to 0\), for which both cases \(\rho\in\{0,\infty\}\) are covered by our proof. For \(\rho=0\), i.e. \(N\ll L\), this is intuitively clear, as an increasing number of empty sites does not affect the dynamics since the total diffusivity per particle is \(dL\to\theta\). On the other hand, if \(\rho=\infty\) it may seem surprising that the number of sites \(L\) does not play a role (as long as it diverges). Here, the core of the argument lies in Lemma A.4, which states that for any thermodynamic limit, we can approximate configurations in \(\overline{\nabla}\) (equivalently, measures in \(E\)) by a sequence of particle configurations. Indeed, having a closer look at the proof of Lemma A.4, we see it is only necessary that a macroscopic excess mass of order \(\sim N\) can be distributed uniformly over the sites such that it is not visible under the macroscopic rescaling \(\frac{1}{N}\). This is always the case, as we can put \(\sim N/L\) particles on each site (a number which might itself diverge); under the macroscopic rescaling, however, we have \(\sim\frac{1}{N}\frac{N}{L}\to 0\).
Our approach can further be adapted, with minimal changes in Lemma A.2, to cover the situation of fixed \(L\) and \(d=d_{N}\to 0\) as \(N\to\infty\). This has been considered in [1, 1] to study the metastable dynamics of a single condensate on a large time scale. In our time scale the system is described by a Wright-Fisher diffusion (cf. (68)) with a single cluster site as absorbing state, describing convergence to a typical stationary configuration.
Now assume \(\theta=\infty\). In the case \(\rho\in(0,\infty)\), the results in the present paper, and also [13], yield that a chunk chosen in a size-biased way (at equilibrium) is approximately exponentially distributed with mean \(\simeq\frac{N}{dL}\). In fact, looking at the proof of Theorem 1.2, the result remains true as long as \(dL/N\to 0\). This trivially holds when \(\rho=\infty\), in which case cluster sizes live on the scale of order
\[\frac{N}{dL}\gg\frac{1}{d}\,. \tag{97}\]
On the other hand, if \(N/L\to 0\) we have no control over \(dN\) (in contrast to \(N/L\to\infty\), which implies \(dN\to\infty\) since \(dL\to\infty\)). Therefore, for \(\rho=0\) the convergence in Theorem 1.2 remains true only if \(dL/N\to 0\), i.e. \(d\ll N/L\). If, on the other hand, \(dL/N\to\gamma\in(0,\infty]\), then we do not expect any clustering of particles on diverging scales. This is indeed the case; in fact, we see a finer structure emerging in the limit on scales of order one. Note that the following is independent of the underlying graph structure and holds more generally for irreducible and spatially homogeneous dynamics, cf. [13].
**Lemma 4.1**.: _Assume that \(N/L\to 0\), as \(N,L\to\infty\), \(d\to 0\) and \(dL\to\infty\) such that \(N/(dL)\to\gamma\in[0,\infty)\). Then_
\[\lim_{N/L\to 0}\pi_{L,N}[\tilde{\eta}_{1}\in\cdot]=\operatorname{ Geom}\left(\frac{1}{1+\gamma}\right)\,. \tag{98}\]
_Here \(\pi_{L,N}\) denotes the unique invariant distribution w.r.t. \(\mathfrak{L}_{L,N}\)._
Hence, for \(\rho=0\) and \(dL\to\infty\), there is a critical scaling \(N\sim dL\) below which the equilibrium measure does not exhibit clustering of particles on diverging scales.
Before proving the above lemma, we require some notation and representations, which can be found in more detail in [1]. Recall that \(\pi_{L,N}\) denotes the unique invariant distribution w.r.t. \(\mathfrak{L}_{L,N}\) supported on \(\Omega_{L,N}\), which is given explicitly by
\[\pi_{L,N}[d\eta]=\frac{1}{Z_{L,N}}\prod_{x=1}^{L}w_{L}(\eta_{x}) \,d\eta\,, \tag{99}\]
where \(d\eta\) denotes the counting measure, \(Z_{L,N}\) the appropriate partition function, and \(w_{L}\) the weights of the form
\[w_{L}(n)=\frac{\Gamma(n+d)}{n!\Gamma(d)}\,, \tag{100}\]
arising from the choice of transition rates in \(\mathfrak{L}_{L,N}\). Moreover, the partition function can be explicitly written in terms of
\[Z_{L,N}=\frac{\Gamma(N+dL)}{N!\Gamma(dL)}\,. \tag{101}\]
Proof of Lemma 4.1.: Recall from [1, 1] that
\[\pi_{L,N}[\tilde{\eta}_{1}=n]=\frac{L}{N}n\,w_{L}(n)\frac{Z_{L-1,N -n}}{Z_{L,N}}\,, \tag{102}\]
which is equal to zero if \(n=0\). Thus, without loss of generality let \(n>0\). We replace the terms in the previous display with the corresponding expressions in (100) and (101), which yields
\[\pi_{L,N}[\tilde{\eta}_{1}=n]\simeq\frac{dL}{N}\frac{\Gamma(n+d) \,d}{(n-1)!\Gamma(d)}\,\frac{\Gamma(N-n+dL)}{\Gamma(N+dL)}N^{n}\,. \tag{103}\]
Figure 2. Graphical summary of the clustering of particles at equilibrium for the inclusion process when \(dL\to\infty\). The distributions displayed describe the first size-biased marginal \(\tilde{\eta}_{1}\) on the appropriate scale. Note particularly the transition from diverging scales to scales of order 1, when moving from the regime \(\rho\gg d\) into \(\rho\sim d\).
We analyse the remaining terms individually and conclude
\[\pi_{L,N}[\tilde{\eta}_{1}=n]\simeq\frac{dL}{N}\bigg{(}\frac{\frac{N}{dL}}{1+ \frac{N}{dL}}\bigg{)}^{n}\to\frac{\gamma^{n-1}}{(1+\gamma)^{n}}\,,\quad\text{ if }\gamma\in[0,\infty)\,. \tag{104}\]
This finishes the proof.
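For the record, the limit in (104) is indeed the probability mass function of \(\operatorname{Geom}\left(\frac{1}{1+\gamma}\right)\): with success probability \(p=\frac{1}{1+\gamma}\),

\[\mathbb{P}[X=n]=p\,(1-p)^{n-1}=\frac{1}{1+\gamma}\Big{(}\frac{\gamma}{1+\gamma}\Big{)}^{n-1}=\frac{\gamma^{n-1}}{(1+\gamma)^{n}}\,,\quad n\;\geqslant\;1\,.\]

For \(\gamma=0\) this degenerates to the point mass at \(n=1\).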
In summary, our approach not only fully determines the clustering of particles on diverging scales in the inclusion process, but also describes the corresponding dynamics of the limiting Markov process in case of the complete graph.
A natural next step, in view of the hydrodynamic limit Proposition 3.5, when \(dL\to\infty\), is to study fluctuations around the equilibrium. A second moment calculation w.r.t. the stationary measure yields
\[\pi_{L,N}\left(\big{(}\hat{\mu}^{(\eta)}(h)-\operatorname{Exp}(1)(h)\big{)}^{ 2}\right)\simeq\frac{1}{dL}\pi_{L,N}\big{(}\tilde{\eta}_{1}\tfrac{dL}{N}h( \tilde{\eta}_{1}\tfrac{dL}{N})^{2}\big{)}\to 0\,, \tag{105}\]
for any \(h\in C_{b}(\mathbb{R}_{+})\). Hence, in order to see a non-trivial limit, we should investigate fluctuations of order \(\sqrt{dL}\) by studying the limiting behaviour of
\[\sqrt{dL}\left((\hat{\mu}_{\#}\eta^{(L,N)}(\tfrac{t}{dL}))(h)-\operatorname{ Exp}(1)(h)\right)\,,\quad t\;\geqslant\;0\,, \tag{106}\]
where \((\eta^{(L,N)}(t))_{t\;\geqslant\;0}\) denotes the inclusion process generated by \(\mathfrak{L}_{L,N}\). Note that we slowed down time of the process, as indicated in Theorem 1.2. Due to decoupling of the size-biased marginals, the fluctuations are expected to be Gaussian.
Our approach should be robust to perturbations of the transition rates, as we do not require the explicit form of the partition function of the canonical distribution, cf. (101). However, the compact form of the limit dynamics, and in particular the duality (16), are not expected to extend to more general models. Throughout the paper, we have focused on the one-parameter family of Poisson-Dirichlet diffusions. There exists a two-parameter extension of the process, which was introduced in [20]. This process has gained a lot of attention over the past years, see [14, 15, 16] to name just a few. It would be interesting to investigate the size-biased approach in this setting. To the best of the authors' knowledge, the two-parameter process has only been studied when fixing finitely many locations/sites and observing the evolution of mass on them, see for example [21] and also the discussion in Section 2.3.1.
Furthermore, it would be interesting to investigate diffusion limits of the generalised version of the inclusion process with non-trivial bulk, studied in [10]. Numerical simulations and heuristic arguments suggest that the macroscopic phase evolves under the dynamics described in Theorem 1.1. At the same time one can observe a transfer of mass between the bulk and the condensate whose evolution is described by a system of ODEs, similar to Corollary 3.7.
## Appendix A Embeddings and approximations of particle configurations
First, we show that size-biased probability measures \(E\) are isomorphic to the Kingman simplex.
**Lemma A.1**.: _The map \(\mu^{(\cdot)}:\overline{\nabla}\to E\), cf. (4), is an isomorphism._
Proof.: First note that surjectivity is trivial due to the definition of \(E\). Now, consider \(p,q\in\overline{\nabla}\) such that \(p\neq q\). Then there exists an index \(i\in\mathbb{N}\) such that \(p_{i}\neq q_{i}\) and \(p_{j}=q_{j}\) for all \(j<i\); without loss of generality, assume \(p_{i}>q_{i}\). Then
\[\mu^{(p)}([p_{i},1])\;\geqslant\;\sum_{j=1}^{i}p_{j}>\sum_{j=1}^{i-1}q_{j}=\mu^{( q)}([p_{i},1])\,. \tag{107}\]
Thus, \(\mu^{(p)}\neq\mu^{(q)}\), which proves injectivity.
In order to show that the map \(\mu^{(\cdot)}\) is continuous, consider a sequence of partitions \((p^{(n)})_{n\in\mathbb{N}}\) converging to \(p\) in \(\overline{\nabla}\). Then for every \(h\in C([0,1])\) (uniformly in \(n\))
\[\Big{|}\sum_{i=M}^{\infty}p_{i}^{(n)}(h(p_{i}^{(n)})-h(0))\Big{|}\;\leqslant \;\sup_{0\;\leqslant\;z\;\leqslant\;\frac{1}{M}}|h(z)-h(0)|\to 0\,,\quad\text{as }M\to\infty\,, \tag{108}\]
where we used the fact that \(p_{i}\;\leqslant\;\frac{1}{i}\) for any \(i\in\mathbb{N}\). This implies in particular \(\mu^{(p^{(n)})}\stackrel{{ D}}{{\longrightarrow}}\mu^{(p)}\), recall that \(\mu^{(p)}(h)=h(0)+\sum_{i=1}^{\infty}p_{i}(h(p_{i})-h(0))\).
Continuity of the inverse is now immediate: let \((\mu_{n})_{n\in\mathbb{N}}\) be a sequence in \(E\) weakly converging to \(\mu\in\mathcal{M}_{1}([0,1])\). Then we can identify each \(\mu_{n}\) with a unique \(p^{(n)}\) satisfying \(\mu^{(p^{(n)})}=\mu_{n}\). Due to compactness of \(\overline{\nabla}\), it suffices to consider convergent subsequences, say \((p^{(n_{j})})_{j\in\mathbb{N}}\) with limit \(p\). Thus, by assumption and continuity of \(\mu^{(\cdot)}\)
\[\mu_{\#}p^{(n_{j})}\stackrel{{ D}}{{\longrightarrow}}\mu^{(p)}= \mu\,, \tag{109}\]
which in particular implies that \(\mu\in E\). Hence, each accumulation point must agree with \((\mu^{(\cdot)})^{-1}(\mu)=p\).
**Lemma A.2**.: _Let \(h\in C(\mathbb{R}_{+})\) and \(\rho\in[0,\infty)\). Then for any \(\zeta_{L}\to 0\)_
\[\lim_{N/L\to\rho}\sup_{\eta\in\Omega_{L,N}}\Big{|}\frac{1}{L}\sum_{x=1}^{L}h( \zeta_{L}\eta_{x})-h(0)\Big{|}=0\,. \tag{110}\]
Proof.: Let \(\varepsilon>0\) and \(\eta\in\Omega_{L,N}\), then
\[\Big{|}\frac{1}{L}\sum_{x=1}^{L}h(\zeta_{L}\eta_{x})-h(0)\Big{|} \;\leqslant\;\Big{|}\frac{1}{L}\sum_{x=1}^{L}\big{(}h(\zeta_{L} \eta_{x})-h(0)\big{)}\mathds{1}_{\zeta_{L}\eta_{x}>\varepsilon}\Big{|}\] \[\qquad+\Big{|}\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_{\zeta_{L}\eta _{x}\;\leqslant\;\varepsilon}\big{(}h(\zeta_{L}\eta_{x})-h(0)\big{)}\Big{|}\] \[\;\leqslant\;2\|h\|_{\infty}\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_ {\zeta_{L}\eta_{x}>\varepsilon}+\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_{\zeta_{L }\eta_{x}\;\leqslant\;\varepsilon}\big{|}h(\zeta_{L}\eta_{x})-h(0)\big{|}\] \[\;\leqslant\;2\|h\|_{\infty}\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_ {\zeta_{L}\eta_{x}>\varepsilon}+\sup_{0\;\leqslant\;v\;\leqslant\;\varepsilon }\big{|}h(v)-h(0)\big{|}\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_{\zeta_{L}\eta_{x} \;\leqslant\;\varepsilon}\,.\]
The first term on the r.h.s. vanishes because the number of sites satisfying \(\zeta_{L}\eta_{x}>\varepsilon\) is upper bounded by \(\zeta_{L}\,N\,\varepsilon^{-1}\) (otherwise the total mass exceeds \(N\)). Thus,
\[\frac{1}{L}\sum_{x=1}^{L}\mathds{1}_{\eta_{x}>\varepsilon\zeta_{L}^{-1}}\; \leqslant\;\frac{\zeta_{L}\,N}{\varepsilon L}\,. \tag{111}\]
The second term, on the other hand, is upper bounded by \(\sup_{0\;\leqslant\;v\;\leqslant\;\varepsilon }\big{|}h(v)-h(0)\big{|}\), which vanishes in the small \(\varepsilon\)-limit. Note that both upper bounds are uniform in \(\Omega_{L,N}\). Now, taking first the thermodynamic limit \(N/L\to\rho\) and then \(\varepsilon\to 0\) finishes the proof.
**Remark A.3**.: _Note that Lemma A.2 remains true if \(\rho=\infty\) with a choice \(\zeta_{L}\) satisfying \(\zeta_{L}L/N\to 0\), cf. (111). In particular, the case \(\zeta_{L}=dL/N\) is covered._
Indeed, we can show that, independently of the thermodynamic limit taken, any element in \(\overline{\nabla}\) can be approximated by particle configurations:
**Lemma A.4**.: _Let \(N/L\to\rho\in[0,\infty]\), then for any \(p\in\overline{\nabla}\) there exist \(\eta^{(L,N)}\in\Omega_{L,N}\) such that_
\[\frac{1}{N}\hat{\eta}^{(L,N)}=\frac{1}{N}\eta^{(L,N)}\to p\,,\quad\text{in } \overline{\nabla}\,. \tag{112}\]
Proof.: Consider \(p\in\overline{\nabla}\) with \(\|p\|_{1}=1-\gamma\); we then define \(\bar{\eta}^{(L,N)}_{i}:=\lfloor p_{i}\,N\rfloor\) for \(i\in\{1,\ldots,L\}\). Hence, there are
\[M_{L,N}(p):=N-\sum_{x=1}^{L}\bar{\eta}^{(L,N)}_{x}=\gamma\,N+\sum_{i=1}^{L}(p_ {i}N-\lfloor p_{i}\,N\rfloor)\;\leqslant\;\gamma\,N+L \tag{113}\]
particles to spare. Thus, defining \(\eta^{(L,N)}\in\Omega_{L,N}\) via
\[\eta^{(L,N)}_{x}:=\bar{\eta}^{(L,N)}_{x}+\Big{\lfloor}\frac{M_{L,N}(p)}{L} \Big{\rfloor}+\mathds{1}_{x\;\leqslant\;(M_{L,N}(p)\mod L)}\,,\quad x\in\{1, \ldots,L\}\,, \tag{114}\]
yields the desired approximation, since \(M_{L,N}(p)/(N\,L)\to 0\), as \(N,L\to\infty\).
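The construction (114) is explicit enough to be implemented directly. The following minimal sketch (with illustrative values of \(p\), \(L\) and \(N\), and using only finitely many entries of \(p\)) produces a configuration in \(\Omega_{L,N}\) whose rescaled ordered entries approximate \(p\).

```python
import numpy as np

def approx_configuration(p, L, N):
    """Sketch of the construction (114): approximate a (truncated, non-increasing)
    element p of the Kingman simplex by a particle configuration with N particles on L sites."""
    p = np.asarray(p, dtype=float)[:L]
    eta = np.zeros(L, dtype=int)
    eta[: len(p)] = np.floor(p * N).astype(int)   # bulk allocation, cf. (113)
    M = N - eta.sum()                             # particles to spare
    eta += M // L                                 # spread the excess uniformly ...
    eta[: M % L] += 1                             # ... up to a bounded correction
    return eta                                    # sums to N by construction

eta = approx_configuration([0.5, 0.3], L=1000, N=10_000)
print(np.sort(eta)[::-1][:3] / 10_000)            # approximately (0.5, 0.3, small)
```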
Similarly, the embedding via \(\hat{\mu}\), cf. (10), allows to approximate any probability measure on \(\overline{\mathbb{R}}_{+}\) by particle configurations.
**Lemma A.5**.: _Let \(\rho\in[0,\infty]\) and \(d=d(L)\) such that_
\[\frac{N}{L}\to\rho\,,\quad dL\to\infty\quad\text{and}\quad\frac{dL}{N}\to 0\,.\]
_Then for any \(\mu\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\) there exist \(\eta^{(L,N)}\in\Omega_{L,N}\) such that_
\[\hat{\mu}_{\#}\eta^{(L,N)}\stackrel{{ D}}{{\longrightarrow}}\mu\,. \tag{115}\]
Proof.: We will see that it suffices to approximate discrete measures of the form
\[\alpha_{0}\delta_{0}+\alpha_{\infty}\delta_{\infty}+\sum_{i=1}^{m}\alpha_{i} \delta_{p_{i}}\in\mathcal{M}_{1}(\overline{\mathbb{R}}_{+})\,, \tag{116}\]
with \(p_{i}\in(0,\infty)\), \(1\;\leqslant\;i\;\leqslant\;m\). Let \(\nu\) be such a probability measure.
We explicitly construct configurations in \(\Omega_{L,N}\) that converge to \(\nu\) under the map
\[\hat{\mu}^{(\eta)}=\hat{\mu}^{(\eta)}_{L,N}=\sum_{x=1}^{L}\frac{\eta_{x}}{N} \delta_{dL\frac{\eta_{x}}{N}}\,,\]
cf. (10), when considering the thermodynamic limit \(N/L\to\rho\in[0,\infty]\).
First, we consider the point masses lying in \((0,\infty)\). For convenience, let us introduce
\[k_{i}:=\left\lfloor\frac{N}{dL}p_{i}\right\rfloor\quad\text{and}\quad\#_{i}: =\left\lfloor\frac{\alpha_{i}N}{k_{i}}\right\rfloor\,.\]
Note that \(k_{i}\to\infty\), as \(N/L\to\rho\), by assumption.
Now, let \(\eta^{\prime}\) be the vector given by gluing together vectors \((k_{i},\dots,k_{i})\in\mathbb{N}^{\#_{i}}\), \(1\;\leqslant\;i\;\leqslant\;m\). Note that
\[\frac{1}{L}\sum_{i=1}^{m}\#_{i}\leqslant\sum_{i=1}^{m}\frac{\alpha_{i}\,N}{ \frac{1}{2}k_{i}\,L}\leqslant 2d\sum_{i=1}^{m}\frac{\alpha_{i}}{p_{i}}\,,\]
and the r.h.s. vanishes because the sum is finite, recall \(p_{i}>0\). Hence, without loss of generality, we assume that the resulting vector \(\eta^{\prime}\) lies in \(\mathbb{N}^{L}\), as we can append zeros to the constructed vector until \(\eta^{\prime}\) has length \(L\). In particular, the relative number of empty sites converges to one.
It only remains to distribute the remaining particles to create the point masses at zero and infinity. Thus far, only \(\#_{\Sigma}=\sum_{i=1}^{m}\#_{i}\) sites are occupied in \(\eta^{\prime}\). We start by adding \(k_{\infty}:=\lfloor\alpha_{\infty}N\rfloor\) particles onto the \((\#_{\Sigma}+1)\)-th position of \(\eta^{\prime}\) (which is empty). This corresponds to the point mass at infinity.
Now, the number of allocated particles to \(\eta^{\prime}\) is upper bounded by
\[k_{\infty}+\sum_{i=1}^{m}k_{i}\,\#_{i}\leqslant\alpha_{\infty}N+\sum_{i=1}^{ m}\alpha_{i}N=(1-\alpha_{0})N\,.\]
We distribute the remaining \(k_{0}:=N-k_{\infty}-\sum_{i=1}^{m}k_{i}\,\#_{i}\) particles as uniformly as possible among all empty sites of \(\eta^{\prime}\) (there are \(\#_{0}:=L-\#_{\Sigma}-1\) of them). This yields the configuration
\[\eta_{x}:=\begin{cases}\eta^{\prime}_{x}&\text{ if }1\;\leqslant\;x\; \leqslant\;\#_{\Sigma}+1\\ \left\lfloor\frac{k_{0}}{\#_{0}}\right\rfloor+\mathds{1}_{x\in\{\#_{\Sigma}+2,\dots,\#_{\Sigma}+1+(k_{0}\mod\#_{0})\}}&\text{ otherwise.}\end{cases}\]
Because the number of non-empty sites in \(\eta^{\prime}\) was relatively vanishing, the particles distributed on previously empty sites will correspond to the point mass at zero. Note that \(\eta\in\Omega_{L,N}\) by construction.
Indeed, the constructed particle configuration \(\eta\) approximates the discrete measure arbitrarily well in the thermodynamic limit, as for every \(f\in C_{b}(\bar{\mathbb{R}}_{+})\) we have
\[\hat{\mu}^{(\eta)}(f)=\sum_{x=\#_{\Sigma}+2}^{L}\frac{\eta_{x}}{N}f\left( \frac{dL}{N}\eta_{x}\right)+\frac{k_{\infty}}{N}f\left(\frac{dL}{N}k_{\infty} \right)+\sum_{i=1}^{m}\frac{k_{i}}{N}\#_{i}\,f\left(\frac{dL}{N}k_{i}\right)\,, \tag{117}\]
which converges to
\[\lim_{N/L\to\rho}\hat{\mu}^{(\eta)}(f)=\alpha_{0}f(0)+\alpha_{\infty}f(\infty )+\sum_{i=1}^{m}\alpha_{i}f(p_{i})=\nu(f)\,.\]
Now, for every \(\mu\in\mathcal{M}_{1}(\bar{\mathbb{R}}_{+})\) there exists a sequence \((\nu_{n})_{n\in\mathbb{N}}\) of measures of the form (116) such that \(\nu_{n}\stackrel{{ D}}{{\longrightarrow}}\mu\). Moreover, each \(\nu_{n}\) can be approximated by a sequence \((\eta_{n}^{(L,N)})_{L,N}\) following the above approach. Hence, we can construct a sequence of configurations with the desired property using a diagonal argument. For the sake of clarity we write out the details w.r.t. the thermodynamic limit explicitly: we consider \(N_{j},L_{j}\to\infty\) such that \(\lim_{j\to\infty}N_{j}/L_{j}=\rho>0\). Then for every \(j\) we choose \(n_{j}>n_{j-1}\) such that
\[d(\nu_{n_{j}},\hat{\mu}_{\#}\eta_{n_{j}}^{(L_{j},N_{j})})\;\leqslant\;2^{-j} \tag{118}\]
where \(d(\cdot,\cdot)\) denotes an appropriate metric, e.g. the Levy-Prokhorov metric. Thus,
\[d(\mu,\hat{\mu}_{\#}\eta_{n_{j}}^{(L_{j},N_{j})})\;\leqslant\;d(\mu,\nu_{n_{j} })+2^{-j}\to 0\,,\quad\text{as }j\to\infty\,, \tag{119}\]
which completes the proof.
## Appendix B Convergence to a Fleming-Viot process
In this appendix we briefly outline how to prove convergence of the inclusion process to a Fleming-Viot process with mutation operator \(A_{FV}\) (32), recall
\[A_{FV}h(u)=\theta\int_{0}^{1}[h(v)-h(u)]\,dv\,.\]
The generator of the Fleming-Viot process, when applied to a cylindrical test function of the form \(H(\nu)=\nu(h_{1})\cdots\nu(h_{n})\), is given by
\[\mathcal{L}_{FV}H(\nu) =2\sum_{1\;\leqslant\;k<l\;\leqslant\;n}\big{(}\nu(h_{k}h_{l})- \nu(h_{k})\nu(h_{l})\big{)}\prod_{j\neq k,l}\nu(h_{j}) \tag{120}\] \[\quad+\sum_{1\;\leqslant\;k\;\leqslant\;n}\nu(A_{FV}h_{k})\prod_ {j\neq k}\nu(h_{j})\,.\]
Instead of considering the generator \(\mathfrak{L}_{L,N}\) (1) on \(\Omega_{L,N}\), it is more convenient to define the inclusion process on a state space keeping track of particle positions. More precisely, we define
\[S_{L,N}:=\Lambda_{L}^{N}\,,\quad\text{with}\quad\;\;\Lambda_{L}:=\{1,\dots,L \}\,.\]
Now, the (labelled) inclusion process is described by the infinitesimal generator
\[\mathfrak{G}_{L,N}g(\sigma)=\sum_{i,j=1}^{N}[g(\sigma^{i\to\sigma_{j}})-g( \sigma)]+d\sum_{i=1}^{N}\sum_{x\in\Lambda}[g(\sigma^{i\to x})-g(\sigma)]\,, \tag{121}\]
where
\[\sigma_{j}^{i\to x}:=\begin{cases}x&\text{if }i=j\,,\\ \sigma_{j}&\text{if }i\neq j\,,\end{cases} \tag{122}\]
denotes the updated position after the \(i\)-th particle jumped onto site \(x\). The generator \(\mathfrak{G}_{L,N}\) characterises a Markov process on \(S_{L,N}\) which we denote by \((\sigma^{(L,N)}(t))_{t\geqslant 0}\). We can recover the corresponding unlabelled particle configuration by
\[\iota:S_{L,N}\to\Omega_{L,N}\quad\text{with}\quad\iota(\sigma)_{x}:=\sum_{i=1 }^{N}\mathds{1}_{\sigma_{i}=x}\,,\]
and in particular we have \(\mathfrak{L}_{L,N}f(\iota(\sigma))=\mathfrak{G}_{L,N}f(\iota(\cdot))(\sigma)\).
Again, we interpret particle configurations as probability measures on \([0,1]\). However, now we consider the embedding
\[\nu^{(\cdot)}:\,\sigma\mapsto\frac{1}{N}\sum_{i=1}^{N}\delta_{\frac{\sigma_{ i}}{L}}\in\mathcal{M}_{1}([0,1])\,, \tag{123}\]
where rescaled particle locations are encoded on the 'type space' \([0,1]\). Now the convergence of processes under the embedding (123) follows from approximation of generators in analogy to our main result. Again, we start with test functions of the form
\[\nu^{(\sigma)}(h)=\frac{1}{N}\sum_{i=1}^{N}h\left(\frac{\sigma_{i}}{L}\right) \,,\quad h\in C^{3}([0,1])\,,\]
in which case the action of \(\mathfrak{G}_{L,N}\) reads
\[\mathfrak{G}_{L,N}\nu^{(\sigma)}(h)=\sum_{i,j=1}^{N}\frac{1}{N}[h(\tfrac{\sigma_ {j}}{L})-h(\tfrac{\sigma_{i}}{L})]+d\sum_{i=1}^{N}\sum_{x\in\Lambda}\frac{1}{N} [h(\tfrac{x}{L})-h(\tfrac{\sigma_{i}}{L})]\,.\]
Because the first sum on the r.h.s. vanishes by symmetry, we are only left with
\[\mathfrak{G}_{L,N}\nu^{(\sigma)}(h)=d\sum_{i=1}^{N}\sum_{x\in\Lambda}\frac{1} {N}[h(\tfrac{x}{L})-h(\tfrac{\sigma_{i}}{L})]=dL\,\cdot\,\nu^{(\sigma)}\left( \frac{1}{L}\sum_{x\in\Lambda}h(\tfrac{x}{L})-h\right)\,,\]
which implies the uniform convergence
\[\lim_{N/L\to\rho}\sup_{\sigma\in\Lambda^{N}}\left|\mathfrak{G}_{L,N}(\nu^{( \cdot)}(h))(\sigma)-\nu^{(\sigma)}(A_{FV}h)\right|=0\,.\]
By considering cylindrical test-functions, this convergence can be extended to a core of the Fleming-Viot process with generator \(\mathcal{L}_{FV}\) (120) in full analogy to our main results in Section 2. We leave out further details.
Overall, this yields convergence of the (labelled) inclusion process in the following sense: if \(\nu_{\#}\sigma^{(L,N)}(0)\overset{D}{\longrightarrow}\nu_{0}\) then
\[\left(\nu_{\#}\sigma^{(L,N)}(t)\right)_{t\geqslant 0}\overset{D}{\longrightarrow }(\nu_{t})_{t\geqslant 0}\,,\quad\text{ in }D([0,\infty),\mathcal{M}_{1}([0,1]))\,,\]
where \((\nu_{t})_{t\geqslant 0}\) denotes the Fleming-Viot process generated by (120) with initial condition \(\nu_{0}\).
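For completeness, the labelled dynamics (121) on the complete graph admits a simple Gillespie-type simulation. The following minimal sketch (with illustrative parameter values and a uniform initial condition) performs inclusion moves \(\sigma_{i}\to\sigma_{j}\) at rate one per ordered pair and uniform relocations at rate \(d\) per particle and site, and evaluates the embedding (123) on a test function.

```python
import numpy as np

def labelled_inclusion(L, N, d, T, rng=np.random.default_rng(0)):
    """Gillespie sketch of the labelled inclusion process generated by (121)."""
    sigma = rng.integers(1, L + 1, size=N)       # initial positions (chosen uniformly here)
    rate_incl, rate_diff = N * (N - 1), d * N * L
    total, t = rate_incl + rate_diff, 0.0
    while True:
        t += rng.exponential(1.0 / total)        # exponential waiting time
        if t > T:
            return sigma
        if rng.random() < rate_incl / total:
            i, j = rng.choice(N, size=2, replace=False)
            sigma[i] = sigma[j]                  # particle i joins the site of particle j
        else:
            sigma[rng.integers(N)] = rng.integers(1, L + 1)   # uniform relocation

# Embedding (123): empirical measure of the rescaled positions on the type space [0,1]
L, N = 200, 200
sigma = labelled_inclusion(L, N, d=1.0 / L, T=5.0)
nu_h = np.mean(sigma / L)                         # nu^{(sigma)}(h) for h(u) = u
```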
|
2306.07176 | Unbalanced Optimal Transport meets Sliced-Wasserstein | Optimal transport (OT) has emerged as a powerful framework to compare
probability measures, a fundamental task in many statistical and machine
learning problems. Substantial advances have been made over the last decade in
designing OT variants which are either computationally and statistically more
efficient, or more robust to the measures and datasets to compare. Among them,
sliced OT distances have been extensively used to mitigate optimal transport's
cubic algorithmic complexity and curse of dimensionality. In parallel,
unbalanced OT was designed to allow comparisons of more general positive
measures, while being more robust to outliers. In this paper, we propose to
combine these two concepts, namely slicing and unbalanced OT, to develop a
general framework for efficiently comparing positive measures. We propose two
new loss functions based on the idea of slicing unbalanced OT, and study their
induced topology and statistical properties. We then develop a fast
Frank-Wolfe-type algorithm to compute these loss functions, and show that the
resulting methodology is modular as it encompasses and extends prior related
work. We finally conduct an empirical analysis of our loss functions and
methodology on both synthetic and real datasets, to illustrate their relevance
and applicability. | Thibault Séjourné, Clément Bonet, Kilian Fatras, Kimia Nadjahi, Nicolas Courty | 2023-06-12T15:15:00Z | http://arxiv.org/abs/2306.07176v1 | # Unbalanced Optimal Transport meets Sliced-Wasserstein
###### Abstract
Optimal transport (OT) has emerged as a powerful framework to compare probability measures, a fundamental task in many statistical and machine learning problems. Substantial advances have been made over the last decade in designing OT variants which are either computationally and statistically more efficient, or more robust to the measures/datasets to compare. Among them, sliced OT distances have been extensively used to mitigate optimal transport's cubic algorithmic complexity and curse of dimensionality. In parallel, unbalanced OT was designed to allow comparisons of more general positive measures, while being more robust to outliers. In this paper, we propose to combine these two concepts, namely slicing and unbalanced OT, to develop a general framework for efficiently comparing positive measures. We propose two new loss functions based on the idea of slicing unbalanced OT, and study their induced topology and statistical properties. We then develop a fast Frank-Wolfe-type algorithm to compute these loss functions, and show that the resulting methodology is modular as it encompasses and extends prior related work. We finally conduct an empirical analysis of our loss functions and methodology on both synthetic and real datasets, to illustrate their relevance and applicability.
## 1 Introduction
Positive measures are ubiquitous in various fields, including data sciences and machine learning (ML) where they commonly serve as data representations. A common example is the density fitting task, which arises in generative modeling [1, 2]: the observed samples can be represented as a discrete positive measure \(\alpha\) and the goal is to find a parametric measure \(\beta_{\eta}\) which best fits \(\alpha\). This can be achieved by training a model that minimizes a loss function over \(\eta\), usually defined as a distance between \(\alpha\) and \(\beta_{\eta}\). Therefore, it is important to choose a meaningful discrepancy with desirable
statistical, robustness and computational properties. In particular, some settings require comparing arbitrary positive measures, _i.e._ measures whose total mass can have an arbitrary value, as opposed to probability distributions, whose total mass is equal to 1. In cell biology [3], for example, measures are used to represent and compare gene expressions of cell populations, and the total mass represents the population size.
**(Unbalanced) Optimal Transport.** Optimal transport has been chosen as a loss function in various ML applications. OT defines a distance between two positive measures of the same mass \(\alpha\) and \(\beta\) (_i.e._\(m(\alpha)=m(\beta)\)) by moving the mass of \(\alpha\) toward the mass of \(\beta\) with the least possible effort. The mass-equality constraint can nevertheless be a hindrance, as it imposes a normalization of \(\alpha\) and \(\beta\) to enforce \(m(\alpha)=m(\beta)\), which is potentially spurious and makes the problem less interpretable. In recent years, OT has therefore been extended to settings where measures have different masses, leading to the _unbalanced OT_ (UOT) framework [4, 5, 6]. An appealing outcome of this new OT variant is its robustness to outliers, which is achieved by discarding them before transporting \(\alpha\) to \(\beta\). UOT has been useful for many theoretical and practical applications, e.g. the theory of deep learning [7, 8], biology [3, 9] and domain adaptation [10]. We refer to [11] for an extensive survey of UOT. Computing OT requires solving a linear program whose complexity is cubic in the number \(n\) of samples (\(\mathcal{O}(n^{3}\log n)\)). Besides, accurately estimating OT distances through empirical distributions is challenging as OT suffers from the curse of dimensionality [12]. A common workaround is to rely on OT variants with lower complexities and better statistical properties. Among the most popular, we can list entropic OT [13], minibatch OT [14] and sliced OT [15, 16]. In this paper, we will focus on the latter.
**Slicing (U)OT and related work.** Sliced OT leverages the OT 1D closed-form solution to define a new cost. It averages the OT cost between projections of \((\alpha,\beta)\) on 1D subspaces of \(\mathbb{R}^{d}\). For 1D data, the OT solution can be computed through a sort algorithm, leading to an appealing \(\mathcal{O}(n\log(n))\) complexity [17]. Furthermore, it has been shown to lift useful topological and statistical properties of OT from 1-dimensional to multi-dimensional settings [18, 19, 20]. It therefore helps to mitigate the curse of dimensionality, making SOT-based algorithms theoretically grounded, statistically efficient and efficiently solvable even in large-scale settings. These appealing properties motivated the development of several variants and generalizations, e.g. to different types or distributions of projections [21, 22, 23, 24] and non-Euclidean data [25, 26, 27]. The slicing operation has also been applied to partial OT [28, 29, 30], a particular case of UOT, in order to speed up comparisons of unnormalized measures at large scale. However, while (sliced) partial OT allows to compare measures with different masses, it assumes that each input measure is discrete and supported on points that all share the same mass (typically 1). In contrast, the Gaussian-Hellinger-Kantorovich (GHK) distance [4], another popular formulation of UOT, allows to compare measures with different masses _and_ supported on points with varying masses, and has not been studied jointly with slicing.
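To make the computational appeal concrete, the 1D closed form and the slicing average admit a very short implementation. The following minimal NumPy sketch (for illustration only; it assumes equally weighted samples of the same size, the squared Euclidean cost and uniformly random directions) computes the 1D cost by sorting and averages it over random projections.

```python
import numpy as np

def w2_1d(u, v):
    """Squared 2-Wasserstein distance between two empirical measures on R with the
    same number of equally weighted samples: sort and match order statistics."""
    return np.mean((np.sort(u) - np.sort(v)) ** 2)

def sliced_w2(X, Y, n_proj=100, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of sliced OT: average the 1D cost over random directions."""
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)          # uniform direction on the sphere
        total += w2_1d(X @ theta, Y @ theta)
    return total / n_proj
```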
**Contributions.** In this paper, we present the first general framework combining UOT and slicing. Our main contribution is the introduction of two novel sliced variants of UOT, respectively called _Sliced UOT_ (SUOT) and _Unbalanced Sliced OT_ (USOT). SUOT and USOT both leverage one-dimensional projections and the newly-proposed implementation of UOT in 1D [31], but differ in the penalization used to relax the constraint on the equality of masses: USOT essentially performs a global reweighting of the inputs measures \((\alpha,\beta)\), while SUOT reweights each projection of \((\alpha,\beta)\). Our work builds upon the Frank-Wolfe-type method [32] recently proposed in [31] to efficiently compute GHK between univariate measures, an instance of UOT which has not yet been combined with slicing. We derive the associated theoretical properties, along with the corresponding fast and GPU-friendly algorithms. We demonstrate its versatility and efficiency on challenging experiments, where slicing is considered on a non-Euclidean hyperbolic manifold, as a similarity measure for document classification, or for computing barycenters of geoclimatic data.
**Outline.** In Section 2, we provide background knowledge on UOT and sliced OT (SOT).
In Section 3, we define our two new loss functions (SUOT and USOT) and prove their metric, topological, statistical and duality properties in wide generality. We then detail in Section 4 the numerical implementation of SUOT and USOT based on the Frank-Wolfe algorithm. We investigate their empirical performance on hyperbolic and geophysical data as well as document classification in Section 5.
## 2 Background
**Unbalanced Optimal Transport.** We denote by \(\mathcal{M}_{+}(\mathbb{R}^{d})\) the set of all positive Radon measures on \(\mathbb{R}^{d}\). For any \(\alpha\in\mathcal{M}_{+}(\mathbb{R}^{d})\), \(\mathrm{supp}(\alpha)\) is the support of \(\alpha\) and \(m(\alpha)=\int_{\mathbb{R}^{d}}\mathrm{d}\alpha(x)\) the mass of \(\alpha\). We recall the standard formulation of unbalanced OT [4], which uses _\(\varphi\)-divergences_ for regularization.
**Definition 1**.: _(Unbalanced OT) Let \(\alpha,\beta\in\mathcal{M}_{+}(\mathbb{R}^{d})\). Let \(\varphi:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}\) be an entropy function, i.e. \(\varphi\) is convex, lower semicontinuous, \(\mathrm{dom}(\varphi)\triangleq\{x\in\mathbb{R},\,\varphi(x)<+\infty\}\subset[ 0,+\infty)\) and \(\varphi(1)=0\). Denote \(\varphi_{\infty}^{\prime}\triangleq\lim_{x\to+\infty}\varphi(x)/x\). The \(\varphi\)-divergence between \(\alpha\) and \(\beta\) is defined as,_
\[D_{\varphi}(\alpha|\beta)\triangleq\int_{\mathbb{R}^{d}}\varphi\left(\frac{ \mathrm{d}\alpha}{\mathrm{d}\beta}(x)\right)\mathrm{d}\beta(x)+\varphi_{ \infty}^{\prime}\int_{\mathbb{R}^{d}}\mathrm{d}\alpha^{\perp}(x)\,, \tag{1}\]
_where \(\alpha^{\perp}\) is defined as \(\alpha=(\mathrm{d}\alpha/\mathrm{d}\beta)\beta+\alpha^{\perp}\). Given two entropy functions \((\varphi_{1},\varphi_{2})\) and a cost \(\mathrm{C}_{d}:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}\), the unbalanced OT problem between \(\alpha\) and \(\beta\) reads_
\[\mathrm{UOT}(\alpha,\beta)\triangleq\inf_{\pi\in\mathcal{M}_{+}(\mathbb{R}^{d }\times\mathbb{R}^{d})}\int\mathrm{C}_{d}(x,y)\mathrm{d}\pi(x,y)+\mathrm{D}_{ \varphi_{1}}(\pi_{1}|\alpha)+\mathrm{D}_{\varphi_{2}}(\pi_{2}|\beta)\,, \tag{2}\]
_where \((\pi_{1},\pi_{2})\) denote the marginal distributions of \(\pi\)._
When \(\varphi_{1}=\varphi_{2}\) and \(\varphi_{1}(x)=0\) for \(x=1\), \(\varphi_{1}(x)=+\infty\) otherwise, (2) boils down to the Kantorovich formulation of OT (or _balanced OT_), which we denote by \(\mathrm{OT}(\alpha,\beta)\). Indeed, in that case, \(\mathrm{D}_{\varphi_{1}}(\pi_{1}|\alpha)=\mathrm{D}_{\varphi_{2}}(\pi_{2}| \beta)=0\) if \(\pi_{1}=\alpha\) and \(\pi_{2}=\beta\), \(\mathrm{D}_{\varphi_{1}}(\pi_{1}|\alpha)=\mathrm{D}_{\varphi_{2}}(\pi_{2}| \beta)=+\infty\) otherwise.
Under suitable choices of entropy functions \((\varphi_{1},\varphi_{2})\), \(\mathrm{UOT}(\alpha,\beta)\) makes it possible to compare \(\alpha\) and \(\beta\) even when \(m(\alpha)\neq m(\beta)\) and to discard outliers, which makes it more robust than \(\mathrm{OT}(\alpha,\beta)\). Two common choices are \(\varphi(x)=\rho\,|x-1|\) and \(\varphi(x)=\rho(x\log(x)-x+1)\), where \(\rho>0\) is a _characteristic radius_ w.r.t. \(\mathrm{C}_{d}\). They respectively correspond to \(\mathrm{D}_{\varphi}=\rho\mathrm{TV}\) (total variation distance [33]) and \(\mathrm{D}_{\varphi}=\rho\mathrm{KL}\) (_Kullback-Leibler divergence_).
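For concreteness, the following minimal NumPy sketch evaluates \(\mathrm{D}_{\varphi}\) of Eq. (1) between two discrete measures supported on the same points, for the two entropies above; the function names are ours and serve only as illustration.

```python
import numpy as np

def kl_divergence(a, b, rho=1.0):
    """rho * KL(a | b) for nonnegative weight vectors a, b on a common support.

    Uses phi(x) = rho * (x log x - x + 1); since phi'_inf = +infinity, the result
    is +infinity unless a vanishes wherever b does.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    pos = a > 0
    if np.any(b[pos] == 0):
        return np.inf
    return rho * (np.sum(a[pos] * np.log(a[pos] / b[pos]) - a[pos] + b[pos])
                  + np.sum(b[~pos]))

def tv_divergence(a, b, rho=1.0):
    """rho * TV(a | b): with phi(x) = rho * |x - 1| and phi'_inf = rho,
    Eq. (1) reduces to rho times the L1 distance between the weight vectors."""
    return rho * np.sum(np.abs(np.asarray(a, float) - np.asarray(b, float)))
```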
The UOT problem has been shown to admit an equivalent formulation obtained by deriving the dual of (2) and proving strong duality. Based on Proposition 1, computing \(\mathrm{UOT}(\alpha,\beta)\) consists in optimizing a pair of continuous functions \((f,g)\).
**Proposition 1**.: _[_4_, Corollary 4.12]_ _The UOT problem (2) can equivalently be written as_
\[\mathrm{UOT}(\alpha,\beta)=\sup_{f\oplus g\leq\mathrm{C}_{d}}\int\varphi_{1}^ {\circ}(f(x))\mathrm{d}\alpha(x)+\int\varphi_{2}^{\circ}(g(y))\mathrm{d}\beta(y), \tag{3}\]
_where for \(i\in\{1,2\}\), \(\varphi_{i}^{\circ}(x)\triangleq-\varphi_{i}^{*}(-x)\) with \(\varphi_{i}^{*}(x)\triangleq\sup_{y\geq 0}xy-\varphi_{i}(y)\) the Legendre transform of \(\varphi_{i}\), and \(f\oplus g\leq\mathrm{C}_{d}\) means that for \((x,y)\sim\alpha\otimes\beta\), \(f(x)+g(y)\leq\mathrm{C}_{d}(x,y)\)._
In this paper, we mainly focus on the _GHK setting_, both theoretically and computationally. It corresponds to (2) with \(\mathrm{C}_{d}(x,y)=||x-y||^{2}\), \(\mathrm{D}_{\varphi_{i}}=\rho_{i}\mathrm{KL}\), leading to \(\varphi_{i}^{\circ}(x)=\rho_{i}(1-e^{-x/\rho_{i}})\). \(\mathrm{UOT}(\alpha,\beta)\) is known to be computationally intensive [34], thus motivating the development of methods that can scale to dimensions and sample sizes encountered in ML applications.
**Sliced Optimal Transport.** Among the many workarounds that have been proposed to overcome the OT computational bottleneck [17], Sliced OT [35] has attracted a lot of attention due to its computational benefits and theoretical guarantees. We define it below.
**Definition 2** (Sliced OT).: _Let \(\mathbb{S}^{d-1}\triangleq\{\theta\in\mathbb{R}^{d}\ :\ \|\theta\|=1\}\) be the unit sphere in \(\mathbb{R}^{d}\). For \(\theta\in\mathbb{S}^{d-1}\), denote by \(\theta^{\star}:\mathbb{R}^{d}\to\mathbb{R}\) the linear map such that for \(x\in\mathbb{R}^{d}\), \(\theta^{\star}(x)\triangleq\langle\theta,x\rangle\). Let \(\boldsymbol{\sigma}\) be the uniform probability over \(\mathbb{S}^{d-1}\). Consider \(\alpha,\beta\in\mathcal{M}_{+}(\mathbb{R}^{d})\). The Sliced OT problem is defined as_
\[\mathrm{SOT}(\alpha,\beta)\triangleq\int_{\mathbb{S}^{d-1}}\mathrm{OT}(\theta _{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta)\mathrm{d}\boldsymbol{ \sigma}(\theta)\,, \tag{4}\]
_where for any measurable function \(f\) and \(\xi\in\mathcal{M}_{+}(\mathbb{R}^{d})\), \(f_{\sharp}\xi\) is the push-forward measure of \(\xi\) by \(f\), i.e. for any measurable set \(A\subset\mathbb{R}\), \(f_{\sharp}\xi(A)\triangleq\xi(f^{-1}(A))\), \(f^{-1}(A)\triangleq\{x\in\mathbb{R}^{d}:f(x)\in A\}\)._
Note that \(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta\) are two measures supported on \(\mathbb{R}\), therefore \(\mathrm{OT}(\theta_{\sharp}^{\star}\mu,\theta_{\sharp}^{\star}\nu)\) is defined in terms of a cost function \(\mathrm{C}_{1}:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\). Since OT between univariate measures can be efficiently computed, \(\mathrm{SOT}(\alpha,\beta)\) can provide significant computational advantages over \(\mathrm{OT}(\alpha,\beta)\) in large-scale settings. In practice, if \(\alpha\) and \(\beta\) are discrete measures supported on \(\{x_{i}\}_{i=1}^{n}\) and \(\{y_{i}\}_{i=1}^{n}\) respectively, the standard procedure for approximating \(\mathrm{SOT}(\alpha,\beta)\) consists in (i) sampling \(m\) i.i.d. samples \(\{\theta_{j}\}_{j=1}^{m}\) from \(\boldsymbol{\sigma}\), (ii) computing \(\mathrm{OT}((\theta_{j}^{\star})_{\sharp}\alpha,(\theta_{j}^{\star})_{\sharp} \beta)\), \(j=1,\ldots,m\). Computing OT between univariate discrete measures amounts to sorting [17, Section 2.6], thus step (ii) involves \(\mathcal{O}(n\log n)\) operations for each \(\theta_{j}\).
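As a minimal illustration of this procedure, the sketch below estimates \(\mathrm{SOT}\) between two uniform discrete measures with the same number of samples and cost \(\mathrm{C}_{1}(x,y)=|x-y|^{p}\); each projected problem is solved by sorting, and all names are ours.

```python
import numpy as np

def sliced_ot(x, y, n_projs=50, p=2, seed=0):
    """Monte Carlo estimate of SOT(alpha, beta) for uniform discrete measures.

    x, y: (n, d) arrays of samples of alpha and beta (same n).
    Each 1D problem reduces to matching sorted projections (O(n log n)).
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    thetas = rng.standard_normal((n_projs, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # theta ~ uniform on S^{d-1}
    cost = 0.0
    for theta in thetas:
        xp = np.sort(x @ theta)  # projected, sorted samples of alpha
        yp = np.sort(y @ theta)  # projected, sorted samples of beta
        cost += np.mean(np.abs(xp - yp) ** p)
    return cost / n_projs
```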
\(\mathrm{SOT}(\alpha,\beta)\) is defined in terms of the Kantorovich formulation of OT, hence inherits the following drawbacks: \(\mathrm{SOT}(\alpha,\beta)<+\infty\) only when \(m(\alpha)=m(\beta)\), and it may not provide meaningful comparisons in the presence of outliers. To overcome such limitations, prior works have proposed sliced versions of partial OT [28, 29], a particular instance of UOT. However, their contributions only apply to measures whose samples have constant mass. In the next section, we generalize their line of work and propose a new way of combining sliced OT and unbalanced OT.
## 3 Sliced Unbalanced OT and Unbalanced Sliced OT: Theoretical Analysis
We propose two strategies to make unbalanced OT scalable, by leveraging sliced OT. We formulate two loss functions (Definition 3), then study their theoretical properties and discuss their implications.
**Definition 3**.: _Let \(\alpha,\beta\in\mathcal{M}_{+}(\mathbb{R}^{d})\). The **Sliced Unbalanced OT** loss (\(\mathrm{SUOT}\)) and the **Unbalanced Sliced OT** loss (\(\mathrm{USOT}\)) between \(\alpha\) and \(\beta\) are defined as,_
\[\mathrm{SUOT}(\alpha,\beta) \triangleq\int_{\mathbb{S}^{d-1}}\mathrm{UOT}(\theta_{\sharp}^{ \star}\alpha,\theta_{\sharp}^{\star}\beta)\mathrm{d}\boldsymbol{\sigma}( \theta)\,, \tag{5}\] \[\mathrm{USOT}(\alpha,\beta) \triangleq\inf_{(\pi_{1},\pi_{2})\in\mathcal{M}_{+}(\mathbb{R}^{ d})\times\mathcal{M}_{+}(\mathbb{R}^{d})}\mathrm{SOT}(\pi_{1},\pi_{2})+\mathrm{D}_{ \varphi_{1}}(\pi_{1}|\alpha)+\mathrm{D}_{\varphi_{2}}(\pi_{2}|\beta)\,. \tag{6}\]
\(\mathrm{SUOT}(\alpha,\beta)\) compares \(\alpha\) and \(\beta\) by solving the UOT problem between \(\theta_{\sharp}^{\star}\alpha\) and \(\theta_{\sharp}^{\star}\beta\) for \(\theta\sim\boldsymbol{\sigma}\). Note that \(\mathrm{SUOT}\) extends the sliced partial OT problem [28, 29] (where \(\mathrm{D}_{\varphi_{i}}=\rho_{i}\mathrm{TV}\)) by allowing the use of arbitrary \(\varphi\)-divergences. On the other hand, \(\mathrm{USOT}\) is a completely novel approach and stems from the following property on UOT [4, Equations (4.21)]: \(\mathrm{UOT}(\alpha,\beta)=\inf_{(\pi_{1},\pi_{2})\in\mathcal{M}_{+}(\mathbb{R }^{d})^{2}}\mathrm{OT}(\pi_{1},\pi_{2})+\mathrm{D}_{\varphi_{1}}(\pi_{1}|\alpha)+ \mathrm{D}_{\varphi_{2}}(\pi_{2}|\beta)\).
**SUOT vs. USOT.** As outlined in Definition 3, \(\mathrm{SUOT}\) and \(\mathrm{USOT}\) differ in how the transportation problem is penalized: \(\mathrm{SUOT}(\alpha,\beta)\) regularizes the marginals of \(\pi_{\theta}\) for \(\theta\sim\boldsymbol{\sigma}\) where \(\pi_{\theta}\) denotes the
solution of \(\mathrm{UOT}(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta)\), while \(\mathrm{USOT}(\alpha,\beta)\) operates a geometric normalization directly on \((\alpha,\beta)\). We illustrate this difference on the following practical setting: we consider \((\alpha,\beta)\in\mathcal{M}_{+}(\mathbb{R}^{2})\) where \(\alpha\) is polluted with some outliers, and we compute \(\mathrm{SUOT}(\alpha,\beta)\) and \(\mathrm{USOT}(\alpha,\beta)\). We plot the input measures and the sampled projections \(\{\theta_{k}\}_{k}\) (Figure 1, left), the marginals of \(\pi_{\theta_{k}}\) for SUOT and the marginals of \((\theta_{k})_{\sharp}^{\star}\pi\) for \(\mathrm{USOT}\) (Figure 1, right). As expected, SUOT marginals change for each \(\theta_{k}\). We also observe that the source outliers have successfully been removed for any \(\theta\) when using \(\mathrm{USOT}\), while they may still appear with SUOT (e.g. for \(\theta=120^{\circ}\)): this is a direct consequence of the penalization terms \(\mathrm{D}_{\varphi_{i}}\) in \(\mathrm{USOT}\), which operate on \((\alpha,\beta)\) rather than on their projections.
**Theoretical analysis.** In the rest of this section, we prove a set of theoretical properties of \(\mathrm{SUOT}\) and \(\mathrm{USOT}\). All proofs are provided in Appendix A. We first identify the conditions on the cost \(\mathrm{C}_{1}\) and entropies \(\varphi_{1},\varphi_{2}\) under which the infimum is attained in \(\mathrm{UOT}(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta)\) for \(\theta\in\mathbb{S}^{d-1}\) and in \(\mathrm{USOT}(\alpha,\beta)\): the formal statement is given in Appendix A. We also show that these optimization problems are convex, both \(\mathrm{SUOT}\) and \(\mathrm{USOT}\) are jointly convex w.r.t. their input measures, and that strong duality holds (Theorem 5).
Next, we prove that both \(\mathrm{SUOT}\) and \(\mathrm{USOT}\) preserve some topological properties of \(\mathrm{UOT}\), starting with the metric axioms as stated in the next proposition.
**Proposition 2**.: _(Metric properties) (i) Suppose \(\mathrm{UOT}\) is non-negative, symmetric and/or definite on \(\mathcal{M}_{+}(\mathbb{R})\times\mathcal{M}_{+}(\mathbb{R})\). Then, \(\mathrm{SUOT}\) is respectively non-negative, symmetric and/or definite on \(\mathcal{M}_{+}(\mathbb{R}^{d})\times\mathcal{M}_{+}(\mathbb{R}^{d})\). If there exists \(p\in[1,+\infty)\) s.t. for any \((\alpha,\beta,\gamma)\in\mathcal{M}_{+}(\mathbb{R})\), \(\mathrm{UOT}^{1/p}(\alpha,\beta)\leq\mathrm{UOT}^{1/p}(\alpha,\gamma)+\mathrm{ UOT}^{1/p}(\gamma,\beta)\), then \(\mathrm{SUOT}^{1/p}(\alpha,\beta)\leq\mathrm{SUOT}^{1/p}(\alpha,\gamma)+ \mathrm{SUOT}^{1/p}(\gamma,\beta)\)._
(ii) _For any \(\alpha,\beta\in\mathcal{M}_{+}(\mathbb{R}^{d})\), \(\mathrm{USOT}(\alpha,\beta)\geq 0\). If \(\varphi_{1}=\varphi_{2}\), \(\mathrm{USOT}\) is symmetric. If \(D_{\varphi_{1}}\) and \(D_{\varphi_{2}}\) are definite, then \(\mathrm{USOT}\) is definite._
By Proposition 2(i), establishing the metric axioms of \(\mathrm{UOT}\) between _univariate_ measures (e.g., as detailed in [11, Section 3.3.1]) suffices to prove the metric axioms of \(\mathrm{SUOT}\) between _multivariate_ measures. Since, e.g., \(\mathrm{GHK}\) [4, Theorem 7.25] is a metric for \(p=2\), so is the associated \(\mathrm{SUOT}\).
In our next theorem, we show that \(\mathrm{SUOT}\), \(\mathrm{USOT}\) and \(\mathrm{UOT}\) are equivalent, under certain assumptions on the entropies \((\varphi_{1},\varphi_{2})\), cost functions, and input measures \((\alpha,\beta)\).
Figure 1: **Toy illustration** of the behaviors of \(\mathrm{SUOT}\) and \(\mathrm{USOT}\). _(left)_ Original 2D samples and slices used for illustration. _(center, right)_ KDE density estimates of the projected samples: in grey, the original distributions; in color, the distributions reweighed by \(\mathrm{SUOT}\)_(center)_ and by \(\mathrm{USOT}\)_(right)_.

**Theorem 1**.: _(Equivalence of \(\mathrm{SUOT},\mathrm{USOT},\mathrm{UOT}\)) Let \(\mathsf{X}\) be a compact subset of \(\mathbb{R}^{d}\) with radius \(R\). Let \(p\in[1,+\infty)\) and assume \(\mathrm{C}_{1}(x,y)=\left|x-y\right|^{p}\), \(\mathrm{C}_{d}(x,y)=\left\|x-y\right\|^{p}\). Consider that \(D_{\varphi_{1}}=D_{\varphi_{2}}=\rho\mathrm{KL}\). Then, for any \(\alpha,\beta\in\mathcal{M}_{+}(\mathsf{X})\),_
\[\mathrm{SUOT}(\alpha,\beta)\leq\mathrm{USOT}(\alpha,\beta)\leq\mathrm{UOT}( \alpha,\beta)\leq c(m(\alpha),m(\beta),\rho,R)\mathrm{SUOT}(\alpha,\beta)^{1/(d +1)}\,, \tag{7}\]
_where \(c(m(\alpha),m(\beta),\rho,R)\) is a constant depending on \(m(\alpha),m(\beta),\rho,R\), which is non-decreasing in \(m(\alpha)\) and \(m(\beta)\). Additionally, assume there exists \(M>0\) s.t. \(m(\alpha)\leq M,m(\beta)\leq M\). Then, \(c(m(\alpha),m(\beta),\rho,R)\) no longer depends on \(m(\alpha),m(\beta)\), which proves the equivalence of \(\mathrm{SUOT}\), \(\mathrm{USOT}\) and \(\mathrm{UOT}\)._
Theorem 1 is an application of a more general result, which we derive in the appendix. In particular, we show that the first two inequalities in (7) hold under milder assumptions on \(\varphi_{1},\varphi_{2}\) and \(\mathrm{C}_{1},\mathrm{C}_{d}\). The equivalence of \(\mathrm{SUOT},\mathrm{USOT}\) and \(\mathrm{UOT}\) is useful to prove that \(\mathrm{SUOT}\) and \(\mathrm{USOT}\)_metrize the weak\({}^{*}\) convergence_ when \(\mathrm{UOT}\) does, e.g. in the GHK setting [4, Theorem 7.25]. Before formally stating this result, we recall that a sequence of positive measures \((\alpha_{n})_{n\in\mathbb{N}^{*}}\) converges weakly to \(\alpha\in\mathcal{M}_{+}(\mathbb{R}^{d})\) (denoted \(\alpha_{n}\rightharpoonup\alpha\)) if for any continuous \(f:\mathbb{R}^{d}\to\mathbb{R}\), \(\lim_{n\to+\infty}\int f\mathrm{d}\alpha_{n}=\int f\mathrm{d}\alpha\).
**Theorem 2**.: _(Weak\({}^{*}\) metrization) Assume \(D_{\varphi_{1}}=D_{\varphi_{2}}=\rho\mathrm{KL}\). Let \(p\in[1,+\infty)\) and consider \(\mathrm{C}_{1}(x,y)=|x-y|^{p}\), \(\mathrm{C}_{d}(x,y)=\|x-y\|^{p}\). Let \((\alpha_{n})\) be a sequence of measures in \(\mathcal{M}_{+}(\mathsf{X})\) and \(\alpha\in\mathcal{M}_{+}(\mathsf{X})\), where \(\mathsf{X}\subset\mathbb{R}^{d}\) is compact with radius \(R>0\). Then, \(\mathrm{SUOT}\) and \(\mathrm{USOT}\) metrize the weak\({}^{*}\) convergence, i.e._
\[\alpha_{n}\rightharpoonup\alpha\Leftrightarrow\lim_{n\to+\infty}\mathrm{ SUOT}(\alpha_{n},\alpha)=0\Leftrightarrow\lim_{n\to+\infty}\mathrm{USOT}(\alpha_{n}, \alpha)=0.\]
The metrization of weak\({}^{*}\) convergence is an important property when comparing measures. For instance, it can be leveraged to justify the well-posedness of approximating an unbalanced Wasserstein gradient flow [36] using \(\mathrm{SUOT}\), as done in [37, 38] for \(\mathrm{SOT}\). Unbalanced Wasserstein gradient flows have been a key tool in deep learning theory, e.g. to prove global convergence of \(1\)-hidden layer neural networks [7, 8].
We now specialize some metric and topological properties to sliced partial OT, a particular case of \(\mathrm{SUOT}\). Theorem 3 shows that our framework encompasses existing approaches and more importantly, helps complement their analysis [28, 29].
**Theorem 3**.: _(Properties of Sliced Partial OT) Assume \(\mathrm{C}_{1}(x,y)=|x-y|\) and \(D_{\varphi_{1}}=D_{\varphi_{2}}=\rho\mathrm{TV}\). Then, \(\mathrm{USOT}\) satisfies the triangle inequality. Additionally, for any \((\alpha,\beta)\in\mathcal{M}_{+}(\mathsf{X})\) where \(\mathsf{X}\subset\mathbb{R}^{d}\) is compact with radius \(R\), \(\mathrm{UOT}(\alpha,\beta)\leq c(\rho,R)\,\mathrm{SUOT}(\alpha,\beta)^{1/(d+1)}\), and \(\mathrm{USOT}\) and \(\mathrm{SUOT}\) both metrize the weak\({}^{*}\) convergence._
We move on to the statistical properties and prove that \(\mathrm{SUOT}\) offers important statistical benefits, as it lifts the _sample complexity_ of \(\mathrm{UOT}\) from one-dimensional setting to multi-dimensional ones. In what follows, for any \(\alpha\in\mathcal{M}_{+}(\mathbb{R}^{d})\), we use \(\hat{\alpha}_{n}\) to denote the empirical approximation of \(\alpha\) over \(n\geq 1\) i.i.d. samples, _i.e._\(\hat{\alpha}_{n}=\frac{1}{n}\sum_{i=1}^{n}\delta_{Z_{i}}\), \(Z_{i}\sim\alpha\) for \(i=1,\ldots,n\).
**Theorem 4**.: _(Sample complexity) (i) If for \((\mu,\nu)\in\mathcal{M}_{+}(\mathbb{R})\), \(\mathbb{E}|\mathrm{UOT}(\mu,\nu)-\mathrm{UOT}(\hat{\mu}_{n},\hat{\nu}_{n})|\leq \kappa(n)\), then for \((\alpha,\beta)\in\mathcal{M}_{+}(\mathbb{R}^{d})\), \(\mathbb{E}|\mathrm{SUOT}(\alpha,\beta)-\mathrm{SUOT}(\hat{\alpha}_{n},\hat{ \beta}_{n})|\leq\kappa(n)\)._
(ii) _If for \((\mu,\nu)\in\mathcal{M}_{+}(\mathbb{R})\), \(\mathbb{E}|\mathrm{UOT}(\mu,\hat{\mu}_{n})|\leq\xi(n)\), then for \((\alpha,\beta)\in\mathcal{M}_{+}(\mathbb{R}^{d})\), \(\mathbb{E}|\mathrm{SUOT}(\alpha,\hat{\alpha}_{n})|\leq\xi(n)\)._
Theorem 4 means that \(\mathrm{SUOT}\) enjoys a _dimension-free_ sample complexity, even when comparing multivariate measures: this advantage is a recurring feature of sliced divergences [19] and further motivates their use in high-dimensional settings. The sample complexity rates \(\kappa(n)\) or \(\xi(n)\) can be deduced from the literature on \(\mathrm{UOT}\) for univariate measures; for example, we refer to [39] for the GHK setting.
Establishing the statistical properties of USOT may require extending the analysis in [40]: we leave this question for future work.
We conclude this section by deriving the dual formulations of \(\mathrm{SUOT},\mathrm{USOT}\) and proving that strong duality holds. We will consider that \(\mathbf{\sigma}\) is approximated with \(\hat{\mathbf{\sigma}}_{K}=\frac{1}{K}\sum_{k=1}^{K}\delta_{\theta_{k}}\), \(\theta_{k}\sim\mathbf{\sigma}\). This corresponds to the routine case in practice, as practitioners usually resort to a Monte Carlo approximation to estimate the expectation w.r.t. \(\mathbf{\sigma}\) defining sliced OT.
**Theorem 5**.: _(Strong duality) For \(i\in\{1,2\}\), let \(\varphi_{i}\) be an entropy function s.t. \(\mathrm{dom}(\varphi_{i}^{*})\cap\mathbb{R}_{-}\) is non-empty, and either \(0\in\mathrm{dom}(\varphi_{i})\) or \(m(\alpha),m(\beta)\in\mathrm{dom}(\varphi_{i})\). Define \(\mathcal{E}\triangleq\{\forall\theta\in\mathrm{supp}(\sigma_{K}),\,f_{ \theta}\oplus g_{\theta}\leq\mathrm{C}_{1}\}\). Let \(f_{avg}\triangleq\int_{\mathbb{S}^{d-1}}f_{\theta}\mathrm{d}\hat{\mathbf{\sigma}} _{K}(\theta)\), \(g_{avg}\triangleq\int_{\mathbb{S}^{d-1}}g_{\theta}\mathrm{d}\hat{\mathbf{\sigma}} _{K}(\theta)\)._
_Then, \(\mathrm{SUOT}\) (5) and \(\mathrm{USOT}\) (6) can be equivalently written for \(\alpha,\beta\in\mathcal{M}_{+}(\mathbb{R}^{d})\) as,_
\[\mathrm{SUOT}(\alpha,\beta) =\sup_{(f_{\theta}),(g_{\theta})\in\mathcal{E}}\int_{\mathbb{S}^ {d-1}}\Big{(}\int\varphi_{1}^{\circ}\big{(}f_{\theta}\circ\theta^{*}(x)\big{)} \mathrm{d}\alpha(x)+\int\varphi_{2}^{\circ}\big{(}g_{\theta}\circ\theta^{*}(y )\big{)}\mathrm{d}\beta(y)\Big{)}\mathrm{d}\hat{\mathbf{\sigma}}_{K}(\theta) \tag{8}\] \[\mathrm{USOT}(\alpha,\beta) =\sup_{(f_{\theta}),(g_{\theta})\in\mathcal{E}}\int\varphi_{1}^{ \circ}\big{(}f_{avg}\circ\theta^{*}(x)\big{)}\mathrm{d}\alpha(x)+\int\varphi_{ 2}^{\circ}\big{(}g_{avg}\circ\theta^{*}(y)\big{)}\mathrm{d}\beta(y) \tag{9}\]
We conjecture that strong duality also holds when \(\mathbf{\sigma}\) is the uniform (Lebesgue) measure over \(\mathbb{S}^{d-1}\), and discuss this aspect in Appendix A. Theorem 5 has important practical implications, since it justifies the Frank-Wolfe-type algorithms that we develop in Section 4 to compute SUOT and USOT in practice.
## 4 Computing SUOT and USOT with Frank-Wolfe algorithms
In this section, we explain how to implement SUOT and USOT. We propose two algorithms by leveraging our strong duality result (Theorem 5) along with a Frank-Wolfe algorithm (FW, [32]) introduced in [31] to optimize the UOT dual (3). Our methods, summarized in Algorithms 1 and 2, can be applied for smooth \(\mathrm{D}_{\varphi_{1}},\mathrm{D}_{\varphi_{2}}\): this condition is satisfied in the GHK setting (where \(\mathrm{D}_{\varphi_{i}}=\rho_{i}\mathrm{KL}\)), but not for sliced partial OT (where \(\mathrm{D}_{\varphi_{i}}=\rho_{i}\mathrm{TV}\), [29]). We refer to Appendix B for more details on the technical implementation and theoretical justification of our methodology.
FW is an iterative procedure which aims at maximizing a functional \(\mathcal{H}\) over a compact convex set \(\mathcal{E}\), by repeatedly maximizing a linear approximation based on \(\nabla\mathcal{H}\): given the iterate \(x^{t}\), FW solves the linear oracle \(r^{t+1}\in\arg\max_{r\in\mathcal{E}}\left\langle\nabla\mathcal{H}(x^{t}),r\right\rangle\) and performs a convex update \(x^{t+1}=(1-\gamma_{t+1})x^{t}+\gamma_{t+1}r^{t+1}\), with \(\gamma_{t+1}\) typically chosen as \(\gamma_{t+1}=2/(2+t+1)\). We call this step FWStep in our pseudo-code. When applied in [31] to compute the dual (3) of \(\mathrm{UOT}(\alpha,\beta)\), FWStep updates \((f_{t},g_{t})\) s.t. \(f_{t}\oplus g_{t}\leq\mathrm{C}_{d}\), and the linear oracle is the balanced dual of \(\mathrm{OT}(\alpha_{t},\beta_{t})\), where \((\alpha_{t},\beta_{t})\) are normalized versions of \((\alpha,\beta)\). Updating \((\alpha_{t},\beta_{t})\) involves \((f_{t},g_{t})\) and \(\mathbf{\rho}=(\rho_{1},\rho_{2})\): we refer to this routine as \(\texttt{Norm}(\alpha,\beta,f_{t},g_{t},\mathbf{\rho})\) and report the closed-form updates in Appendix B. In other words, computing UOT amounts to solving a sequence of OT problems, which can be done efficiently for univariate measures [31].
Analogously to UOT, and by Theorem 5, we propose to compute \(\mathrm{SUOT}(\alpha,\beta)\) and \(\mathrm{USOT}(\alpha,\beta)\) based on their dual forms. The FW iterations then consist in solving a sequence of sliced OT problems. We derive the updates of the FWStep tailored to SUOT and USOT in Appendix B, and re-use the aforementioned Norm routine. For USOT, we implement an additional routine called \(\texttt{AvgPot}\big{(}(f_{\theta})\big{)}\) to compute \(\int f_{\theta}\mathrm{d}\hat{\mathbf{\sigma}}_{K}(\theta)\) given the sliced potentials \((f_{\theta})\).
A crucial difference is the need for the SOT dual potentials \((r_{\theta},s_{\theta})\) in order to call Norm. However, past implementations only return the loss \(\mathrm{SOT}(\alpha,\beta)\), e.g. for training models [22, 23]. We therefore designed two novel (GPU) implementations in PyTorch [41] which return them. The first one leverages the fact that the gradients of \(\mathrm{OT}(\alpha,\beta)\) w.r.t. \((\alpha,\beta)\) are the optimal \((f,g)\), which allows backpropagating \(\mathrm{OT}(\theta_{\sharp}^{*}\alpha,\theta_{\sharp}^{*}\beta)\) w.r.t. \((\alpha,\beta)\) to obtain \((r_{\theta},s_{\theta})\). The second implementation computes them in parallel on GPUs
using their closed form, which to the best of our knowledge is a new sliced algorithm. We call \(\texttt{SlicedDual}(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta)\) the step returning the optimal \((r_{\theta},s_{\theta})\) solving \(\text{OT}(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta)\) for all \(\theta\). Both routines preserve the \(O(N\log N)\) per-slice time complexity and can be adapted to any SOT variant. Our FW approach is thus modular, in that one can reuse the SOT literature. We illustrate this by computing USOT between distributions on the hyperbolic Poincaré disk (Figure 2).
**Algorithmic complexity.** The FW algorithm and its variants have been widely studied theoretically. Computing \(\texttt{SlicedDual}\) has a complexity of \(O(KN\log N)\), where \(N\) is the number of samples and \(K\) the number of projections of \(\hat{\mathbf{\sigma}}_{K}\). The overall complexity of SUOT and USOT is thus \(O(FKN\log N)\), where \(F\) is the number of FW iterations needed to reach convergence. Our setting falls under the assumptions of [42, Theorem 8], thus ensuring fast convergence of our methods. We provide in Appendix B empirical evidence that a few iterations of FW (\(F\leq 20\)) suffice to reach numerical precision.
```
Input: \(\alpha\), \(\beta\), \(F\), \((\theta_{k})_{k=1}^{K}\), \(\mathbf{\rho}=(\rho_{1},\rho_{2})\)
Output: \(\text{SUOT}(\alpha,\beta)\), \((f_{\theta},g_{\theta})\)
\((f_{\theta},g_{\theta})\leftarrow(0,0)\)
for \(t=0,1,\ldots,F-1\) do
    for \(\theta\in(\theta_{k})_{k=1}^{K}\) do
        \((\alpha_{\theta},\beta_{\theta})\leftarrow\texttt{Norm}(\theta_{\sharp}^{\star}\alpha,\theta_{\sharp}^{\star}\beta,f_{\theta},g_{\theta},\mathbf{\rho})\)
        \((r_{\theta},s_{\theta})\leftarrow\texttt{SlicedDual}(\alpha_{\theta},\beta_{\theta})\)
        \((f_{\theta},g_{\theta})\leftarrow\texttt{FWStep}(f_{\theta},g_{\theta},r_{\theta},s_{\theta},\gamma_{t})\)
    end for
end for
Return \(\text{SUOT}(\alpha,\beta)\), \((f_{\theta},g_{\theta})\) as in (8)
```
**Algorithm 1** SUOT
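To make the structure of Algorithm 1 concrete, here is a minimal NumPy/POT sketch of a SUOT estimator in the GHK setting (\(\mathrm{D}_{\varphi_{i}}=\rho_{i}\mathrm{KL}\), \(\mathrm{C}_{1}(x,y)=|x-y|^{2}\)). Two simplifications are our own assumptions: the Norm routine is written as the standard KL reweighting of the weights by \(\exp(-f/\rho)\) followed by a rescaling to equal mass (the exact closed form used here is deferred to Appendix B), and each one-dimensional oracle is solved with POT's generic `ot.emd` solver rather than the fast sort-based `SlicedDual` routine.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def suot_ghk(x, a, y, b, rho=(1.0, 1.0), n_projs=32, n_fw=20, seed=0):
    """Sketch of SUOT (GHK setting) via Frank-Wolfe on each 1D slice.

    x: (n, d) support of alpha, a: its (possibly unnormalized) weights;
    y: (m, d) support of beta,  b: its weights.
    """
    rng = np.random.default_rng(seed)
    rho1, rho2 = rho
    d = x.shape[1]
    thetas = rng.standard_normal((n_projs, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    value = 0.0
    for theta in thetas:
        xp, yp = x @ theta, y @ theta
        M = (xp[:, None] - yp[None, :]) ** 2      # 1D quadratic cost
        f, g = np.zeros(len(xp)), np.zeros(len(yp))
        for t in range(n_fw):
            # Norm (assumed form): KL reweighting, then rescale to equal mass
            a_t = a * np.exp(-f / rho1)
            b_t = b * np.exp(-g / rho2)
            a_t, b_t = a_t / a_t.sum(), b_t / b_t.sum()
            # linear oracle: balanced OT duals between the normalized measures
            _, log = ot.emd(a_t, b_t, M, log=True)
            r, s = log["u"], log["v"]
            # FWStep: convex update with step gamma_t = 2 / (2 + t + 1)
            gamma = 2.0 / (2.0 + t + 1.0)
            f = (1.0 - gamma) * f + gamma * r
            g = (1.0 - gamma) * g + gamma * s
        # dual objective (8) on this slice, with phi°(x) = rho (1 - exp(-x/rho))
        value += np.sum(a * rho1 * (1.0 - np.exp(-f / rho1)))
        value += np.sum(b * rho2 * (1.0 - np.exp(-g / rho2)))
    return value / n_projs
```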
**Outputting marginals of SUOT and USOT.** The optimal primal marginals of UOT (and a fortiori of SUOT and USOT) are geometric normalizations of the inputs \((\alpha,\beta)\) with outliers discarded. Their computation involves the Norm routine, using the optimal dual potentials. This is how we compute the marginals shown in Figures 1, 2 and 4. We refer to Appendix B for more details and formulas.
**Stochastic USOT.** In practice, the measure \(\hat{\mathbf{\sigma}}_{K}=\frac{1}{K}\sum_{i}^{K}\delta_{\theta_{i}}\) is fixed, and \((f_{avg},g_{avg})\) are computed w.r.t. \(\hat{\mathbf{\sigma}}_{K}\). However, the sampling process satisfies \(\mathbb{E}_{\theta_{k}\sim\mathbf{\sigma}}[\hat{\mathbf{\sigma}}_{K}]=\mathbf{\sigma}\). Thus, assuming Theorem 5 still holds for \(\mathbf{\sigma}\), it yields \(\mathbb{E}_{\theta_{k}\sim\mathbf{\sigma}}[f_{avg}(x)]=\int f_{\theta}(\mathbf{\theta }^{\star}(x))\text{d}\mathbf{\sigma}(\theta)\) if we sample a new \(\hat{\mathbf{\sigma}}_{K}\) at each FW step. We call this approach _Stochastic_ USOT. It outputs a more accurate estimate of the true USOT w.r.t. \(\mathbf{\sigma}\), but it is more expensive, as we need to sort the projected data w.r.t. new projections at each iteration. More importantly, for balanced OT (\(\varphi^{\circ}(x)=x\)), one has \(\text{USOT}=\text{SOT}\) and this idea remains valid for sliced OT. See Section 5 for applications.
## 5 Experiments
This section presents a set of numerical experiments, which illustrate the effectiveness, computational efficiency and versatility of SUOT and USOT, as implemented by Algorithms 1 and 2. We first evaluate SUOT and USOT between measures supported on hyperbolic data, and investigate the influence of the hyperparameters \(\rho_{1}\) and \(\rho_{2}\). Then, we solve a document classification problem with SUOT and USOT, and compare their performance (in terms of accuracy and computational complexity) against classical OT losses. Our last experiment is conducted on large-scale datasets from a real-life application: we deploy USOT to compute barycenters of climate datasets in a robust and efficient manner.
**Comparing hyperbolic datasets.** We display in Figure 2 the impact of the parameter \(\rho=\rho_{1}=\rho_{2}\) on the optimal marginals of \(\text{USOT}\). To illustrate the modularity of our FW algorithm, our inputs are synthetic mixtures of wrapped normal distributions on the 2-dimensional hyperbolic manifold \(\mathbb{H}\)[43], so that the FW oracle is hyperbolic sliced OT [26]. On \(\mathbb{H}\), the parameter \(\theta\) characterizes a geodesic curve passing through the origin, and each sample is projected by taking the shortest path to such geodesics. Once projected on a geodesic curve, we sort the data and compute SOT w.r.t. the hyperbolic metric \(d_{\mathbb{H}}\).
We display the 2-dimensional hyperbolic manifold on the Poincaré disc. The measure \(\alpha\) (in red) is a mixture of 3 isotropic normal distributions, with a mode at the top of the disc playing the role of an outlier. The measure \(\beta\) is a mixture of two anisotropic normal distributions, whose means are close to two modes of \(\alpha\) but slightly shifted toward the disc's center.
We wish to illustrate several take-home messages stated in Section 3. First, the optimal marginals \((\pi_{1},\pi_{2})\) are renormalizations of \((\alpha,\beta)\) that account for their geometry and are able to remove outliers for a properly tuned \(\rho\). When \(\rho\) is large, \((\pi_{1},\pi_{2})\simeq(\alpha,\beta)\) and we retrieve SOT. When \(\rho\) is too small, outliers are removed, but we observe a shift of the modes, so that the modes of \((\pi_{1},\pi_{2})\) are closer to each other but no longer exactly correspond to those of \((\alpha,\beta)\). Second, note that such a plot cannot be made with SUOT, since its optimal marginals depend on the projection \(\theta\) (see Figure 1). Third, we emphasize that we are indeed able to reuse any variant of SOT existing in the literature.
**Document classification.** To show the benefits of our proposed losses over SOT, we consider a document classification problem [44]. Documents are represented as distributions of words embedded with _word2vec_ [45] in dimension \(d=300\). Let \(D_{k}\) be the \(k\)-th document and \(x_{1}^{k},\ldots,x_{n_{k}}^{k}\in\mathbb{R}^{d}\) be the set of words in \(D_{k}\). Then, \(D_{k}=\sum_{i=1}^{n_{k}}w_{i}^{k}\delta_{x_{i}^{k}}\), where \(w_{i}^{k}\) is the frequency of \(x_{i}^{k}\) in \(D_{k}\) normalized s.t. \(\sum_{i=1}^{n_{k}}w_{i}^{k}=1\). Given a loss function L, the document classification task is solved by computing the matrix \(\left(\text{L}(D_{k},D_{\ell})\right)_{k,\ell}\), then using a k-nearest-neighbor classifier. Since a word typically appears several times in a document, the measures are not uniform and sliced partial OT [28, 29] cannot be used in this setting. The aim of this experiment is to show that, by discarding possible outliers using a well-chosen parameter \(\rho\), USOT is able to outperform SOT and SUOT on this task. We consider three different datasets, BBCSport [44], Movies reviews [46] and the Goodreads dataset [47] on two tasks (genre and likability). We report in Table 1 the accuracy of SUOT, USOT and the stochastic USOT (SUSOT), compared with SOT, OT and UOT computed with the majorization-minimization algorithm [48] or approximated with the Sinkhorn algorithm [34]. All the benchmark methods are computed using the POT library [49]. For sliced methods (SOT, SUOT, USOT and SUSOT), we average over 3 computations of the loss matrix and report the standard deviation in Table 1. The number of neighbors was selected via cross-validation. The results in Table 1 are reported for \(\rho\)
yielding the best accuracy, and we display an ablation of this parameter on the BBCSport dataset in Figure 3. We observe that when \(\rho\) is tuned, USOT outperforms SOT, just as UOT outperforms OT. Note that OT and UOT cannot be used in large-scale settings (typically large documents), as their complexity scales cubically. We report in Appendix C runtimes on the Goodreads dataset. In particular, computing the OT matrix took 3 times longer than computing the USOT matrix on GPU. Moreover, we were unable to run UOT using POT on the Movies and Goodreads datasets in a reasonable amount of time, due to their computational complexity.
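For reference, a minimal sketch of this evaluation pipeline is given below: a precomputed dissimilarity matrix between documents is fed to a k-nearest-neighbor classifier. Here `pairwise_loss` stands for any of the losses compared in Table 1 (e.g. SOT, SUOT or USOT) and is not implemented; the function names and the train/test handling are ours.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def loss_matrix(documents, pairwise_loss):
    """documents: list of (points, weights) pairs, one per document.
    Returns the symmetric matrix (L(D_k, D_l))_{k,l} used for classification."""
    n = len(documents)
    D = np.zeros((n, n))
    for k in range(n):
        for l in range(k + 1, n):
            D[k, l] = D[l, k] = pairwise_loss(documents[k], documents[l])
    return D

def knn_accuracy(D, labels, train_idx, test_idx, k=5):
    """k-NN classification accuracy on a precomputed dissimilarity matrix.

    labels: array of class labels; train_idx/test_idx: index arrays.
    """
    clf = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
    clf.fit(D[np.ix_(train_idx, train_idx)], labels[train_idx])
    pred = clf.predict(D[np.ix_(test_idx, train_idx)])
    return float(np.mean(pred == labels[test_idx]))
```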
**Barycenter on geophysical data.** OT barycenters are an important topic of interest [37, 50] for their ability to capture mass changes and spatial deformations over several reference measures. In order to compute barycenters under the USOT geometry on a fixed grid, we employ a mirror-descent strategy similar to [51, Algorithm (1)], described in more depth in Appendix C. We showcase unbalanced sliced OT barycenters using climate model data. Ensembles of multiple models are commonly employed to reduce biases and evaluate uncertainties in climate projections (_e.g._[52, 53]). The commonly used Multi-Model Mean approach assumes models are centered around true values and averages the ensemble with equal or varying weights. However, spatial averaging may fail to capture specific characteristics of the physical system at stake. We propose to use the USOT barycenter here instead. We use data from the ClimateNet dataset [54], and more specifically the TMQ (precipitable water) indicator. The ClimateNet dataset is a human-expert-labeled curated dataset that notably captures tropical cyclones (TCs). In order to simulate the output of several climate models, we take a specific instant (first date of 2011) and deform the data with the elastic deformation from TorchVision [41], in an area located close to the eastern part of the United States of America. As a result, we obtain 4 different TCs, as shown in the first row of Figure 4. The classical L2 spatial mean is displayed in the second row of Figure 4 and, as can be expected, reveals 4 different TC centers/modes, which is undesirable. As the total TMQ mass in the considered zone varies between the different models, a direct application of SOT is impossible, or requires a normalization of the mass that has undesired effects, as can be seen in the second picture of the second row. Finally, we show the result of the USOT barycenter with \(\rho_{1}=10\) (related to the data) and \(\rho_{2}=10^{4}\) (related to the barycenter). The corresponding barycenter has only one apparent mode, which is the expected behaviour. The considered measures have a size of \(100\times 200\), and we run the barycenter algorithm for 500 iterations (with \(K=64\) projections), which takes 3 minutes on a commodity GPU. UOT barycenters for this size of problem are intractable and, to the best of our knowledge, this is the first time such large-scale unbalanced OT barycenters have been computed. This experiment encourages an in-depth analysis of the relevance of this aggregation strategy for climate modeling and related problems, which we will investigate as future work.
Table 1: Accuracy on document classification

| | **BBCSport** | **Movies** | **Goodreads genre** | **Goodreads like** |
| --- | --- | --- | --- | --- |
| OT | 91.64 | 68.88 | 52.75 | 70.60 |
| UOT | 96.27 | - | - | - |
| Sinkhorn UOT | 93.64 | 63.8 | 42.55 | 66.06 |
| SOT | 89.39 ± 0.76 | 66.95 ± 0.45 | 50.09 ± 0.51 | 65.60 ± 0.20 |
| SUOT | 90.12 ± 0.15 | 67.84 ± 0.37 | 50.15 ± 0.04 | 66.72 ± 0.38 |
| USOT | 92.36 ± 0.07 | 69.21 ± 0.37 | 51.87 ± 0.56 | 67.41 ± 1.06 |
| SUSOT | 92.45 ± 0.39 | 69.53 ± 0.53 | 51.93 ± 0.53 | 67.33 ± 0.26 |
## 6 Conclusion and Discussion
We proposed two losses combining unbalanced and sliced OT, with theoretical guarantees and an efficient Frank-Wolfe algorithm which allows reusing any sliced OT variant. We highlighted experimentally the performance improvement over SOT, and described novel applications of unbalanced OT barycenters of positive measures, with a new case study on geophysical data. These novel results and algorithms pave the way to numerous new applications of sliced variants of OT, and we believe that our contributions will motivate practitioners to further explore their use in general ML applications, without the requirement of manipulating probability measures.
On the limitations side, an immediate drawback arises from the additional computational cost induced w.r.t. SOT. While the above experimental results show that SUOT and USOT improve performance significantly over SOT, and though the complexity is still sub-quadratic in the number of samples, our FW approach uses SOT as a subroutine, rendering it necessarily more expensive. Additionally, another practical burden comes from the introduction of the extra parameters \((\rho_{1},\rho_{2})\), which require cross-validation when possible. Therefore, a future direction would be to derive efficient strategies to tune \((\rho_{1},\rho_{2})\), possibly depending on the application context, and to further develop the interpretation of \(\rho\) as a 'threshold' for the geometric information encoded by C\({}_{1}\), C\({}_{d}\).
On the theoretical side, while OT between univariate measures has been shown to define a reproducing kernel, and while sliced OT can take advantage of this property [55, 56], some of our numerical experiments suggest this property no longer holds for UOT (and therefore, for SUOT and USOT). This negative result leaves as an open direction the design of OT-based kernel methods between arbitrary positive measures.
Figure 4: **Barycenter of geophysical data**. (_First row_) Simulated output of 4 different climate models depicting different scenarios for the evolution of a tropical cyclone. (_Second row_) Results of different averaging/aggregation strategies.
## Acknowledgment
KF is supported by NSERC Discovery grant (RGPIN-2019-06512) and a Samsung grant. CB is supported by project DynaLearn from Labex CominLabs and Region Bretagne ARED DLearnMe.
|
2305.01057 | Novel neural-network architecture for continuous gravitational waves | The high computational cost of wide-parameter-space searches for continuous
gravitational waves (CWs) significantly limits the achievable sensitivity. This
challenge has motivated the exploration of alternative search methods, such as
deep neural networks (DNNs). Previous attempts to apply convolutional
image-classification DNN architectures to all-sky and directed CW searches
showed promise for short, one-day search durations, but proved ineffective for
longer durations of around ten days. In this paper, we offer a hypothesis for
this limitation and propose new design principles to overcome it. As a proof of
concept, we show that our novel convolutional DNN architecture attains
matched-filtering sensitivity for a targeted search (i.e., single sky-position
and frequency) in Gaussian data from two detectors spanning ten days. We
illustrate this performance for two different sky positions and five
frequencies in the $20 - 1000 \mathrm{Hz}$ range, spanning the spectrum from an
``easy'' to the ``hardest'' case. The corresponding sensitivity depths fall in
the range of $82 - 86 / \sqrt{\mathrm{Hz}}$. The same DNN architecture is
trained for each case, taking between $4 - 32$ hours to reach matched-filtering
sensitivity. The detection probability of the trained DNNs as a function of
signal amplitude varies consistently with that of matched filtering.
Furthermore, the DNN statistic distributions can be approximately mapped to
those of the $\mathcal{F}$-statistic under a simple monotonic function. | Prasanna M. Joshi, Reinhard Prix | 2023-05-01T19:44:31Z | http://arxiv.org/abs/2305.01057v2 | # A novel neural-network architecture for continuous gravitational waves
###### Abstract
The high computational cost of wide-parameter-space searches for continuous gravitational waves (CWs) significantly limits the achievable sensitivity. This challenge has motivated the exploration of alternative search methods, such as deep neural networks (DNNs). Previous attempts [1; 2] to apply convolutional image-classification DNN architectures to all-sky and directed CW searches showed promise for short, one-day search durations, but proved ineffective for longer durations of around ten days. In this paper, we offer a hypothesis for this limitation and propose new design principles to overcome it. As a proof of concept, we show that our novel convolutional DNN architecture attains matched-filtering sensitivity for a targeted search (i.e., single sky-position and frequency) in Gaussian data from two detectors spanning ten days. We illustrate this performance for two different sky positions and five frequencies in the \(20-1000\,\mathrm{Hz}\) range, spanning the spectrum from an "easy" to the "hardest" case. The corresponding sensitivity depths fall in the range of \(82-86\,/\sqrt{\mathrm{Hz}}\). The same DNN architecture is trained for each case, taking between \(4-32\,\mathrm{hours}\) to reach matched-filtering sensitivity. The detection probability of the trained DNNs as a function of signal amplitude varies consistently with that of matched filtering. Furthermore, the DNN statistic distributions can be approximately mapped to those of the \(\mathcal{F}\)-statistic under a simple monotonic function.
## I Introduction
Continuous gravitational waves (CWs) are weak, long-lasting and nearly-monochromatic waves emitted by non-axisymmetric spinning neutron stars. Numerous searches have been performed on the data from the LIGO (H1 and L1) and Virgo (V1) detectors, yet no CWs have so far been detected [3]. Owing to the expected small amplitude of CW signals, months to years of data will be required in order to collect a sufficient signal-to-noise ratio to allow a detection.
The most sensitive search method consists of _coherently_ integrating signal templates over the entire time-span of the data, a process commonly known as _matched filtering_. However, for wide parameter spaces this method is severely constrained by the required computing cost (due to the astronomical number of required templates), e.g., see [4]. Instead, _semi-coherent_ search methods are used in practice for wide parameter-space searches, which proceed by coherently analyzing shorter segments of data and combining their results incoherently. These search methods tend to result in the highest sensitivity at a fixed computational cost.
In order to overcome the computational-cost constraint on the achievable sensitivity, one alternative approach being explored is to use deep neural networks (DNNs) to search for CWs in the data. There have been a number of studies exploring the potential of DNNs to help improve CW searches, for example, as a clustering and follow-up method for search candidates [5; 6; 7], to reduce the computational cost of follow-ups [8; 9], and to mitigate the effect of instrumental noise artifacts [10]. DNNs have also been shown to be able to accelerate searches for long-duration, transient CWs [11; 12].
Here we continue to pursue the approach of [1; 2] to train DNNs as an _end-to-end_ search method for CW signals directly on the detector strain data. These earlier studies considered wide-parameter-space searches of signals with time-spans of \(10^{5}\,\mathrm{s}\) and \(10^{6}\,\mathrm{s}\), respectively, using the best available DNN architectures for image classification tasks. While this approach proved quite effective on the shorter time-span of \(10^{5}\,\mathrm{s}\sim 1\,\mathrm{day}\), it performed poorly on the longer signals of \(10^{6}\,\mathrm{s}\sim 11.6\,\mathrm{days}\).
This raises the question whether it is the larger number of signal waveforms (i.e., _templates_) in a wide parameter space, or the morphology of longer signals itself, that thwarts the networks' ability to successfully learn to detect them. A similar difficulty of DNNs in detecting long-duration signals has also been observed in the context of compact-binary-coalescence searches, see [13]. We find that the characteristics of longer CW signals seem to be the underlying cause of the inability of the image-classification architectures to effectively learn to detect them.
Therefore here we take a step back and focus on the problem of targeted (i.e., single-template) CW searches over a time-span of ten days in simulated Gaussian noise. Developing architectures capable of detecting longer CW signals is crucial for scaling up to complete wide-parameter space search methods that could ultimately compete with current state-of-the-art semi-coherent CW searches.
The plan of the paper is as follows: in Sec. II we introduce the CW signal model, in Sec. III we define benchmark targeted search cases, and in Sec. IV we describe the architecture and training of our DNN. Finally, we present our test results and discussion in Sec. V and conclusions and future outlook in Sec. VI.
## II Continuous gravitational waves
Continuous gravitational waves (CWs) are long-lasting, quasi-monochromatic waves with a slowly varying frequency, emitted by spinning non-axisymmetric
neutron stars. We model the evolution of the CW signal phase \(\Phi(\tau)\) as a function of time \(\tau\) in the source frame (assuming only linear spindown) as:
\[\Phi(\tau)=2\pi\left[f(\tau_{\rm ref})\Delta\tau+\frac{1}{2}\dot{f}(\tau_{\rm ref })\,\Delta\tau^{2}\right]+\phi_{0}, \tag{1}\]
where \(\Delta\tau\equiv\tau-\tau_{\rm ref}\), and \(\tau_{\rm ref}\) is the reference time at which \(f(\tau_{\rm ref})\) and \(\dot{f}(\tau_{\rm ref})\) are defined.
In the detector frame the signal experiences frequency modulation due to the relative motion between the detector and the source. This modulation can be characterized by the relation between the arrival time \(t\) of a wavefront at the detector that left the source at time \(\tau\). For an isolated neutron star, this timing relation \(\tau(t)\) can be written as follows:
\[\tau(t;{\bf n})=t+\frac{{\bf r}(t)\cdot{\bf n}}{c}-\frac{d}{c}, \tag{2}\]
where \({\bf n}=(\cos\delta\cos\alpha,\cos\delta\sin\alpha,\sin\delta)\) is the unit vector pointing to the source in equatorial coordinates, expressed in terms of right ascension (\(\alpha\)) and declination (\(\delta\)), \({\bf r}(t)\) is the vector from the solar-system barycenter (SSB) to the detector location, \(d\) is the distance between the SSB and the source and \(c\) is the speed of light. The term \({\bf r}\cdot{\bf n}/c\) is known as the Romer delay.
The frequency evolution \(f(t)\) of the signal in the detector frame is obtained by applying the timing relation \(\tau(t)\) of Eq. (2) to the source-frame phase evolution of Eq. (1), namely \(\Phi(t)=\Phi(\tau(t))\), and computing the derivative
\[f(t;\lambda)=\frac{d\Phi(t)}{2\pi dt}=\left[f(\tau_{\rm ref})+\dot{f}(\tau_{ \rm ref})\,\Delta\tau(t)\right]\,\frac{d\tau}{dt}, \tag{3}\]
where \(\lambda\equiv\{f(\tau_{\rm ref}),\dot{f}(\tau_{\rm ref}),\alpha,\delta\}\) are commonly referred to as the phase-evolution parameters.
An example of the detector-frame frequency evolution \(f(t)\) for a CW signal over a time-span of ten days is shown in Fig. 1. Here we see the two-component Doppler modulation of the signal due to the diurnal rotation of the detector (Doppler shifts of order \(\sim 10^{-6}f\)) and the orbital motion of the Earth (Doppler shifts of order \(\sim 10^{-4}f\) over the course of a year).
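The following sketch evaluates Eq. (3) under simplifying assumptions of our own: \(\Delta\tau(t)\approx t-\tau_{\rm ref}\), a circular Earth orbit lying in the \(x\)-\(y\) plane, and circular diurnal rotation of the detector, so that \(d\tau/dt\approx 1+\mathbf{v}(t)\cdot\mathbf{n}/c\). A real search would instead use solar-system ephemerides and the actual detector location, so this only illustrates the size and structure of the two Doppler components.

```python
import numpy as np

C = 299792458.0                               # speed of light [m/s]
AU = 1.495978707e11                           # Earth orbital radius [m] (circular approximation)
R_EARTH = 6.371e6                             # Earth radius [m]
OMEGA_ORB = 2 * np.pi / (365.25 * 86400.0)    # orbital angular frequency [rad/s]
OMEGA_ROT = 2 * np.pi / 86164.1               # sidereal rotation rate [rad/s]

def doppler_factor(t, alpha, delta, lat=0.0):
    """Approximate d(tau)/dt = 1 + v(t).n/c for a detector at geographic latitude lat."""
    n = np.array([np.cos(delta) * np.cos(alpha),
                  np.cos(delta) * np.sin(alpha),
                  np.sin(delta)])
    t = np.asarray(t, dtype=float)
    # orbital velocity: circular orbit in the x-y plane (ignores the ecliptic tilt)
    v_orb = AU * OMEGA_ORB * np.stack(
        [-np.sin(OMEGA_ORB * t), np.cos(OMEGA_ORB * t), np.zeros_like(t)], axis=-1)
    # diurnal velocity of the detector about the Earth's rotation axis
    v_rot = R_EARTH * np.cos(lat) * OMEGA_ROT * np.stack(
        [-np.sin(OMEGA_ROT * t), np.cos(OMEGA_ROT * t), np.zeros_like(t)], axis=-1)
    return 1.0 + (v_orb + v_rot) @ n / C

def detector_frequency(t, f0, fdot, alpha, delta, t_ref=0.0, lat=0.0):
    """Detector-frame frequency f(t) of Eq. (3), with Delta_tau approximated by t - t_ref."""
    return (f0 + fdot * (np.asarray(t, float) - t_ref)) * doppler_factor(t, alpha, delta, lat)
```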
The CW strain signal in the detector additionally depends on four _amplitude parameters_\(\mathcal{A}\), namely the overall signal amplitude \(h_{0}\), the neutron-star spin-axis alignment \(\cos\iota\) with the line of sight, the polarization angle \(\psi\) and the initial phase \(\phi_{0}\). The full expression for the strain signal \(h(t;\mathcal{A},\lambda)\) is not important here and can be found, for example, in [4; 14]. The total measured strain \(x(t)\) in a detector can be expressed as
\[x(t)=n(t)+h(t;\mathcal{A},\lambda), \tag{4}\]
where \(n(t)\) denotes the noise, characterized by a noise power spectral density \(S_{\rm n}(f)\) as a function of frequency. In practice \(n(t)\) is often assumed to be (approximately) Gaussian, a simplifying assumption that we will also use in this work.
We can distinguish three main categories of CW searches [3; 4], depending on the assumed level of knowledge about the signals: wide parameter-space _all-sky_ searches assume the phase-evolution parameters \(\lambda\) to be completely unknown, _directed_ searches treat the sky position \({\bf n}\) as known with unknown frequency and spindown(s), while _targeted_ searches take the phase-evolution parameters \(\lambda\) to be fully known. Note that the four amplitude parameters \(\mathcal{A}\) are typically considered unknown even for targeted searches.
The _sensitivity_ of a CW search [15; 16] is typically characterized in terms of an _upper-limit_ amplitude \(h_{0}^{p_{\rm det}}\) at which a search achieves a given detection probability \(p_{\rm det}\) (typically chosen as 90% or 95%) at a chosen false-alarm level \(p_{\rm fa}\). This upper-limit amplitude \(h_{0}\) characterizes a _population_ of signals with unknown (neutron-star) spin axis orientation (uniform priors \(\cos\iota\in[-1,1]\) and \(\psi\in[-\pi/4,\pi/4]\)) and initial phase (uniform in \(\phi_{0}\in[0,2\pi]\)). The CW upper limit amplitude \(h_{0}^{p_{\rm det}}\) scales with the amplitude noise spectral density \(\sqrt{S_{\rm n}}\) at every frequency, it is therefore more convenient to use the _sensitivity depth_\(\mathcal{D}^{p_{\rm det}}\), defined as
\[\mathcal{D}^{p_{\rm det}}\equiv\frac{\sqrt{S_{\rm n}}}{h_{0}^{p_{\rm det}}}, \tag{5}\]
which characterizes the sensitivity of a search setup [16] independently of the noise-floor level \(S_{\rm n}\). In the following we use the sensitivity depth \(\mathcal{D}^{90\%}\), corresponding to an upper-limit amplitude \(h_{0}^{90\%}\), for which a matched-filter search would achieve a detection probability of \(p_{\rm det}=90\%\) at a false-alarm probability of \(p_{\rm fa}=1\%\).
## III Benchmark targeted searches
Figure 1: Detector-frame frequency evolution \(f(t)\) of Eq. (3) for a CW signal with source-frame frequency \(f(\tau_{\rm ref})=1000\,\rm Hz\), spindown \(\dot{f}(\tau_{\rm ref})=-10^{-10}\,\rm Hz\,s^{-1}\) and sky position Sky-B (see Table 1). The highlighted region denotes the bandwidth of the signal over each one-day time span.

Previous studies [1; 2] had directly attempted to tackle wide-parameter-space CW searches with convolutional deep-neural-network (DNN) architectures from image classification. While this approach worked well for short search durations of about one day, it became ineffective when extended to longer durations up to \(10^{6}\,\mathrm{s}\sim 11.6\,\mathrm{days}\), see Table. 6 in [2].
Further experimentation reveals that these network architectures perform well with longer-duration signals when trained for much simpler targeted searches at a lower frequency, but struggle with signals at higher frequency (relatively more complex signal morphology), see Fig. 2 for an example. This indicates that it is not (only) the larger parameter space but the signal morphology itself that causes problems. Specifically, the difficulty encountered by DNNs in detecting CW signals appears to be correlated with their effective bandwidth in the data, corresponding to the Doppler broadening in the detector frame (as illustrated in Fig. 1).
In this study, we therefore narrow our focus to targeted searches spanning ten days, in order to demonstrate, as a proof of concept, that an appropriately designed DNN architecture can detect these longer signals with optimal matched-filter sensitivity. For this purpose we define ten benchmark cases of targeted ten-day searches, given in Table. 1, spanning the spectrum from "easy" to "hardest", with five different frequencies from \(20-1000\,\mathrm{Hz}\) (higher frequency leads to more Doppler broadening) and two different sky positions, Sky-A and Sky-B.
Sky position Sky-B has the widest Doppler broadening (over the sky) of the signal during the ten-day search span, while sky position Sky-A is more favorable with a narrow signal bandwidth. The total ten-day signal bandwidths for all benchmark cases are listed in Table. 2. We see that the search at \(f=20\,\mathrm{Hz}\) targeting sky position Sky-A has the narrowest signal band (\(\sim 0.09\,\mathrm{mHz}\)), and is expected to be the easiest to master for a DNN, while targeting Sky-B at \(f=1000\,\mathrm{Hz}\) is expected to be the hardest case (with a total bandwidth of \(\sim 18.7\,\mathrm{mHz}\)). A visual illustration of the respective bandwidths of signals at the two sky-positions can also be found in Fig. 3.
We can estimate the optimal matched-filtering sensitivity \(\mathcal{D}^{90\%}\) (at \(p_{\mathrm{fa}}=1\%\) false-alarm level) for each of the benchmark search cases using the approach developed in [15; 16] and implemented in [17]1. The corresponding optimal sensitivity depths depend on the sky position (due to the different antenna-pattern response), and are obtained as
Footnote 1: This is assuming the \(\mathcal{F}\)-statistic, which is not strictly Neyman-Pearson optimal [18], but the difference is too small to be of practical relevance for these search setups.
\[\begin{split}\mathcal{D}^{90\%}_{\mathrm{Sky-A}}& \approx 86.2\,/\sqrt{\mathrm{Hz}},\\ \mathcal{D}^{90\%}_{\mathrm{Sky-B}}&\approx 81.8\,/ \sqrt{\mathrm{Hz}}.\end{split} \tag{6}\]
This defines the optimal sensitivity ceiling to compare the DNN performance against.
Note that the matched-filter search at Sky-B is slightly less sensitive than at Sky-A, requiring a stronger signal (i.e., smaller depth) to reach \(p_{\mathrm{det}}=90\%\). This is due to differences in the antenna-pattern response at the two sky positions and is unrelated to the previous discussion about signal bandwidths in the detector frame, which does not affect matched-filter performance.
## IV Deep Learning
In this section we describe the design of the DNN architecture, the pre-processing of the input data, and the training process.
Figure 2: Comparison of training progress between two DNNs trained on signals from the targeted search cases Sky-B@20 Hz and Sky-B@1000 Hz at depth \(\mathcal{D}^{90\%}_{\mathrm{Sky-B}}\) from Eq. (6). The DNNs have the Inception-ResNet-v2 architecture used in [2].
### A new DNN architecture for CWs
Deep state-of-the-art image classification networks (specifically, ResNet [19] and Inception-ResNet-v2 [20]) employed in [1; 2] were unable to achieve competitive sensitivities for CW signals lasting \(\sim 11.6\,\mathrm{days}\), with rapidly decreasing performance at higher frequencies (e.g., see Table. 6 in [2]). As mentioned in the previous section, these architectures perform poorly even when simplifying the problem to simple targeted searches over ten days.
We hypothesize that this failure to learn is due to a mismatch between the _morphology of long CW signals_ in noise and the _implicit priors_ in (convolutional) image-classification network architectures. These image-classification priors can be roughly characterized as
* the image could represent any object,
* high signal-to-noise-ratio pixels can be combined locally to find small-scale structures like ridges, corners etc,
* lower-level patterns can be hierarchically combined into larger structures in subsequent layers, where the exact location of lower-level structures has little to no impact on the final classification.
The resulting typical convolutional image-classification architectures consist of small convolutional kernels (such as 7x7 or smaller), lossy layer reductions such as max- or mean-pooling, and a large number (\(\gtrsim 50\)) of layers (cf. [19; 20]).
Contrast this with CW signals, where two things happen when increasing the search duration: (i) the signal depth \(\mathcal{D}^{90\%}\) for matched filtering grows as \(\propto\sqrt{\text{duration}}\), so the target signals get weaker, and (ii) the Doppler spreading of the signal in the detector frame increases, see Sec. III and Fig. 1. Both of these factors contribute to _weaker_ and _less localized_ signal power in time-frequency space, i.e., reduced local signal-to-noise ratio in any spectrogram bin. Figure 4 illustrates this effect for a spectrogram of the Sky-B@1000 Hz signal of Fig. 3 added to Gaussian noise at a matched-filtering depth of \(\mathcal{D}^{90\%}=81.8\,/\sqrt{\mathrm{Hz}}\). The resulting "image" does not show any visible trace of the signal. We can therefore roughly characterize the _CW signal priors_ as:
* a diurnal narrow frequency pattern, repeating daily with an overall frequency drift (due to spindowns and orbital motion2), see Figs. 1 and 3, Footnote 2: This assumes _isolated_ sources and needs to be revisited for sources in binary systems.
* a vanishing local signal-to-noise ratio in any spectrogram pixel, see Fig. 4,
* a lossless combination of _all_ signal pixels will be necessary for classification to be able to compete with matched filtering.
Motivated by these priors, we use the following design principles to construct our CW-DNN architecture:
1. avoid operations that lose information about the signal (such as max-/mean-pooling),
2. combine _all_ signal bins within the shortest layer pathway,
3. use an input spectrogram adapted to the diurnal repeating shifted signal pattern.
The last point is probably not strictly necessary, and is intended to simplify the problem for the network, by providing a "natural" factorization into a repeating shifted daily pattern that can be learned by the same convolutional kernels across all segments, producing a track-like output pattern to be combined by subsequent layers.
### Pre-processing the Input
We convert the one-dimensional time-series data \(x(t)\) of Eq. (4) for each detector into a two-channel (real and imaginary part) spectrogram over ten one-day segments. Detectors are stacked along the channel dimension, and for two detectors (H1+L1) the input spectrograms therefore have a total of four channels. Hence, the input consists of a three-dimensional array, with axes corresponding to segments, frequency bins, and channels, as depicted in Figs. 3 and 4.
The input frequency band encompasses the entire signal bandwidth, aligned to start at the lowest signal frequency with a fixed total bandwidth corresponding to the widest signal in Table 2, specifically \(\sim 18.7\,\mathrm{mHz}\) for the Sky-B@1000 Hz case. With the segment FFT resolution of 1/day, plus a padding of 16 frequency bins on either side of the band, this results in a total input bandwidth of 1647 frequency bins.
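As an illustration, the following minimal NumPy sketch shows one way such an input array could be assembled from the time series of each detector; the sampling rate `fs`, the band-start frequency `f_lo`, and any windowing or normalization conventions are placeholders here, since they are not specified in this section, so this should be read as a sketch rather than the actual pipeline.

```python
import numpy as np

def make_input_array(x_per_detector, fs, f_lo, n_segments=10, n_bins=1647, pad=16):
    """Assemble a (segments, frequency bins, channels) array as described above.

    x_per_detector : list of real-valued time series x(t), one per detector (e.g. H1, L1)
    fs, f_lo       : sampling rate and lowest signal frequency (assumed, not given here)
    """
    channels = []
    for x in x_per_detector:
        seg_len = len(x) // n_segments
        segs = np.asarray(x[: n_segments * seg_len]).reshape(n_segments, seg_len)
        spec = np.fft.rfft(segs, axis=1)                    # FFT per one-day segment
        freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
        i0 = max(np.searchsorted(freqs, f_lo) - pad, 0)     # align band, 16-bin padding
        band = spec[:, i0:i0 + n_bins]                      # (n_segments, n_bins), complex
        channels += [band.real, band.imag]                  # two channels per detector
    return np.stack(channels, axis=-1)                      # (10, 1647, 4) for H1+L1
```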
### Network Architecture
Through extensive experimentation based on the architecture design principles of Sec. IV.1, we ultimately arrived at the simple network architecture summarized in Table 3.
The first layer performs 1D-convolutions with 64 kernels of dimension 1x313x2, sliding along the frequency-axis for each detector and segment. The kernel size of 313 frequency bins encompasses the widest signal bandwidth within the one-day segments, namely \(3.4\,\mathrm{mHz}\), for the Sky-B@1000 Hz signal.
The second layer performs 2D convolution of 64 kernels with dimension 2x40x64, combining neighboring segments over 40 units along the frequency-axis. The width in frequency covers the widest output "track" width over the two-day span. This choice is motivated by the idea of combining the full signal information within the shortest possible network path, as discussed in Sec. IV.1.
The output block consists of three layers: a flatten layer (reshaping the input to a one-dimensional array), a dense layer with 32 units and a final output layer consisting of a single unit.
Every layer except the flatten and output layers uses ReLU activation. The output layer uses a sigmoid activation, producing the probability \(\hat{y}\in[0,1]\) of the data containing a signal.
The sigmoid output \(\hat{y}\) is well suited for training a classification network, but as previously observed [21; 2], this tends to run into numerical over- and underflow issues when using it as a detection _statistic_, e.g., when measuring the receiver-operator-characteristic (ROC) of detection probability \(p_{\text{det}}\) versus false-alarm \(p_{\text{fa}}\). This can be avoided simply by dropping the final sigmoid activation when using the trained network's output as a detection statistic.
The total number of trainable parameters of the network in Table 3 is \(\sim 900\)k, and the network requires \(\sim 5\) MB of GPU memory per sample. The training was performed on NVIDIA A100-SXM4 GPUs with 40 GB of memory. The DNN was implemented in TENSORFLOW 2.0 ([22]) with the Keras API ([23]).
Note that despite this being a rather shallow five-layer network, it still contains about half the trainable parameters of the nearly 100-layer deep Inception-Resnet-v2 of [2], which is due to the fact that our network uses substantially larger kernels.
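For concreteness, a minimal Keras sketch of the layer structure in Table 3 is given below. The 'same' padding and the grouped first convolution (one group per detector) are assumptions made here because they reproduce the quoted output shapes (10, 103, 64) and (10, 26, 64) and a parameter count of roughly 900k; the original implementation may differ in such details.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_cw_dnn(n_segments=10, n_bins=1647, n_channels=4):
    """Sketch of the five-layer architecture of Table 3 (targeted ten-day CW search)."""
    inputs = tf.keras.Input(shape=(n_segments, n_bins, n_channels))
    # "1D" convolution along frequency per segment; groups=2 keeps the two detectors
    # separate (inferred from the quoted kernel depth of 2 and the ~900k parameter count)
    x = layers.Conv2D(64, kernel_size=(1, 313), strides=(1, 16), groups=2,
                      padding="same", activation="relu")(inputs)
    # combine neighbouring segments over 40 frequency units into "track"-like features
    x = layers.Conv2D(64, kernel_size=(2, 40), strides=(1, 4),
                      padding="same", activation="relu")(x)
    x = layers.Flatten()(x)                               # (10, 26, 64) -> 16640
    x = layers.Dense(32, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)    # drop sigmoid when used as a statistic
    return tf.keras.Model(inputs, outputs)
```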
### Training and Validation
For each of the targeted-search cases in Table 1 we train the same network architecture on samples containing either pure Gaussian noise or an additional signal. The training data is constructed from a fixed set of 8192 precomputed (for performance reasons) signals with
Figure 3: Input spectrogram arrays for two signals without noise at 1000 Hz: (a) sky position Sky-A featuring low Doppler broadening of the signal, and (b) sky position Sky-B exhibiting maximal Doppler broadening over the ten-day period. See Table 1 for the complete parameter definitions.
\begin{table}
\begin{tabular}{c|c} \hline \hline Layer & Output shape \\ & (T, F, C) \\ \hline \hline Input & (10, 1647, 4) \\ \hline Conv1D & \\ Kernel - (1, 313, 2) & (10, 103, 64) \\ Stride - 16 & \\ \hline Conv2D & \\ Kernel - (2, 40, 64) & (10, 26, 64) \\ Stride - (1, 4) & \\ \hline Flatten & (16640) \\ \hline Dense & (32) \\ \hline Output & (1) \\ \hline \hline \end{tabular}
\end{table}
Table 3: DNN architecture for targeted ten-day CW search: the output shape (T, F, C) of each layer corresponds to the number of bins in the (time, frequency, channels) axes, respectively. The kernel sizes of the convolutional layers are using the same convention.
randomly-chosen amplitude parameters according to the physical uniform priors \(\cos\iota\in[-1,1]\), \(\psi\in[-\pi/4,\pi/4]\) and \(\phi_{0}\in[0,2\pi]\). Each signal is added to a dynamically-generated noise realization, at a fixed matched-filtering sensitivity depth of \(\mathcal{D}^{90\%}\) as described in Sec. III.
In every epoch, the network is trained on all 8192 signals added to Gaussian noise and an equal number of pure Gaussian-noise samples, where the noise is dynamically generated for every sample. The training therefore never sees the exact same sample twice, so there can be no overfitting or memorization in the strict sense, although the finite selection of 8192 signals from the continuous distribution can still result in some bias or small overfitting.
We use a binary cross-entropy loss function, which is common practice for classification tasks, namely
\[\mathcal{L}(y,\hat{y})=\frac{1}{N}\sum_{i=1}^{N}-y^{i}\log\hat{y}^{i}-(1-y^{i}) \log(1-\hat{y}^{i}), \tag{7}\]
where \(\hat{y}^{i}\in[0,1]\) is the DNN sigmoid output for the \(i^{\text{th}}\) sample, \(y^{i}\) is the corresponding label (0 for noise and 1 for a signal), and \(N\) is the total number of samples in a batch. We use the Adam optimizer with a batch size of 128 samples for training.
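A minimal sketch of this training configuration is shown below; the learning rate and the exact noise generator are not stated here, so they appear only as placeholders (default Adam settings, a hypothetical `make_noise` callable), and the sketch reuses the `build_cw_dnn` model sketch from Sec. IV.3.

```python
import numpy as np
import tensorflow as tf

model = build_cw_dnn()                                     # sketch from Sec. IV.3 above
model.compile(optimizer=tf.keras.optimizers.Adam(),       # learning rate not specified here
              loss=tf.keras.losses.BinaryCrossentropy(),  # Eq. (7)
              metrics=[tf.keras.metrics.BinaryAccuracy()])

def training_batches(signal_bank, make_noise, batch_size=128):
    """Yield balanced batches; the noise realization is regenerated for every sample."""
    while True:
        xs, ys = [], []
        for _ in range(batch_size):
            noise = make_noise()                           # fresh Gaussian-noise spectrogram
            if np.random.rand() < 0.5:
                s = signal_bank[np.random.randint(len(signal_bank))]
                xs.append(s + noise); ys.append(1.0)       # signal + noise sample
            else:
                xs.append(noise); ys.append(0.0)           # pure-noise sample
        yield np.stack(xs), np.asarray(ys, dtype=np.float32)
```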
At every epoch, we measure the DNN detection probability \(p_{\text{det}}\) at a constant \(p_{\text{fa}}=1\%\) false-alarm level on the training dataset. Every 100 epochs, we perform a validation step, where loss and detection probability are evaluated on an independent dataset drawn from the same distribution, constructed again from a (different) set of 8192 independent precomputed signals.
The learning progress of detection probability versus training epoch and time are shown in Fig. 5 for four representative cases. The network training continues until the validation detection probability exceeds \(p_{\text{det}}\geq 89\%\). For each of the targeted-search cases, we start training from ten different random DNN weight initializations, and we use the best-performing network from each case for final testing.
These results confirm an empirical observation mentioned in Sec. III, namely the time required for the DNN
Figure 5: Training and validation detection probability \(p_{\text{det}}\) (at fixed \(p_{\text{fa}}=1\%\)) versus number of epochs and training time for the targeted-search cases (a) Sky-A@20 Hz, (b) Sky-B@20 Hz, (c) Sky-A@1000 Hz and (d) Sky-B@1000 Hz. See Table 1 for the complete parameter definitions and Table 2 for the corresponding signal bandwidths.
to achieve matched-filtering performance increases with signal bandwidth, suggesting that it is more "difficult" for the network to learn wider signals.
## V Results and Discussion
### Verifying performance on a test dataset
The close agreement observed in Fig. 5 between the DNN performance on the training and validation datasets indicates that there is no overfitting to the finite set of 8192 training signals. However, there is still potential for overfitting to the validation set during the optimization of the network hyper-parameters (i.e., learning rate, layers, kernel sizes, strides etc).
Therefore we evaluate the fully-trained DNN on a completely independent _test_ dataset. We generate new samples of Gaussian white noise and add signals at fixed matched-filtering depth \(\mathcal{D}^{90\%}\) of Eq. (5) with randomly-drawn amplitude parameters \(\cos\iota,\psi,\phi_{0}\), resulting in the final test detection probabilities \(p_{\mathrm{det}}\) at fixed false-alarm of \(p_{\mathrm{fa}}=1\%\) shown in Table 4. These results are consistent with the validation detection probability of \(p_{\mathrm{det}}\geq 89\%\) that was used as a stopping criterion for the training. There is a slight downward bias of the test results, i.e., \(\bar{p}_{\mathrm{det}}\sim 88.3\%\), which makes sense given that training was stopped as soon as \(p_{\mathrm{det}}\) exceeded 89% on the validation dataset that is subject to both finite-sampling uncertainties and biases.
### Detection efficiency versus signal depth
All results presented up to this point refer to signals at a fixed matched-filtering depth \(\mathcal{D}^{90\%}\) of Eq. (5), i.e., signals with a fixed amplitude \(h_{0}\). A valid question is therefore whether the network correctly generalizes to other signal amplitudes, as it could in principle have memorized or specialized to this particular sensitivity depth.
We measure \(p_{\mathrm{det}}\) of the trained DNN at varying signal depths \(\mathcal{D}\), commonly referred to as the _efficiency curve_, shown in Fig. 6 for the Sky-B@1000 Hz case (results for the other test cases look similar). We can see that the DNN statistic behaves very similarly to matched filtering for both weaker and stronger signals compared to the \(\mathcal{D}^{90\%}\) depth it was trained at. This confirms similar results found previously in [1, 2], namely fixing the training depth to \(\mathcal{D}^{90\%}\) does not seem to result in any over-specialization of the network.
### Approximate mapping to the \(\mathcal{F}\)-statistic
For all practical purposes, the receiver-operator-characteristic (ROC) \(p_{\mathrm{det}}(p_{\mathrm{fa}})\) of the DNN statistic appears to agree well with that of the matched-filtering \(\mathcal{F}\)-statistic [14], which is close [18] to Neyman-Pearson optimal for targeted CW searches.
We therefore expect the DNN statistic and the \(\mathcal{F}\)-statistic to be related by a monotonic function. We test this prediction by comparing the DNN statistic distributions to the known \(\chi^{2}\)-distribution of the \(\mathcal{F}\)-statistic in both the pure-noise as well as the signal+noise cases. For illustration purposes we focus again on the Sky-B@1000 Hz test case.
We generate a distribution of DNN output statistics on pure noise samples and fit a quadratic mapping to the \(\mathcal{F}\)-statistic noise distribution, namely a central \(\chi^{2}\)-distribution with four degrees of freedom. The best-fit quadratic is obtained as \(0.1x^{2}+1.9x+9.2\), which is a monotonic function in the range of the DNN statistic, and the resulting mapped noise distributions are shown in the top panel of Fig. 7.
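The fitting procedure itself is not spelled out here; one plausible way to obtain such a monotonic quadratic is to match the empirical quantiles of the DNN noise statistics to the quantiles of the central \(\chi^{2}\)-distribution with four degrees of freedom and least-squares fit a quadratic, as in the following sketch (the coefficients quoted above come from the authors' own procedure, which may differ).

```python
import numpy as np
from scipy.stats import chi2

def fit_quadratic_mapping(dnn_noise_stats):
    """Quantile-matching fit of a quadratic map from DNN statistic to chi^2_4 values."""
    s = np.sort(np.asarray(dnn_noise_stats))
    probs = (np.arange(1, len(s) + 1) - 0.5) / len(s)   # empirical CDF positions
    target = chi2.ppf(probs, df=4)                      # matching chi^2_4 quantiles
    coeffs = np.polyfit(s, target, deg=2)               # quadratic fit, highest power first
    return np.poly1d(coeffs)

# usage sketch: mapped = fit_quadratic_mapping(noise_outputs); mapped(signal_outputs)
```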
Then we apply the same mapping to the DNN statistic outputs obtained in the signal case with signals at depth \(\mathcal{D}^{90\%}\), and we compare the resulting distribution to the corresponding fixed-depth \(\mathcal{F}\)-statistic distribution (cf. [16]), shown in the bottom panel in Fig. 7. We see that we obtain reasonably good agreement between the DNN and \(\mathcal{F}\)-statistic distributions, and it seems therefore fair to say that the DNN appears to have learned
\begin{table}
\begin{tabular}{c|c|c} Frequency [Hz] & Sky-A [\%] & Sky-B [\%] \\ \hline
20 & \(89.0^{+0.8}_{-1.2}\) & \(88.5^{+0.8}_{-1.0}\) \\ \hline
100 & \(87.8^{+0.3}_{-0.1}\) & \(87.4^{+1.0}_{-1.0}\) \\ \hline
200 & \(89.0^{+0.0}_{-0.1}\) & \(89.0^{+1.0}_{-1.0}\) \\ \hline
500 & \(88.4^{+0.7}_{-1.0}\) & \(88.8^{+1.0}_{-0.9}\) \\ \hline
1000 & \(87.6^{+0.8}_{-1.1}\) & \(87.6^{+1.0}_{-1.2}\) \\ \end{tabular}
\end{table}
Table 4: Detection probabilities \(p_{\mathrm{det}}\) (at fixed \(p_{\mathrm{fa}}=1\%\)) achieved by the trained DNNs, evaluated on an independent _test_ dataset for each of the five frequencies and the two sky positions Sky-A and Sky-B, see Table 1 for the complete parameter definitions.
Figure 6: Detection probability \(p_{\mathrm{det}}\) (at fixed \(p_{\mathrm{fa}}=1\%\)) versus signal depth \(\mathcal{D}\) for the trained DNN (circles with 90% error bars) compared to matched filtering (solid line), for the benchmark case Sky-B@1000 Hz.
to compute (something close to) the \(\mathcal{F}\)-statistic for the ten-day targeted searches.
## VI Conclusions
State-of-the-art convolutional image-classification networks have proven ineffective [1, 2] for CW searches on longer durations of \(\sim 11.6\,\mathrm{days}\). We hypothesize that this failure is due to an inherent mismatch between the CW signal morphology and the priors implicit in (convolutional) image-classification network architectures.
We propose new DNN architecture design principles for CWs, which lead us to a novel convolutional DNN architecture that can effectively achieve matched-filtering sensitivity for targeted CW searches over ten days.
Future work needs to extend this study to longer durations (up to 1-2 years) and CW sources in binaries that would be subject to even larger Doppler spreads. The resulting network input sizes will become substantially larger as a result, potentially creating memory and performance bottlenecks. Furthermore, returning to wide-parameter-space searches will require scaling up the network _capacity_ in order to be able to learn large numbers of different signal shapes.
More work will therefore be required to further improve the network architecture; for example, using transformers [24, 25] for the 2D "track" processing (see Sec. IV.3) might be an interesting direction, with the potential of minimizing the network pathway combining the full signal power.
###### Acknowledgements.
This work has utilized the ATLAS computing cluster at the MPI for Gravitational Physics, Hannover, and the HPC system Raven at the Max Planck Computing and Data Facility.
|
2305.15022 | Hierarchical clustering with dot products recovers hidden tree structure | In this paper we offer a new perspective on the well established
agglomerative clustering algorithm, focusing on recovery of hierarchical
structure. We recommend a simple variant of the standard algorithm, in which
clusters are merged by maximum average dot product and not, for example, by
minimum distance or within-cluster variance. We demonstrate that the tree
output by this algorithm provides a bona fide estimate of generative
hierarchical structure in data, under a generic probabilistic graphical model.
The key technical innovations are to understand how hierarchical information in
this model translates into tree geometry which can be recovered from data, and
to characterise the benefits of simultaneously growing sample size and data
dimension. We demonstrate superior tree recovery performance with real data
over existing approaches such as UPGMA, Ward's method, and HDBSCAN. | Annie Gray, Alexander Modell, Patrick Rubin-Delanchy, Nick Whiteley | 2023-05-24T11:05:12Z | http://arxiv.org/abs/2305.15022v3 | # Hierarchical clustering with dot products recovers hidden tree structure
###### Abstract
In this paper we offer a new perspective on the well established agglomerative clustering algorithm, focusing on recovery of hierarchical structure. We recommend a simple variant of the standard algorithm, in which clusters are merged by maximum average dot product and not, for example, by minimum distance or within-cluster variance. We demonstrate that the tree output by this algorithm provides a bona fide estimate of generative hierarchical structure in data, under a generic probabilistic graphical model. The key technical innovations are to understand how hierarchical information in this model translates into tree geometry which can be recovered from data, and to characterise the benefits of simultaneously growing sample size and data dimension. We demonstrate superior tree recovery performance with real data over existing approaches such as UPGMA, Ward's method, and HDBSCAN.
## 1 Introduction
Hierarchical structure is known to occur in many natural and man-made systems [30], and the problem considered in this paper is how to recover this structure from data. Hierarchical clustering algorithms [24, 34, 35, 39] are very popular techniques which organise data into nested clusters, are used routinely by data scientists and machine learning researchers, and are easily accessible through open source software packages such as scikit-learn [36]. We focus on perhaps the most popular family of such techniques: agglomerative clustering [16], [23, Ch.3], in which clusters of data points are merged recursively.
Agglomerative clustering methods are not model-based procedures, but rather simple algorithms. Nevertheless, in this work we uncover a new perspective on agglomerative clustering by introducing a general form of generative statistical model for the data, but without assuming specific parametric families of distributions (e.g., Gaussian). In our model (section 2.1), hierarchy takes the form of a tree defining the conditional independence structure of latent variables, using elementary concepts from probabilistic graphical modelling [29, 27]. In a key innovation, we then augment this conditional independence tree to form what we call a _dendrogram_, whose geometry is related to population statistics of the data. The new insight which enables tree recovery in our setting (made precise and explained in section 3) is that _dot products between data vectors reveal heights of most recent common ancestors in the dendrogram_.
We suggest an agglomerative algorithm which merges clusters according to highest sample average dot product (section 2.2). This is in contrast to many existing approaches which quantify dissimilarity between data vectors using Euclidean distance. We also consider the case where data are preprocessed by reducing dimension using PCA. We mathematically analyse the performance of our dot product clustering algorithm and establish that under our model, with sample size \(n\) and data dimension \(p\) growing simultaneously at appropriate rates, the merge distortion [19, 26] between the algorithm output and underlying tree vanishes. In numerical examples with real and simulated data (section 4) we compare performance of our algorithm against existing methods in terms of a Kendall \(\tau_{b}\) correlation performance measure, which quantifies association between ground-truth and estimated tree structure. We examine statistical performance with and without dimension reduction by PCA, and illustrate how dot products versus Euclidean distances relate to semantic structure in ground-truth hierarchy.
**Related work.** Agglomerative clustering methods combine some dissimilarity measure with a 'linkage' function, determining which clusters are combined. Popular special cases include UPGMA [42] and
Ward's method [45], against which we make numerical comparisons (section 4). Owing to the simple observation that, in general, finding the largest dot product between data vectors is not equivalent to finding the smallest Euclidean distance, we can explain why these existing methods may not correctly recover tree structure under our model (appendix E). Popular density-based clustering methods include CURE [21], OPTICS [7], BIRCH [49] and HDBSCAN [9]. In section 4 we discuss the extent to which these methods can and cannot be compared against ours in terms of tree recovery performance.
Existing theoretical treatments of hierarchical clustering involve different mathematical problem formulations and assumptions to ours. One common setup is to assume an underlying ultrametric space whose geometry specifies the unknown tree, and/or to study tree recovery as \(n\to\infty\) with respect to a cost function, e.g., [10, 15, 41, 11, 13, 14]. An alternative problem formulation addresses recovery of the cluster tree of the probability density from which it is assumed data are sampled [40, 19, 26]. The unknown tree in our problem formulation specifies conditional independence structure, and so has a very different interpretation to that in all these cited works. Moreover, our data are vectors in \(\mathbb{R}^{p}\), and \(p\to\infty\) is a crucial aspect of our convergence arguments, but in the above cited works \(p\) plays no role or is fixed. Our definition of dendrogram is different to that in e.g. [10]: we do not require all leaf vertices to be equidistant from the root - a condition which arises as the "fixed molecular clock" hypothesis [17, 32] in phylogenetics. We also allow data to be sampled from non-leaf vertices. There is an enormous body of work on tree reconstruction methods in phylogenetics, e.g. listed at [4], but these are mostly not general-purpose solutions to the problem of inferring hierarchy. Associated theoretical convergence results are limited in scope, e.g., the famous work [17] is limited to a fixed molecular clock, five taxa and does not allow observation error.
## 2 Model and algorithm
### Statistical model, tree and dendrogram
Where possible, we use conventional terminology from the field of probabilistic graphical models, e.g., [27] to define our objects and concepts. Our model is built around an unobserved tree \(\mathcal{T}=(\mathcal{V},\mathcal{E})\), that is a directed acyclic graph with vertex and edge sets \(\mathcal{V}\) and \(\mathcal{E}\), with two properties: \(\mathcal{T}\) is connected (ignoring directions of edges), and each vertex has at most one parent, where we say \(u\) is a parent of \(v\) if there is an edge from \(u\) to \(v\). We observe data vectors \(\mathbf{Y}_{i}\in\mathbb{R}^{p}\), \(i=1,\ldots,n\), which we model as:
\[\mathbf{Y}_{i}=\mathbf{X}(Z_{i})+\mathbf{S}(Z_{i})\mathbf{E}_{i}, \tag{1}\]
comprising three independent sources of randomness:
* \(Z_{1},\ldots,Z_{n}\) are i.i.d., discrete random variables, with distribution supported on a subset of vertices \(\mathcal{Z}\subseteq\mathcal{V}\), \(|\mathcal{Z}|<\infty\);
* \(\mathbf{X}(v)\coloneqq[X_{1}(v)\,\cdots\,X_{p}(v)]^{\top}\) is an \(\mathbb{R}^{p}\)-valued random vector for each vertex \(v\in\mathcal{V}\);
* \(\mathbf{E}_{1},\ldots,\mathbf{E}_{n}\) are i.i.d, \(\mathbb{R}^{p}\)-valued random vectors. The elements of \(\mathbf{E}_{i}\) are i.i.d., zero mean and unit variance. For each \(z\in\mathcal{Z}\), \(\mathbf{S}(z)\in\mathbb{R}^{p\times p}\), is a deterministic matrix.
We assume that the tree \(\mathcal{T}\) determines two distributional properties of \(\mathbf{X}\). Firstly, \(\mathcal{T}\) is the conditional independence graph of the collection of random variables \(X_{j}\coloneqq\{X_{j}(v);v\in\mathcal{V}\}\) for each \(j=1\,\ldots,p\), that is the marginal probability density or mass function of \(X_{j}\) factorises as:
\[p(x_{j})=\prod_{v\in\mathcal{V}}p\left(x_{j}(v)|x_{j}(\mathrm{Pa}_{v})\right), \tag{2}\]
where \(\mathrm{Pa}_{v}\) denotes the parent of vertex \(v\) (however we do not necessarily assume \(X_{1},\ldots,X_{p}\) are independent or identically distributed). Secondly, for each \(j=1\,\ldots,p\), the following martingale-like property holds:
\[\mathbb{E}\left[X_{j}(v)|X_{j}(\mathrm{Pa}_{v})\right]=X_{j}(\mathrm{Pa}_{v}), \tag{3}\]
for all vertices \(v\in\mathcal{V}\) except the root. One may interpret (1) as a form of mixture model, where \(Z_{1},\ldots,Z_{n}\) are latent variables specifying which mixture components the data vectors \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\) are drawn from. As an example, \(\mathcal{Z}\) could be the set of leaf vertices of \(\mathcal{T}\), but neither our methods nor theory require this to be the case. The conditions (2) and (3) induce an additive hierarchical structure in \(\mathbf{X}\). For any
distinct vertices \(u,v\in\mathcal{V}\) and \(w\) an ancestor of both, (2) and (3) imply that given \(X_{j}(w)\), the increments \(X_{j}(u)-X_{j}(w)\) and \(X_{j}(v)-X_{j}(w)\) are conditionally independent and both conditionally mean zero.
To explain our algorithm we need to introduce the definition of a _dendrogram_, \(\mathcal{D}=(\mathcal{T},h)\), where \(h:\mathcal{V}\to\mathbb{R}_{+}\) is a function which assigns a height to each vertex of \(\mathcal{T}\), such that \(h(v)\geq h(\mathrm{Pa}_{v})\) for any vertex \(v\in\mathcal{V}\) other than the root. The term "dendrogram" is derived from the ancient Greek for "tree" and "drawing", and indeed the numerical values \(h(v)\), \(v\in\mathcal{V}\), can be used to construct a drawing of \(\mathcal{T}\) where height is measured with respect to some arbitrary baseline on the page; an example is shown in figure 1(a). With \(\langle\cdot,\cdot\rangle\) denoting the usual dot product between vectors, the function
\[\alpha(u,v)\coloneqq\frac{1}{p}\mathbb{E}\left[\langle\mathbf{X}(u),\mathbf{X }(v)\rangle\right],\quad u,v\in\mathcal{V}, \tag{4}\]
will act as a measure of affinity underlying our algorithm. The specific height function we consider is \(h(v)\coloneqq\alpha(v,v)\). The martingale property (3) ensures that this height function satisfies \(h(v)\geq h(\mathrm{Pa}_{v})\) as required, see lemma 2 in appendix C.
### Algorithm
Combining the model (1) with (4) we have \(\alpha(Z_{i},Z_{j})=p^{-1}\mathbb{E}[\langle\mathbf{Y}_{i},\mathbf{Y}_{j} \rangle|Z_{1},\ldots,Z_{n}]\) for \(i\neq j\in[n]\), \([n]\coloneqq\{1,\ldots,n\}\). The input to algorithm 1 is an estimate \(\hat{\alpha}(\cdot,\cdot)\) of all the pairwise affinities \(\alpha(Z_{i},Z_{j})\), \(i\neq j\in[n]\). We consider two approaches to estimating \(\alpha(Z_{i},Z_{j})\), with and without dimension reduction by uncentered PCA. For some \(r\leq\min\{p,n\}\), let \(\mathbf{V}\in\mathbb{R}^{p\times r}\) denote the matrix whose columns are orthonormal eigenvectors of \(\sum_{i=1}^{n}\mathbf{Y}_{i}\mathbf{Y}_{i}^{\top}\) associated with its \(r\) largest eigenvalues. Then \(\zeta_{i}\coloneqq\mathbf{V}^{\top}\mathbf{Y}_{i}\) is \(r\)-dimensional vector of principal component scores for the \(i\)th data vector. The two possibilities for \(\hat{\alpha}(\cdot,\cdot)\) we consider are:
\[\hat{\alpha}_{\mathrm{data}}(i,j)\coloneqq\frac{1}{p}\langle\mathbf{Y}_{i}, \mathbf{Y}_{j}\rangle,\qquad\hat{\alpha}_{\mathrm{pca}}(i,j)\coloneqq\frac{1}{ p}\langle\zeta_{i},\zeta_{j}\rangle. \tag{5}\]
In the case of \(\hat{\alpha}_{\mathrm{pca}}\) the dimension \(r\) must be chosen. Our theory in section 3.2 assumes that \(r\) is chosen as the rank of the matrix with entries \(\alpha(u,v),u,v\in\mathcal{Z}\), which is at most \(|\mathcal{Z}|\). In practice \(r\) usually must be chosen based on the data, we discuss this in appendix A.
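As a minimal NumPy illustration, both estimates in (5) can be computed as follows; the function and variable names are ours, and the choice of \(r\) is left to the caller.

```python
import numpy as np

def affinities(Y, r=None):
    """Pairwise affinity estimates of Eq. (5) for a data matrix Y of shape (n, p).

    With r=None this returns alpha_hat_data; with an integer r it returns
    alpha_hat_pca, using the top-r right singular vectors of Y (uncentred PCA)."""
    n, p = Y.shape
    if r is None:
        return (Y @ Y.T) / p
    # columns of V: orthonormal eigenvectors of sum_i Y_i Y_i^T = Y^T Y
    _, _, Vt = np.linalg.svd(Y, full_matrices=False)
    scores = Y @ Vt[:r].T                    # PC scores zeta_i, shape (n, r)
    return (scores @ scores.T) / p
```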
Algorithm 1 returns a dendrogram \(\hat{\mathcal{D}}=(\mathcal{T},\hat{h})\), comprising a tree \(\hat{\mathcal{T}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})\) and height function \(\hat{h}\). Each vertex in \(\hat{\mathcal{V}}\) is a subset of \([n]\), thus indexing a subset of the data vectors \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\). The leaf vertices are the singleton sets \(\{i\}\), \(i\in[n]\), corresponding to the data vectors themselves. As algorithm 1 proceeds, vertices are appended to \(\hat{\mathcal{V}}\), edges are appended to \(\hat{\mathcal{E}}\), and the domain of the function \(\hat{\alpha}(\cdot,\cdot)\) is extended as affinities between elements of \(\hat{\mathcal{V}}\) are computed. Throughout the paper we simplify notation
by writing \(\hat{\alpha}(i,j)\) as shorthand for \(\hat{\alpha}(\{i\},\{j\})\) for \(i,j\in[n]\), noting that each argument of \(\hat{\alpha}(\cdot,\cdot)\) is in fact a subset of \([n]\).
```
Input: pairwise affinities \(\hat{\alpha}(\cdot,\cdot)\) between \(n\) data points
2:Initialise partition \(P_{0}\coloneqq\{\{1\},\ldots,\{n\}\}\), vertex set \(\hat{\mathcal{V}}\coloneqq P_{0}\) and edge set \(\hat{\mathcal{E}}\) to the empty set.
3:for\(m\in\{1,\ldots,n-1\}\)do
4: Find distinct pair \(u,v\in P_{m-1}\) with largest affinity
5: Update \(P_{m-1}\) to \(P_{m}\) by merging \(u,v\) to form \(w\coloneqq u\cup v\)
6: Append vertex \(w\) to \(\hat{\mathcal{V}}\) and directed edges \(w\to u\) and \(w\to v\) to \(\hat{\mathcal{E}}\)
7: Define affinity between \(w\) and other members of \(P_{m}\) as \[\hat{\alpha}(w,\cdot)\coloneqq\frac{|u|}{|w|}\hat{\alpha}(u,\cdot)+\frac{|v|}{ |w|}\hat{\alpha}(v,\cdot),\] and \(\hat{\alpha}(w,w)\coloneqq\hat{\alpha}(u,v)\).
8:endfor
Output: Dendrogram \(\hat{\mathcal{D}}\coloneqq(\hat{\mathcal{T}},\hat{h})\), comprising tree \(\hat{\mathcal{T}}=(\hat{\mathcal{V}},\hat{\mathcal{E}})\) and heights \(\hat{h}(v)\coloneqq\hat{\alpha}(v,v)\) for \(v\in\hat{\mathcal{V}}\setminus P_{0}\), and \(\hat{h}(v)\coloneqq\max\{\hat{h}(\mathrm{Pa}_{v}),\hat{\alpha}(v,v)\}\) for \(v\in P_{0}\).
```
**Algorithm 1** Dot product hierarchical clustering
**Implementation using scikit-learn.** The presentation of algorithm 1 has been chosen to simplify its theoretical analysis, but alternative formulations of the same method may be much more computationally efficient in practice. In appendix B we outline how algorithm 1 can easily be implemented using the AgglomerativeClustering class in scikit-learn [36].
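A hedged sketch of one such implementation is given below. Since averaging preserves ordering, merging by maximum average dot product is equivalent to average-linkage merging on the dissimilarity \(c-\langle\mathbf{Y}_{i},\mathbf{Y}_{j}\rangle/p\) for any constant \(c\); the actual recipe in appendix B may differ in details such as the treatment of leaf heights.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def dot_product_tree(Y):
    """Sketch of algorithm 1 via scikit-learn; appendix B's recipe may differ in detail."""
    n, p = Y.shape
    A = (Y @ Y.T) / p                                   # alpha_hat_data affinities
    D = A.max() - A                                     # affinity -> dissimilarity
    np.fill_diagonal(D, 0.0)
    model = AgglomerativeClustering(
        n_clusters=1,                                   # merge all the way to the root
        metric="precomputed",                           # 'affinity=' in older scikit-learn
        linkage="average",
        compute_distances=True)
    model.fit(D)
    merge_affinities = A.max() - model.distances_       # merge heights back on the affinity scale
    return model.children_, merge_affinities            # binary merge tree and its heights
```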
## 3 Performance Analysis
### Merge distortion is upper bounded by affinity estimation error
In order to explain the performance of algorithm 1 we introduce the _merge height_ functions:
\[m(u,v) \coloneqq h(\text{most recent common ancestor of $u$ and $v$}), u,v\in\mathcal{V},\] \[\hat{m}(u,v) \coloneqq\hat{h}(\text{most recent common ancestor of $u$ and $v$}), u,v\in\hat{\mathcal{V}}.\]
To simplify notation we write \(\hat{m}(i,j)\) as shorthand for \(\hat{m}(\{i\},\{j\})\), for \(i,j\in[n]\). The discrepancy between any two dendrograms whose vertices are in correspondence can be quantified by _merge distortion_[19] - the maximum absolute difference in merge height across all corresponding pairs of vertices. [19] advocated merge distortion as a performance measure for cluster-tree recovery, which is different to our model-based formulation, but merge distortion turns out to be a useful and tractable performance measure in our setting too. As a preface to our main theoretical results, lemma 1 explains how the geometry of the dendrogram \(\mathcal{D}\), in terms of merge heights, is related to population statistics of our model. Defining \(d(u,v)\coloneqq h(u)-h(w)+h(v)-h(w)\) for \(u\neq v\in\mathcal{V}\), where \(w\) is the most recent common ancestor of \(u\) and \(v\), we see \(d(u,v)\) is the vertical distance on the dendrogram from \(u\) down to \(w\) then back up to \(v\), as illustrated in figure 1(a).
**Lemma 1**.: _For any two vertices \(u,v\in\mathcal{V}\),_
\[m(u,v)=\frac{1}{p}\mathbb{E}\left[\left\langle\mathbf{X}(u),\mathbf{X}(v) \right\rangle\right]=\alpha(u,v),\quad d(u,v)=\frac{1}{p}\mathbb{E}\left[ \left\|\mathbf{X}(u)-\mathbf{X}(v)\right\|^{2}\right]. \tag{6}\]
The proof is in appendix C. Considering the first two equalities in (6), it is natural to ask if the estimated affinities \(\hat{\alpha}(\cdot,\cdot)\) being close to the true affinities \(\alpha(\cdot,\cdot)\) implies a small merge distortion between \(\hat{m}(\cdot,\cdot)\) and \(m(\cdot,\cdot)\). This is the subject of our first main result, theorem 1 below. In appendix E we use the third equality in (6) to explain why popular agglomerative techniques such as UPGMA [42] and Ward's method [45] which merge clusters based on proximity in Euclidean distance may enjoy limited success under our model, but in general do not correctly recover tree structure.
Let \(b\) denote the minimum branch length of \(\mathcal{D}\), that is, \(b=\min\{h(v)-h(\mathrm{Pa}_{v})\}\), where the minimum is taken over all vertices in \(\mathcal{V}\) except the root.
**Theorem 1**.: _Let the function \(\hat{\alpha}(\cdot,\cdot)\) given as input to algorithm 1 be real-valued and symmetric but otherwise arbitrary. For any \(z_{1},\ldots,z_{n}\in\mathcal{Z}\), if_
\[\max_{i,j\in[n],i\neq j}|\alpha(z_{i},z_{j})-\hat{\alpha}(i,j)|<b/2,\]
_then the dendrogram returned by algorithm 1 satisfies_
\[\max_{i,j\in[n],i\neq j}|m(z_{i},z_{j})-\hat{m}(i,j)|\leq\max_{i,j\in[n],i\neq j }|\alpha(z_{i},z_{j})-\hat{\alpha}(i,j)|. \tag{7}\]
The proof is in appendix C.
### Affinity estimation error vanishes with increasing dimension and sample size
Our second main result, theorem 2 below, concerns the accuracy of estimating the affinities \(\alpha(\cdot,\cdot)\) using \(\hat{\alpha}_{\text{data}}\) or \(\hat{\alpha}_{\text{pca}}\) as defined in (5). We shall consider the following technical assumptions.
**A1** (Mixing across dimensions).: _For mixing coefficients \(\varphi\) satisfying \(\sum_{k\geq 1}\varphi^{1/2}(k)<\infty\) and all \(u,v\in\mathcal{Z}\), the sequence \(\{(X_{j}(u),X_{j}(v));j\geq 1\}\) is \(\varphi\)-mixing._
**A2** (Bounded moments).: _For some \(q\geq 2\), \(\sup_{j\geq 1}\max_{v\in\mathcal{Z}}\mathbb{E}[|X_{j}(v)|^{2q}]<\infty\) and \(\mathbb{E}[|\mathbf{E}_{11}|^{2q}]<\infty\), where \(\mathbf{E}_{11}\) is the first element of the vector \(\mathbf{E}_{1}\)._
**A3** (Disturbance control).: \(\max_{v\in\mathcal{Z}}\|\mathbf{S}(v)\|_{\mathrm{op}}\in O(1)\) _as \(p\to\infty\), where \(\|\cdot\|_{\mathrm{op}}\) is the spectral norm._
**A4** (PCA rank).: _The dimension \(r\) chosen in definition of \(\hat{\alpha}_{\text{pca}}\), see (5), is equal to the rank of the matrix with entries \(\alpha(u,v)\), \(u,v\in\mathcal{Z}\)._
The concept of \(\varphi\)-mixing is a classical weak-dependence condition, e.g. [18, 37]. **A1** implies that for each \(j\geq 1\), \((X_{j}(u),X_{j}(v))\) and \((X_{j+\delta}(u),X_{j+\delta}(v))\) are asymptotically independent as \(\delta\to\infty\). However, it is important to note that \(\hat{\alpha}_{\text{data}}\) and \(\hat{\alpha}_{\text{pca}}\) in (5), and hence the operation of algorithm 1 with these inputs, are invariant to permutation of the data dimensions \(j=1,\ldots,p\). Thus our analysis under **A1** only requires that there is _some_ permutation of dimensions under which \(\varphi\)-mixing holds. **A2** is a fairly mild integrability condition. **A3** allows control of magnitudes of the "disturbance" vectors \(\mathbf{Y}_{i}-\mathbf{X}(Z_{i})=\mathbf{S}(Z_{i})\mathbf{E}_{i}\). Further background and discussion of assumptions is given in appendix C.2.
**Theorem 2**.: _Assume that the model from section 2.1 satisfies **A1**-**A3** and let \(q\) be as in **A2**. Then_
\[\max_{i,j\in[n],i\neq j}|\alpha(Z_{i},Z_{j})-\hat{\alpha}_{\text{data}}(i,j)| \in O_{\mathbb{P}}\left(\frac{n^{2/q}}{\sqrt{p}}\right). \tag{8}\]
_If additionally **A1** is strengthened from \(\varphi\)-mixing to independence, \(\mathbf{S}(v)=\sigma\mathbf{I}_{p}\) for some constant \(\sigma\geq 0\) and all \(v\in\mathcal{Z}\) (in which case **A3** holds), and **A4** holds, then_
\[\max_{i,j\in[n],i\neq j}|\alpha(Z_{i},Z_{j})-\hat{\alpha}_{\text{pca}}(i,j)| \in O_{\mathbb{P}}\left(\sqrt{\frac{nr}{p}}+\sqrt{\frac{r}{n}}\right). \tag{9}\]
The proof of theorem 2 is in appendix C.2. We give an original and self-contained proof of (8). To prove (9) we use a recent uniform-concentration result for principal component scores from [46]. Overall, theorem 2 says that affinity estimation error vanishes if the dimension \(p\) grows fast enough relative to \(n\) (and \(r\) in the case of \(\hat{\alpha}_{\text{pca}}\), noting that under **A4**, \(r\leq|\mathcal{Z}|\), so it is sensible to assume \(r\) is much smaller than \(n\) and \(p\)). The r.h.s. of (8) is decreasing in \(q\) whereas the r.h.s. of (9) is not. It is an open mathematical question whether (9) can be improved in this regard; sharpening the results of Whiteley et al. [46] used in the proof of (9) seems very challenging. However, when \(q=2\), (8) gives \(O_{\mathbb{P}}\left(n/\sqrt{p}\right)\) compared to \(O_{\mathbb{P}}\left(\sqrt{nr/p}+\sqrt{r/n}\right)\) in (9), i.e. an improvement from \(n\) to \(\sqrt{nr}\) in the first term. We explore empirical performance of \(\hat{\alpha}_{\text{data}}\) versus \(\hat{\alpha}_{\text{pca}}\) in section 4.2.
### Interpretation
By combining theorems 2 and 1, we find that when \(b>0\) is constant and \(\hat{\alpha}\) is either \(\hat{\alpha}_{\text{data}}\) or \(\hat{\alpha}_{\text{pca}}\), the merge distortion
\[\max_{i,j\in[n],i\neq j}|m(Z_{i},Z_{j})-\hat{m}(i,j)| \tag{10}\]
converges to zero at rates given by the r.h.s of (8) and (9). To gain intuition into what (10) tells us about the resemblance between \(\hat{\mathcal{D}}\) and \(\mathcal{D}\), it is useful to consider an intermediate dendrogram illustrated in figure 1(b) which conveys the realized values of \(Z_{1},\dots,Z_{n}\). This dendrogram is constructed from \(\mathcal{D}\) by adding a leaf vertex corresponding to each observation \(\mathbf{Y}_{i}\), with parent \(Z_{i}\) and height \(p^{-1}\mathbb{E}[\|\mathbf{Y}_{i}\|^{2}|Z_{1},\dots,Z_{n}]=h(Z_{i})+p^{-1} \text{tr}[\mathbf{S}(Z_{i})^{\top}\mathbf{S}(Z_{i})]\), and deleting any \(v\in\mathcal{Z}\) such that \(Z_{i}\neq v\) for all \(i\in[n]\) (e.g., vertex \(c\) in figure 1(b)). The resulting merge height between the vertices corresponding to \(\mathbf{Y}_{i}\) and \(\mathbf{Y}_{j}\) is \(m(Z_{i},Z_{j})\). (10) being small implies this must be close to \(\hat{m}(i,j)\) in \(\hat{\mathcal{D}}\) as in figure 1(c). Moreover, in the case \(\hat{\alpha}=\hat{\alpha}_{\text{data}}\), the height \(\hat{h}(\{i\})\) of leaf vertex \(\{i\}\) in \(\hat{\mathcal{D}}\) is upper bounded by \(p^{-1}\|\mathbf{Y}_{i}\|^{2}\), which under our statistical assumptions is concentrated around \(p^{-1}\mathbb{E}[\|\mathbf{Y}_{i}\|^{2}|Z_{1},\dots,Z_{n}]\), i.e., the heights of the leaves in figure 1(c) approximate those of the corresponding leaves in figure 1(b).
Overall we see that \(\hat{\mathcal{D}}\) in figure 1(c) approximates the dendrogram in figure 1(b), and in turn \(\mathcal{D}\). However even if \(m(Z_{i},Z_{j})=\hat{m}(i,j)\) for all \(i,j\), the tree output from algorithm 1, \(\hat{\mathcal{T}}\), may not be isomorphic (i.e., equivalent up to relabelling of vertices) to the tree in figure 1(b); \(\hat{\mathcal{T}}\) is always binary and has \(2n-1\) vertices, whereas the tree in figure 1(b) may not be binary, depending on the underlying \(\mathcal{T}\) and the realization of \(Z_{1},\dots,Z_{n}\). This reflects the fact that merge distortion, in general, is a _pseudometric_ on dendrograms. However, if one restricts attention to specific classes of true dendrograms \(\mathcal{D}\), for instance binary trees with non-zero branch lengths, then asymptotically algorithm 1 can recover them exactly. We explain this point further in appendix C.3.
## 4 Numerical experiments
We explore the numerical performance of algorithm 1 in the setting of five data sets summarised below. The real datasets used are open source, and full details of data preparation and sources are given in appendix D. Code is available at: [https://github.com/anniegray52/dot_product_hierarchical](https://github.com/anniegray52/dot_product_hierarchical).
**Simulated data.** A simple tree structure with vertices \(\mathcal{V}=\{1,2,3,4,5,6,7,8\}\), edge set \(\mathcal{E}=\{6{\rightarrow}1,6{\rightarrow}2,6{\rightarrow}3,7{ \rightarrow}4,7{\rightarrow}5,8{\rightarrow}6,8{\rightarrow}7\}\) and \(\mathcal{Z}=\{1,2,3,4,5\}\) (the leaf vertices). \(Z_{1},\dots,Z_{n}\) are drawn from the uniform distribution on \(\mathcal{Z}\). The \(X_{j}(v)\) are Gaussian random variables, independent across \(j\). Full details of how these variables are sampled are in appendix D. The elements of \(\mathbf{E}_{i}\) are standard Gaussian, and \(\mathbf{S}(v)=\sigma\mathbf{I}_{p}\) with \(\sigma=1\).
**20 Newsgroups.** We used a random subsample of \(n=5000\) documents from the well-known 20 Newsgroups data set [28]. Each data vector corresponds to one document, capturing its \(p=12818\) Term Frequency Inverse Document Frequency features. The value of \(n\) was chosen to put us in the regime \(p\geq n\), to which our theory is relevant - see section 3.2. Some ground-truth labelling of documents is known: each document is associated with 1 of 20 newsgroup topics, organized at two hierarchical levels.
**Zebrafish gene counts.** These data comprise gene counts in zebrafish embryo cells taken from their first day of development [44]. As embryos develop, cells differentiate into various types with specialised, distinct functions, so the data are expected to exhibit tree-like structure mapping these changes. We used a subsample such that \(n=5079\) and \(p=5498\) to put us in the \(p\geq n\) regime. Each cell has two labels: the tissue that the cell is from and a subcategory of this.
**Amazon reviews.** This dataset contains customer reviews on Amazon products [1]. A random sample of \(n=5000\) is taken and each data vector corresponds to one review with \(p=5594\) Term Frequency Inverse Document Frequency features. Each product reviewed has labels which make up a three-level hierarchy of product types.
**S&P 500 stock returns.** The data are \(p=1259\) daily returns for \(n=368\) stocks which were constituents of the S&P 500 market index [2] between 2013 and 2018. The two-level hierarchy of stock sectors by industries and sub-industries follows the Global Industry Classification Standard [3].
### Comparing algorithm 1 to existing methods
We numerically compare algorithm 1 against three very popular variants of agglomerative clustering: UPGMA, Ward's method, and cosine distance combined with average linkage. These are natural comparators because they work by iteratively merging clusters in a manner similar to algorithm 1, but
using different criteria for choosing which clusters to merge. In appendix E we complement our numerical results with mathematical insights into how these methods perform under our modelling assumptions. Numerical results for other linkage functions are given in appendix D. Several popular density-based clustering methods use some hierarchical structure, such as CURE [21], OPTICS [7] and BIRCH [49], but these have limitations which prevent direct comparisons: they are not equipped with a way to simplify the structure into a tree, which is what we aim to recover, and only suggest extracting a flat partition based on a density threshold. HDBSCAN [9] is a density-based method that doesn't have these limitations, and we report numerical comparisons against it.
**Kendall \(\tau_{b}\) ranking correlation.** For real data some ground-truth hierarchical labelling may be available but ground-truth merge heights usually are not. We need a performance measure to quantitatively compare methods operating on such data. Commonly used clustering performance measures such as the Rand index [38] and others [22, 20] allow pairwise comparisons between partitions, but do not capture information about hierarchical structure. The cophenetic correlation coefficient [43] is commonly used to compare dendrograms, but relies on an assumption that points close in Euclidean distance should be considered similar, which is incompatible with our notion of dot product affinity. To overcome these obstacles we formulate a performance measure as follows. For each of \(n\) data points, we rank the other \(n-1\) data points according to the order in which they merge with it in the ground-truth hierarchy. We then compare these ground truth rankings to those obtained from a given hierarchical clustering algorithm using the Kendall \(\tau_{b}\) correlation coefficient [25]. This outputs a value in the interval \([-1,1]\), with \(-1\), \(1\) and \(0\) corresponding to negative, positive and lack of association between the ground-truth and algorithm-derived rankings. We report the mean association value across all \(n\) data points as the overall performance measure. Table 1 shows results with raw data vectors \(\mathbf{Y}_{1:n}\) or PC scores \(\zeta_{1:n}\) taken as input to the various algorithms. For all the data sets except S&P 500, algorithm 1 is found to recover hierarchy more accurately than other methods. We include the results for the S&P 500 data to give a balanced scientific view, and in appendix E we discuss why our modelling assumptions may not be appropriate for these data, thus explaining the limitations of algorithm 1.
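The following sketch illustrates how this measure could be computed, assuming the ground-truth merge order for each point has already been encoded as a numeric key; that encoding from the label hierarchy, and any tie handling, are choices not fully specified here.

```python
import numpy as np
from scipy.stats import kendalltau

def mean_kendall_tau_b(true_merge_level, est_merge_height):
    """Mean Kendall tau_b between ground-truth and algorithm-derived rankings.

    true_merge_level[i, j] : ranking key from the ground-truth hierarchy (larger means
        point j merges with point i earlier, e.g. shares a finer-level label).
    est_merge_height[i, j] : estimated merge heights m_hat(i, j), larger = earlier merge.
    Both are (n, n) arrays with matching orientation."""
    n = true_merge_level.shape[0]
    taus = []
    for i in range(n):
        others = np.delete(np.arange(n), i)
        tau, _ = kendalltau(true_merge_level[i, others], est_merge_height[i, others])
        taus.append(tau)
    return float(np.mean(taus))
```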
### Simulation study of dot product estimation with and without PCA dimension reduction
For high-dimensional data, reducing dimension with PCA prior to clustering may reduce overall computational cost. Assuming \(\zeta_{1:n}\) are obtained from, e.g., a partial SVD, in time \(O(npr)\), the time complexity of evaluating \(\hat{\alpha}_{\text{pca}}\) is \(O(npr+n^{2}r)\), versus \(O(n^{2}p)\) for \(\hat{\alpha}_{\text{data}}\), although this ignores the cost of choosing \(r\). In table 1 we see for algorithm 1, the results for input \(\mathbf{Y}_{1:n}\) are very similar to those for \(\zeta_{1:n}\). To examine this more closely and connect our findings to theorem 2, we now compare \(\hat{\alpha}_{\text{data}}\) and \(\hat{\alpha}_{\text{pca}}\) as estimates of \(\alpha\) through simulation. The model is as described at the start of section 4. In figure 2(a)-(b), we see that when \(p\) is growing with \(n\), and when \(p\) is constant, the \(\hat{\alpha}_{\text{pca}}\) error is very slightly smaller than the \(\hat{\alpha}_{\text{data}}\) error. By contrast, in figure 2(c), when \(n=10\) is fixed, we see that the \(\hat{\alpha}_{\text{pca}}\) error is larger than
\begin{table}
\begin{tabular}{c c|c c c c} \hline \hline Data & Input & Dot product & Cosine distance & HDBSCAN & UPGMA & Ward \\ \hline \multirow{2}{*}{Newsgroups} & \(\mathbf{Y}_{1:n}\) & 0.26 (2.9) & 0.26 (2.9) & -0.010 (0.65) & 0.23 (2.7) & 0.18 (2.5) \\ & \(\zeta_{1:n}\) & 0.24 (2.6) & 0.18 (1.9) & -0.016 (1.9) & 0.038 (1.5) & 0.19 (2.7) \\ \hline \multirow{2}{*}{Zebrafish} & \(\mathbf{Y}_{1:n}\) & 0.34 (3.4) & 0.25 (3.1) & 0.023 (2.9) & 0.27 (3.2) & 0.30 (3.8) \\ & \(\zeta_{1:n}\) & 0.34 (3.4) & 0.27 (3.2) & 0.11 (2.8) & 0.16 (2.5) & 0.29 (3.8) \\ \hline \multirow{2}{*}{Reviews} & \(\mathbf{Y}_{1:n}\) & 0.15 (2.5) & 0.12 (1.9) & 0.014 (1.1) & 0.070 (1.5) & 0.10 (1.8) \\ & \(\zeta_{1:n}\) & 0.14 (2.4) & 0.14 (2.4) & -0.0085 (0.78) & 0.14 (2.6) & 0.12 (2.4) \\ \hline \multirow{2}{*}{S\&P 500} & \(\mathbf{Y}_{1:n}\) & 0.34 (10) & 0.34 (10) & 0.14 (9.3) & 0.34 (1) & 0.35 (10) \\ & \(\zeta_{1:n}\) & 0.36 (9.4) & 0.42 (11) & 0.33 (13) & 0.39 (11) & 0.39 (11) \\ \hline \multirow{2}{*}{Simulated} & \(\mathbf{Y}_{1:n}\) & 0.86 (1) & 0.81 (2) & 0.52 (8) & 0.52 (8) & 0.52 (8) \\ & \(\zeta_{1:n}\) & 0.86 (1) & 0.81 (2) & 0.52 (8) & 0.52 (8) & 0.52 (8) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Kendall \(\tau_{b}\) ranking performance measure. For the dot product method, i.e., algorithm 1, \(\mathbf{Y}_{1:n}\) as input corresponds to using \(\hat{\alpha}_{\text{data}}\), and \(\zeta_{1:n}\) corresponds to \(\hat{\alpha}_{\text{pca}}\). The mean Kendall \(\tau_{b}\) correlation coefficient is reported alongside the standard error (numerical value shown is the standard error\(\times 10^{3}\)).
that for \(\hat{\alpha}_{\text{data}}\). This inferior performance of \(\hat{\alpha}_{\text{pca}}\) for very small and fixed \(n\) is explained by \(n\) appearing in the denominator of the second term in the rate \(O_{\mathbb{P}}(\sqrt{nr/p}+\sqrt{r/n})\) for \(\hat{\alpha}_{\text{pca}}\) in theorem 2 versus \(n\) appearing only in the numerator of \(O_{\mathbb{P}}(n^{2/q}/\sqrt{p})\) for \(\hat{\alpha}_{\text{data}}\). Since it is Gaussian, this simulation model has finite exponential-of-quadratic moments, which is a much stronger condition than **A2**; we conjecture the convergence rate in this Gaussian case is \(O_{\mathbb{P}}(\sqrt{\log n/p})\) for \(\hat{\alpha}_{\text{data}}\), which would be consistent with figure 2(a). These numerical results seem to suggest the rate for \(\hat{\alpha}_{\text{pca}}\) is similar, thus the second result of theorem 2 may not be sharp.
### Comparing dot product affinities and Euclidean distances for the 20 Newsgroups data
In this section we expand on the results in table 1 for the 20 Newsgroups data, by exploring how inter-topic and intra-topic dot product affinities and Euclidean distances relate to ground-truth labels. Most existing agglomerative clustering techniques quantify dissimilarity using Euclidean distance. To compare dot products and Euclidean distances, figures 3(a)-(b) show, for each topic, the top five topics with the largest average dot product and smallest average Euclidean distance respectively. We see that clustering of semantically similar topic classes is apparent when using dot products but not when using Euclidean distance.
For one topic ('comp.windows.x') the above plots are expanded in figures 3(c)-(d) to show the average dot products and average Euclidean distances to all other topics. Four out of the five topics with the largest dot product affinity belong to the same 'comp' topic class and the other is the semantically similar 'sci.crypt' topic, whereas the other topics in the same 'comp' class are considered dissimilar in terms of Euclidean distance.
In order to display visually compact estimated dendrograms, we applied algorithm 1 and UPGMA in a semi-supervised setting where each topic is assigned its own PC score, taken to be the average of the PC scores of the documents in that topic, and then the algorithms are applied to cluster the topics. The results are shown in figures 3(e)-(f) (for ease of presentation, leaf vertex 'heights' are fixed to be equal).
## 5 Limitations and opportunities
Our algorithm is motivated by modelling assumptions. If these assumptions are not appropriate for the data at hand, then the algorithm cannot be expected to perform well. A notable limitation of our model is that \(\alpha(u,v)\geq 0\) for all \(u,v\in\mathcal{V}\) (see lemma 3 in appendix C). This is an inappropriate assumption when there are strong negative cross-correlations between some pairs of data vectors, and may explain why our algorithm has inferior performance on the S&P 500 data in table 1. Further discussion is given in appendix E. A criticism of agglomerative clustering algorithms in their basic form is that their computational cost scales faster than \(O(n^{2})\). Approximations to standard agglomerative methods which improve computational scalability have been proposed [31, 5, 33]. Future research could investigate analogous approximations and speed-up of our method. Fairness in hierarchical clustering has been
recently studied in cost function-based settings by [6] and in greedy algorithm settings by [12]. Future work could investigate versions of our algorithm which incorporate fairness measures.
Figure 3: Analysis of the 20 Newsgroups data. Marker shapes correspond to newsgroup classes and marker colours correspond to topics within classes. The first/second columns show results for dot products/Euclidean distances respectively. First row: for each topic (\(x\)-axis), the affinity/distance (\(y\)-axis) to the top five best-matching topics, calculated using average linkage of PC scores between documents within topics. Second row: average affinity/distance between documents labelled ‘comp.windows.x’ and all other topics. Third row: dendrograms output from algorithm 1 and UPGMA applied to cluster topics. |
2303.09785 | ABAW : Facial Expression Recognition in the wild | The fifth Affective Behavior Analysis in-the-wild (ABAW) competition has
multiple challenges such as Valence-Arousal Estimation Challenge, Expression
Classification Challenge, Action Unit Detection Challenge, Emotional Reaction
Intensity Estimation Challenge. In this paper we have dealt only expression
classification challenge using multiple approaches such as fully supervised,
semi-supervised and noisy label approach. Our approach using noise aware model
has performed better than baseline model by 10.46% and semi supervised model
has performed better than baseline model by 9.38% and the fully supervised
model has performed better than the baseline by 9.34% | Darshan Gera, Badveeti Naveen Siva Kumar, Bobbili Veerendra Raj Kumar, S Balasubramanian | 2023-03-17T06:01:04Z | http://arxiv.org/abs/2303.09785v1 | # ABAW : Facial Expression Recognition in the wild
###### Abstract
The fifth Affective Behavior Analysis in-the-wild (ABAW) competition has multiple challenges such as the Valence-Arousal Estimation Challenge, Expression Classification Challenge, Action Unit Detection Challenge, and Emotional Reaction Intensity Estimation Challenge. In this paper we deal only with the expression classification challenge, using multiple approaches: fully supervised, semi-supervised and noisy-label learning. Our noise-aware model performs better than the baseline model by 10.46%, the semi-supervised model performs better than the baseline by 9.38%, and the fully supervised model performs better than the baseline by 9.34%.
Facial Expression Recognition Aff-Wild2 Semi-supervised Learning Noisy label approach Complementary label
## 1 Introduction
Facial expression recognition (FER) is a rapidly growing field of research that has become increasingly important in recent years. The ability to accurately detect and interpret human emotions based on facial expressions has a wide range of potential applications, from improving human-computer interaction to enhancing mental health diagnosis and treatment. To obtain models that are independent of demographic features such as age, gender and region, we need to train them on real-world in-the-wild datasets such as RAF-DB, AffectNet and Aff-Wild2 [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. These in-the-wild datasets pose multiple challenges such as variation in illumination, variation in pose, blur, etc. In this paper we address this problem with multiple approaches: fully supervised, semi-supervised and noisy-label learning.
## 2 Method
In this section, we present our solution to the Expression (Expr) Classification Challenge at the 5th Affective Behavior Analysis in-the-wild (ABAW) Competition.
### Baseline
The white paper [11] released by the organisers of the ABAW competition provides as baseline a VGG16 network with fixed convolutional weights from a checkpoint pre-trained on the VGGFACE dataset for feature extraction; the last three fully connected layers are trainable, and the output layer is equipped with a softmax activation function that gives the predictions on the 8 expression classes.
### Fully Supervised approach with finetuning
The approach taken in the baseline model of [11] is to use a pre-trained checkpoint as a fixed feature extractor. We improve on this by using ResNet-18 [18] as the basenet, initialised with weights from a ResNet-18 model trained on the MS-Celeb [19] face recognition dataset; this feature extractor is fine-tuned on the current task of facial expression recognition. The basenet is followed by a dropout layer and a fully connected layer equipped with softmax activation, which learns to predict the 8 expression classes considered as part of this challenge.
To train this model, images were augmented using horizontal flips and random crops, and were resized to 224*224*3 before being fed to the model to obtain predictions. Cross entropy (1) was used as the loss function. We used the Adam optimizer with learning rate 0.0005 and weight decay \(e^{-4}\).
\[L_{CE}=-\sum_{c=1}^{8}\tilde{y}_{c}\log\big(p^{c}(x_{i},\theta)\big) \tag{1}\]
Here \(\theta\) refers to the model parameters, \(p^{c}\) refers to the predicted probability of class \(c\), and \(\tilde{y}_{c}\) refers to the ground-truth value of that class for a sample.
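A minimal PyTorch sketch of this model and training setup is given below; the MS-Celeb checkpoint path, the dropout rate, the crop padding and the reading of the weight decay as 1e-4 are assumptions for illustration rather than details stated in the text, and newer/older torchvision versions differ in how pretrained weights are requested.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

class FERModel(nn.Module):
    def __init__(self, num_classes=8, p_drop=0.5):           # dropout rate assumed
        super().__init__()
        backbone = models.resnet18(weights=None)              # older API: pretrained=False
        # MS-Celeb weights are not shipped with torchvision; a local checkpoint would be
        # loaded here, e.g. backbone.load_state_dict(torch.load("msceleb_resnet18.pth"),
        # strict=False)  -- the file name is hypothetical.
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop original fc
        self.dropout = nn.Dropout(p_drop)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.fc(self.dropout(f))                       # logits; softmax inside the loss

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(224, padding=8),                    # padding value assumed
    transforms.ToTensor(),
])

model = FERModel()
criterion = nn.CrossEntropyLoss()                             # Eq. (1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005,
                             weight_decay=1e-4)               # text states e^{-4}; 1e-4 assumed
```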
### Semi-Supervised learning with complementary labels
There are a total of 1089929 images in the train set, but nearly \(50\%\) of this data, precisely 502970 images, have the invalid label (-1) and therefore do not belong to any of the eight classes considered as part of the challenge. This leaves 586959 images as actual training images. To improve upon the performance of the fully supervised approach, we can treat these invalid images as unlabeled images and make use of semi-supervised learning, which benefits from unlabeled data. We are motivated by a recent work in semi-supervised learning named MutexMatch [20], which effectively uses unlabeled data to improve the overall performance of the model in the limited-label setting.
MutexMatch uses a fixed threshold to divide unlabeled samples into high-confidence and low-confidence ones. Unlike other semi-supervised methods, which focus on utilizing the high-confidence samples in various ways, MutexMatch also uses the low-confidence samples to predict negative labels, a simpler goal, by means of a True-Negative Classifier (TNC).
As in general approaches to semi-supervised learning, it uses a True-Positive Classifier (TPC) to classify images into the considered set of classes. A supervised loss \(L_{sup}\) (Eq. 1) on model predictions and ground-truth labels helps the network learn features and weights for classification. Once the model has learnt from the labeled data, it is used to predict pseudo-labels on the unlabeled data via the True-Positive Classifier. A fixed threshold separates unlabeled samples into confident and non-confident samples, and predictions on confident samples are used as pseudo-labels, allowing the model to exploit unlabeled data.
To learn pseudo-labels effectively, a pseudo-label loss \(L_{p}\) is introduced, defined as the cross entropy (Eq. 1) between the model's predictions on weak and strong augmentations of high-confidence samples. Two more losses, \(L_{sep}\) and \(L_{n}\), involve the complementary-label predictions of the True-Negative Classifier on low-confidence samples. \(L_{sep}\) is the cross entropy between the TNC prediction on the weak augmentation and the class with the lowest confidence on the weakly augmented image as predicted by the TPC. \(L_{n}\) (Eq. 3) is defined only on the low-confidence samples and is a negative consistency loss between the TNC predictions on weak and strong augmented versions of an image over the top-k complementary predicted classes.
#### 2.3.1 Loss functions
\[L_{p}=\frac{1}{\mu_{B}}\sum_{n=1}^{\mu_{B}}1(max(p_{n}^{w})\geq\tau)H(\tilde{p}_{n}^{w},\tilde{p}_{n}^{s}) \tag{2}\]
Here \(\mu_{B}\) is the number of unlabeled images in a batch, \(H\) is the cross entropy function as in Eq. (1), \(p_{n}^{w}\) is the TPC prediction on the weakly augmented image, \(\tilde{p}_{n}^{s}\) is the TPC prediction on the strongly augmented image, and \(\tilde{p}_{n}^{w}\) is \(argmax(p_{n}^{w})\).
\[L_{n}=\frac{1}{\mu_{B}}\sum_{n=1}^{\mu_{B}}1(max(p_{n}^{w})<\tau)(-\frac{1}{k} \sum_{i=1}^{C}g_{n,(i)}r_{n,(i)}^{w}log(r_{n,(i)}^{s})) \tag{3}\]
Here \(\mu_{B}\) and \(p_{n}^{w}\) are as defined above, \(r_{n,(i)}^{w}\) is the \(i^{th}\) probability component of the TNC prediction on the weakly augmented image, \(r_{n,(i)}^{s}\) is the \(i^{th}\) probability component of the TNC prediction on the strongly augmented image, and \(g_{n,(i)}\) is a mask which selects the classes that are in the top-k largest probability components of the complementary label prediction.
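A schematic PyTorch version of the two unlabeled-data losses \(L_{p}\) (Eq. 2) and \(L_{n}\) (Eq. 3) is sketched below. The TPC/TNC heads, the fixed threshold \(\tau\), and \(k\) are illustrative assumptions, and the TNC is taken to output a probability vector over the same 8 classes.

```python
# Hypothetical sketch of the MutexMatch-style unlabeled losses; not the authors' code.
import torch
import torch.nn.functional as F

def unlabeled_losses(tpc_w, tpc_s, tnc_w, tnc_s, tau=0.95, k=4):
    """tpc_w/tpc_s: TPC logits on weak/strong views, shape (B, C);
       tnc_w/tnc_s: TNC logits on weak/strong views, shape (B, C)."""
    p_w = torch.softmax(tpc_w, dim=1)
    conf, pseudo = p_w.max(dim=1)                 # confidence and pseudo-label
    high, low = conf >= tau, conf < tau

    # L_p (Eq. 2): cross entropy between pseudo-labels and strong-view TPC predictions
    L_p = F.cross_entropy(tpc_s[high], pseudo[high]) if high.any() else tpc_s.new_zeros(())

    # L_n (Eq. 3): negative consistency of the TNC over the top-k complementary classes
    r_w, r_s = torch.softmax(tnc_w, dim=1), torch.softmax(tnc_s, dim=1)
    topk = r_w.topk(k, dim=1).indices
    g = torch.zeros_like(r_w).scatter_(1, topk, 1.0)      # mask g_{n,(i)}
    per_sample = -(g * r_w * torch.log(r_s + 1e-8)).sum(dim=1) / k
    L_n = per_sample[low].mean() if low.any() else tpc_s.new_zeros(())
    return L_p, L_n
```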
Inspired by [21, 22, 23], which use a dynamic adaptive threshold (Eq. 4) that caters to the inter- and intra-class differences that exist in facial expressions, we use a dynamic adaptive threshold to divide the unlabeled samples into confident and non-confident ones. This class-adaptive threshold is calculated from the model's performance on the train set and is dynamically scaled up as training progresses.
\[T^{c}=\frac{\beta*(\frac{1}{N^{s}}\sum_{i=1}^{N^{s}}\delta_{i}^{c}*p_{i})}{1+ \gamma^{-ep}}\quad,\quad\mathrm{where} \tag{4}\]
\[\delta_{i}^{c}=\left\{\begin{array}{ll}1\quad\mathrm{if}\quad\tilde{y}_{i}= c,\\ \\ 0\quad\mathrm{otherwise}.\end{array}\right.\]
The values \(\beta=0.95\) and \(\gamma=e\) are taken from [21].
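A minimal sketch of the class-adaptive threshold of Eq. (4) follows; we assume here that \(N^{s}\) counts the labeled samples of class \(c\), that \(p_{i}\) is the predicted probability of the ground-truth class, and that \(ep\) is the epoch index.

```python
# Hypothetical sketch of the dynamic adaptive threshold T^c of Eq. (4).
import numpy as np

def dynamic_threshold(probs, labels, epoch, num_classes=8, beta=0.95, gamma=np.e):
    """probs: (N, C) predicted probabilities on labeled samples; labels: (N,) class ids."""
    gt_probs = probs[np.arange(len(labels)), labels]     # p_i at the ground-truth class
    thresholds = np.zeros(num_classes)
    for c in range(num_classes):
        mask = labels == c                               # delta_i^c
        if mask.any():
            thresholds[c] = beta * gt_probs[mask].mean()
    return thresholds / (1.0 + gamma ** (-epoch))        # scaled up as training progresses
```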
### Noise aware model
During the training of the fully supervised model, we observed that accuracy and F1-score were on average 98.7% and 98.2% respectively on the training set, whereas on unseen data the average accuracy was 52.6% and the best F1-score was 32.35%. This shows that the model does not generalize well to unseen data. In general, models fail to generalize well for the following reasons:
* Capacity of the model: If the capacity of the model is low, then the model underfits the data, leading to poor generalization. This could be solved by using a model with more capacity, but that cannot be the issue here, since the accuracy and F1-score on the training set are very high, implying that the model is overfitting.
* Distribution change: If there is a mismatch between the distribution of the training data and that of the unseen validation data, then the model performs poorly on unseen data. In general, however, we treat the training data and unseen data as sampled from the same distribution.
* Presence of noise: If there is noise in the labels of the dataset and the model overfits the training data, then the model cannot generalize well.
Assuming that the labels are noisy, we propose a noise aware model to deal with the noisy label problem.
This model has a ResNet-18 backbone followed by a fully connected layer with softmax activation to predict the 8 classes of the expression classification task. We use pretrained weights from a ResNet-18 model trained on the AffectNet dataset. In this model, we use two different augmentations of each image for consistency. MSE loss (Eq. 6) is used for consistency and weighted cross entropy loss (Eq. 5) is used as the supervised loss. The weights are learnt by the model as in the SCN paper [24]. Not all samples are sent to the supervised loss: we remove the samples whose prediction probabilities fail to exceed the dynamic adaptive threshold [25].
We use a dynamic adaptive threshold in every epoch to tackle inter- and intra-class differences in the prediction probabilities of expression classification. The dynamic adaptive threshold is calculated by taking the class-wise mean of all the prediction probabilities in every batch. For a given class \(i\), we treat the samples whose ground-truth prediction probabilities are greater than the dynamic adaptive threshold for class \(i\) as clean, and the samples that fail this test as noisy. Only clean samples are used for the supervised loss, while all samples are used for the consistency loss, where we force consistency between the attention maps of the weakly augmented and strongly augmented images.
The attention maps [26] are calculated as follows: we extract the feature maps from the second-to-last layer of the backbone and the weights from the fully connected layer, and multiply the weights with the feature maps to obtain the attention maps for every class.
\[L_{WCE}=-\frac{1}{N}\sum_{i=1}^{N}\log\left(\frac{e^{\alpha_{i}W_{y_{i}}^{T}x_{i}}}{\sum_{j=1}^{C}e^{\alpha_{i}W_{j}^{T}x_{i}}}\right) \tag{5}\]
\[L_{MSE}=\frac{1}{NLHW}\sum_{i=1}^{N}\sum_{j=1}^{L}\|AM_{ij}-AM_{ij}^{\prime}\|_{2}^{2} \tag{6}\]
We used Adam optimizer with 0.0001 as learning rate on all the model parameters.
#### 2.4.1 Loss functions
We use the weighted cross entropy loss (Eq. 5) as the supervised loss, where \(N\) is the number of samples in the batch, \(C\) is the number of classes, and \(\alpha_{i}\) are the learnt weights. For the consistency loss we use the MSE loss (Eq. 6), where \(N\) is the number of samples in the batch, \(L\) is the number of feature maps, \(H\) and \(W\) are the height and width of the feature maps, \(AM\) denotes the attention maps of the weakly augmented image, and \(AM^{\prime}\) denotes the attention maps of the strongly augmented, flipped image.
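The sketch below illustrates one way the noise aware training step could be assembled: CAM-style attention maps from the second-to-last feature maps and the fully connected weights, a per-batch class-wise threshold for the clean/noisy split, weighted cross entropy on clean samples, and MSE consistency on all samples. The interfaces and the handling of the learnt weights \(\alpha_{i}\) are simplifying assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the noise aware losses; not the authors' implementation.
import torch
import torch.nn.functional as F

def attention_maps(feature_maps, fc_weight):
    """feature_maps: (B, D, H, W) from the second-to-last layer;
       fc_weight: (C, D) weights of the final fully connected layer.
       Returns class attention maps of shape (B, C, H, W)."""
    return torch.einsum("bdhw,cd->bchw", feature_maps, fc_weight)

def noise_aware_losses(logits_w, feats_w, feats_s, fc_weight, labels, alpha):
    probs = torch.softmax(logits_w, dim=1)
    thresholds = probs.mean(dim=0)                        # class-wise batch mean
    gt_probs = probs[torch.arange(len(labels)), labels]
    clean = gt_probs >= thresholds[labels]                # clean vs noisy split

    # supervised loss: weighted cross entropy (Eq. 5) on clean samples only
    weighted_logits = alpha.unsqueeze(1) * logits_w       # alpha_i scales the logits
    ce = F.cross_entropy(weighted_logits, labels, reduction="none")
    L_sup = ce[clean].mean() if clean.any() else logits_w.new_zeros(())

    # consistency loss: MSE (Eq. 6) between attention maps of the two views,
    # flipping the strong (horizontally flipped) view back into alignment
    am_w = attention_maps(feats_w, fc_weight)
    am_s = attention_maps(feats_s, fc_weight).flip(-1)
    L_cons = F.mse_loss(am_w, am_s)
    return L_sup, L_cons
```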
### Problem formulation
Let \(D=\{(x_{i},\tilde{y}_{i})\}_{i=1}^{N}\) be a dataset of N samples, where \(x_{i}\) is the \(i^{th}\) image and \(\tilde{y}_{i}\) represents the expression class \(y_{i}^{Exp}\) of the \(i^{th}\) image. The backbone network is parameterized by \(\theta\) (ResNet-18 [18] pre-trained on different datasets, such as RAF-DB [27] and AffectNet [28], for the different approaches). We denote by \(x_{w}\) the weakly augmented image and by \(x_{s}\) the strongly augmented image, and \(P_{Exp}\) represents the probability distribution predicted by the expression classifier. Weak augmentations include random cropping with padding and horizontal flipping of the input image. Strong augmentations include the weak augmentations along with RandAugment [29].
## 3 Dataset
### Dataset
The s-AffWild2 [17] database is a static version of the Aff-Wild2 database and contains a total of 1,110,367 images for training and 453,535 images for validation. Of the training set, 502,970 images are to be disregarded and 20,438 images are not present although their image paths are given. In the validation data, 279,749 images are valid; the remainder carry -1 as label and should be disregarded.
### Implementation Details
The performance measure for the expression classification problem is the average F1 score over all 8 classes. The F1 score is the harmonic mean of recall (i.e., the number of positive-class images correctly identified out of all true positive-class images) and precision (i.e., the number of positive-class images correctly identified out of all images predicted as positive). The F1 score takes values in the range [0, 1]; here we report it as a percentage. The F1 score is defined as:
\[F_{1}=\frac{2*precision*recall}{precision+recall} \tag{7}\]
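For reference, the challenge metric (the macro-averaged F1 over the 8 classes, reported as a percentage) can be computed with scikit-learn:

```python
# Macro-averaged F1 over the 8 expression classes, as a percentage.
from sklearn.metrics import f1_score

def challenge_score(y_true, y_pred):
    return 100.0 * f1_score(y_true, y_pred, average="macro", labels=list(range(8)))
```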
## 4 Results
We report our results on the official validation set of the ABAW 2023 Challenge [11] in Table 1. Our best method achieves an overall score of 33.46% on the validation set, a significant improvement over the baseline.
## 5 Conclusions
In this paper, we presented multiple methods. First, the fully supervised model gave an F1 score greater than that of the baseline model [11] by 9.34%. Then, to overcome the limitations of the fully supervised model, we proposed semi-supervised learning with complementary labels and a noise aware model, which perform better than the baseline by margins of 9.38% and 10.46% respectively.
\begin{table}
\begin{tabular}{c|c} \hline Method & Exp-F1 score \\ \hline \hline Baseline [11] & _23_ \\ Fully supervised model & _32.34_ \\ Semi-Supervised learning with complementary labels & _32.82_ \\ Noise aware model & _33.46_ \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison on Aff-Wild2 validation set
## Acknowledgments
We dedicate this work to Our Guru and Guide Bhagawan Sri Sathya Sai Baba, Divine Founder Chancellor of Sri Sathya Sai Institute of Higher Learning, Prasanthi Nilayam, Andhra Pradesh, India.
|
2307.14545 | Identifiability and Falsifiability: Two Challenges for Bayesian Model
Expansion | We study the identifiability of model parameters and falsifiability of model
predictions under conditions of model expansion in a Bayesian setting. We
present results and examples suggesting a tendency for identifiability and
falsifiability to decrease in this context and for the severity of these
problems to trade-off against one another. Additionally, we present two
extended examples that demonstrate how these difficulties can be partially
overcome by inferential methods that leverage the joint structure of the
posterior distribution. | Collin Cademartori | 2023-07-26T23:48:43Z | http://arxiv.org/abs/2307.14545v1 | # Identifiability and Falsifiability: Two Challenges for Bayesian Model Expansion
###### Abstract
We study the identifiability of model parameters and falsifiability of model predictions under conditions of model expansion in a Bayesian setting. We present results and examples suggesting a tendency for identifiability and falsifiability to decrease in this context and for the severity of these problems to trade-off against one another. Additionally, we present two extended examples that demonstrate how these difficulties can be partially overcome by inferential methods that leverage the joint structure of the posterior distribution.
## 1 Introduction
In this work we connect the process of (Bayesian) model expansion to two challenges for the interpretation and evaluation of statistical models, namely:
* the ability of the model to support sufficiently precise inferences about parameters of interest, and
* the readiness of the model to reveal deficiencies in its fit to the observed data.
These general concepts can be made precise in different ways. In practice, poor identifiability can manifest as marginal posterior distributions that are too wide to support substantively interesting conclusions about quantities of interest. Likewise, poor falsifiability can result in reduced power for tests of model fitness compared to nearby alternatives. In the Bayesian context, we will argue that identifiability can be quantified using the mutual information \(\mathbf{I}\left(\boldsymbol{\theta},\mathbf{y}\right)\), and that falsifiability can be quantified by the conditional mutual information \(\mathbf{I}\left(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y}\right)\) - quantities from information theory which we will discuss in more detail in the following sections.
Model criticism has long been recognized as an essential component of applied statistical workflow, and this process commonly creates a need to expand our models to capture a more diverse collection of data behaviors [4, 10, 26]. However, this process is not without challenges, as higher dimensional models can exhibit more complex posterior distributions which frustrate simple conclusions. Our main result quantifies two of these challenges by showing that under appropriate classes of model expansion, there exist bounds on these quantities which exhibit two important properties: (i) a bias
towards reducing both falsifiability _and_ identifiability as the dimension of the expanded model grows and (ii) a tradeoff whereby model expansions which avoid reducing one of identifiability and falsifiability are in some sense more likely to reduce the other.
While we do not expect the behavior of bounds (which may be quite loose) to translate directly or universally to individual models and datasets, these two properties of our bounds qualitatively match the patterns that we observe in both simple cases where direct calculations are possible and in more complex examples with simulated and real data. We thus view the main contribution of our result as conceptually uniting and generalizing patterns observed in particular cases. For example:
1. The literature on Bayesian sparse regression has demonstrated that the identification problems inherent to high-dimensional regression problems can often be alleviated by imposing certain hierarchical priors on the coefficients (e.g. horseshoe or normal scale-mixture priors) [20, 19, 21]. Critically, these priors work by encoding dependence between the coefficients, and thus do not require the addition of prior information marginally to be effective. The tradeoff between identifiability and falsifiability we observe in our main result breaks down when the prior encodes enough dependence between the parameters. This suggests that the kind of dependence encoding that can resolve identification problems in regression models may be a good strategy for addressing identifiability deficits more generally.
2. The posterior predictive \(p\)-value has been criticized in the model checking literature as being conservative or under-powered [2, 23, 31]. Because these criticisms have hinged on frequency properties of the \(p\)-value, some Bayesians have responded by pointing out that the posterior predictive \(p\)-value is interpretable without reference to its distribution under frequentist replications. We will argue that our notion of falsifiability is directly linked to a general concept of power which does not require any reference to frequentist considerations. Our main result suggests that the risk of such conservativity problems is directly linked to the process of model expansion, but it also motivates a generalization of the posterior predictive \(p\)-value which we show is capable of resolving some of the practical problems this conservativity causes.
Our overall conclusion is thus both negative and positive. On one hand, we believe that our main result suggests a real tension exists between some of the basic goals of applied modeling in the context of iterative model expansion. On the other hand, we do not believe our result militates against successful model expansion in general. Rather, by quantifying some features of this tradeoff, our result points towards possible tools which we believe can form the basis of an expansion-ready statistical methodology.
We illustrate the basic shape of this tradeoff with an extremely simple regression example to establish intuition. Suppose we have only two observations \((y_{1},y_{2})\) and known measurement variance \(\sigma^{2}=1\). In our first model, we have one predictor \(\mathbf{x}_{1}=(0,1)\) with coefficient \(\beta_{1}\). Assigning a normal prior, the resulting model is
\[y_{j}\mid\beta_{1}\sim\text{normal}\left(\beta_{1}x_{1j},1\right)\text{ for }j=1,2,\quad\beta_{1}\sim\text{normal}\left(0,\sigma_{b}\right), \tag{1}\]
where the hyperparameter \(\sigma_{b}\) is taken large so that the prior is weakly informative. We then expand this model by adding a second predictor \(\mathbf{x}_{2}\) with \(\|\mathbf{x}_{2}\|=1\) and with coefficient \(\beta_{2}\). Assuming \(\beta_{2}\) is a priori independent of \(\beta_{1}\) and assigning an identical
marginal prior, we get
\[y_{j}\mid\beta_{1},\beta_{2}\sim\text{normal}\left(\beta_{1}x_{1j}+\beta_{2}x_{2j},1\right)\text{ for }j=1,2,\quad\beta_{1},\beta_{2}\overset{iid}{\sim}\text{normal}\left(0, \sigma_{b}\right). \tag{2}\]
We consider five levels of nonnegative correlation between the predictors \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), which are plotted in the top row of Figure 1. To assess the effect of adding a second predictor on identifiability, we compare the marginal posterior of \(\beta_{1}\) to its prior, plotted in the second row of Figure 1.
As we would expect, as predictor correlation increases, the identification of the coefficient \(\beta_{1}\) decreases.
It is less obvious how we should assess the falsifiability of the model. We argue in Section 3 that falsifiability is connected to a measure of posterior confidence about the true data generating process, expressed as a distribution over independent, replicated data \(\mathbf{y}_{\text{rep}}\).
In particular, we will argue that falsifiability tends to decrease as the sampling distributions \(p(\mathbf{y}_{\text{rep}}\mid\boldsymbol{\beta})\) and the posterior predictive distribution \(p(\mathbf{y}_{\text{rep}}\mid\mathbf{y})=\int p\left(\mathbf{y}_{\text{rep}}\mid\boldsymbol{\beta}\right)p\left(\boldsymbol{\beta}\mid\mathbf{y}\right)d\boldsymbol{\beta}\) become more dissimilar.
The third row of Figure 1 partially visualizes this by plotting the sampling distributions for a replicated first observation \(y_{1}^{\text{rep}}\) at the posterior means \(\boldsymbol{\overline{\beta}}\) with the corresponding posterior predictive distributions.
As the correlation between the predictors decreases, the distributions \(p(y_{1}^{\text{rep}}\mid\boldsymbol{\overline{\beta}})\) and \(p(y_{1}^{\text{rep}}\mid\mathbf{y})\) become less similar.
The relationships displayed in the highlighted panels correspond to the relationships between the corresponding distributions that occur in the single-predictor model (1). These highlighted panels also correspond to the best-case behavior, showing that the expanded model can only perform worse than the base model on either metric. In fact, we further see that these behaviors are inversely correlated among the expanded models, i.e. the most precise marginal inference occurs when the sampling and posterior predictive distributions are most dissimilar and vice versa. We will find evidence of similar phenomena more generally in the next sections.
Figure 1: First row: \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), in order of increasing correlation. Second row: the priors \(p(\beta_{1})\) (blue) and the posteriors \(p(\beta_{1}\mid\mathbf{y})\) (red), both centered to allow for an easier comparison of scales. Narrower posteriors relative to the prior indicate better identification. Third row: the posterior predictive \(p(\mathbf{y}_{\text{rep}}\mid\mathbf{y})\) (red) and the sampling distributions \(p(\mathbf{y}\mid\boldsymbol{\overline{\beta}})\) (blue). More dissimilarity is connected to lower model check power on average.
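This toy comparison is easy to reproduce with the conjugate normal linear model; the sketch below uses an illustrative choice of data, prior scale, and correlation levels, and prints the marginal posterior scale of \(\beta_{1}\) together with the posterior predictive variance of \(y_{1}^{\text{rep}}\).

```python
# Sketch of the two-observation regression example: as corr(x1, x2) grows,
# the marginal posterior of beta_1 widens while the posterior predictive
# variance shrinks toward the sampling variance (here 1).
import numpy as np

y = np.array([-1.0, 1.0])          # illustrative data
x1 = np.array([0.0, 1.0])
sigma_b = 10.0                     # weakly informative prior scale

def posterior_summary(rho):
    # construct a unit-norm x2 with correlation rho with x1 (illustrative)
    x1u = x1 / np.linalg.norm(x1)
    x2 = rho * x1u + np.sqrt(1.0 - rho**2) * np.array([1.0, 0.0])
    X = np.column_stack([x1, x2])
    # conjugate posterior: Sigma = (X'X + I / sigma_b^2)^{-1}, mean = Sigma X'y
    Sigma = np.linalg.inv(X.T @ X + np.eye(2) / sigma_b**2)
    beta1_sd = np.sqrt(Sigma[0, 0])
    ppd_var_y1 = X[0] @ Sigma @ X[0] + 1.0      # posterior predictive var of y1_rep
    return beta1_sd, ppd_var_y1

for rho in [0.0, 0.5, 0.9, 0.99]:
    sd, v = posterior_summary(rho)
    print(f"corr={rho:4.2f}  sd(beta1 | y)={sd:6.3f}  var(y1_rep | y)={v:6.3f}")
```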
### Outline
In Sections 2 and 3, we separately connect weakening of identifiability and falsifiability to some general conditions of model expansion. Section 4 presents our main result connecting identifiability and falsifiability, showing that expansions which decrease the severity of one challenge have an increased risk of worsening the other (as seen in the above regression example). In Sections 5 and 6, we demonstrate through two examples how richer inferences which leverage the full joint structure of the posterior distribution can alleviate some of the difficulties imposed by poor identifiability and falsifiability.
### Contributions
Our main result, Theorem 5, uses a generalization of the Bayesian Cramér-Rao bound [1] and matrix concentration inequalities [25] to establish bounds on the mutual and conditional mutual information which exhibit both a tradeoff and dimension dependence. We sketch the result here. Suppose that we have a base model and an expansion thereof, denoted \(p_{\mathrm{base}}\) and \(p\) respectively, both defined over common data \(\mathbf{y}\) and with some shared parameters \(\boldsymbol{\theta}\) (a notion which we will define unambiguously in the next section).
**Theorem 1**.: _Let \(\boldsymbol{\iota}_{\mathrm{base}}\) and \(\boldsymbol{\iota}\) be the eigenvalues of the expected Fisher information matrices for \(\boldsymbol{\theta}\) in the base and expanded models \(p_{\mathrm{base}}\) and \(p\), respectively. Furthermore, let \(\mathbf{I}_{\mathrm{base}}\) and \(\mathbf{I}\) denote the (conditional) mutual information in the base and expanded models. Finally, let \(d\) be the dimension of \(\boldsymbol{\theta}\) and \(d^{\mathrm{exp}}\geq d\) the dimension of the parameter space for the expanded model. Then, under technical conditions given in the statement of Theorem 5, we have_
\[\underbrace{\mathbf{I}_{\mathrm{base}}\left(\mathbf{y},\boldsymbol{\theta}\right)}_{\begin{subarray}{c}\text{base model}\\ \text{identifiability of }\boldsymbol{\theta}\end{subarray}}\leq\Psi_{i}\left(\boldsymbol{\iota}_{\mathrm{base}}\right),\qquad\underbrace{\mathbf{I}\left(\mathbf{y},\boldsymbol{\theta}\right)}_{\begin{subarray}{c}\text{expanded model}\\ \text{identifiability of }\boldsymbol{\theta}\end{subarray}}\leq\Psi_{i}\left(\boldsymbol{\iota}\right)-\Delta_{i} \tag{3}\]
\[\underbrace{\mathbf{I}_{\mathrm{base}}\left(\mathbf{y}_{\mathrm{rep}}, \boldsymbol{\theta}\mid\mathbf{y}\right)}_{\begin{subarray}{c}\text{base model}\\ \text{falsifiability}\end{subarray}}\geq c_{d}\Psi_{f}\left(\boldsymbol{ \iota}_{\mathrm{base}}\right)\qquad\underbrace{\mathbf{I}\left(\mathbf{y}_{ \mathrm{rep}},\boldsymbol{\theta}\mid\mathbf{y}\right)}_{\begin{subarray}{c} \text{expanded model}\\ \text{falsifiability}\end{subarray}}\geq c_{d^{\mathrm{exp}}}\left[\Psi_{f} \left(\boldsymbol{\iota}\right)+\Delta_{f}\right], \tag{4}\]
_where \(\Psi_{i},\Psi_{f}\) are increasing in each of the components of the vector argument, \(\Delta_{i},\Delta_{f}\geq 0\) are terms which tend to increase in magnitude with \(d^{\mathrm{exp}}\), and \(c_{d},c_{d^{\mathrm{exp}}}\) are constants depending only on \(d\) and \(d^{\mathrm{exp}}\) respectively._
These inequalities are given from the adverse directions, in the sense that smaller mutual information and larger conditional mutual information are associated with reduced identifiability and falsifiability respectively. The dimensional dependence enters through the \(\Delta\) terms, which push the bounds in the corresponding adverse directions. The tradeoff between these bounds occurs through the \(\Psi\) terms. For instance, if the components of \(\boldsymbol{\iota}\) are all smaller than \(\boldsymbol{\iota}_{\mathrm{base}}\), then our mutual information bound will decrease in passing to the expanded model. In the reverse case, the conditional mutual information bound will increase.
In conjunction with numerous examples, this result suggests that at least one of reduced identifiability and reduced falsifiability should be expected in the process of iterative model expansion, which is the first major contribution of this work. The
generality of this phenomenon motivates considering methods of inference and model checking which can cope with these conditions by extracting as much useful information from our models as possible.
Our second contribution is to demonstrate methods of utilizing the joint structure of the posterior in practice which may allow these challenges to be partially overcome in many cases. We show in an extended example how the dependence structure of the posterior can contain significant, practically useful information even when the marginal inferences are too weak to support strong conclusions about individual parameters. And in the context of model checking, we provide an extension of the traditional posterior predictive \(p\)-value, which we validate in a real data example, and which we argue is often more useful and easily applied than previous solutions designed to resolve the posterior predictive \(p\)-value's claimed power deficiencies.
### Related Work
Recently, statistical workflow has enjoyed increased attention as a discrete topic in statistics. This literature has sought to provide a consistent framework and practical advice for each step of a statistical analysis, including the process of model expansion (see, e.g. [10, 26, 7]). Here we seek to complement this perspective by studying model expansion as a distinct regime. To this end, our main result provides interpretable bounds on the mutual and conditional mutual information, the former of which depends critically on Theorem 2 in [1].
Outside of the context of model expansion, the problem of weak/non-identification has been extensively studied in the classical and Bayesian contexts. In the Bayesian setting, methods of detecting and dealing with identification problems have been studied in, e.g. [29, 16]. Whereas these methods have usually been tied to particular (classes of) models, we study this problem in a general setting of model expansion.
As we will argue in Section 3, problems of falsifiability are directly connected to debates over the power and conservativity of the posterior predictive \(p\)-value. Various forms of this problem have been described, and possible solutions have been proposed in [3, 2, 23, 31]. We propose another possible solution - conditional \(p\)-values - which differ from these previous proposals both in their goal and method of use, and we will argue that our approach is more practically applicable in many cases.
Our approach to studying the problems of identifiability and falsifiability follows many previous successes in using information-theoretic tools to understand and quantify model behaviors in great generality. We enumerate a few connections of particular note:
1. We quantify identifiability by thinking of the information entropy of a posterior as representing our uncertainty about parameters of interest. This representation of uncertainty as entropy can be traced back to Jaynes, who used it to justify the use of maximum entropy posterior distributions [15].
2. Information-theoretic criteria have long been used to evaluate the predictive performance of models [28]. In the Bayesian context, the expected log predictive density (ELPD) has been used as a flexible and model-specific objective for model evaluation and comparison [27]. When our data consist of a scalar quantity \(y\) and the model is correctly specified, the negative ELPD can be written as \[\mathrm{D}\left(p(y\mid\boldsymbol{\theta}^{*})\mid\mid p(y\mid y^{\mathrm{rep}})\right)+C,\]
where \(p(\mathbf{y}\mid\boldsymbol{\theta}^{*})\) is the true data generating process, \(C\) is a constant depending only on this true distribution, and \(y^{\text{rep}}\sim p(y\mid\boldsymbol{\theta})\) is an independent replication of the data. If we substitute the true value \(\boldsymbol{\theta}^{*}\) with an average over the posterior \(p(\boldsymbol{\theta}\mid\mathbf{y})\), then the first term recovers the conditional mutual information \(\mathbf{I}\left(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y}\right)\) which we relate to the falsifiability of the model.
3. The Rashomon effect, first described by Breiman in [5], is a phenomenon whereby many models can achieve similar overall loss yet provide very different point predictions. In our work, we find that our concept of falsifiability is also threatened by the multiplicity of plausible sampling distributions in a model. And indeed, our conditional mutual information rests on a KL divergence conceptually similar to that of a recently proposed metric for quantifying the Rashomon effect, the Rashomon capacity [12].
4. Mutual information-based quantities have also been deployed to bound measures of other adverse model behaviors, particularly bias and generalization error [24, 30].
## 2 Weak Identifiability and Model Expansion
We start by defining the types of model expansions to which our results will apply. We will write \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})\) for some base model defined over data \(\mathbf{y}\in\mathbb{R}^{n}\) and parameters \(\boldsymbol{\theta}\in\mathbb{R}^{d}\). We then consider certain expansions of this base model defined as follows.
**Definition 1** (Model Expansion).: _A model \(p(\mathbf{y},\boldsymbol{\theta},\boldsymbol{\lambda})\) defined with additional parameter \(\boldsymbol{\lambda}\in\overline{\mathbb{R}}^{k}\) is an expansion of \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})\) if_
\[p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})=p(\mathbf{y},\boldsymbol{ \theta}\mid\boldsymbol{\lambda}_{0})\text{ for }\boldsymbol{\lambda}_{0}\in \overline{\mathbb{R}}^{k}, \tag{5}\]
_where \(\overline{\mathbb{R}}=[-\infty,\infty]\)._
This framework includes many common examples of model expansion:
* Let \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})\) be a generalized linear model with response vector \(\mathbf{y}\) and parameters \(\boldsymbol{\theta}\) including the coefficients and any additional parameters. Adding a new predictor and coefficient \(\lambda\) with independent prior is then an expansion since \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})=p(\mathbf{y},\boldsymbol{ \theta}\mid\lambda=0)\).
* Let \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})\) be an exchangeable Poisson model over the data \(y_{i}\) with \(\theta\) the Poisson rate. We can extend this with an overdispersion parameter \(\lambda\) (with independent prior). This is commonly modeled with a negative binomial distribution \[p(y\mid\theta,\lambda)=\begin{pmatrix}y+\lambda-1\\ y\end{pmatrix}\left(\frac{\theta}{\theta+\lambda}\right)^{y}\left(\frac{ \lambda}{\theta+\lambda}\right)^{\lambda}.\] Since this reduces to the Poisson as \(\lambda\to\infty\), we have that \(p_{\text{base}}(\mathbf{y},\boldsymbol{\theta})=p(\mathbf{y},\boldsymbol{ \theta}\mid\lambda=\infty)\), so this is again an expansion.
### Weak Identification and Marginal Entropy
We now formalize our notion of identification using the information entropy. First we establish some notation. For a joint model \(q(\boldsymbol{\theta},\mathbf{y})\), the (differential) entropy of \(q(\boldsymbol{\theta})\)
is denoted \(h_{q(\mathbf{\theta})}(\mathbf{\theta})\), and the conditional entropy of \(\mathbf{\theta}\) given \(\mathbf{y}\) is \(h_{q(\mathbf{\theta},\mathbf{y})}\left(\mathbf{\theta}\mid\mathbf{y}\right)\). The mutual information (\(\mathsf{mi}\)) is denoted \(\mathbf{I}_{q}\left(\mathbf{\theta},\mathbf{y}\right)\), which will at times be extended to a conditional mutual information (\(\mathsf{cmi}\)), denoted by \(\mathbf{I}_{q}\left(\mathbf{\theta},\mathbf{y}\mid\mathbf{x}\right)\), when the joint model extends over an additional quantity \(\mathbf{x}\). When distributions are clear from context, we may drop subscripts from entropies and mutual informations, writing e.g. \(h(\mathbf{\theta})\) and \(\mathbf{I}\left(\mathbf{\theta},\mathbf{y}\right)\). The reader who is unfamiliar with information theory may consult Appendix A for definitions of these quantities and statements of the basic results that we use. With these definitions, we can now give quantitative operational definitions of our notions of weak marginal identification for arbitrary subsets of \(\mathbf{\theta}\).
**Definition 2** (\(\epsilon\)-Weak Identification).: _Let \(I\subset[d]\). We say for any \(\epsilon>0\) that \(\mathbf{\theta}_{I}=\left(\theta_{i}\right)_{i\in I}\) is \(\epsilon\)-weakly identified for data \(\mathbf{y}\) if_
\[h_{p(\mathbf{\theta}_{I}|\mathbf{y})}\left(\mathbf{\theta}_{I}\right)>h_{p(\mathbf{\theta }_{I})}\left(\mathbf{\theta}_{I}\right)-\epsilon. \tag{6}\]
_For \(p_{1}\left(\mathbf{\theta},\mathbf{y}\right)\) and \(p_{2}(\mathbf{\theta},\mathbf{y})\), \(\mathbf{\theta}_{I}\) is more weakly identified in \(p_{2}\) than \(p_{1}\) if \(h_{p_{2}(\mathbf{\theta}_{I}|\mathbf{y})}\left(\mathbf{\theta}_{I}\right)>h_{p_{1}(\bm {\theta}_{I}|\mathbf{y})}\left(\mathbf{\theta}_{I}\right)\)._
We also define weak identification for entire models (regardless of data \(\mathbf{y}\)) by averaging over the prior predictive distribution.
**Definition 3** (\(\epsilon\)-Weakly Identifiable Model).: _Let \(I\subset[d]\). We say \(\mathbf{\theta}_{I}\) is \(\epsilon\)-weakly identifiable in \(p(\mathbf{\theta},\mathbf{y})\) if_
\[h\left(\mathbf{\theta}_{I}\mid\mathbf{y}\right)>h\left(\mathbf{\theta}_{I}\right)-\epsilon, \tag{7}\]
_or, equivalently, if \(\mathbf{I}(\mathbf{\theta}_{I},\mathbf{y})<\epsilon\)._
Henceforth, we will leave the \(\epsilon\)-dependence of this definition implicit and simply say that a parameter is weakly identified if it is \(\epsilon\)-weakly identified for an appropriate value of \(\epsilon\) (which will usually be given by domain understanding).
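In the conjugate normal location model these entropies are available in closed form, so the condition of Definition 3 can be checked directly; a minimal sketch with illustrative values of the noise level and \(\epsilon\):

```python
# Sketch: epsilon-weak identifiability in a conjugate normal location model.
# Prior theta ~ N(0, 1), data y_1, ..., y_n ~ N(theta, sigma^2), so
# I(theta, y) = h(theta) - h(theta | y) = 0.5 * log(1 + n / sigma^2).
import numpy as np

def mutual_information(n, sigma2):
    return 0.5 * np.log(1.0 + n / sigma2)

def is_weakly_identifiable(n, sigma2, eps):
    return mutual_information(n, sigma2) < eps       # Definition 3

for n in [1, 5, 50]:
    mi = mutual_information(n, sigma2=4.0)
    print(f"n={n:3d}  I(theta, y)={mi:.3f}  "
          f"0.5-weakly identifiable: {is_weakly_identifiable(n, 4.0, 0.5)}")
```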
This operational definition of weak identification can only be interpreted relative to the prior. In many cases, this is a natural quantity to focus on (e.g. when we are concerned with the cost-benefit tradeoffs of data collection or the contribution of a research finding to existing knowledge). However, if we expand a model by adding prior information about \(\mathbf{\theta}\), then it is possible for both the posterior entropy of \(\mathbf{\theta}\) and the mutual information to decrease. In other words, the identification relative to the prior may decrease while the posterior becomes more concentrated. This divergence between absolute and relative notions of identification can be avoided if we exclude from consideration expansions which decrease \(h(\mathbf{\theta})\).
We can now show that certain model expansions tend to weaken identification in the above sense. If \(p(\mathbf{\theta},\mathbf{\lambda},\mathbf{y})\) is an expansion of \(p_{\mathrm{base}}(\mathbf{\theta},\mathbf{y})\), then we have the following decomposition of the mutual information:
\[\mathbf{I}_{p(\mathbf{\theta},\mathbf{\lambda},\mathbf{y})}(\mathbf{\theta},\mathbf{y})= \mathbf{I}_{p_{\mathrm{base}}(\mathbf{\theta},\mathbf{y})}(\mathbf{\theta},\mathbf{y} )+\Delta_{I}^{\mathrm{exp}}+\Delta_{I}^{\mathrm{post}}, \tag{8}\]
where we define
\[\Delta_{I}^{\mathrm{exp}} =\mathbf{I}_{p(\mathbf{\theta},\mathbf{\lambda},\mathbf{y})}\left(\mathbf{ \theta},\mathbf{y}\mid\mathbf{\lambda}\right)-\mathbf{I}_{p(\mathbf{\theta},\mathbf{ \lambda},\mathbf{y})}\left(\mathbf{\theta},\mathbf{y}\mid\mathbf{\lambda}_{0}\right)\] \[\Delta_{I}^{\mathrm{post}} =\mathbf{I}_{p(\mathbf{\theta},\mathbf{\lambda},\mathbf{y})}\left(\mathbf{ \lambda},\mathbf{\theta}\right)-\mathbf{I}_{p(\mathbf{\theta},\mathbf{\lambda},\mathbf{y} )}(\mathbf{\lambda},\mathbf{\theta}\mid\mathbf{y}).\]
The \(\Delta_{I}^{\mathrm{exp}}\) term is the difference in amount of information \(\mathbf{y}\) provides about \(\mathbf{\theta}\) given \(\mathbf{\lambda}\) and given \(\mathbf{\lambda}_{0}\), averaging \(\mathbf{\lambda}\) over the expanded model. The \(\Delta_{I}^{\mathrm{post}}\) term is the difference
in the amount of information \(\mathbf{\lambda}\) provides about \(\mathbf{\theta}\) before and after observing the data \(\mathbf{y}\). We regard this as a measure of the a priori informativeness of \(\mathbf{\lambda}\) about \(\mathbf{\theta}\), which is justified by the fact that \(\Delta_{I}^{\mathrm{post}}\geq-\mathbf{I}\left(\mathbf{\lambda},\mathbf{\theta}\mid\mathbf{y}\right)\), with equality if and only if \(\mathbf{\theta}\) and \(\mathbf{\lambda}\) are independent in the expanded model (i.e. if \(p(\mathbf{\theta},\mathbf{\lambda})=p(\mathbf{\theta})p(\mathbf{\lambda})\)). When \(\Delta_{I}^{\mathrm{post}}<0\), (8) shows that the expanded model is biased towards weaker identification of \(\mathbf{\theta}\) compared to the base model.
We can also use the decomposition (8) to define a concept which will be useful in the next sections. We say the parameter \(\mathbf{\lambda}\)**dilutes** the effect of shared parameter \(\mathbf{\theta}\) if \(h_{p(\mathbf{\theta},\mathbf{y}|\mathbf{\lambda})}\left(\mathbf{\theta}\mid\mathbf{y}\right)\) is larger than \(h_{p_{\mathrm{base}}(\mathbf{\theta},\mathbf{y})}(\mathbf{\theta}\mid\mathbf{y})\) on average over \(p(\mathbf{\lambda})\). If this relationship is reversed, we say that \(\mathbf{\lambda}\)**concentrates** the effect of \(\mathbf{\theta}\). In the case that \(p(\mathbf{\theta}\mid\mathbf{\lambda})=p_{\mathrm{base}}(\mathbf{\theta})\) for all \(\mathbf{\lambda}\), dilution and concentration are equivalent to \(\Delta_{I}^{\mathrm{exp}}<0\) and \(\Delta_{I}^{\mathrm{exp}}>0\) respectively.
### The Relation to Marginal Fisher Information
For model \(q(\mathbf{y},\mathbf{\theta})\), the observed and Fisher information matrices are defined as
\[[\mathbf{\mathcal{J}}_{q}(\mathbf{y},\mathbf{\theta})]_{ij} =-\frac{\partial^{2}}{\partial\theta_{i}\partial\theta_{j}}\log q (\mathbf{y}\mid\mathbf{\theta})\text{ for all }1\leq i,j\leq d,\] \[\mathbf{\mathcal{I}}_{q}(\mathbf{\theta})=\mathbb{E}_{q(\mathbf{y}|\mathbf{ \theta})}\mathbf{\mathcal{J}}_{q}(\mathbf{y},\mathbf{\theta}). \tag{9}\]
We drop the subscript when the model is clear from context. We now state a bound on the mutual information in terms of the Fisher information, which follows directly from Theorem 2 of Aras et al. [1].
**Theorem 2** (Mutual Information Upper Bound).: _Let \(q(\mathbf{y},\mathbf{\theta})\) be a model such that the prior \(p(\mathbf{\theta})\) is log-concave with covariance matrix \(\mathbf{\Sigma}\), then_
\[\mathbf{I}(\mathbf{\theta},\mathbf{y})\leq d\psi\left(\frac{1}{d}\mathrm{tr} \left(\mathbb{E}_{q(\mathbf{\theta})}\mathbf{\Sigma}^{1/2}\mathbf{\mathcal{I}}(\mathbf{\theta })\mathbf{\Sigma}^{1/2}\right)\right), \tag{10}\]
_where \(\psi(x)\) is the concave increasing function given by_
\[\psi(x)=\begin{cases}\sqrt{x},&0\leq x\leq 1\\ 1+\frac{1}{2}\log(x),&x>1\end{cases}.\]
Now let \(v_{\mathrm{pr}}\) be the maximum eigenvalue of \(\mathbf{\Sigma}\) (the covariance matrix over just the \(\mathbf{\theta}\) parameters). Then we also clearly have
\[\mathbf{I}(\mathbf{\theta},\mathbf{y})\leq d\psi\left(\frac{v_{\mathrm{pr}}}{d} \mathbb{E}_{q(\mathbf{\theta})}\mathrm{tr}\left(\mathbf{\mathcal{I}}(\mathbf{\theta}) \right)\right). \tag{11}\]
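As a quick sanity check of the bound (11), consider a normal location model with unit prior variance and \(n\) unit-variance observations, where \(d=1\), \(v_{\mathrm{pr}}=1\), \(\mathrm{tr}\,\boldsymbol{\mathcal{I}}(\theta)=n\), and the exact mutual information is \(\frac{1}{2}\log(1+n)\); a minimal sketch:

```python
# Sketch: exact mutual information versus the upper bound (11)
# in a normal location model (d = 1, v_pr = 1, tr I(theta) = n).
import numpy as np

def psi(x):
    return np.sqrt(x) if x <= 1 else 1.0 + 0.5 * np.log(x)

def bound_11(d, expected_trace, v_pr=1.0):
    return d * psi(v_pr * expected_trace / d)

def exact_mi(n):
    return 0.5 * np.log(1.0 + n)

for n in [1, 10, 100]:
    print(f"n={n:4d}  exact MI={exact_mi(n):6.3f}  bound (11)={bound_11(1, n):6.3f}")
```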
If \(v_{\mathrm{pr}}\) differs between a base model and expanded model, then we can rescale the prior over \(\mathbf{\theta}\) in the expanded model so that they are equal. The only possible difficulty is that we may no longer have a \(\mathbf{\lambda}_{0}\) for which \(p_{\mathrm{base}}\left(\mathbf{\theta}\right)=p\left(\mathbf{\theta}\mid\mathbf{\lambda}_ {0}\right)\). Such situations can always be resolved however by passing to a larger model which includes a prior scale hyperparameter for \(\mathbf{\theta}\) in \(\mathbf{\lambda}\). With such a hyperparameter, we always have the ability to set both the marginal prior scale and \(\mathbf{\lambda}_{0}\)-conditional prior scale for \(\mathbf{\theta}\) independently, allowing equality of \(v_{\mathrm{pr}}\) and preservation of the model expansion property.
We also note that rescaling \(\mathbf{\theta}\) leaves \(\mathbf{I}(\mathbf{\theta},\mathbf{y})\) unchanged since the mutual information is invariant to all invertible transformations of \(\mathbf{\theta}\) and \(\mathbf{y}\) separately. Thus, with little loss of generality, we henceforth assume that \(v_{\rm pr}=1\) for all models. The weaker bound (11) will be useful for comparisons to other quantities in the next sections, and for deriving the following further upper bound, which applies more directly to model expansions, is easier to compute, and mirrors the relation (8).
**Theorem 3**.: _Define the partial Hessian with respect to \(\boldsymbol{\lambda}\) as \(\left[\mathbf{H}\left(\boldsymbol{\lambda};\boldsymbol{\theta},\mathbf{y} \right)\right]_{jk}=-\frac{\partial^{2}}{\partial\lambda_{j}\partial\lambda_{ k}}\log p(\mathbf{y},\boldsymbol{\theta},\boldsymbol{\lambda})\) for \(1\leq j,k\leq m\). Then under the regularity conditions in Appendix E,_
\[\mathbb{E}\mathrm{tr}\boldsymbol{\mathcal{I}}(\boldsymbol{\theta})\leq\sum_{j =1}^{d}\left[\mathbb{E}\left\{-\frac{\partial^{2}}{\partial\theta_{j}^{2}}\log p (\mathbf{y}\mid\boldsymbol{\theta},\boldsymbol{\lambda})\right\}+\Delta_{j} \right], \tag{12}\]
_where we define_
\[\Delta_{j}=\mathbb{E}\left\{-\frac{\partial^{2}}{\partial\theta_{j}^{2}}\log p\left(\boldsymbol{\lambda}\mid\boldsymbol{\theta}\right)\right\}-\frac{\left[\sum_{k=1}^{m}\mathbb{E}\frac{\partial}{\partial\lambda_{k}}\frac{\partial}{\partial\theta_{j}}\log p(\mathbf{y}\mid\boldsymbol{\theta},\boldsymbol{\lambda})\right]^{2}}{\mathbb{E}\|\mathbf{H}\left(\boldsymbol{\lambda};\boldsymbol{\theta},\mathbf{y}\right)\|_{\rm op}}.\]
The \(\Delta_{j}\) terms compare prior- and likelihood-based measures of dependence between \(\boldsymbol{\theta}\) and \(\boldsymbol{\lambda}\) and are thus analogous to the \(\Delta_{I}^{\rm post}\) term in (8). As with the \(\Delta_{I}^{\rm post}\) term, we have \(\Delta_{j}\leq 0\) when \(\boldsymbol{\theta}\) and \(\boldsymbol{\lambda}\) are independent under the prior. In particular, when \(p(\boldsymbol{\theta},\boldsymbol{\lambda},\mathbf{y})\) is an expansion of some base model and when \(\sum_{j=1}^{d}\Delta_{j}<0\), (12) again exhibits a downward bias on the Fisher information of the expanded model compared to the base model. We now work through two simple examples to illustrate.
1. Take a linear regression model \(p_{\rm base}\) with response \(\mathbf{y}\in\mathbb{R}^{n}\), predictors \(\mathbf{X}\in\mathbb{R}^{n\times m}\), coefficients \(\boldsymbol{\beta}\in\mathbb{R}^{m}\), intercept \(\alpha\), and log noise variance \(\tau\): \[\left(2\pi\exp\left(\tau\right)\right)^{-n/2}\exp\left[-\left(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}-\alpha\mathbf{1}\right)^{T}\left(\mathbf{y}-\mathbf{X}\boldsymbol{\beta}-\alpha\mathbf{1}\right)/2\exp(\tau)\right].\] We consider an expansion \(p\) with additional predictor \(\mathbf{z}\in\mathbb{R}^{n}\) and coefficient \(\lambda\). Suppose the coefficients are assigned independent priors, and let \(\boldsymbol{\theta}=(\tau,\beta_{1},\ldots,\beta_{m},\alpha)\). We assume without loss of generality that all predictors are centered as this does not affect the posterior entropy. We then find for all \(1\leq j\leq m\), \[-\mathbb{E}_{p}\frac{\partial^{2}}{\partial\beta_{j}^{2}}\log p(\mathbf{y}\mid\lambda,\boldsymbol{\theta})=\mathbb{E}_{p(\tau)}\left\{\frac{n\mathrm{var}\left(\mathbf{x}_{j}\right)}{\exp(\tau)}\right\}=-\mathbb{E}_{p_{\rm base}}\frac{\partial^{2}}{\partial\beta_{j}^{2}}\log p_{\rm base}(\mathbf{y}\mid\boldsymbol{\theta}),\] \[-\mathbb{E}_{p}\frac{\partial^{2}}{\partial\tau^{2}}\log p(\mathbf{y}\mid\lambda,\boldsymbol{\theta})=\frac{n}{2}=-\mathbb{E}_{p_{\rm base}}\frac{\partial^{2}}{\partial\tau^{2}}\log p_{\rm base}(\mathbf{y}\mid\boldsymbol{\theta}),\ {\rm and}\] \[-\mathbb{E}_{p}\frac{\partial^{2}}{\partial\alpha^{2}}\log p(\mathbf{y}\mid\lambda,\boldsymbol{\theta})=\mathbb{E}_{p(\tau)}\left\{\frac{n}{\exp(\tau)}\right\}=-\mathbb{E}_{p_{\rm base}}\frac{\partial^{2}}{\partial\alpha^{2}}\log p_{\rm base}(\mathbf{y}\mid\boldsymbol{\theta})\] These computations show that the first term in (12) is just \(\mathrm{tr}\left(\mathbb{E}\boldsymbol{\mathcal{I}}_{p_{\rm base}}(\boldsymbol{\theta})\right)\). Assuming \(\mathbf{z}\) is also centered, computing the second term in (12) gives that \[\mathbb{E}_{p_{\rm base}}\mathrm{Tr}\left(\boldsymbol{\mathcal{I}}_{p_{\rm base}}\right)-\mathbb{E}_{p}\mathrm{Tr}\left(\boldsymbol{\mathcal{I}}_{p}\right)\geq n^{2}\mathbb{E}_{p(\tau)}\left\{\frac{1}{\exp(\tau)}\right\}\sum_{j=1}^{m}\left[\frac{\mathrm{cov}\left(\mathbf{x}_{j},\mathbf{z}\right)}{\mathrm{var}\left(\mathbf{z}\right)}\right]^{2},\] which reflects the familiar fact that the identifiability of regression models is reduced by significant correlation between predictors.
2. Next consider an exchangeable Poisson base model with likelihood \[\exp\left(\mu n\overline{y}-n\exp(\mu)\right)\Big{/}\left(y_{1}!\times y_{2}!\times \cdots\times y_{n}!\right).\] This can be expanded to a negative binomial model with likelihood \[\left[\prod_{i=1}^{n}\frac{\Gamma\left(y_{i}+\exp(\lambda)\right)}{\Gamma \left(y_{i}+1\right)\Gamma\left(\exp(\lambda)\right)}\right]\left(\frac{\exp( \mu)}{\exp(\mu)+\exp(\lambda)}\right)^{\sum_{i=1}^{n}y_{i}}\left(\frac{\exp( \lambda)}{\exp(\mu)+\exp(\lambda)}\right)^{n\exp(\lambda)}.\] This converges to the Poisson density as \(\lambda\to\infty\), so the expanded model is an expansion of the Poisson model. Next observe that the second derivatives with respect to \(\mu\) are given by \[-\frac{\partial^{2}}{\partial\mu^{2}}\log p\left(\mathbf{y}\mid\mu,\lambda \right)=n\exp\left(\mu\right)\left[1-\frac{\exp(\mu)}{\exp(\mu)+\exp(\lambda) }\right]\left[\frac{\overline{y}+\exp(\lambda)}{\exp(\mu)+\exp(\lambda)} \right],\] which has expected value \(n\exp\left(\mu\right)\left[1-\frac{\exp(\mu)}{\exp(\mu)+\exp(\lambda)}\right]\) under \(p(\mathbf{y}\mid\mu,\lambda)\). With this we can show using (12) that the Fisher information trace must fall in passing from the base to the expanded model: \[\mathbb{E}\mathrm{Tr}\left(\mathbf{\mathcal{I}}_{p}\right)\leq\mathbb{E}\left\{n \exp\left(\mu\right)\left[\frac{\exp(\lambda)}{\exp(\mu)+\exp(\lambda)} \right]\right\}<\mathbb{E}\left\{n\exp(\mu)\right\}=\mathbb{E}\mathrm{Tr} \left(\mathbf{\mathcal{I}}_{p_{\mathrm{base}}}\right),\]
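The Poisson-to-negative-binomial comparison above can be checked by Monte Carlo, averaging the closed-form conditional expectations over the priors; the priors on \(\mu\) and \(\lambda\) below are illustrative choices, not ones specified in the text.

```python
# Sketch: expected Fisher information trace for mu in the Poisson base model
# versus the upper bound for the negative binomial expansion, from the text.
import numpy as np

rng = np.random.default_rng(0)
n = 20
mu = rng.normal(0.0, 1.0, size=100_000)      # illustrative prior draws for mu
lam = rng.normal(0.0, 1.0, size=100_000)     # illustrative prior draws for lambda

trace_base = n * np.exp(mu)                                         # Poisson: n exp(mu)
trace_exp_bound = n * np.exp(mu) * np.exp(lam) / (np.exp(mu) + np.exp(lam))

print("E tr I_base                :", trace_base.mean())
print("upper bound on E tr I_exp  :", trace_exp_bound.mean())       # strictly smaller
```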
The results of Theorems 2 and 3 both connect Bayesian and classical notions of identification and show that the marginal \(\mathsf{mi}\) is controlled by a quantity that is often easily approximated before fitting the expanded model. This latter property may be useful when posterior sampling is slow, providing an indication of difficult posterior geometry before it frustrates the sampling algorithm.
The results of this section, particularly (12), suggest that dilution of \(\mathbf{\theta}\) by \(\mathbf{\lambda}\) may be heuristically indicated by a positive difference:
\[\mathbf{\Delta}_{\mathrm{dilute}}=\mathbb{E}_{p(\mathbf{\theta})p(\mathbf{\lambda})}\left\{ \mathbf{\mathcal{I}}_{\mathrm{base}}\left(\mathbf{\theta}\right)-\mathbf{\mathcal{I}}(\bm {\theta}\mid\mathbf{\lambda})\right\},\]
where \(\mathbf{\mathcal{I}}_{\mathrm{base}}\) is the Fisher information of the base model, and \(\mathbf{\mathcal{I}}(\mathbf{\theta}\mid\mathbf{\lambda})\) is the Fisher information of the expanded model conditional on \(\mathbf{\lambda}\), i.e. the principal submatrix of the full Fisher information matrix \(\mathbf{\mathcal{I}}(\mathbf{\theta},\mathbf{\lambda})\) obtained by deleting those rows and columns involving derivatives with respect to the components of \(\mathbf{\lambda}\). We will also say that the effect of \(\mathbf{\lambda}\) is **totally diluting/concentrating** of \(\mathbf{\theta}\) if \(\mathbf{\Delta}_{\mathrm{dilute}}\) is positive/negative semidefinite, respectively.
## 3 Weak Falsifiability and Model Expansion
We now turn to the behavior of the posterior predictive distribution (\(\mathsf{ppd}\)) \(p(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y})\) under model expansion. For joint model \(p(\mathbf{y},\mathbf{\theta})\), the \(\mathsf{ppd}\) is
\[p(\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y})=\int p(\mathbf{y}_{\mathrm{rep}} \mid\mathbf{\theta})p(\mathbf{\theta}\mid\mathbf{y})d\mathbf{\theta}. \tag{13}\]
Comparisons between posterior predictive samples and observed data are commonly used to check Bayesian models. It is often convenient to formalize these checks as posterior predictive \(p\)-values (\(\mathsf{ppp-vs}\)).
**Definition 4** (Posterior Predictive \(p\)-Value).: _For observed data \(\mathbf{y}\), joint model \(p(\mathbf{y},\boldsymbol{\theta})\), and real-valued test statistic \(T\), the right-tailed \(\mathsf{ppp}\)-\(\mathsf{v}\) for \(T\) is_
\[p_{T}=\int_{\{T(\mathbf{y}_{\mathrm{rep}})\geq T(\mathbf{y})\}}p(\mathbf{y}_{ \mathrm{rep}}\mid\mathbf{y})d\mathbf{y}_{\mathrm{rep}}. \tag{14}\]
_The left-tailed and two-tailed \(p\)-values are defined analogously._
It will be useful to set \(\mathsf{ppp}\)-\(\mathsf{vs}\) in a general framework of model evaluations.
**Definition 5** (Data Distribution Evaluation).: _Given \(p(\mathbf{y},\boldsymbol{\theta})\), let \(\mathcal{Y}\) be the (common) support of the densities \(p(\mathbf{y}\mid\boldsymbol{\theta})\), \(P(\mathcal{Y})\) be the space of all densities on \(\mathcal{Y}\), and \(\mathcal{E}\subset\mathbb{R}\). Then for any data \(\mathbf{y}\), an evaluation is a (measurable) map_
\[e_{\mathbf{y}}:P(\mathcal{Y})\to\mathcal{E}\]
For posterior sample \(\{\boldsymbol{\theta}_{(s)}\}_{s=1}^{S}\) and evaluation \(e_{\mathbf{y}}\), the modeler has data
\[\left\{\boldsymbol{\theta}_{(s)},e_{\mathbf{y}}\left(p(\mathbf{y}\mid \boldsymbol{\theta}_{(s)})\right)\right\}_{s=1}^{S}. \tag{15}\]
with which to evaluate the model. Since (15) can be complex and high-dimensional, it may not proffer easy conclusions about overall model fitness. Posterior predictive checks solve this by providing simple summaries of (15). For statistic \(T\), we define conditional \(\mathsf{ppp}\)-\(\mathsf{vs}\)\(p_{T}(\boldsymbol{\theta})\) as the evaluations \(e_{\mathbf{y}}\left(p(\cdot\mid\boldsymbol{\theta})\right)\) for the map \(q(\cdot)\to\int_{\{T(\mathbf{y}_{\mathrm{rep}})\geq T(\mathbf{y})\}}q(\mathbf{ y}_{\mathrm{rep}})d\mathbf{y}_{\mathrm{rep}}\). The usual \(\mathsf{ppp}\)-\(\mathsf{v}\) is then just the average:
\[p_{T}=\int e_{\mathbf{y}}\left(p(\cdot\mid\boldsymbol{\theta})\right)p( \boldsymbol{\theta}\mid\mathbf{y})d\boldsymbol{\theta}, \tag{16}\]
which is naturally estimated by \(\frac{1}{S}\sum_{s=1}^{S}e_{\mathbf{y}}\left(p(\cdot\mid\boldsymbol{\theta}_{ (s)})\right)\). Thus, the \(\mathsf{ppp}\)-\(\mathsf{v}\) may be limited if relevant information in (15) is lost in (16). If \(p(\mathbf{y}\mid\boldsymbol{\theta}_{(s)})=p(\mathbf{y}\mid\boldsymbol{ \theta}_{(t)})\) for all \(\mathbf{y}\) and \(1\leq s,t\leq S\), the \(\mathsf{ppd}\) reduces to this one distribution, and no information is lost in (16). But generally the \(\mathsf{ppd}\) will not be able to totally summarize all of the sampling distributions which are plausible under the posterior.
We quantify this loss of information with the Kullback-Leibler (KL) divergence, which is given for densities \(p\) and \(q\) over common support as \(D\left(p(\mathbf{y})||q(\mathbf{y})\right)=\mathbb{E}_{p(\mathbf{y})}\log \left(\frac{p(\mathbf{y})}{q(\mathbf{y})}\right)\). This is a measure of discrepancy between distributions, and with it, we define a metric for the average discrepancy between distributions \(p(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta})\) drawn from the posterior and the \(\mathsf{ppd}\).
**Definition 6** (Posterior Sampling Divergence).: _For data \(\mathbf{y}\) and model \(p(\boldsymbol{\theta},\mathbf{y})\), the posterior sampling divergence is_
\[\mathsf{psd}\left(\mathbf{y}\right)=\mathbb{E}_{p(\boldsymbol{\theta}|\mathbf{ y})}D\left(p(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta})||p(\mathbf{y}_{ \mathrm{rep}}\mid\mathbf{y})\right). \tag{17}\]
Using the Donsker-Varadhan representation and Jensen's inequality, we get
\[\mathsf{psd}(\mathbf{y})\leq\mathbb{E}_{p(\boldsymbol{\theta}|\mathbf{y})} \left\{\sup_{T:\mathcal{Y}\to\mathbb{R}}\left|\mathbb{E}_{p(\mathbf{y}_{ \mathrm{rep}}|\boldsymbol{\theta})}T\left(\mathbf{y}_{\mathrm{rep}}\right)- \mathbb{E}_{p(\mathbf{y}_{\mathrm{rep}}|\mathbf{y})}T\left(\mathbf{y}_{ \mathrm{rep}}\right)\right|\right\}\]
In words, the \(\mathsf{psd}\) lower bounds the degree to which typical sampling distributions \(p(\mathbf{y}_{\mathrm{rep}}\mid\boldsymbol{\theta})\) (with respect to the posterior) and the \(\mathsf{ppd}\) can be distinguished by a statistic
\(T\). A large psd thus indicates increased risk of information loss when using a ppp-v compared to (15). An example shows how this information loss can be relevant for _practical_ model evaluation by making it difficult to falsify the model using a ppp-v. Let \(\mathbf{y}=(-10,10)\), with model
\[y_{1},y_{2}\stackrel{{ iid}}{{\sim}}\textsf{student-t}\left( \theta,1,10\right),\qquad\theta\sim\textsf{uniform}\left(-15,15\right), \tag{18}\]
where student-t\((\mu,\sigma,d)\) is the t distribution with location \(\mu\), scale \(\sigma\), and \(d\) degrees of freedom. This results in a multimodal posterior, plotted in the left panel of Figure 2. The right panel plots joint samples from the ppd, which is also bimodal despite the unimodality of the individual sampling distributions.
Consider the test statistics \(T_{1}(y_{1},y_{2})=-y_{1}\) and \(T_{2}(y_{1},y_{2})=y_{2}\), and let \(p_{T_{1}}(\theta)\) and \(p_{T_{2}}(\theta)\) be the corresponding conditional ppp-vs for \(p(y_{1},y_{2}\mid\theta)\). Figure 3 plots these against \(\theta\). The ppp-vs \(p_{T_{1}}\) and \(p_{T_{2}}\) are \(\approx 0.165\), above usual thresholds for rejection and thus insufficient for falsification. However, the conditional \(p\)-values are vanishingly small over the bulk of the posterior support, suggesting that the model may be improved by introducing a scale parameter, or allowing the means to differ for the two observations, for example.
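This example is easy to reproduce with a grid approximation of the posterior; a minimal sketch (the grid resolution is an arbitrary choice):

```python
# Sketch: grid posterior, conditional ppp-values, and the marginal ppp-value
# for the student-t example with y = (-10, 10).
import numpy as np
from scipy.stats import t

y1, y2 = -10.0, 10.0
df, scale = 10, 1.0
theta = np.linspace(-15, 15, 4001)                    # grid over the uniform prior

log_post = t.logpdf(y1, df, loc=theta, scale=scale) + \
           t.logpdf(y2, df, loc=theta, scale=scale)
post = np.exp(log_post - log_post.max())
post /= post.sum()                                     # normalized grid posterior

# conditional ppp-values for T2: p_{T2}(theta) = P(y2_rep >= y2 | theta)
p_T2_cond = t.sf(y2, df, loc=theta, scale=scale)
p_T2 = np.sum(post * p_T2_cond)                        # marginal ppp-value

print("marginal ppp-value for T2        :", round(p_T2, 3))
print("conditional ppp-value, theta=-10 :", t.sf(y2, df, loc=-10.0, scale=scale))
print("conditional ppp-value, theta=+10 :", t.sf(y2, df, loc=+10.0, scale=scale))
```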
This example points towards a notion of power which does not make reference to the frequency properties of the ppp-v. Specifically, we will consider a model assessment to be underpowered if there is additional data (e.g. that contained in (15)) which would lead us to consider the model fitness deficient (with respect to the data feature we are testing) despite the particular model assessment passing (i.e. indicating acceptable compatibility between data feature and model). Therefore, in light of the above, we view an increasing psd as increasing the risk of our chosen model assessments suffering power deficits. In these cases, we have to work harder to find strong evidence for the falsity of the model (e.g. by examining the conditional \(p\)-value plots in Figure 3), and in this sense the model exhibits weaker falsifiability.
Figure 3: Conditional posterior predictive \(p\)-values for \(T_{1}\) (left panel) and \(T_{2}\) (right panel), evaluated at and plotted against posterior draws of \(\theta\).
Figure 2: Left: The posterior of \(\theta\). Right: The posterior predictive distribution of \((y_{1},y_{2})\).
### Posterior Sampling Divergence and Model Expansion
Since \(\mathbf{y}\) and \(\mathbf{y}_{\text{rep}}\) are conditionally independent given \(\boldsymbol{\theta}\), it follows that
\[\mathbb{E}_{p(\mathbf{y})}\mathsf{psd}\left(\mathbf{y}\right)=\mathbf{I}_{p( \boldsymbol{\theta},\mathbf{y},\mathbf{y}_{\text{rep}})}\left(\boldsymbol{ \theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y}\right). \tag{19}\]
Then for base model \(p_{\text{base}}(\boldsymbol{\theta},\mathbf{y})\) and expansion \(p(\boldsymbol{\theta},\boldsymbol{\lambda},\mathbf{y})\), define
\[\Delta_{I}=\mathbb{E}_{p(\boldsymbol{\lambda})}\left[\mathbf{I}_{p( \boldsymbol{\theta},\mathbf{y}_{\text{rep}},\mathbf{y}\mid\boldsymbol{\lambda} )}(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y})-\mathbf{I}_{p( \boldsymbol{\theta},\mathbf{y}_{\text{rep}},\mathbf{y}\mid\boldsymbol{\lambda }_{0})}(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y})\right].\]
Using the chain rule for \(\mathsf{cmi}\), we have that
\[\mathbf{I}_{p}\left((\boldsymbol{\theta},\boldsymbol{\lambda}),\mathbf{y}_{ \text{rep}}\mid\mathbf{y}\right)=\mathbf{I}_{p_{\text{base}}}\left( \boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y}\right)+\Delta_{I}+ \mathbf{I}_{p}\left(\boldsymbol{\lambda},\mathbf{y}_{\text{rep}}\mid\mathbf{y }\right). \tag{20}\]
As before, the nonnegative term \(\mathbf{I}_{p}\left(\boldsymbol{\lambda},\mathbf{y}_{\text{rep}}\mid\mathbf{y}\right)\) creates an upward bias for the overall \(\mathsf{cmi}\). In some simple examples, the \(\mathsf{cmi}\) can be computed exactly:
1. For base model \(y\mid\theta\sim\text{normal}\left(\theta,1\right)\) and \(\theta\sim\text{normal}\left(0,1\right)\), we get \[\mathbf{I}(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y})=h \left(\boldsymbol{\theta}\mid\mathbf{y}\right)-h\left(\boldsymbol{\theta} \mid\mathbf{y},\mathbf{y}_{\text{rep}}\right)=\log\left(3/2\right)/2.\] Now we add a redundant location parameter, so \(\boldsymbol{\theta}=\left(\theta_{1},\theta_{2}\right)\), and \[y\mid\boldsymbol{\theta}\sim\text{normal}\left((\theta_{1}+\theta_{2})/ \sqrt{2},1\right),\qquad\theta_{1},\theta_{2}\stackrel{{ iid}}{{\sim}}\text{normal}\left(0,1 \right).\] After an invertible reparametrization \(\left(\mu_{1},\mu_{2}\right)=\phi(\theta_{1},\theta_{2})\), this model is \[y\mid\boldsymbol{\theta}\sim\text{normal}\left(\mu_{1},1\right),\qquad\mu_{1 },\mu_{2}\stackrel{{ iid}}{{\sim}}\text{normal}\left(0,1\right).\] By invariance of the \(\mathsf{cmi}\) under invertible reparametrization, we have \[\mathbf{I}(\boldsymbol{\theta},\mathbf{y}_{\text{rep}}\mid\mathbf{y})= \mathbf{I}(\boldsymbol{\mu},\mathbf{y}_{\text{rep}}\mid\mathbf{y})=h\left( \mu_{1}\mid\mathbf{y}\right)-h\left(\mu_{1}\mid\mathbf{y},\mathbf{y}_{\text{rep }}\right)=\log\left(3/2\right)/2.\]
2. Now consider a normal location model with data \(\mathbf{y}\in\mathbb{R}^{2n}\) for \(n\geq 1\): \[\mathbf{y}_{i}\stackrel{{ iid}}{{\sim}}\text{normal}\left(\theta,1 \right)\text{ for }1\leq i\leq 2n,\qquad\theta\sim\text{normal}\left(0,1\right).\] We expand this model by dividing \(\mathbf{y}\) as \(\mathbf{y}=\left(\mathbf{y}^{1},\mathbf{y}^{2}\right)\) with \(\mathbf{y}^{1},\mathbf{y}^{2}\in\mathbb{R}^{n}\) and introducing separate means \(\theta_{1},\theta_{2}\), arriving at: \[\mathbf{y}_{i}^{j}\stackrel{{ iid}}{{\sim}}\text{normal}\left( \theta_{j},1\right)\text{ for }1\leq i\leq n\text{ and }j=1,2,\qquad\theta_{1},\theta_{2} \stackrel{{ iid}}{{\sim}}\text{normal}\left(0,1\right).\] Now the \(\mathsf{cmi}\) of the base model is \(\text{CMI}_{\text{base}}(n)=\frac{1}{2}\log\left(\frac{4n+1}{2n+1}\right)\), whereas the \(\mathsf{cmi}\) of the expanded model is \(\text{CMI}_{\text{exp}}(n)=\log\left(\frac{2n+1}{n+1}\right)\). Figure 4 plots \((\text{CMI}_{\text{exp}}(n)-\text{CMI}_{\text{base}}(n))/\text{CMI}_{\text{base} }(n)\) against \(n\). Clearly \(\text{CMI}_{\text{exp}}(n)>\text{CMI}_{\text{base}}(n)\) for all \(n\), and \(\text{CMI}_{\text{exp}}(n)\to 2\text{CMI}_{\text{base}}(n)\) as \(n\to\infty\). The change in \(\mathsf{cmi}\) can be separated into two pieces. First, by splitting \(\mathbf{y}\), we reduce the data we have to estimate each of the means \(\theta_{1}\) and \(\theta_{2}\). This is reflected in the inequality \(\log\left(\frac{4n+1}{2n+1}\right)>\log\left(\frac{2n+1}{n+1}\right)\). But parametrizing with two independent means adds a degree of freedom in the sampling distribution of the expanded model, doubling the constant factor, which dominates the comparison. However, the latter effect will not always determine the change in \(\mathsf{cmi}\) between models, as the next example shows.
3. We take the base model from the last example with \(\mathbf{y}\in\mathbb{R}^{n}\) and expand it by adding a precision parameter and using a jointly normal-gamma prior: \[\mathbf{y}_{i}\stackrel{{ iid}}{{\sim}}\text{normal}\left(\theta_{1}, \theta_{2}^{-1/2}\right)\text{ for }1\leq i\leq n,\qquad(\theta_{1},\theta_{2})\sim \text{NG}\left(0,\mu_{\theta_{2}^{-1}},2,\mu_{\theta_{2}^{-1}}\right)\] Here, \(\mu_{\theta_{2}^{-1}}>0\) is the prior mean of the variance \(\theta_{2}^{-1}\). The marginal prior on \(\theta_{1}\) is \(\text{normal}\left(0,1\right)\), matching the prior in the base model. Figure 5 shows estimated percentage changes in cmi from base to expanded model against \(n\) for a range of noise levels \(r=\mu_{\theta_{2}^{-1}}/n\). As before, increasing \(n\) makes it easier to distinguish sampling distributions. Similarly, the added degree of freedom introduced by the precision parameter pushes the cmi larger. Hence, the majority of points in Figure 5 lie above 0. However, unlike the last example, the cmi can decrease in the expanded model if \(r\) is sufficiently large. This is because large values of \(r\) create priors that favor sampling distributions with large scales that are correspondingly harder to distinguish. Nevertheless, the effect of the added degree of freedom dominates this comparison. For example, for \(n=2\), the noise level in the base model is \(r=0.5\). Unless the prior average noise level in the expanded model is more than double that of the base model (i.e. \(r=1\)), we can see that the cmi will increase.
4. We now vary the prior scale in the model of the first example. Specifically, take
Figure 4: Percent change in cmi against \(1\leq n\leq 30\).
Figure 5: Percentage change in cmi from the base model to the expanded model against data size \(n\) for a range of noise levels \(r=\mu_{\theta_{2}^{-1}}\).
\(y\mid\theta\sim\operatorname{normal}\left(\theta,1\right)\) and \(\theta\sim\operatorname{normal}\left(0,\sigma_{p}\right)\). This has \(\mathsf{cmi}\) given by \[\frac{1}{2}\log\left(\frac{2\sigma_{p}^{2}+1}{\sigma_{p}^{2}+1}\right).\] This is increasing in \(\sigma_{p}\), and converges to \(0\) as \(\sigma_{p}\to 0\) and to \(\frac{1}{2}\log(2)\) as \(\sigma_{p}\to\infty\). In this case, there is no change in data size or degrees of freedom in the likelihood, and the \(\mathsf{cmi}\) changes only because of the prior.
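The closed-form \(\mathsf{cmi}\) expressions in examples 2 and 4 above are easy to verify numerically. The sketch below is a minimal Python/NumPy check that transcribes the formulas as stated; it computes the percent change plotted in Figure 4 and the prior-scale limits of example 4.

```python
import numpy as np

def cmi_base(n):
    # Base model of example 2: 2n iid normal(theta, 1) observations, theta ~ normal(0, 1).
    return 0.5 * np.log((4 * n + 1) / (2 * n + 1))

def cmi_expanded(n):
    # Expanded model of example 2: two groups of n observations with independent means.
    return np.log((2 * n + 1) / (n + 1))

n = np.arange(1, 31)
pct_change = 100 * (cmi_expanded(n) - cmi_base(n)) / cmi_base(n)  # quantity plotted in Figure 4

def cmi_prior_scale(sigma_p):
    # Example 4: single observation y ~ normal(theta, 1), theta ~ normal(0, sigma_p).
    return 0.5 * np.log((2 * sigma_p**2 + 1) / (sigma_p**2 + 1))

print(pct_change[[0, -1]])                                  # percent change at n = 1 and n = 30
print(cmi_prior_scale(np.array([0.1, 1.0, 100.0])))         # increases toward log(2)/2 as sigma_p grows
```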
In these examples we derived simple expressions for the \(\mathsf{cmi}\) that depended on sample size, sampling variance, and prior variance. In most cases, these expressions increased to a finite upper bound in relevant limits, despite the fact that the \(\mathsf{cmi}\) is unbounded above in general. The following lower bound in terms of the Fisher information demonstrates that this self-limiting behavior, as well as the dominance of the parameter dimension in driving increases in the \(\mathsf{cmi}\), is not limited to these simple examples.
**Theorem 4** (Conditional Mutual Information Lower Bound).: _For \(M\geq 1\), define the \(M\)-replicated model:_
\[p\left(\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(M)},\boldsymbol{\theta}\right)=p (\boldsymbol{\theta})\prod_{i=1}^{M}p\left(\mathbf{y}^{(i)}\mid\boldsymbol{ \theta}\right).\]
_Suppose for \(M\) sufficiently large, we have that_
* _the posterior distributions_ \(p\left(\boldsymbol{\theta}\mid\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(M)}\right)\) _are normal,_
* _the observed information matrix of_ \(p(\mathbf{y},\boldsymbol{\theta})\) _is_ \(\gamma\)_-subexponential for some_ \(\gamma>0\) _(i.e. the observed information does not have heavy tails),_
* \(\mathbb{E}\lambda_{d}(\boldsymbol{\Sigma})\)_,_ \(\mathbb{E}\lambda_{1}^{-1}(\boldsymbol{\Sigma})\)_,_ \(\mathbb{E}\lambda_{d}^{2}\left(\boldsymbol{\mathcal{I}}(\boldsymbol{\theta})\right)\)_, and_ \(\mathbb{E}\lambda_{1}^{-1}\left(\boldsymbol{\mathcal{I}}(\boldsymbol{\theta})\right)\) _are bounded by some_ \(B>0\) _where_ \(\boldsymbol{\Sigma}=\operatorname{Cov}\left(\boldsymbol{\theta}\mid\mathbf{y}^ {(1)},\ldots,\mathbf{y}^{(M)}\right)\) _(i.e. the posterior covariance and Fisher information are neither too small nor too large on average)._
_Then for \(C\) a constant depending on \(\gamma\) and \(B\), we have_
\[\mathbf{I}(\boldsymbol{\theta},\mathbf{y}_{\operatorname{rep}}\mid\mathbf{y })\geq\frac{C}{\log d}\mathrm{tr}\left(\mathbb{E}_{p(\boldsymbol{\theta}, \mathbf{y})}\boldsymbol{\Sigma}_{\mathbf{y}}^{1/2}\boldsymbol{\mathcal{I}}( \boldsymbol{\theta})\boldsymbol{\Sigma}_{\mathbf{y}}^{1/2}\right), \tag{21}\]
_Remarks:_
* The assumption that \(\mathbb{E}\lambda_{1}^{-1}\left(\boldsymbol{\mathcal{I}}(\boldsymbol{\theta}) \right)<B\) rules out singular models with \(\lambda_{1}(\boldsymbol{\mathcal{I}}(\boldsymbol{\theta}))=0\). However, such models can often be reparametrized as \(\boldsymbol{\theta}^{\prime}=\Psi(\boldsymbol{\theta})\) using some \(\Psi:\mathbb{R}^{d}\to\mathbb{R}^{r}\) with \(r<d\) such that \(\boldsymbol{\mathcal{I}}(\boldsymbol{\theta}^{\prime})\) is nonsingular. Applying the result to such a parametrization gives a lower bound for the original \(\mathsf{cmi}\)\(\mathbf{I}(\boldsymbol{\theta},\mathbf{y}_{\operatorname{rep}}\mid\mathbf{y})\).
* Normality of \(p\left(\boldsymbol{\theta}\mid\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(M)}\right)\) for \(M\geq 1\) sufficiently large is almost certainly not satisfied unless it is satisfied for \(M=1\). Nevertheless, if the Bernstein-von-Mises theorem holds, we would expect \(p\left(\boldsymbol{\theta}\mid\mathbf{y}^{(1)},\ldots,\mathbf{y}^{(M)}\right)\) to be nearly normal for large \(M\) even if it is far from normal for \(M=1\). We thus conjecture a similar bound for more general posteriors.
The dependence on the model dimension in this bound comes directly through the number of terms in the trace, while the sampling and prior effects observed in the above examples enter through the magnitudes of those terms, i.e. through the individual eigenvalues of the posterior-normalized Fisher information matrix. The self-limiting phenomenon can also be seen in this bound through the multiplication of covariance and information matrices. For instance, in a Bernstein-von-Mises type limit, the posterior concentrates around the true parameter \(\mathbf{\theta}_{0}\), so that \(\mathbf{\Sigma_{\mathbf{y}}}\approx\mathbf{\mathcal{I}}(\mathbf{\theta}_{0})^{-1}\) and the bound becomes \(Cd/\log d\), which is independent of all non-dimensional factors.
## 4 Marginal Entropy and Sampling Divergence
Let \(p_{\mathrm{base}}(\mathbf{y},\mathbf{\theta})\) and \(p(\mathbf{y},\mathbf{\theta},\mathbf{\lambda})\) be a base model and expansion. Consider the following two extreme scenarios:
* Let \(p_{\mathrm{base}}(\mathbf{y}\mid\mathbf{\theta})=q(\mathbf{y})\) for a density \(q\). Then the likelihood is constant, and \(h_{p_{\mathrm{base}}}(\mathbf{\theta})=h_{p_{\mathrm{base}}}(\mathbf{\theta}\mid \mathbf{y},\mathbf{y}_{\mathrm{rep}})\) for all \(\mathbf{y}\) and \(\mathbf{y}_{\mathrm{rep}}\). It follows immediately that \(\mathbf{I}(\mathbf{\theta},\mathbf{y})=\mathbf{I}(\mathbf{\theta},\mathbf{y}_{ \mathrm{rep}}\mid\mathbf{y})=0\), and so any nontrivial expansion of \(p_{\mathrm{base}}\) with \(\mathbf{\theta}\) and \(\mathbf{\lambda}\) independent must decrease the marginal posterior entropy of \(\mathbf{\theta}\) and increase the \(\mathsf{cmi}\).
* Let \(p_{\mathrm{base}}(\mathbf{y}\mid\theta)=\mathrm{normal}(\mathbf{y}\mid\theta,1)\), and then take the expansion \(p(\mathbf{y}\mid\theta,\lambda)=p_{\mathrm{base}}(\mathbf{y}\mid\theta+\lambda)\) with priors \(\theta\sim\mathrm{normal}(0,\sigma_{\theta}^{2})\) and \(\lambda\sim\mathrm{normal}\left(0,\sigma_{\lambda}^{2}\right)\). We then have for the expanded model that \[\mathbf{I}(\mathbf{\theta},\mathbf{y})=\frac{1}{2}\log\left(1+\frac{\sigma_{ \theta}^{2}}{1+\sigma_{\lambda}^{2}}\right),\quad\mathbf{I}\left((\mathbf{\theta},\mathbf{\lambda}),\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y}\right)=\frac{1}{2}\log \left(\frac{1+2(\sigma_{\theta}^{2}+\sigma_{\lambda}^{2})}{1+\sigma_{\theta}^ {2}+\sigma_{\lambda}^{2}}\right).\] Similarly, the \(\mathsf{cmi}\) in the base model is \[\mathbf{I}\left(\mathbf{\theta},\mathbf{y}_{\mathrm{rep}}\mid\mathbf{y}\right)= \frac{1}{2}\log\left(\frac{1+2\sigma_{\theta}^{2}}{1+\sigma_{\theta}^{2}} \right).\] Both \(\mathsf{cmi}\) expressions are bounded above by \(\frac{1}{2}\log(2)\). Furthermore, by taking \(\sigma_{\theta}^{2}\) large, we can ensure that the \(\mathsf{cmi}\) for the base model is arbitrarily close to this limit. Then the \(\mathsf{cmi}\) can increase only negligibly in the expanded model regardless of \(\sigma_{\lambda}^{2}\). However, fixing \(\sigma_{\theta}^{2}\), the \(\mathsf{mi}\) in the expanded model tends to \(0\) as \(\sigma_{\lambda}^{2}\to\infty\).
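To make the trade-off in the second scenario concrete, the following minimal NumPy sketch tabulates the \(\mathsf{mi}\) and \(\mathsf{cmi}\) of the expanded location model as \(\sigma_{\lambda}^{2}\) grows with \(\sigma_{\theta}^{2}\) held fixed; the particular variance values are illustrative choices, not values used elsewhere in this paper.

```python
import numpy as np

sigma_theta2 = 25.0                                   # fixed prior variance for theta (illustrative)
sigma_lambda2 = np.array([0.0, 1.0, 10.0, 100.0])     # prior variance for the added parameter lambda

# Marginal mutual information I(theta, y) in the expanded model.
mi = 0.5 * np.log(1 + sigma_theta2 / (1 + sigma_lambda2))

# Conditional mutual information I((theta, lambda), y_rep | y) in the expanded model.
s2 = sigma_theta2 + sigma_lambda2
cmi = 0.5 * np.log((1 + 2 * s2) / (1 + s2))

for sl2, m, c in zip(sigma_lambda2, mi, cmi):
    print(f"sigma_lambda^2 = {sl2:6.1f}   mi = {m:.3f}   cmi = {c:.3f}")
# The cmi barely moves (its upper limit is log(2)/2 ~ 0.347) while the mi collapses toward 0.
```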
In each case, one of the models has singular Fisher information, and either the \(\mathsf{mi}\) or \(\mathsf{cmi}\) degrades (i.e. decreases or increases respectively) while the other changes negligibly. When both the base model and expanded model have nonsingular Fisher information, we can show that in certain cases there is a strict trade-off asymptotically between worsening (i.e. decreasing) \(\mathsf{mi}\) and worsening (i.e. increasing) \(\mathsf{cmi}\).
**Theorem 5**.: _Let \(p\left(\mathbf{\theta},\mathbf{\lambda},\mathbf{y}\right)\) be an expansion of \(p_{\mathrm{base}}(\mathbf{\theta},\mathbf{y})\), and let \(\left\{\iota_{i}\right\}_{i=1}^{d}\) and \(\left\{\iota_{i}^{\mathrm{cond}}\right\}_{i=1}^{d}\) be the eigenvalues of \(\mathbb{E}\mathbf{\mathcal{I}}_{\mathrm{base}}(\mathbf{\theta})\) and \(\mathbb{E}\mathbf{\mathcal{I}}(\mathbf{\theta}\mid\mathbf{\lambda})\) respectively._
_Furthermore, suppose that the conditions of Theorems 2, 3, and 4 hold. In particular this requires that:_
1. _the marginal priors_ \(p_{\rm base}(\mathbf{\theta})\) _and_ \(p(\mathbf{\theta})\) _are log concave._
2. _the posteriors_ \(p_{\rm base}(\mathbf{\theta}\mid\mathbf{y}_{M})\) _and_ \(p(\mathbf{\theta},\mathbf{\lambda}\mid\mathbf{y}_{M})\) _are normal for all_ \(\mathbf{y}_{M}\) _for_ \(M\) _large enough, where_ \(\mathbf{y}_{M}\) _is a vector of_ \(M\) _i.i.d. replicated draws from_ \(p_{\rm base}(\mathbf{y}\mid\mathbf{\theta})\) _and_ \(p(\mathbf{y}\mid\mathbf{\theta},\mathbf{\lambda})\) _respectively,_
3. _the expected spectra of the Fisher information matrices_ \(\mathbf{\mathcal{I}}_{\rm base}(\mathbf{\theta})\) _and_ \(\mathbf{\mathcal{I}}(\mathbf{\theta},\mathbf{\lambda})\) _are bounded above and below by universal constants, and similarly for the expected spectra of the posterior covariance matrices,_
4. _and a few other regularity conditions on the smoothness of the relevant densities and tails of the observed information._
_Additionally, we further assume that the distributions of the Fisher information matrices and posterior covariance matrices are not too skewed in the sense of Lemma 10 in Appendix F. This ensures that the (random) Fisher information and covariance matrices are sufficiently well-summarized by their means._
_Under these conditions, for increasing functions \(\psi_{1}\) and \(\psi_{2}\), we have the following inequalities:_
\[\mathbf{I}_{\rm base}\left(\mathbf{y},\mathbf{\theta}\right)\leq\psi_{1}\left( \sum_{j=1}^{d}\iota_{j}\right),\qquad\mathbf{I}_{\rm base}\left(\mathbf{y}_{ \rm rep},\mathbf{\theta}\mid\mathbf{y}\right)\geq\frac{1}{\log d}\sum_{j=1}^{d} \psi_{2}\left(\iota_{j}\right) \tag{22}\]
\[\mathbf{I}\left(\mathbf{y},\mathbf{\theta}\right)\leq\psi_{1}\left( \sum_{j=1}^{d}\iota_{j}^{\rm cond}\right)-\Delta_{i}, \tag{23}\] \[\mathbf{I}\left(\mathbf{y}_{\rm rep},(\mathbf{\theta},\mathbf{\lambda}) \mid\mathbf{y}\right)\geq\frac{1}{\log d^{\rm exp}}\left[\sum_{j=1}^{d}\psi_{ 2}\left(\iota_{j}^{\rm cond}\right)+\Delta_{f}\right], \tag{24}\]
_where \(\Delta_{f}\geq 0\), and \(\Delta_{i}\geq 0\) so long as knowledge of \(\mathbf{y}\) does not decrease the information that \(\mathbf{\lambda}\) provides about \(\mathbf{\theta}\) in the sense that \(\sum_{j=1}^{d}\Delta_{j}\leq 0\) (where the \(\Delta_{j}\) are defined in Theorem 3). Furthermore, if \(p\) is a totally diluting expansion of \(p_{\rm base}\), then we have_
\[\psi_{1}\left(\sum_{j=1}^{d}\iota_{j}^{\rm cond}\right)\leq\psi_{1}\left(\sum_{j=1}^{d}\iota_{j}\right), \tag{25}\]
_and if \(p\) is a totally nondiluting expansion of \(p_{\rm base}\), then we have_
\[\sum_{j=1}^{d}\psi_{2}\left(\iota_{j}^{\rm cond}\right)\geq\sum_{j=1}^{d}\psi_{2}\left(\iota_{j}\right). \tag{26}\]
Proof.: See Appendix F.
This result substantially generalizes the pattern we observed in our introductory regression example, where predictor correlation structures that offered better identification created greater posterior uncertainty about the sampling distribution and vice versa. We can interpret this theorem in a positive and negative light. Negatively, when our prior information is relatively unstructured, the process of model expansion may force
us to confront either weakly identified marginal inferences or a large posterior sampling divergence. Positively, a prior with sufficient dependence between the parameters may allow us to avoid these difficulties, even if this prior is weak in the sense of carrying relatively little marginal information about any particular parameter. In the next sections, we explore examples where these two difficulties occur and demonstrate how they can be partially overcome with sufficiently rich posterior summaries.
## 5 Example: Inference Under Poor Identification
The next two sections explore methodological implications of the above results with concrete examples. Here we consider an example in which we get 'stuck' between an implausibly simple base model and an expanded model where the parameter of interest is too weakly identified to allow strong conclusions. We then show how the expanded model can still support nontrivial inferences which, in this case, inform how we should collect future data. In so doing, we hope to demonstrate (i) how the concerns raised in Sections 2 and 4 motivate looking beyond standard marginal posterior summaries, and (ii) that weak identification need not be an inferential dead end for a statistical analysis.
### Two Models for Grouped Data
We first define three simulated data sets, each generated by drawing random samples of \(M\) measurements from each of \(L\) subpopulations. We take \(M=2\) and \(L=20\) for all data sets, but vary the ratio of within- and between-subpopulation variances between them. Appendix B provides a complete description of the data generating process. Figure 6 plots the data, with a row for each subpopulation, a column for each data set, and dots for the individual measurements.
The unobserved grand (i.e. superpopulation-level) mean will be our quantity of interest. We also take more positive values of the grand mean to represent "better" outcomes (i.e. more desirable from the researcher's standpoint). We think about the identification of the grand mean in two ways:
Figure 6: Columns: the three data sets. Rows: the 20 subpopulations. Cells: the two data points drawn from each subpopulation, connected with a horizontal line to show their range.
1. In terms of the reduction in entropy from marginal prior to posterior or the ratio of their standard deviations (which is just a monotonic transform of the former when both distributions are normal). This notion is general, but being unit-free, it is somewhat unnatural for drawing practical conclusions.
2. The posterior probability that the grand mean is below a threshold of practical significance. Specifically, we consider the grand mean to be practically significant only if it exceeds 1.
While these notions are distinct, greater entropy will tend to be associated with larger probability of a not-practically-significant effect (so long as the prior is sufficiently constraining in the extremes).
#### 5.1.1 An Oversimplified Initial Model
In our base model, we assume the subpopulation-level distributions are identical (so the subpopulation means equal the grand mean). Letting \(\mathbf{y}_{ml}\) be the data with \(1\leq m\leq M\) indexing measurements and \(1\leq l\leq L\) indexing subpopulations, the sampling distribution is \(\mathbf{y}_{ml}\stackrel{{ iid}}{{\sim}}\text{normal}\left(\mu, \sigma_{*}\right)\).
Here \(\mu\) represents the grand mean, to which we assign prior \(\text{normal}\left(0,|\mu_{0}|\right)\), where \(\mu_{0}\) is the "true" value used in simulating the three data sets. We treat \(\sigma_{*}\) as a hyperparameter which we set to a prior guess.
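Since the base model is conjugate, its posterior is available in closed form. The sketch below (Python/NumPy/SciPy) computes the two identification summaries discussed above for one synthetic data set; the values of \(\mu_{0}\), \(\sigma_{*}\), and the stand-in data generator are placeholders, not the settings from Appendix B.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
M, L = 2, 20
mu0, sigma_star = 2.0, 1.5                       # placeholder "true" grand mean and prior guess for the scale
y = rng.normal(mu0, sigma_star, size=(M, L))     # stand-in data; the paper's generator is in its Appendix B

# Conjugate normal posterior for mu under y_ml ~ normal(mu, sigma_star), mu ~ normal(0, |mu0|).
prior_sd = abs(mu0)
post_prec = 1 / prior_sd**2 + y.size / sigma_star**2
post_sd = post_prec ** -0.5
post_mean = (y.sum() / sigma_star**2) / post_prec

print("posterior sd / prior sd:", post_sd / prior_sd)
print("P(mu < 1 | y):", stats.norm.cdf(1.0, loc=post_mean, scale=post_sd))
```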
Figure 7 shows histograms of draws from the posteriors for each data set along with the prior density. For each data set, the posterior standard deviation is approximately a quarter of the prior standard deviation, and the probabilities of a practically insignificant effect (i.e. \(\mu<1\)) are between 2% and 10%. Overall, we have improved on our prior knowledge, but the posterior cannot fully rule out the possibility of an insignificant effect.
#### 5.1.2 A More Plausible Expanded Model
Two features of the base model stand out for criticism:
1. We usually cannot confidently guess the scale \(\sigma_{*}\) with just prior information.
2. If we regard the subpopulations as distinct for data collection purposes, then we likely have reason to believe their distributions could be distinct.
Our expanded model thus adds a parameter for the subpopulation scale and allows the subpopulations to have distinct means (drawn from some superpopulation). In
Figure 7: Histograms of samples from the posterior distributions of \(\mu\) under the base model fit to each data set. The red curve shows the density of the prior on \(\mu\).
symbols,
\[\begin{split}\mathbf{y}_{ml}&\stackrel{{ iid}}{{\sim}}\text{normal}\left(\theta_{l},\sigma\right),\qquad\theta_{l} \stackrel{{ iid}}{{\sim}}\text{normal}\left(\mu,\tau\right),\\ \mu&\sim\text{normal}\left(0,|\mu_{0}|\right), \qquad\sigma\sim\text{gamma}\left(a_{\sigma},b_{\sigma}\right),\qquad\tau \sim\text{gamma}\left(a_{\tau},b_{\tau}\right).\end{split} \tag{27}\]
In the above, \(a_{\sigma},b_{\sigma},a_{\tau},b_{\tau}\) are hyperparameters of the model, and \(\mu_{0}\) is unchanged from the base model. The hyperparameters \(a_{\sigma}\) and \(b_{\sigma}\) are taken such that the prior mode for \(\sigma\) equals our prior guess \(\sigma_{*}\). The prior on \(\mu\) is unchanged from the base model, and \(\mu\) again corresponds to the grand mean. Setting \(\sigma=\sigma_{*}\) and \(\tau=0\) recovers the base model as a special case.
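One direct way to fit the expanded model (27) is with a probabilistic programming library. Below is a minimal PyMC sketch; the hyperparameter values and the stand-in data are illustrative assumptions rather than the choices described in Appendix B.

```python
import numpy as np
import pymc as pm

# Placeholder data: y[m, l] for M = 2 measurements in each of L = 20 subpopulations.
M, L = 2, 20
rng = np.random.default_rng(1)
y_obs = rng.normal(2.0, 1.0, size=(M, L))

mu0 = 2.0                      # assumed "true" grand mean used to set the prior scale
a_sigma, b_sigma = 3.0, 2.0    # assumed shapes/rates (prior mode for sigma matches a guess sigma_* = 1)
a_tau, b_tau = 3.0, 2.0

with pm.Model() as expanded_model:
    mu = pm.Normal("mu", mu=0.0, sigma=abs(mu0))
    sigma = pm.Gamma("sigma", alpha=a_sigma, beta=b_sigma)
    tau = pm.Gamma("tau", alpha=a_tau, beta=b_tau)
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=L)       # subpopulation means
    pm.Normal("y", mu=theta, sigma=sigma, observed=y_obs)       # theta broadcasts over the M measurements
    idata = pm.sample(1000, tune=1000, chains=4, random_seed=1)

mu_draws = idata.posterior["mu"].values.ravel()
print("sd(mu | y) =", mu_draws.std(), "  P(mu < 1 | y) =", (mu_draws < 1).mean())
```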
Figure 8 shows the posterior inferences for \(\mu\) in the expanded model. In the first and third data sets, the posterior standard deviation is \(\approx 1/3\) larger than in the base model. For the second data set, it is more than twice as large as in the base model. In all cases, the standard deviation is at least a third that of the prior. Similarly, the probabilities of a practically insignificant effect are all at least three times larger than in the base model, ranging from \(\approx 6\%\) in the best case to over \(30\%\).
We might now return to the base model or focus on subpopulation means to get better identification. However, the former is inadvisable since the base model was implausible, and the latter will likely fail since the subpopulation sample sizes are tiny. A less convenient but much safer approach is simply to gather more data. In fact, inferences from the expanded model can yield relevant information for future data collection despite the weak identification.
### A Bootstrap Comparison of Two Sampling Schemes
It has been recognized that bootstrapping new datasets, fitting a Bayesian model to each, and then aggregating the results can provide more information than the fit to the observed data alone [6]. This idea has been used, e.g., for model criticism and selection tasks [14, 13]. Here, we demonstrate that a similar procedure with a Bayesian parametric bootstrap of future data can help us learn about how to sample such data despite the underwhelming marginal identification of the posterior.
First, we must identify the candidate sampling schemes. We may sample the same subpopulations that we sampled originally, or we may sample from new subpopulations, or some combination of these. If the within-subpopulation variance is low, then sampling the same subpopulations may yield little improvement for identification. If instead the between-subpopulation variance is low, sampling the same subpopulations may provide enough information about the individual \(\theta_{l}\) to identify \(\mu\) well. The issue may be further complicated by cost differences. For our purposes, we assume that a sample from a new subpopulation is four times the cost of a sample from an existing subpopulation.
Figure 8: Histograms of samples from the marginal posteriors of \(\mu\) under the expanded model fit to each data set. The red curve shows the density of the prior on \(\mu\).
To simplify our comparison, we focus on whether it is more efficient to sample _exclusively_ from existing subpopulations or from new subpopulations. To answer this, we use the joint posterior over parameters and replicated data to simulate sampling new data with these two schemes. We then compare posteriors on \(\mu\) after refitting the model to these enlarged data sets. Replicating many times then allows us to assess the relative risks of each approach. Figure 9 diagrams these resampling schemes with full pseudocode in Appendix B.
We take \(M^{\text{new}}=8\) additional samples from each of the \(L=20\) existing subpopulations in the first scheme and \(M=2\) samples from \(L^{\text{new}}=20\) new subpopulations in the second scheme. These are performed at equivalent overall cost, hence the difference in total sample size. We replicate each scheme \(R=500\) times. We also repeat this process with the prior replacing the posterior at each step in order to demonstrate that the posterior inferences differ from prior inferences despite the marginal weak identification.
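To illustrate the mechanics of the two schemes, the sketch below simulates one future data set per scheme from a single joint posterior draw (here filled in with placeholder values). In the full procedure, each simulated data set is appended to the observed data, the model is refit, and the loop is repeated \(R=500\) times, as detailed in Appendix B.

```python
import numpy as np

rng = np.random.default_rng(2)
L, M, M_new, L_new = 20, 2, 8, 20

# One joint posterior draw (placeholder values; in practice taken from the fitted expanded model).
draw = {"mu": 1.8, "sigma": 1.1, "tau": 0.9,
        "theta": rng.normal(1.8, 0.9, size=L)}

def simulate_existing(draw, rng):
    # Scheme 1: M_new additional measurements from each existing subpopulation.
    return rng.normal(draw["theta"], draw["sigma"], size=(M_new, L))

def simulate_new(draw, rng):
    # Scheme 2: M measurements from each of L_new brand-new subpopulations,
    # whose means are drawn from the superpopulation.
    theta_new = rng.normal(draw["mu"], draw["tau"], size=L_new)
    return rng.normal(theta_new, draw["sigma"], size=(M, L_new))

y_future_existing = simulate_existing(draw, rng)   # shape (8, 20): cost 1 per sample
y_future_new = simulate_new(draw, rng)             # shape (2, 20): cost 4 per sample, same total cost
```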
For each of the \(r=1,\ldots,R\) replications and each method, define the ratio \(\rho_{r}=\sigma^{(r)}/\sigma_{\text{obs}}\), where \(\sigma^{(r)}\) is the posterior standard deviation of \(\mu\) for the \(r^{\text{th}}\) expanded data set simulated from the given method, and where \(\sigma_{\text{obs}}\) is the posterior standard deviation of \(\mu\) given just the observed data set. Figure 10 presents the averages of these ratios over all replications \(\overline{\rho}=\frac{1}{R}\sum_{r=1}^{R}\rho_{r}\) for each method and each of our observed data sets.
Despite weak identification, the posterior and prior columns differ, indicating that our simulated data reflects information learned from the observed data. Furthermore, the average improvement to identification for each scheme depends substantially on the data set, with each scheme winning in one data set and tying in the third.
Figure 11 gives a finer-grained comparison, plotting histograms of the \(\rho_{r}\) for the two schemes. While the first data set yields a tie in the comparison of averages, the distribution of \(\rho_{r}\) is wider when sampling the same subpopulations than when sampling new subpopulations. This suggests a risk-reward trade-off: the latter scheme gives a more predictable reduction in uncertainty while the former carries the possibility of a greater reduction. Together this demonstrates that we can get nontrivial inferences which depend strongly on the particular data observed despite weak identification.
## 6 Example: Conditional Model Checking
We next consider a model where the conditional ppp-vs contain substantially more information than the marginal \(p\)-value (i.e. the regular ppp-v) for a relevant test statistic.
Figure 9: Schematic representation of our two resampling schemes. Dashed arrows represent sampling the output variable conditional on the input. Solid arrows represent that the output variable is formed by evaluating the circled function on the input variables.
This conditional information motivates a modification which further improves model fitness and resolves a problem that was masked by the marginal \(p\)-value.
We take as our base model the election forecasting model of [11]. We pay particular attention to herding, a process in which pollsters systematically augment their raw data in such a way that their published poll numbers are closer to the existing consensus of recent polls than they would otherwise be. That the variance among presidential election polls fell well below the minimal expected variance (assuming independence between polls) just before the election has been used as evidence for the existence of herding in 2016 [22]. As this model does not explicitly account for herding, this will be the starting point of our model checking.
### The Base Forecasting Model and a Check for Herding
A more complete discussion of the model specification is given in Appendix C, and full details can be found in [11]. The primary purpose of the model is to infer the level of support for the Democratic candidate over time and across states. This level of support is represented by a matrix parameter \(\boldsymbol{\mu}\in\mathbb{R}^{S\times T}\) with rows representing the \(S=51\) states (including Washington DC) and columns representing the \(T\) days from the start of measurement until election day. This parameter is assigned a time series prior:
\[\boldsymbol{\mu}_{t}\mid\boldsymbol{\mu}_{t+1}\sim\text{normal}\left( \boldsymbol{\mu}_{t+1},\boldsymbol{\Sigma}^{\mu}\right)\text{ for }1\leq t\leq T-1,\text{ and }\boldsymbol{\mu}_{T}\sim\text{ normal}\left(\mathbf{m}^{\text{f}},\mathbf{S}^{\text{f}}\right). \tag{28}\]
Here \(\boldsymbol{\Sigma}^{\mu}\in\mathbb{R}^{S\times S}\) is a hyperparameter encoding correlation between states and variation over time, constructed using demographic data, polling from previous elections, and domain knowledge. Likewise, \(\mathbf{m}^{f}\in\mathbb{R}^{S}\) and \(\mathbf{S}^{f}\in\mathbb{R}^{S\times S}\) are hyperparameters set using
Figure 11: Histograms of estimated posterior standard deviations of \(\mu\) for 500 simulations of future data under each sampling scheme and each data set.
Figure 10: Averages \(\overline{\rho}\) of the ratios \(\rho_{r}=\sigma^{(r)}/\sigma_{\text{obs}}\) for each sampling scheme and each observed data set, using future data simulated from the posterior and, for comparison, from the prior.
a 'fundamentals forecast' derived from variables known in political science to be good predictors of U.S. election outcomes.
Results of state and national polls are modeled with a binomial distribution that combines \(\boldsymbol{\mu}\) with terms representing sources of polling bias. Letting \(i=1,\ldots,N_{\text{state}}\) index state polls and \(y_{i}\) denote the number of respondents supporting the Democratic candidate out of \(n_{i}\) respondents in poll \(i\), we have
\[y_{i}\sim\text{binomial}\left(\mathsf{logit}^{-1}\left(\boldsymbol{\mu}_{s_{i },t_{i}}+\beta_{i}\right),n_{i}\right), \tag{29}\]
where \(s_{i}\), \(t_{i}\) denote the state and day for poll \(i\) and \(\beta_{i}\) models various sources of bias (see Appendix C for further discussion).
National polls are modeled similarly, except state-level terms including \(\boldsymbol{\mu}_{st}\) are averaged with weights accounting for each state's share of the national vote in the previous election.
Since most national polls lie in \((0.4,0.6)\) in any close race, the binomial sampling model (29) effectively places a lower bound of \(\sqrt{0.6\times 0.4/n_{i}}\approx 0.49/\sqrt{n_{i}}\) on the standard deviation of national poll \(i\). Thus, this model will be incompatible with sufficiently low poll variance over any interval with enough published polls. Figure 12 compares the observed variation in the last ten days with posterior predictive replications and shows that the observed polling variance is substantially smaller than in posterior simulations.
### A Simple Model of Herding
By allowing dependence among polls, adding a herding mechanism may resolve this tension between model and data. First define \(\theta_{i}^{\text{state}}=\mathsf{logit}^{-1}\left(\boldsymbol{\mu}_{s_{i},t_ {i}}+\beta_{i}\right)\) and \(\theta_{j}^{\text{nat}}=\mathsf{logit}^{-1}\left(\overline{\mu}_{t_{j}}+ \overline{\beta}_{j}\right)\) for \(1\leq i\leq N^{\text{state}}\) and \(1\leq j\leq N^{\text{nat}}\), where \(\overline{\mu}_{t_{j}}\) and \(\overline{\beta}_{j}\) are weighted averages of the means and biases over the states. The \(\theta\) represent the expected average support for the Democratic candidate in random samples drawn from the sampling frame of the corresponding poll. We model the sampling process of national poll \(j\) as follows.
1. The pollster samples their sampling frame, observing unherded result: \[y_{j}^{\text{nat}}\mid\theta_{j}^{\text{nat}}\sim\text{binomial}\left( \theta_{j}^{\text{nat}},n_{j}^{\text{nat}}\right).\]
Figure 12: The observed data is highlighted in red. Left: box plots of the last ten days of polls for the observed data and fifty posterior predictive replications, ordered by range. Right: histogram of standard deviations of the last ten days of polls for the observed data and 6000 posterior predictive replications.
2. The pollster calculates a herding target \(\mu_{j}^{\text{herd}}\) representing their quantification of the consensus of polls to which they compare their \(y_{j}^{\text{nat}}\).
3. The pollster herds \(y_{j}^{\text{nat}}\) by some fraction \(\lambda_{j}^{\text{herd}}\in(0,1)\) towards \(\mu_{j}^{\text{herd}}\). They publish this herded result: \[p_{j}^{\text{nat}}=\left(1-\lambda_{j}^{\text{herd}}\right)\frac{y_{j}^{\text{nat}}}{n_{j}^{\text{nat}}}+\lambda_{j}^{\text{herd}}\mu_{j}^{\text{herd}}.\]
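Read generatively, this three-step mechanism is straightforward to simulate; a minimal NumPy sketch with illustrative parameter values follows.

```python
import numpy as np

rng = np.random.default_rng(3)

def publish_herded_poll(theta_nat, n_nat, mu_herd, lam_herd, rng):
    # Step 1: the pollster samples its frame and observes the unherded count.
    y_nat = rng.binomial(n_nat, theta_nat)
    # Steps 2-3: shrink the raw proportion toward the herding target and publish.
    return (1 - lam_herd) * (y_nat / n_nat) + lam_herd * mu_herd

# Illustrative values: true support 0.52, consensus target 0.50, 30% herding.
published = [publish_herded_poll(0.52, 1000, 0.50, 0.30, rng) for _ in range(10)]
raw = rng.binomial(1000, 0.52, size=10) / 1000
print("sd(published) =", np.std(published), " vs sd(raw) =", np.std(raw))  # herding compresses variance
```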
To simplify the implementation of this scheme, we use the normal approximation to the binomial, arriving at
\[\frac{y_{j}^{\text{nat}}}{n_{j}^{\text{nat}}}\mid\theta_{j}^{\text{nat}}\sim\text{normal}\left(\theta_{j}^{\text{nat}},\sqrt{\theta_{j}^{\text{nat}}\left(1-\theta_{j}^{\text{nat}}\right)/n_{j}^{\text{nat}}}\right), \tag{30}\]
\[p_{j}^{\text{nat}}\mid\theta_{j}^{\text{nat}},\mu_{j}^{\text{herd}},\lambda_{j}^{\text{herd}}\sim\text{normal}\left(\left(1-\lambda_{j}^{\text{herd}}\right)\theta_{j}^{\text{nat}}+\lambda_{j}^{\text{herd}}\mu_{j}^{\text{herd}},\ \left(1-\lambda_{j}^{\text{herd}}\right)\sqrt{\theta_{j}^{\text{nat}}\left(1-\theta_{j}^{\text{nat}}\right)/n_{j}^{\text{nat}}}\right). \tag{31}\]
The herding model for the \(1\leq j\leq N^{\text{state}}\) polls of states \(1\leq s\leq 51\) uses analogous targets \(\mu_{s_{j},j}^{\text{herd}}\) and percentages \(\lambda_{s_{j},j}^{\text{herd}}\), where \(s_{j}\) is the state in which poll \(j\) was conducted. Now let \(m^{\text{nat}}\), \(s^{\text{nat}}\), \(m_{s}^{\text{state}}\) and \(s_{s}^{\text{state}}\) be the sample averages and standard deviations of the national polls and polls of state \(1\leq s\leq 51\) respectively. We set the following priors for these parameters.
\[\mu_{i}^{\text{herd}}\stackrel{{ iid}}{{\sim}}\text{normal}\left(m^{ \text{nat}},\frac{2}{3}s^{\text{nat}}\right),\quad\mu_{s_{j},j}^{\text{herd}} \stackrel{{ iid}}{{\sim}}\text{normal}\left(m_{s_{j}}^{\text{ state}},\frac{2}{3}s_{s_{j}}^{\text{state}}\right). \tag{32}\]
These essentially just constrain the herding targets to lie in \(\pm 2\) standard deviations of the overall polling means, keeping them away from the extremes of our data. Next let \(\mathcal{C}=\{1\leq i\leq N^{\text{nat}}\mid t_{i}\geq T-10\}\) be the indices of national polls conducted in the last ten days prior to the election. We then set priors:
\[\lambda_{i}^{\text{nat}},\lambda_{j,s_{j}}^{\text{state}} \stackrel{{ iid}}{{\sim}}\text{Beta}\left(1,9\right),\text{ for all }1\leq j\leq N^{\text{state}}\text{ and }i\in\mathcal{C}^{c},\] \[\lambda_{i}^{\text{nat}}\Bigg{|}\mu_{\lambda}^{\text{last}},k_{ \lambda}^{\text{last}}\stackrel{{ iid}}{{\sim}}\text{Beta}\left(\mu_{ \lambda}^{\text{last}}k_{\lambda}^{\text{last}},\left(1-\mu_{\lambda}^{\text{ last}}\right)k_{\lambda}^{\text{last}}\right),\text{ for all }i\in\mathcal{C}, \tag{33}\]
where we use hyperpriors \(\mu_{\lambda}^{\text{last}}\sim\text{uniform}\left[0,1\right]\) and \(k_{\lambda}^{\text{last}}\sim\text{normal}\left(200,120\right)\). We also constrain the \(\lambda\) explicitly to \(\left[0,0.9\right]\) since values too close to \(1\) create adverse geometry that frustrates the sampler, and since our prior beliefs rule out such values. The hierarchical prior on the last ten days of national herding parameters serves two purposes. Since the polls in this period are our primary evidence for herding, inference for \(\mu_{\lambda}^{\text{last}}\) and \(k_{\lambda}^{\text{last}}\) is of substantive interest. Also, since data is especially dense in this period, we can better estimate the herding parameters for these polls (and can thus afford the weaker marginal priors implied by the hierarchical structure).
Figure 13 displays the posterior predictive check of polling variation in the last ten days for the expanded model. While the observed variation is still relatively small, it is no longer implausible. We may now be tempted to stop and declare our model good enough, as it is not obvious what further improvements to make. We can, however, extract more information than is revealed by this (marginal) posterior predictive check.
### A More Informative Conditional Model Check
Recall that the conditional posterior predictive \(p\)-values (cppp-vs) are defined for test statistic \(T\) as \(p_{T}(\mathbf{\theta})=\mathbb{E}_{p(\mathbf{y}_{\text{rep}}\mid\mathbf{\theta})}\,\mathbb{1}\{T(\mathbf{y}_{\text{rep}})\geq T(\mathbf{y})\}\). The top panel of Figure 14 plots the cppp-vs with \(T\) equal to the standard deviation of the last ten days of national polls against posterior samples of \(\mu_{\lambda}^{\text{last}}\), the average herding percentage in the last ten days of polling. The marginal distributions displayed on the axes show that the distribution of cppp-vs has a heavy right tail with the preponderance of sampled \(p\)-values less than the average of 0.05. Furthermore, the fit of the model appears much better for larger values of \(\mu_{\lambda}^{\text{last}}\), suggesting that model fitness may improve if the posterior favored more herding. But it is unclear _how_ we should achieve this as using a more informative prior to favor higher values of \(\mu_{\lambda}^{\text{last}}\) would be inconsistent with our (lack of) prior knowledge. Another comparison tells us more. The bottom panel of Figure 14 plots the same cppp-vs against the standard deviation of the herding targets \(\mu_{i}^{\text{herd}}\) in the last ten days before the election. Less variability in the herding targets is associated with better fit, indicating another avenue for improvement.
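For reference, these cppp-v curves can be estimated by simple Monte Carlo: for each posterior draw of the parameters, simulate replicated data and record how often the replicated statistic is at least as large as the observed one. The generic sketch below (Python/NumPy) does this; the toy model, `simulate_replicate`, and the stand-in posterior draws are placeholders, not the forecasting model itself.

```python
import numpy as np

def conditional_ppp(theta_draws, y_obs, simulate_replicate, T, n_rep=200, rng=None):
    """Estimate p_T(theta) = E_{p(y_rep | theta)} 1{T(y_rep) >= T(y)} at each posterior draw."""
    if rng is None:
        rng = np.random.default_rng()
    t_obs = T(y_obs)
    pvals = np.empty(len(theta_draws))
    for i, theta in enumerate(theta_draws):
        t_rep = np.array([T(simulate_replicate(theta, rng)) for _ in range(n_rep)])
        pvals[i] = np.mean(t_rep >= t_obs)
    return pvals   # plotting these against the draws gives curves like Figures 3 and 14

# Toy usage: check the sd of 10 iid normal(theta, 1) observations against over-dispersed data.
rng = np.random.default_rng(4)
y_obs = rng.normal(0.0, 2.0, size=10)                     # over-dispersed relative to the model's unit scale
theta_draws = rng.normal(y_obs.mean(), 0.3, size=100)     # stand-in for posterior draws
p = conditional_ppp(theta_draws, y_obs,
                    simulate_replicate=lambda th, r: r.normal(th, 1.0, size=10),
                    T=np.std, rng=rng)
print("marginal ppp-v ~", p.mean(), " min/max cppp-v:", p.min(), p.max())
```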
By leveraging only the gross features of the poll results, the prior on \(\mu_{i}^{\text{herd}}\) was designed to be weakly informative. However, this prior only enforces that \(\mu_{i}^{\text{herd}}\) not be extreme compared to all polls in the series. But clearly \(\mu_{i}^{\text{herd}}\) should also be non-extreme
Figure 14: Conditional cppp-vs for the standard deviation of the last ten days of national polls in the expanded model plotted against the population average herding percentage over that time frame (top) and the standard deviation of the herding targets over that time frame (bottom). Black lines indicate the marginal ppp-v. Estimated marginal distributions are displayed on the margins.
Figure 13: The observed data is highlighted in red. Left: box plots of the last ten days of polls for the observed data and fifty posterior predictive replications, ordered by total range. Right: histogram of the standard deviation of the last ten days of national polls for the observed data and six thousand posterior predictive replications.
compared to polls around the _specific_ time that poll \(i\) was conducted. Including this prior information should reduce the posterior standard deviation of herding targets and thus hopefully improve the fit.
### Using the Conditional Check to Improve the Model
To construct a new prior on the targets \(\mu_{i}^{\text{herd}}\), we calculate the trailing averages
\[c_{d}^{i}=\frac{1}{|\mathcal{C}_{t_{i}}^{d}|}\sum_{k\in\mathcal{C}_{t_{i}}^{d}}\frac{y_{k}^{\text{nat}}}{n_{k}^{\text{nat}}} \tag{34}\]
for \(d=2,\ldots,7\) days before each national poll \(i\), where \(\mathcal{C}_{t}^{d}=\left\{1\leq i\leq N^{\text{nat}}\mid t-d\leq t_{i}\leq t\right\}\).
We then define the average and spread of these:
\[m_{c}^{i}=\frac{1}{6}\sum_{d=2}^{7}c_{d}^{i},\text{ and }s_{c}^{i}=\sqrt{ \frac{1}{6}\sum_{d=2}^{7}\left(c_{d}^{i}-m_{c}^{i}\right)^{2}}+0.0002. \tag{35}\]
The addition of a small constant (\(2\times 10^{-4}\)) ensures the spread cannot shrink to \(0\) when data is sparse. We then define the improved prior \(\mu_{i}^{\text{herd}}\sim\text{normal}\left(m_{c}^{i},s_{c}^{i}\right)\), with priors for the state herding targets constructed similarly from the state-level time series. This prior now encodes the idea that \(\mu_{i}^{\text{herd}}\) should look like a consensus of _recent_ polls.
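Constructing this prior is a small exercise in time-series bookkeeping; the sketch below (NumPy) computes \(c_{d}^{i}\), \(m_{c}^{i}\), and \(s_{c}^{i}\) for one national poll from a placeholder polling series.

```python
import numpy as np

# Placeholder national polling series: day t_i and observed proportion y_i / n_i for each poll.
t = np.array([80, 81, 81, 82, 83, 84, 85, 86, 86, 87])
p = np.array([0.51, 0.50, 0.52, 0.49, 0.51, 0.50, 0.52, 0.51, 0.50, 0.52])

def trailing_prior(i, t, p):
    # c_d^i: average of polls conducted within the last d days up to poll i, for d = 2, ..., 7.
    c = np.array([p[(t >= t[i] - d) & (t <= t[i])].mean() for d in range(2, 8)])
    m_c = c.mean()
    s_c = np.sqrt(np.mean((c - m_c) ** 2)) + 2e-4   # small floor keeps the prior proper when data is sparse
    return m_c, s_c                                  # prior: mu_i^herd ~ normal(m_c, s_c)

print(trailing_prior(len(t) - 1, t, p))
```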
Figure 16 displays the same pair of conditional posterior predictive checks using this more informative prior. The (marginal) ppp-v is nearly \(0.3\), and the unpleasant 'spike and slab' shape of the cppp-vs that appeared in Figure 14 has been attenuated. Furthermore, we no longer find intervals of large posterior probability where the cppp-vs are vanishingly small in either plot. Thus, the marginal check looks good, and the conditional check no longer indicates obvious directions for improvement.
The improvement in cppp-vs versus the standard deviation of \(\mu_{i}^{\text{herd}}\) is expected, but the improvement in the comparison with the mean herding percentage (achieved by higher mean herding in the posterior) may be somewhat surprising. The source of the improvement is revealed in Figure 15, which displays a strong negative association between the standard deviations of the \(\mu_{i}^{\text{herd}}\) and the \(\mu_{\lambda}^{\text{last}}\) in the last ten days for the first expanded model. Thus, solving the problem for the herding targets also solved the herding percentage problem for free.
This example shows that the concern of Section 3 that marginalized model checks like the ppp-v could obscure information useful for assessing model fitness is more than
Figure 15: Posterior samples under the first expanded model of the average herding percentage for national polls against the standard deviation of herding targets for national polls both over the last ten days of polls. Estimated marginals are given on the margins.
purely theoretical. More positively, it also clearly demonstrates how the cppp-v can be a powerful tool for motivating specific model improvements, especially when the marginal ppp-v is neither implausibly small nor reassuringly large.
## 7 Conclusions
When constructing a model for a given data analysis, a statistician should balance various desiderata, including:
* model predictions compatible with what is known about the world;
* inferences sufficiently well identified to support nontrivial conclusions;
* model checks powerful enough to reveal frictions between model and data.
When model checks reveal deficiencies, the first item is no longer satisfied, and a better model should be sought. In practice, this is often an expansion of the previous model. If such expansions are not accompanied by sufficiently strong prior information (in the form of prior dependence of parameters, not the marginal scales), then our results demonstrate that a tension may easily arise within these three goals as the model dimension grows. Insofar as the first desideratum is most essential, this motivates methods that can extract useful information even when one of the latter two desiderata is not satisfied. One avenue is to pursue richer inferential summaries. As demonstrated by the last two examples, the full posterior often contains such rich inferential data. This data is both accessible (by looking beyond e.g. marginal means, standard deviations, and \(p\)-values) and capable of supporting nontrivial conclusions (which are concealed by the common marginal summaries).
Many directions for future work remain. Determining whether the trade-off observed in Lemma 11, which depended on Fisher information-based bounds, could be strengthened (e.g. to a relation between information theoretic quantities directly) would be of particular interest. We would also like to have better tools for extracting joint inferences from the posterior. For example, the posterior predictive resampling we performed in Section 5 could become prohibitively expensive in large models, and methods to rapidly approximate such results would be useful. More broadly, there is at the time of writing no canonical method for estimating conditional analogs of the usual marginal
Figure 16: cppp-vs for the standard deviation of national polls under the second expanded model, plotted against the population average herding percentage (top) and the standard deviation of the herding targets (bottom), all computed over the last ten days of polls. Black lines indicate marginal ppp-vs. Estimated marginals are given on the margins.
summaries (e.g. means and variances), which would aid the study of joint inferences in practice.
|
2306.03310 | LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning | Lifelong learning offers a promising paradigm of building a generalist agent
that learns and adapts over its lifespan. Unlike traditional lifelong learning
problems in image and text domains, which primarily involve the transfer of
declarative knowledge of entities and concepts, lifelong learning in
decision-making (LLDM) also necessitates the transfer of procedural knowledge,
such as actions and behaviors. To advance research in LLDM, we introduce
LIBERO, a novel benchmark of lifelong learning for robot manipulation.
Specifically, LIBERO highlights five key research topics in LLDM: 1) how to
efficiently transfer declarative knowledge, procedural knowledge, or the
mixture of both; 2) how to design effective policy architectures and 3)
effective algorithms for LLDM; 4) the robustness of a lifelong learner with
respect to task ordering; and 5) the effect of model pretraining for LLDM. We
develop an extendible procedural generation pipeline that can in principle
generate infinitely many tasks. For benchmarking purpose, we create four task
suites (130 tasks in total) that we use to investigate the above-mentioned
research topics. To support sample-efficient learning, we provide high-quality
human-teleoperated demonstration data for all tasks. Our extensive experiments
present several insightful or even unexpected discoveries: sequential
finetuning outperforms existing lifelong learning methods in forward transfer,
no single visual encoder architecture excels at all types of knowledge
transfer, and naive supervised pretraining can hinder agents' performance in
the subsequent LLDM. Check the website at https://libero-project.github.io for
the code and the datasets. | Bo Liu, Yifeng Zhu, Chongkai Gao, Yihao Feng, Qiang Liu, Yuke Zhu, Peter Stone | 2023-06-05T23:32:26Z | http://arxiv.org/abs/2306.03310v2 | # LIBERO: A Benchmark for Lifelong Robot Learning
###### Abstract
Lifelong learning offers a promising paradigm of building a generalist agent that learns and adapts over its lifespan. Unlike traditional lifelong learning problems in image and text domains, which primarily involve the transfer of declarative knowledge of entities and concepts, lifelong learning in decision-making (LLDM) also necessitates the transfer of procedural knowledge, such as actions and behaviors. To advance research in LLDM, we introduce LIBERO, a novel benchmark of lifelong learning for robot manipulation. Specifically, LIBERO highlights five key research topics in LLDM: **1)** how to efficiently transfer declarative knowledge, procedural knowledge, or the mixture of both; **2)** how to design effective policy architectures and **3)** effective algorithms for LLDM; **4)** the robustness of a lifelong learner with respect to task ordering; and **5)** the effect of model pretraining for LLDM. We develop an extendible _procedural generation_ pipeline that can in principle generate infinitely many tasks. For benchmarking purpose, we create four task suites (130 tasks in total) that we use to investigate the above-mentioned research topics. To support sample-efficient learning, we provide high-quality human-teleoperated demonstration data for all tasks. Our extensive experiments present several insightful or even _unexpected_ discoveries: sequential finetuning outperforms existing lifelong learning methods in forward transfer, no single visual encoder architecture excels at all types of knowledge transfer, and naive supervised pretraining can hinder agents' performance in the subsequent LLDM.2
Footnote 2: Check the website at [https://libero-project.github.io](https://libero-project.github.io) for the code and the datasets.
## 1 Introduction
A longstanding goal in machine learning is to develop a generalist agent that can perform a wide range of tasks. While multitask learning [10] is one approach, it is computationally demanding and not adaptable to ongoing changes. Lifelong learning [63], however, offers a practical solution by amortizing the learning process over the agent's lifespan. Its goal is to leverage prior knowledge to facilitate learning new tasks (forward transfer) and use the newly acquired knowledge to enhance performance on prior tasks (backward transfer).
The main body of the lifelong learning literature has focused on how agents transfer knowledge in visual or language tasks, which primarily involves _declarative_ knowledge about entities and concepts [38; 7]. Yet it is understudied how agents transfer knowledge in decision-making tasks, which involves a mixture of both _declarative_ and _procedural_ knowledge (knowledge about how to _do_ something). Consider a scenario where a robot, initially trained to retrieve juice from a fridge, fails after learning new tasks. This could be due to forgetting the juice or fridge's location (declarative knowledge) or how to open the fridge or grasp the juice (procedural knowledge). So far, we lack methods to systematically and quantitatively analyze this complex knowledge transfer.
To bridge this research gap, this paper introduces a new simulation benchmark, LIfelong learning BEnchmark on RObot manipulation tasks, LIBERO, to facilitate the systematic study of lifelong learning in decision making (LLDM). An ideal LLDM testbed should enable continuous learning across an expanding set of diverse tasks that share concepts and actions. LIBERO supports this through a procedural generation pipeline for endless task creation, based on robot manipulation tasks with shared visual concepts (declarative knowledge) and interactions (procedural knowledge).
For benchmarking purposes, LIBERO generates 130 language-conditioned robot manipulation tasks inspired by human activities [22], grouped into four suites. The four task suites are designed to examine distribution shifts in the object types, the spatial arrangement of objects, the task goals, or the mixture of the previous three (top row of Figure 1). LIBERO is scalable, extendable, and designed explicitly for studying lifelong learning in robot manipulation. To support efficient learning, we provide high-quality, human-teleoperated demonstration data for all 130 tasks.
We present an initial study using LIBERO to investigate five major research topics in LLDM (Figure 1): **1)** knowledge transfer with different types of distribution shift; **2)** neural architecture design; **3)** lifelong learning algorithm design; **4)** robustness of the learner to task ordering; and **5)** how to leverage pre-trained models in LLDM (bottom row of Figure 1). We perform extensive experiments across different policy architectures and different lifelong learning algorithms. Based on our experiments, we make several insightful or even **unexpected** observations:
1. Policy architecture design is as crucial as lifelong learning algorithms. The transformer architecture is better at abstracting temporal information than a recurrent neural network. Vision transformers work well on tasks with rich visual information (e.g., a variety of objects). Convolution networks work well when tasks primarily need procedural knowledge.
2. While the lifelong learning algorithms we evaluated are effective at preventing forgetting, they generally perform _worse_ than sequential finetuning in terms of forward transfer.
3. Our experiment shows that using pretrained language embeddings of semantically-rich task descriptions yields performance _no better_ than using those of the task IDs.
4. Basic supervised pretraining on a large-scale offline dataset can have a _negative_ impact on the learner's downstream performance in LLDM.
Figure 1: **Top**: LIBERO has four procedurally-generated task suites: LIBERO-Spatial, LIBERO-Object, and LIBERO-Goal have 10 tasks each and require transferring knowledge about spatial relationships, objects, and task goals; LIBERO-100 has 100 tasks and requires the transfer of entangled knowledge. **Bottom**: we investigate five key research topics in LLDM on LIBERO.
## 2 Background
This section introduces the problem formulation and defines key terms used throughout the paper.
### Markov Decision Process for Robot Learning
A robot learning problem can be formulated as a finite-horizon Markov Decision Process: \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{T},H,\ \mu_{0},R)\). Here, \(\mathcal{S}\) and \(\mathcal{A}\) are the state and action spaces of the robot. \(\mu_{0}\) is the initial state distribution, \(R:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, and \(\mathcal{T}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the transition function. In this work, we assume a sparse-reward setting and replace \(R\) with a goal predicate \(g:\mathcal{S}\rightarrow\{0,1\}\). The robot's objective is to learn a policy \(\pi\) that maximizes the expected return: \(\max_{\pi}\mathcal{J}(\pi)=\mathbb{E}_{s_{t},a_{t}\sim\pi,\mu_{0}}[\sum_{t=1}^ {H}g(s_{t})]\).
### Lifelong Robot Learning Problem
In a _lifelong robot learning problem_, a robot sequentially learns over \(K\) tasks \(\{T^{1},\ldots,T^{K}\}\) with a single policy \(\pi\). We assume \(\pi\) is conditioned on the task, i.e., \(\pi(\cdot\mid s;T)\). For each task, \(T^{k}\equiv(\mu_{0}^{k},g^{k})\) is defined by the initial state distribution \(\mu_{0}^{k}\) and the goal predicate \(g^{k}\).3 We assume \(\mathcal{S},\mathcal{A},\mathcal{T},H\) are the same for all tasks. Up to the \(k\)-th task \(T^{k}\), the robot aims to optimize
Footnote 3: Throughout the paper, a superscript/subscript is used to index the task/time step.
\[\max_{\pi}\ J_{\text{LRL}}(\pi)=\frac{1}{k}\sum_{p=1}^{k}\bigg{[}\mathop{ \mathbb{E}}_{s_{t}^{p},a_{t}^{p}\sim\pi(\cdot;T^{p}),\ \mu_{0}^{p}}\bigg{[}\sum_{t=1}^{L}g^{p}(s_{t}^{p})\bigg{]}\bigg{]}. \tag{1}\]
An important feature of the lifelong setting is that the agent loses access to the previous \(k-1\) tasks when it learns on task \(T^{k}\).
**Lifelong Imitation Learning** Due to the challenge of sparse-reward reinforcement learning, we consider a practical alternative setting where a user would provide a small demonstration dataset for each task in the sequence. Denote \(D^{k}=\{\tau_{i}^{k}\}_{i=1}^{N}\) as \(N\) demonstrations for task \(T^{k}\). Each \(\tau_{i}^{k}=(o_{0},a_{0},o_{1},a_{1},\ldots,o_{l^{k}})\) where \(l^{k}\leq H\). Here, \(o_{t}\) is the robot's sensory input, including the perceptual observation and the information about the robot's joints and gripper. In practice, the observation \(o_{t}\) is often non-Markovian. Therefore, following works in partially observable MDPs [25], we represent \(s_{t}\) by the aggregated history of observations, i.e. \(s_{t}\equiv o_{\leq t}\triangleq(o_{0},o_{1},\ldots,o_{t})\). This results in the _lifelong imitation learning problem_ with the same objective as in Eq. (1). But during training, we perform behavioral cloning [4] with the following surrogate objective function:
\[\min_{\pi}\ J_{\text{BC}}(\pi)=\frac{1}{k}\sum_{p=1}^{k}\mathop{\mathbb{E}}_{o_{t},a_{t}\sim D^{p}}\bigg{[}\sum_{t=0}^{l^{p}}\mathcal{L}\big{(}\pi(o_{\leq t};T^{p}),a_{t}^{p}\big{)}\bigg{]}\,, \tag{2}\]
where \(\mathcal{L}\) is a supervised learning loss, e.g., the negative log-likelihood loss, and \(\pi\) is a Gaussian mixture model. Similarly, we assume \(\{D^{p}:p<k\}\) are not fully available when learning \(T^{k}\).
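For concreteness, here is a minimal PyTorch sketch of the per-step behavioral cloning loss in Eq. (2) with a Gaussian-mixture policy head. The feature dimensions, number of mixture components, and the stand-in encoder outputs are illustrative assumptions, not the policy architectures benchmarked in this paper.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class GMMPolicyHead(nn.Module):
    """Maps a history/task feature to a Gaussian mixture over actions."""
    def __init__(self, feat_dim=256, act_dim=7, n_modes=5):
        super().__init__()
        self.n_modes, self.act_dim = n_modes, act_dim
        self.logits = nn.Linear(feat_dim, n_modes)
        self.means = nn.Linear(feat_dim, n_modes * act_dim)
        self.log_stds = nn.Linear(feat_dim, n_modes * act_dim)

    def forward(self, feat):
        mix = D.Categorical(logits=self.logits(feat))
        means = self.means(feat).view(-1, self.n_modes, self.act_dim)
        stds = self.log_stds(feat).view(-1, self.n_modes, self.act_dim).exp().clamp(1e-4, 10.0)
        comp = D.Independent(D.Normal(means, stds), 1)
        return D.MixtureSameFamily(mix, comp)

# Negative log-likelihood on a batch of (history-feature, action) pairs from the demonstration buffer.
head = GMMPolicyHead()
feat = torch.randn(32, 256)          # placeholder output of an observation/language encoder
actions = torch.randn(32, 7)         # placeholder demonstrated actions
loss = -head(feat).log_prob(actions).mean()
loss.backward()
```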
## 3 Research Topics in LLDM
We outline five major research topics in LLDM that motivate the design of LIBERO and our study.
(T1) Transfer of Different Types of KnowledgeIn order to accomplish a task such as _put the ketchup next to the plate in the basket_, a robot must understand the concept _ketchup_, the location of the _plate/basket_, and how to _put_ the ketchup in the basket. Indeed, robot manipulation tasks in general necessitate different types of knowledge, making it hard to determine the cause of failure. We present four task suites in Section 4.2: three task suites for studying the transfer of knowledge about spatial relationships, object concepts, and task goals in a disentangled manner, and one suite for studying the transfer of mixed types of knowledge.
**(T2) Neural Architecture Design** An important research question in LLDM is how to design effective neural architectures to abstract the multi-modal observations (images, language descriptions, and robot states) and transfer only relevant knowledge when learning new tasks.
**(T3) Lifelong Learning Algorithm Design** Given a policy architecture, it is crucial to determine what learning algorithms to apply for LLDM. Specifically, the sequential nature of LLDM suggests that even minor forgetting over successive steps can potentially lead to a total failure in execution. As such, we consider the design of lifelong learning algorithms to be an open area of research in LLDM.
**(T4) Robustness to Task Ordering** It is well-known that task curriculum influences policy learning [6; 46]. A robot in the real world, however, often cannot choose which task to encounter first. Therefore, a good lifelong learning algorithm should be robust to different task orderings.
**(T5) Usage of Pretrained Models** In practice, robots will most likely be pretrained on large datasets in factories before deployment [28]. However, it is not well-understood whether or how pretraining could benefit subsequent LLDM.
## 4 Libero
This section introduces the components in LIBERO: the procedural generation pipeline that allows the never-ending creation of tasks (Section 4.1), the four task suites we generate for benchmarking (Section 4.2), five algorithms (Section 4.3), and three neural architectures (Section 4.4).
### Procedural Generation of Tasks
Research in LLDM requires a systematic way to create new tasks while maintaining task diversity and relevance to existing tasks. LIBERO procedurally generates new tasks in three steps: **1)** extract behavioral templates from language annotations of human activities and generate sampled tasks described in natural language based on such templates; **2)** specify an initial object distribution given a task description; and **3)** specify task goals using a propositional formula that aligns with the language instructions. Our generation pipeline is built on top of Robosuite [73], a modular manipulation simulator that offers seamless integration. Figure 2 illustrates an example of task creation using this pipeline, and each component is expanded upon below.
**Behavioral Templates and Instruction Generation** Human activities serve as a fertile source of tasks that can inspire and generate a vast number of manipulation tasks. We choose a large-scale activity dataset, Ego4D [22], which includes a large variety of everyday activities with language annotations. We pre-process the dataset by extracting the language descriptions and then summarize them into a large set of commonly used language templates. After this pre-processing step, we use the templates and select objects available in the simulator to generate a set of task descriptions in the form of language instructions. For example, we can generate an instruction "Open the drawer of the cabinet" from the template "Open...".
Figure 2: LIBERO’s procedural generation pipeline: Extracting behavioral templates from a large-scale human activity dataset **(1)**, Ego4D, for generating task instructions **(2)**; Based on the task description, selecting the scene and generating the PDDL description file **(3)** that specifies the objects and layouts **(A)**, the initial object configurations **(B)**, and the task goal **(C)**.
**Initial State Distribution (\(\mu_{0}\))** To specify \(\mu_{0}\), we first sample a scene layout that matches the objects/behaviors in a provided instruction. For instance, a kitchen scene is selected for an instruction _Open the top drawer of the cabinet and put the bowl in it_. Then, the details about \(\mu_{0}\) are generated in the PDDL language [41, 61]. Concretely, \(\mu_{0}\) contains information about object categories and their placement (Figure 2-**(A)**), and their initial status (Figure 2-**(B)**).
**Goal Specifications \((g)\)** Based on \(\mu_{0}\) and the language instruction, we specify the task goal using a conjunction of predicates. Predicates include _unary predicates_ that describe the properties of an object, such as \(\mathtt{Open(X)}\) or \(\mathtt{TurnOff(X)}\), and _binary predicates_ that describe spatial relations between objects, such as \(\mathtt{On(A,B)}\) or \(\mathtt{In(A,B)}\). An example of the goal specification using PDDL language can be found in Figure 2-**(C)**. The simulation terminates when all predicates are verified true.
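The goal check itself amounts to evaluating a conjunction of predicates on the simulator state. Below is a small, hypothetical Python sketch in that spirit; the state accessors (`joint_position`, `supports`, `contains`) are made-up stand-ins and not the benchmark's actual API.

```python
def is_open(state, x):
    # hypothetical accessor: normalised joint position of an articulated object
    return state.joint_position(x) > 0.9

def on(state, a, b):
    # hypothetical accessor: object b physically supports object a
    return state.supports(b, a)

def in_(state, a, b):
    # hypothetical accessor: container b contains object a
    return state.contains(b, a)

def goal_satisfied(state, goal):
    """goal: list of (predicate_fn, args) terms; the episode terminates when all hold."""
    return all(pred(state, *args) for pred, args in goal)

# e.g. "open the top drawer of the cabinet and put the bowl in it"
goal = [(is_open, ("cabinet_top_drawer",)),
        (in_, ("bowl", "cabinet_top_drawer"))]
```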
### Task Suites
While the pipeline in Section 4.1 supports the generation of an unlimited number of tasks, we offer fixed sets of tasks for benchmarking purposes. LIBERO has four task suites: LIBERO-Spatial, LIBERO-Object, LIBERO-Goal, and LIBERO-100. The first three task suites are curated to disentangle the transfer of _declarative_ and _procedural_ knowledge (as mentioned in (T1)), while LIBERO-100 is a suite of 100 tasks with entangled knowledge transfer.
LibERO-X LIBERO-Spatial, LIBERO-Object, and LIBERO-Goal all have 10 tasks 4 and are designed to investigate the controlled transfer of knowledge about spatial information (declarative), objects (declarative), and task goals (procedural). Specifically, all tasks in LIBERO-Spatial request the robot to place a bowl, among the same set of objects, on a plate. But there are two identical bowls that differ only in their location or spatial relationship to other objects. Hence, to successfully complete LIBERO-Spatial, the robot needs to continually learn and memorize new spatial relationships. All tasks in LIBERO-Object request the robot to pick-place a unique object. Hence, to accomplish LIBERO-Object, the robot needs to continually learn and memorize new object types. All tasks in LIBERO-Goal share the same objects with fixed spatial relationships but differ only in the task goal. Hence, to accomplish LIBERO-Goal, the robot needs to continually learn new knowledge about motions and behaviors. More details are in Appendix D.
Footnote 4: A suite of 10 tasks is enough to observe catastrophic forgetting while maintaining computation efficiency.
LibERO-100 LIBERO-100 contains 100 tasks that entail diverse object interactions and versatile motor skills. In this paper, we split LIBERO-100 into 90 short-horizon tasks (LIBERO-90) and 10 long-horizon tasks (LIBERO-Long). LIBERO-90 serves as the data source for pretraining **(T5)** and LIBERO-Long for downstream evaluation of lifelong learning algorithms.
### Lifelong Learning Algorithms
We implement three representative lifelong learning algorithms to facilitate research in algorithmic design for LLDM. Specifically, we implement Experience Replay (ER) [13], Elastic Weight Consolidation (EWC) [33], and PackNet[39]. We pick ER, EWC, and PackNet because they correspond to the memory-based, regularization-based, and dynamic-architecture-based methods for lifelong learning. In addition, prior research [66] has discovered that they are state-of-the-art methods. Besides these three methods, we also implement sequential finetuning (SeqL) and multitask learning (MTL), which serve as a lower bound and upper bound for lifelong learning algorithms, respectively. More details about the algorithms are in Appendix C.1.
### Neural Network Architectures
We implement three vision-language policy networks, ResNet-RNN, ResNet-T, and ViT-T, that integrate visual, temporal, and linguistic information for LLDM. Language instructions of tasks are encoded using pretrained BERT embeddings [19]. The ResNet-RNN [40] uses a ResNet as the visual backbone that encodes per-step visual observations and an LSTM as the temporal backbone to process a sequence of encoded visual information. The language instruction is incorporated into the ResNet features using the FiLM method [48] and added to the LSTM inputs, respectively. ResNet-T architecture [72] uses a similar ResNet-based visual backbone, but a transformer decoder [64] as the temporal backbone to process outputs from ResNet, which are a temporal sequence of visual tokens. The language embedding is treated as a separate token in inputs to the transformer alongside
the visual tokens. The ViT-T architecture [31], which is widely used in visual-language tasks, uses a Vision Transformer (ViT) as the visual backbone and a transformer decoder as the temporal backbone. The language embedding is treated as a separate token in inputs of both ViT and the transformer decoder. All the temporal backbones output a latent vector for every decision-making step. We compute the multi-modal distribution over manipulation actions using a Gaussian-Mixture-Model (GMM) based output head [8; 40]. In the end, a robot executes a policy by sampling a continuous value for end-effector action from the output distribution. Figure 5 visualizes the three architectures.
For all the lifelong learning algorithms and neural architectures, we use behavioral cloning (BC) [4] to train policies for individual tasks (See (2)). BC allows for efficient policy learning such that we can study lifelong learning algorithms with limited computational resources. To train BC, we provide 50 trajectories of high-quality demonstrations for every single task in the generated task suites. The demonstrations are collected by human experts through teleoperation with 3Dconnexion Spacemouse.
## 5 Experiments
Experiments are conducted as an initial study for the five research topics mentioned in Section 3. We first introduce the evaluation metric used in experiments, and present analysis of empirical results in LIBERO. The detailed experimental setup is in Appendix E and the study on **Q5** is in Appendix F.2. Our experiments focus on addressing the following research questions:
**Q1**: How do different architectures/LL algorithms perform under specific distribution shifts?
**Q2**: To what extent does neural architecture impact knowledge transfer in LLDM, and are there any discernible patterns in the specialized capabilities of each architecture?
**Q3**: How do existing algorithms from lifelong supervised learning perform on LLDM tasks?
**Q4**: To what extent does language embedding affect knowledge transfer in LLDM?
**Q5**: How robust are different LL algorithms to task ordering in LLDM?
**Q6**: Can supervised pretraining improve downstream lifelong learning performance in LLDM?
### Evaluation Metrics
We report three metrics: FWT (forward transfer) [20], NBT (negative backward transfer), and AUC (area under the success rate curve). All metrics are computed in terms of success rate, as previous literature has shown that the success rate is a more reliable metric than training loss for manipulation policies [40] (Detailed explanation in Appendix F.3). Lower NBT means a policy has better performance in the previously seen tasks, higher FWT means a policy learns faster on a new task, and higher AUC means an overall better performance considering both NBT and FWT. Specifically, denote \(c_{i,j,e}\) as the agent's success rate on task \(j\) when it learned over \(i-1\) previous tasks and has just learned \(e\) epochs (\(e\in\{0,5,\dots,50\}\)) on task \(i\). Let \(c_{i,i}\) be the best success rate over all evaluated epochs \(e\) for the current task \(i\) (i.e., \(c_{i,i}=\max_{e}c_{i,i,e}\)). Then, we find the earliest epoch \(e_{i}^{*}\) in which the agent achieves the best performance on task \(i\) (i.e., \(e_{i}^{*}=\min\{e:c_{i,i,e}=c_{i,i}\}\)), and assume for all \(e\geq e_{i}^{*}\), \(c_{i,i,e}=c_{i,i}\).5 Given a different task \(j\neq i\), we define \(c_{i,j}=c_{i,j,e_{i}^{*}}\). Then the three metrics are defined: \(\text{FWT}=\sum_{k\in[K]}\frac{\text{FWT}_{k}}{K},\quad\text{FWT}_{k}=\frac{1}{11}\sum_{e\in\{0,5,\dots,50\}}c_{k,k,e}\), \(\text{NBT}=\sum_{k\in[K]}\frac{\text{NBT}_{k}}{K},\quad\text{NBT}_{k}=\frac{1}{K-k}\sum_{\tau=k+1}^{K}\left(c_{k,k}-c_{\tau,k}\right)\), and \(\text{AUC}=\sum_{k\in[K]}\frac{\text{AUC}_{k}}{K},\quad\text{AUC}_{k}=\frac{1}{K-k+1}\big{(}\text{FWT}_{k}+\sum_{\tau=k+1}^{K}c_{\tau,k}\big{)}\). A visualization of these metrics is provided in Figure 4.
Footnote 5: In practice, it’s possible that the agent’s performance on task \(i\) is not monotonically increasing due to the variance of learning. But we keep the best checkpoint among those saved at epochs \(\{e\}\) as if the agent stops learning after \(e_{i}^{*}\).
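These definitions translate directly into code. The following NumPy sketch assumes a success-rate array `c[i, j, e]` (task just learned, task evaluated, checkpoint index over the 11 evaluation epochs); it treats the NBT of the final task as zero since that task has no later tasks, which the text leaves implicit.

```python
import numpy as np

def lifelong_metrics(c):
    """FWT, NBT, AUC from success rates c[i, j, e]: success on task j after the
    agent has learned tasks 1..i and trained e checkpoints (0, 5, ..., 50) on task i."""
    K, _, E = c.shape
    diag = c[np.arange(K), np.arange(K), :]             # c_{k,k,e}, shape (K, E)
    fwt_k = diag.mean(axis=1)                           # FWT_k = (1/E) sum_e c_{k,k,e}
    best = diag.max(axis=1)                             # c_{k,k}
    # earliest checkpoint at which the best success on the current task is reached
    e_star = np.array([int(np.argmax(diag[k] >= best[k])) for k in range(K)])
    # c_{i,j} evaluated at e_i^*
    c_ij = np.array([[c[i, j, e_star[i]] for j in range(K)] for i in range(K)])
    nbt_k = np.array([(best[k] - c_ij[k + 1:, k]).mean() if k < K - 1 else 0.0
                      for k in range(K)])
    auc_k = np.array([(fwt_k[k] + c_ij[k + 1:, k].sum()) / (K - k) for k in range(K)])
    return fwt_k.mean(), nbt_k.mean(), auc_k.mean()
```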
### Experimental Results
We present empirical results to address the research questions. Please refer to Appendix F.1 for the full results across all algorithms, policy architectures, and task suites.
**Study on the Policy's Neural Architectures (Q1, Q2)** Table 1 reports the agent's lifelong learning performance using the three different neural architectures on the four task suites. Results are reported when ER and PackNet are used as they demonstrate the best lifelong learning performance across all task suites.
_Findings:_ First, we observe that ResNet-T and ViT-T work much better than ResNet-RNN on average, indicating that using a transformer on the "temporal" level could be a better option than using an RNN model. Second, the performance difference among different architectures depends on the underlying lifelong learning algorithm. If PackNet (a dynamic architecture approach) is used, we observe no significant performance difference between ResNet-T and ViT-T except on the LIBERO-Long task suite where ViT-T performs much better than ResNet-T. In contrast, if ER is used, we observe that ResNet-T performs better than ViT-T on all task suites except LIBERO-Object. This potentially indicates that the ViT architecture is better at processing visual information with more object varieties than the ResNet architecture when the network capacity is sufficiently large (See the MTL results in Table 8 on LIBERO-Object as the supporting evidence). The above findings shed light on how one can improve architecture design for better processing of spatial and temporal information in LLDM.
**Study on Language Embeddings as the Task Identifier (Q4)** To investigate to what extent language embedding play a role in LLDM, we compare the performance of the same lifelong learner using four different pretrained language embeddings. Namely, we choose BERT [19], CLIP [50], GPT-2 [51] and the Task-ID embedding. Task-ID embeddings are produced by feeding a string such as "Task 5" into a pretrained BERT model.
_Findings:_ From Table 3, we observe _no_ statistically significant difference among various language embeddings, including the Task-ID embedding. This, we believe, is due to sentence embeddings functioning as bag-of-words that differentiates different tasks. This insight calls for better language encoding to harness the semantic information in task descriptions. Despite the similar performance, we opt for BERT embeddings as our default task embedding.
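For reference, one plausible way to produce such Task-ID embeddings with the HuggingFace `transformers` library is sketched below; the choice of mean pooling over token embeddings is an assumption made for this example, since the pooling scheme is not specified here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def task_id_embedding(task_index: int) -> torch.Tensor:
    """Embed the string 'Task <k>' into a 768-d vector with a frozen BERT."""
    inputs = tokenizer(f"Task {task_index}", return_tensors="pt")
    hidden = bert(**inputs).last_hidden_state    # (1, n_tokens, 768)
    return hidden.mean(dim=1).squeeze(0)         # mean-pooled sentence embedding

emb = task_id_embedding(5)   # 768-dimensional, matching the dimension in Table 3
```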
**Study on How Pretraining Affects Downstream LLDM (Q6)** Fig 3 reports the results on LIBERO-Long of five combinations of algorithms and policy architectures, when the underlying model is pretrained on the 90 short-horizon tasks in LIBERO-100 or learned from scratch. For pretraining, we apply behavioral cloning on the 90 tasks using the three policy architectures for 50 epochs. We save a checkpoint every 5 epochs of training and then pick the checkpoint for each architecture that has the best performance as the pretrained model for downstream LLDM.
_Findings:_ We observe that the basic supervised pretraining can _hurt_ the model's downstream lifelong learning performance. This, together with the results seen in Table 2 (e.g., naive sequential fine-tuning has better forward transfer than when lifelong learning algorithms are applied), indicates that better pretraining techniques are needed.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Lifelong Algo. & FWT(\(\uparrow\)) & NBT(\(\downarrow\)) & AUC(\(\uparrow\)) & FWT(\(\uparrow\)) & NBT(\(\downarrow\)) & AUC(\(\uparrow\)) \\ \hline \multicolumn{6}{c}{LIBERO-Long} & \multicolumn{3}{c}{LIBERO-Spatial} \\ \cline{2-6} SeqL & **0.54**\(\pm\) 0.01 & 0.63 \(\pm\) 0.01 & 0.15 \(\pm\) 0.00 & **0.72**\(\pm\) 0.01 & 0.81 \(\pm\) 0.01 & 0.20 \(\pm\) 0.01 \\ ER & 0.48 \(\pm\) 0.02 & 0.32 \(\pm\) 0.04 & **0.32**\(\pm\) 0.01 & 0.65 \(\pm\) 0.03 & 0.27 \(\pm\) 0.03 & 0.56 \(\pm\) 0.01 \\ EWC & 0.13 \(\pm\) 0.02 & 0.22 \(\pm\) 0.03 & 0.02 \(\pm\) 0.00 & 0.23 \(\pm\) 0.01 & 0.33 \(\pm\) 0.01 & 0.06 \(\pm\) 0.01 \\ PackNet & 0.22 \(\pm\) 0.01 & **0.08**\(\pm\) 0.01 & 0.25 \(\pm\) 0.00 & 0.55 \(\pm\) 0.01 & **0.07**\(\pm\) 0.02 & **0.63**\(\pm\) 0.00 \\ MTL & & & 0.48 \(\pm\) 0.01 & & & 0.83 \(\pm\) 0.00 \\ \hline \multicolumn{6}{c}{LIBERO-Object} & \multicolumn{3}{c}{LIBERO-Goal} \\ \cline{2-6} SeqL & **0.78**\(\pm\) 0.04 & 0.76 \(\pm\) 0.04 & 0.26 \(\pm\) 0.02 & **0.77**\(\pm\) 0.01 & 0.82 \(\pm\) 0.01 & 0.22 \(\pm\) 0.00 \\ ER & 0.67 \(\pm\) 0.07 & 0.43 \(\pm\) 0.04 & 0.44 \(\pm\) 0.06 & 0.64 \(\pm\) 0.01 & 0.34 \(\pm\) 0.02 & 0.49 \(\pm\) 0.02 \\ EWC & 0.56 \(\pm\) 0.03 & 0.69 \(\pm\) 0.02 & 0.16 \(\pm\) 0.02 & 0.32 \(\pm\) 0.02 & 0.48 \(\pm\) 0.03 & 0.06 \(\pm\) 0.00 \\ PackNet & 0.60 \(\pm\) 0.07 & **0.17**\(\pm\) 0.05 & **0.60**\(\pm\) 0.05 & 0.63 \(\pm\) 0.02 & **0.06**\(\pm\) 0.01 & **0.75**\(\pm\) 0.01 \\ MTL & & & 0.54 \(\pm\) 0.02 & & & 0.80 \(\pm\) 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of three lifelong algorithms and the SeqL and MTL baselines on the four task suites, where the policy is fixed to be ResNet-T. Results are averaged over three seeds and we report the mean and standard error. The best performance is **bolded**, and colored in **purple** if the improvement is statistically significant over other algorithms, when a two-tailed, Student’s t-test under equal sample sizes and unequal variance is applied with a \(p\)-value of 0.05.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Embedding Type & Dimension & FWT(\(\uparrow\)) & NBT(\(\downarrow\)) & AUC(\(\uparrow\)) \\ \hline BERT & 768 & 0.48 \(\pm\) 0.02 & **0.32**\(\pm\) 0.04 & 0.32 \(\pm\) 0.01 \\ CLIP & 512 & **0.52**\(\pm\) 0.00 & 0.34 \(\pm\) 0.01 & **0.35**\(\pm\) 0.01 \\ GPT-2 & 768 & 0.46 \(\pm\) 0.01 & 0.34 \(\pm\) 0.02 & 0.30 \(\pm\) 0.01 \\ Task-ID & 768 & 0.50 \(\pm\) 0.01 & 0.37 \(\pm\) 0.01 & 0.33 \(\pm\) 0.01 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of a lifelong learner using four different language embeddings on LIBERO-Long, where we fix the policy architecture to ResNet-T and the lifelong learning algorithm to ER. The Task-ID embeddings are retrieved by feeding “Task + ID” into a pretrained BERT model. Results are averaged over three seeds and we report the mean and standard error. The best performance is **bolded**. No statistically significant difference is observed among the different language embeddings.
**Attention Visualization:** To better understand what type of knowledge the agent forgets during the lifelong learning process, we visualize the agent's attention map on each observed image input. The visualized saliency maps and the discussion can be found in Appendix F.4.
## 6 Related Work
This section provides an overview of existing benchmarks for lifelong learning and robot learning. We refer the reader to Appendix C.1 for a detailed review of lifelong learning algorithms.
**Lifelong Learning Benchmarks** Pioneering work has adapted standard vision or language datasets for studying LL. This line of work includes image classification datasets like MNIST [18], CIFAR [34], and ImageNet [17]; segmentation datasets like Core50 [36]; and natural language understanding datasets like GLUE [65] and SuperGLUE [57]. Besides supervised learning datasets, video game benchmarks (e.g., Atari [44], XLand [62], and ViZDoom [30]) in reinforcement learning (RL) have also been used for studying LL. However, LL in standard supervised learning does not involve procedural knowledge transfer, while RL problems in games do not represent human activities. ContinualWorld [66] modifies the 50 manipulation tasks in MetaWorld for LL. CORA [49] builds four lifelong RL benchmarks based on Atari, Procgen [15], MiniHack [56], and ALFRED [60]. F-SIOL-310 [3] and OpenLORIS [59] are challenging real-world lifelong object learning datasets that are captured from robotic vision systems. Prior works have also analyzed different components in a LL agent [43; 67; 21], but they do not focus on robot manipulation problems.
**Robot Learning Benchmarks** A variety of robot learning benchmarks have been proposed to address challenges in meta learning (MetaWorld [70]), causality learning (CausalWorld [1]), multi-task learning [27], policy generalization to unseen objects [45; 24], and compositional learning [42]. Compared to existing benchmarks in lifelong learning and robot learning, the task suites in LIBERO are curated to address the research topics of LLDM. The benchmark includes a large number of tasks based on everyday human activities that feature rich interactive behaviors with a diverse range of objects. Additionally, the tasks in LIBERO are procedurally generated, making the benchmark scalable and adaptable. Moreover, the provided high-quality human demonstration dataset in LIBERO supports and encourages learning efficiency.
## 7 Conclusion and Limitations
This paper introduces LIBERO, a new benchmark in the robot manipulation domain for supporting research in LLDM. LIBERO includes a procedural generation pipeline that can create an infinite number of manipulation tasks in the simulator. We use this pipeline to create 130 standardized tasks and conduct a comprehensive set of experiments on policy and algorithm designs. The empirical results suggest several future research directions: 1) how to design a better neural architecture to better process spatial information or temporal information; 2) how to design a better algorithm to improve forward transfer ability; and 3) how to use pretraining to help improve lifelong learning performance. In the short term, we do not envision any negative societal impacts triggered by LIBERO. But as the lifelong learner mainly learns from humans, studying how to preserve user privacy within LLDM [2 ] can be crucial in the long run.
Figure 3: Performance of different combinations of algorithms and architectures without pretraining or with pretraining. The multi-task learning performance is also included for reference. |
2304.05118 | Pointless Global Bundle Adjustment With Relative Motions Hessians | Bundle adjustment (BA) is the standard way to optimise camera poses and to
produce sparse representations of a scene. However, as the number of camera
poses and features grows, refinement through bundle adjustment becomes
inefficient. Inspired by global motion averaging methods, we propose a new
bundle adjustment objective which does not rely on image features' reprojection
errors yet maintains precision on par with classical BA. Our method averages
over relative motions while implicitly incorporating the contribution of the
structure in the adjustment. To that end, we weight the objective function by
local hessian matrices - a by-product of local bundle adjustments performed on
relative motions (e.g., pairs or triplets) during the pose initialisation step.
Such hessians are extremely rich as they encapsulate both the features' random
errors and the geometric configuration between the cameras. These pieces of
information propagated to the global frame help to guide the final optimisation
in a more rigorous way. We argue that this approach is an upgraded version of
the motion averaging approach and demonstrate its effectiveness on both
photogrammetric datasets and computer vision benchmarks. | Ewelina Rupnik, Marc Pierrot-Deseilligny | 2023-04-11T10:20:32Z | http://arxiv.org/abs/2304.05118v1 | # Pointless Global Bundle Adjustment With Relative Motions Hessians
###### Abstract
Bundle adjustment (BA) is the standard way to optimise camera poses and to produce sparse representations of a scene. However, as the number of camera poses and features grows, refinement through bundle adjustment becomes inefficient. Inspired by global motion averaging methods, we propose a new bundle adjustment objective which does not rely on image features' reprojection errors yet maintains precision on par with classical BA. Our method averages over relative motions while implicitly incorporating the contribution of the structure in the adjustment. To that end, we weight the objective function by local hessian matrices - a by-product of local bundle adjustments performed on relative motions (e.g., pairs or triplets) during the pose initialisation step. Such hessians are extremely rich as they encapsulate both the features' random errors and the geometric configuration between the cameras. These pieces of information propagated to the global frame help to guide the final optimisation in a more rigorous way. We argue that this approach is an upgraded version of the motion averaging approach and demonstrate its effectiveness on both photogrammetric datasets and computer vision benchmarks. The code is available at [https://github.com/erupnik/pointlessGBA](https://github.com/erupnik/pointlessGBA)
## 1 Introduction
Photogrammetry and computer vision are nowadays widely used to produce up-to-date 2D and 3D maps of territories on a national scale as well as at the level of a city, for cultural heritage documentation, in agriculture, geology, gaming and many other domains [28]. To generate convincing 3D representations of a scene, hundreds or thousands of images are usually involved. More importantly, quality of the reconstructed 3D scene relies heavily on the quality of the camera positions and rotations, the so-called camera poses.
Our work focuses on bundle adjustment (BA) [37] - a refinement step advantageous for finding the most optimal camera poses by taking simultaneously into account all available observations relative to a set of images (i.e., image features, ground control points, _a priori_ knowledge on perspective centers or rotations, etc.). Such refinement can occur twice during an _SfM_ (_Structure from Motion_) pipeline [32]: (1) as a systematic phase to avoid error accumulation and the subsequent drift effect when incrementally building the initial solution, and (2) as a final adjustment
Figure 1: _Pointless_ **BA pipeline**. We refine global camera poses (and thus the 3D structure) in global bundle adjustment by rigorously taking into account the stochastics of the relative motions. Our inputs are \(S\) relative motions \(\{r_{k},\mathbf{c}_{k}\}_{s}\) (a), their initial 3D similarity transformations \(\{\lambda,\alpha,\beta\}_{s}\) relating them to the global frame, and initial global poses \(\{R,\mathbf{C}\}^{0}\) (b). We first run in parallel \(S\) local bundle adjustments to retrieve _camera reduced matrices_ \(h_{s}\) which encapsulate the rich stochastic information. We then find the optimal camera poses (c) by combining all our inputs, including the \(h_{s}\) matrices. Concretely, our refinement minimises an error metric defined as the difference between the observed and predicted relative motions (a), where the predictions are obtained by applying a 3D similarity to the initial global camera poses. Additionally, the error is _weighted_ by the \(h_{s}\) matrix which virtually incorporates the effect of feature points in the adjustment. In this example \(k\in<1,3>\), and \(S\in<1,3>\).
once all images have been initialised.
As the number of images grows, BA routines quickly become inefficient. Solving the arising systems of equations with exact methods such as Levenberg-Marquardt implies space and time complexities growing with the second and third power of the number of BA parameters [2]. The common way to address the high computational cost is to exploit the particular structure of BA equations. The strategy known as the _Schur trick_ involves rearranging the equations such that the unknowns corresponding to the (few) camera parameters form an independent block, and thus can be solved without intervention of 3D points. That said, for very large problems, matrix rearranging and construction of the Schur complement also become prohibitive [32].
To further reduce this burden, one can exploit the structure of the camera graph (i.e., viewgraph), divide a large problem in many smaller sub-problems and treat them separately, as is done in hierarchical or hybrid [4, 6, 36]_SfMs_. The splitting is typically carried out via graph partitioning, then each small problem is solved independently with direct methods (i.e., space resection, F-Matrix, etc.), and aggregated in a common frame (e.g., with global or structure-less approaches, or 3D similarity transformation). This protocol is interleaved with bundle adjustments as the solution is progressively built which assures optimal results but imposes a certain processing cost. Similar in concept but different in execution is the consensus based bundle adjustment (CBA) [11, 25]. Unlike previous approaches, CBA breaks an _SfM_'s objective function into parts and solves it in a distributed way while preserving a _consensus_ at the break points.
The new global motion averaging [14, 24] and structureless [19] approaches to camera pose estimation both factor out the structure from the estimation problem and leverage the geometric constraints between cameras. While this manoeuvre reduces the computation times significantly, there remains a trade off in the precision of the recovered poses. Global motion methods are thus very good at initialising an _SfM_ but never considered optimal.
**Contributions of this paper** Our work on bundle adjustment extends the global motion averaging methods and is presented in Fig. 1. We address their compromised precision while maintaining their computational efficiency, ultimately transforming them into optimal solutions, as opposed to being merely initialisation methods. We achieve this goal by indirectly incorporating information about the removed structure. More precisely, we define our _pointless_ global bundle adjustment as a function of local Hessians (i.e., the inverse of the covariance) constructed during the relative motions computation (i.e., pairs, triplets). In doing so, the quality of the relative motions, including the random errors related to features and the correlations between camera parameters, is propagated to the global solution. This approach is similar in philosophy to [31] where the authors attempt to propagate the structure information per relative motion at a low cost by compressing it to 5 points. Here, in contrast, we rigorously propagate equivalent information while supplanting entirely the points from the equation. We also note that our approach is not restrained to motion averaging methods. It can be similarly adopted in any _SfM_ method that builds a consistent 3D structure and camera poses from many independent sub-problems. We evaluate our approach on several datasets: a typical aerial photogrammetric dataset, two computer vision benchmarks (ETH3D [33], Tanks & Temples [21]), and a challenging, very long focal length terrestrial acquisition [20]. Our method is compared against global motion averaging _SfM_ implemented in openMVG [27], incremental _SfM_ in MicMac [30], 5-Point bundle adjustment [31] and an in-house implementation of the IRLS motion averaging [7].
This paper is organized as follows. In the next section a brief review of the global motion averaging methods is given, including a discussion on robustness. In Sec. 3 derivation of the proposed method is outlined, followed by a description of the adjustment pipeline implementation details in Sec. 3. Finally, experiments are presented in Sec. 4.
## 2 Related work
Global motion averagingMotion averaging methods use elementary relative motions, typically pairs or triplets of images, to resolve the camera poses in a global and fast manner. Because the poses are computed all at once, motion averaging surmounts the pitfall of incremental methods [32] where errors accumulate all along the initialisation step, and lead to trajectory drift. However, such methods give rise to new challenges. First, by relying exclusively on pairs or triplets of images, motion averaging methods ultimately renounce higher observation redundancy (i.e., long feature tracks), which we know negatively impacts both the camera pose estimation robustness and precision [22]. Second, once the relative poses computed, the structure used in the calculation is discarded, and all relative relationships, whether derived from erroneous observations or not, are treated equally.
As a result, there have been many works addressing the precision as well as the mechanisms of handling low quality and outlier relative relationships in motion averaging. For instance, [15] proposed sampling random spanning trees and RANSAC on the pose viewgraph (i.e., a graph where the nodes and edges represent the images and relative relationships), while [35, 39] explored the viewgraph's structure to prune inconsistent loops or optimise the initial relative constraints. Moulon _et al_. [26] leverage the trifocal tensor to strengthen the relative translation retrieval. Instead, _1DSfM_[38] casts translations as 1D problems and
recovers inconsistencies through 1D graph ordering of pairwise constraints. A complete two-stage robust pipeline was introduced in [9]. The authors embed the cameras relative relationships and 3D points within a Markov Random Field graph, then simultaneously solve for initial camera poses using discrete belief propagation. The rotations parameterised by a set of discrete 3D rotations provide only a coarse result, which serves to eliminate outliers and initialise the subsequent continuous optimisation.
Others suggest to build-in the robustness in the estimation step itself. Arrigoni _et al_. [3] represents the rotation averaging as a matrix decomposition problem. A measurement matrix decomposed into a sum of low-rank and sparse terms naturally groups the gross errors in the latter. Having identified the gross errors, they participate in a modified \(l_{2}\) rotation averaging that follows, with minimal impact on the output. Nevertheless, storing all relative motions in the measurement matrix might turn prohibitive for very large scale _SfMs_. Instead of resorting to the non-robust \(l_{2}\) rotation distance averaging, Hartley _et al_. [17] rigorously average rotations in the orthogonal \(SO(3)\) group through application of the \(l_{1}\) Weiszfeld algorithm. Such formulation is equivalent of computing a geometric median over multiple rotations, and its major merit is its simplicity. To its disadvantage, the one-by-one rotation update makes it a slow convergence optimisation [8]. The golden standard for robust motion averaging in the presence of outliers is unarguably the _iteratively reweighted least squares_ (IRLS) introduced in [7, 8]. Given a set of reliable initial estimates of the global rotations (e.g., obtained with robust \(l_{1}\) optimisation), IRLS simultaneously finds their optimal values through iterative regression. The influence of individual errors on the solution is governed by a suitable loss function. IRLS demonstrated superior performance with respect to the _state-of-the-art_ in speed and accuracy.
Unlike the _state-of-the-art_ approaches which discard entirely any information related to feature points from the global averaging, our pipeline retains and conveys the features in a compact form via local hessians. Our local hessians propagated to the global frame rigorously guide the global camera pose refinement. Outliers are handled implicitly by a robust cost function, however, we assume that the majority of gross errors has been removed prior to the adjustment.
**Exploiting hessian matrices.** The hessian matrix (or its inverse - the covariance) resulting from a bundle adjustment encapsulates information about random observation errors, and inter-dependencies between estimated parameters, i.e., in our case the cameras and 3D points. These information-rich matrices have long been used in photogrammetry for theoretical accuracy analyses. For instance, the _a posteriori_ retrieved variances and co-variances have been used (i) as a quality measure of 3D intersections [12, 25], (ii) as a tool to design an optimal imaging network [13] or for next-best view selection [16], (iii) to analyse correlation between camera intrinsic and extrinsic parameters [40], as well as (iv) in airborne laser strip adjustment when GNSS/IMU trajectory is not available [29]. Other common uses of the covariances include _Kalman filtering_ in recursive pose estimations or visual SLAM [10]. There, each new camera pose prediction is made from a product of the covariance-weighted current state and the available new measurements. To the best of our knowledge, this paper is the first to exploit hessian/covariance matrices in global motion averaging.
## 3 Global optimisation with local Hessians
**Problem formulation** Building a global orientation of a block of images involves two steps: recovery of the initial global orientation of all images through incremental, global or hybrid _SfM_; followed by a final bundle adjustment that refines simultaneously all poses and the 3D structure. Our goal is to refine initial poses \(\{R,\mathbf{C}\}_{j}^{0}\) of a number of images where \(R\) is a rotation matrix and \(\mathbf{C}\) is a perspective center defined in the global reference frame. However, unlike in the classical BA that minimises the points' re-projection errors, our _pointless_ BA's objective function relies exclusively on three ingredients (cf. Fig. 1):
* the relative motions,
* the local hessian matrices - a by-product of the relative motions' estimation (i.e., the local bundle adjustment), and
* the initial 3D similarity transformations relating the global and the local frame of the relative motion.
Differently to the standard IRLS approach which considers all relative motions as static, our motions come with unique uncertainty signatures contained in the hessian matrices. Those are subsequently integrated in the global cost function minimising over all camera poses.
For the sake of completeness of this derivation, in the coming section we lay out the local bundle adjustment step and hessian retrieval. We then follow up with global to local frame propagation and the derivation of our _pointless_ global bundle adjustment cost function.
**Local bundle adjustment.** For every relative motion composed of \(N\) views and \(M\) features, we can write the cost function expressed in the local frame of the relative motion as:
\[\begin{split} E_{BA}^{l}=\sum_{k=0}^{N}\sum_{i=0}^{M}\left(F( \mathbf{p}_{i})\right)^{2}\\ =\sum_{k=0}^{N}\sum_{i=0}^{M}\rho_{ki}\left(f(\mathbf{p}_{i})- \mathbf{o}_{ki}\right)^{2},\end{split} \tag{1}\]
where \(\mathbf{o}_{ki}\) are the observations corresponding to image features in the \(k^{th}\) view, and \(\mathbf{p}_{i}\) are their respective 3D coordinates expressed in the local frame of the relative motion. The function \(f(\cdot)\) relates a 3D point \(\mathbf{p}_{i}\) with its predicted image observation \(\bar{\mathbf{o}}_{ki}\) and follows the known projection function with \(\mathcal{J}\) as the intrinsic parameters, and \(\{r_{k},\mathbf{c}_{k}\}\) as the extrinsic parameters: \(f(\mathbf{p}_{i})=\bar{\mathbf{o}}_{ki}=\mathcal{J}_{k}\left(\pi_{k}\left(r_{k}\left(\mathbf{p}_{i}-\mathbf{c}_{k}\right)\right)\right)\). The loss function \(\rho\) reduces the impact of outliers on the solution.
By minimising the quadratic form in Eq. (1) we obtain \(\delta\mathbf{x}\) updates to all unknowns (i.e., extrinsic parameters and the 3D coordinates of feature points):
\[\begin{split}\delta\mathbf{x}^{*}=\operatorname*{arg\,min}_{ \delta\mathbf{x}}(J\delta\mathbf{x}+F_{0})^{2}=\\ \operatorname*{arg\,min}_{\delta\mathbf{x}}\left(\delta\mathbf{x }^{T}\underbrace{J^{T}J}_{H}\delta\mathbf{x}+\underbrace{2F_{0}^{T}J}_{G} \delta\mathbf{x}+F_{0}^{2}\right)\\ \equiv-H^{-1}\cdot G,\end{split} \tag{2}\]
where \(J\) is a \((2MN\times 6N+3M)\) Jacobian matrix, \(H\) and \(G\) are the hessian (aka the normal equations) and the gradient of the cost function, \(F_{0}\) is the value of the cost evaluated at current estimate of the unknowns, and \(\delta\mathbf{x}\) is the difference between the current \(\mathbf{x}\) and initial \(\mathbf{x}_{0}\) estimate of the unknowns.
The hessian matrix in Eq. (2) describes all unknowns while we are only interested in the unknowns corresponding to the extrinsic parameters. Thus, we re-write it with the help of the _Schur complement_, and note \(h\) the \(6N\times 6N\)_camera reduced matrix_. We then transcribe the cost in Eq. (2) to a cost relying only on the relative camera extrinsics:
\[\delta\mathbf{x}^{*}=\operatorname*{arg\,min}_{\delta\mathbf{x}}\left(\delta \mathbf{x}^{T}\cdot h\cdot\delta\mathbf{x}+g^{T}\delta\mathbf{x}+\mathbf{m} \right). \tag{3}\]
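To make the reduction explicit, the sketch below computes the camera reduced matrix with NumPy, assuming the camera parameters are ordered first in the normal equations; a dense solve stands in for the block-wise inversion of the 3D-point block that a real implementation would use.

```python
import numpy as np

def camera_reduced_system(H, G, n_cam_params):
    """Schur complement of the local normal equations onto the camera block.

    H: (P, P) hessian J^T J with the 6N camera parameters ordered first,
    G: (P,)   gradient of the cost, n_cam_params = 6*N for N views.
    Returns the 6N x 6N camera reduced matrix h and the reduced gradient g.
    """
    c = n_cam_params
    H_cc, H_cp, H_pp = H[:c, :c], H[:c, c:], H[c:, c:]
    G_c, G_p = G[:c], G[c:]
    # in practice H_pp is 3x3 block-diagonal and inverted block-wise;
    # a dense solve is used here only for clarity
    h = H_cc - H_cp @ np.linalg.solve(H_pp, H_cp.T)
    g = G_c - H_cp @ np.linalg.solve(H_pp, G_p)
    return h, g
```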
**Global to local frame propagation.** Note that the local extrinsic parameters \(\{\mathbf{c}_{k},r_{k}\}\) are related to their global equivalents \(\{\mathbf{C},R\}\) by a 3D similarity transformation \(d\):
\[\mathbf{x}_{k}=\left\{\overbrace{\lambda\cdot\alpha\cdot\mathbf{C}+\beta}^{\mathbf{c}_{k}},\overbrace{\alpha\cdot R}^{r_{k}}\right\}=d\left(\{\mathbf{C},R\}\right)\, \tag{4}\]
where \(\mathbf{x}_{k}\) is a \(6\times 1\) vector of the local extrinsics of \(k^{th}\) view within some relative motion; \(\lambda\), \(\alpha\) and \(\beta\) are the scale factor, \(3\times 3\) rotation matrix and \(3\times 1\) translation vector between the local and global frames. By injecting Eq. (4) in Eq. (3) we can express our cost function in terms of the global camera extrinsic parameters. Observe that optimising the cost written in this way will change the initial global poses by rigorously taking into account the stochastic properties of the parameters computed in the relative frame and encapsulated within the camera reduced matrix \(h\).
**Pointless global bundle adjustment.** Our objective is to compute refined camera extrinsics by integrating three pieces of information in a global bundle adjustment: relative motions, their local hessians, and the transformation relating local and global frames. For convenience, we transform the quadratic cost in Eq. (3) to a sum of linear terms which can then be readily used in any least squares solver. To do that, we decompose the small hessian into a \(6N\times 6N\) matrix \(V\) of eigenvectors and the corresponding eigenvalues matrix \(D\). Furthermore, we integrate the global poses in the cost function by predicting the current estimate of the relative motion from its corresponding current global values (see Eq. (4) and Fig. 1). With this, our global bundle adjustment cost function defined over \(S\) relative motions takes the following form:
\[\begin{split} E_{BA}^{g}=&\sum_{s=0}^{S}E_{s}^{g}= \sum_{s=0}^{S}\delta\mathbf{x}_{s}^{T}\cdot h_{s}\cdot\delta\mathbf{x}_{s}\\ &=\sum_{s=0}^{S}\delta\mathbf{x}_{s}^{T}\cdot V_{s}^{T}D_{s}^{2}V _{s}\cdot\delta\mathbf{x}_{s}\\ &=\sum_{s=0}^{S}\left(D_{s}\left(V_{s}\cdot\delta\mathbf{x}_{s} \right)\right)^{2}\\ =&\sum_{s=0}^{S}\left(D_{s}\left(V_{s}\cdot d( \mathbf{X})-V_{s}\cdot\mathbf{x}_{0s}\right)\right)^{2},\end{split} \tag{5}\]
where the relative motion parameters \(\mathbf{x}_{0}\) are the _observations_ in the adjustment, while the global camera poses \(\mathbf{X}\), and the 3D similarity parameters \(\{\lambda,\alpha,\beta\}\) within \(d\) are the _unknowns_ with known initial values. Every relative motion adds a \(6N\times 1\) observation vector to the global cost, and the number of observations accumulated over all motions equals \(6NS\). We omit the gradient \(\mathbf{g}\) and the constant \(\mathbf{m}\) terms because their values are cancelled in the preceding relative motion bundle adjustment.
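The structure of one term of Eq. (5) can be illustrated as follows (NumPy sketch). The eigendecomposition of the symmetric \(h_{s}\) provides the \(V\)/\(D\) factors, and the residual vector is simply the whitened difference between the predicted and observed relative-motion parameters. How the rotation part of \(\delta\mathbf{x}_{s}\) is parameterised (and kept consistent with the local BA's linearisation point) is deliberately glossed over; this is an illustration of the cost structure, not the actual solver code.

```python
import numpy as np

def predicted_local_centres(C_global, lam, alpha, beta):
    """Eq. (4): map global perspective centres (N, 3) into a relative motion's local frame."""
    return lam * (C_global @ alpha.T) + beta

def whitened_residual(h_s, delta_x):
    """Rewrite delta_x^T h_s delta_x as r^T r with r = D (V delta_x), as in Eq. (5)."""
    eigval, Q = np.linalg.eigh(h_s)              # h_s = Q diag(eigval) Q^T
    eigval = np.clip(eigval, 0.0, None)          # guard against tiny negative eigenvalues
    return np.sqrt(eigval) * (Q.T @ delta_x)     # D = diag(sqrt(eigval)), V = Q^T
```

A least-squares solver then stacks these \(6N\)-dimensional residual vectors over all \(S\) relative motions and minimises their squared norm over the global poses and the per-motion similarity parameters.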
**Complete adjustment pipeline.** Taking all the ingredients into account, the full pipeline involves the following steps:
1. features extraction (e.g., SIFT [23]),
2. generation of observations, including the relative motions and the initial global solution,
3. per-motion local bundle adjustments, and
4. propagation and refinement in global bundle adjustment.
We rely on the MicMac solution [30] for steps 1-2, and limit the set of relative motions to three-view relationships (i.e., triplets), thus \(N=3\). This choice is justified by the
fact that triplets (i) provide additional redundancy and hence are more reliable than pairs, and (ii) they are easy to compute thanks to the powerful modern feature extractors. To obtain the hessian matrices we run, in parallel, single-iteration local bundle adjustments with triplet poses and SIFT features from steps 1-2 as inputs. Note that steps 2 and 3 are typically seamlessly performed in a single step. We rely on a third-party solution for the relative motions, thus we separate them into two steps. Finally, the outputs from steps 2 and 3 are used to simultaneously refine all initial global poses.
## 4 Experiments
### Implementation details
**Rotation parameterisation.** Rotations in 3D Euclidean space form the special orthogonal group \(SO(3)\). Optimising rotations without taking extra precautions might destroy this property. Among the common parameterisations that conserve the matrix orthogonality are the Lie algebra, angle-axis representation or quaternions [18]. We describe the rotations as a product of the known initial rotation \(R_{0}\) and an unknown skew-symmetric small rotation \(\omega_{\times}\): \(\hat{R}=R_{0}\left(I+\omega_{\times}\right)\). We enforce the orthogonality of the final rotation by mapping it to the closest rotation with SVD [24]. The small rotation matrix is initially set to zero and optimised during the adjustment.
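A compact NumPy version of this update, including the SVD projection back onto \(SO(3)\), could look as follows (illustrative only):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x such that [w]_x v = w x v."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def update_rotation(R0, w):
    """First-order update R_hat = R0 (I + [w]_x), then map back to SO(3) via SVD."""
    R_hat = R0 @ (np.eye(3) + skew(w))
    U, _, Vt = np.linalg.svd(R_hat)
    R = U @ Vt
    if np.linalg.det(R) < 0:      # enforce a proper rotation (det = +1)
        U[:, -1] *= -1
        R = U @ Vt
    return R
```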
**Local and global bundle adjustments.** We run a single-iteration local bundle adjustment for each triplet following the cost defined in Eq. (1). The dense Schur solver of the Ceres library [1] is used for optimisation. The inputs are: a triplet of images with their initial relative poses and image features. Our cost function is weighted by a Huber loss, and an attenuation loss \(\gamma\). The first minimises the influence of the outliers, while the latter harmonises the triplets between them in terms of the number of feature points. We want to avoid penalising triplets with many features which might naturally lead to larger hessian values. To that end, we weight each image feature observation by \(\gamma\) which simulates an equal number of observations for all triplets: \(\gamma=\frac{M\cdot Q}{M+Q}\) where \(Q\) is the fictitious number of points, and \(M\) is the input number (in our experiments \(Q=10\)). To compute the inverse of the local hessians one must fix the gauge ambiguity. This can be done in many ways, for instance by fixing the pose of the first camera and the base between the first and second camera, or by considering all camera extrinsics as observed. In our experiments we choose the latter. Triplets with fewer than 30 image features are ignored in the processing.
In the global adjustment, we accumulate observations corresponding to all triplets in the triplet graph following Eq. (5) and solve it using the sparse Schur solver in Ceres [1]. In analogy to IRLS we weight the observations by the residual fitting error and apply the Huber loss.
### Evaluation
**Datasets.** To evaluate our method we look at four datasets (see Fig. 2):
* a typical photogrammetric acquisition with an 80/60% along- and across-track overlap, composed of 2000 calibrated images over a sub-urban area, taken with the UltraCAM Eagle (26460x17004pix, F=120mm).
* a SLAM benchmark (ETH3D planar_mono [33]), a highly overlapping video acquisition of a flat surface consisting of 630 calibrated images (739x458pix, F=726pix).
* a 3D reconstruction benchmark, Tanks & Temples [21], 282 calibrated images of a temple (1920x1080pix, F=1163pix).
* a challenging very long focal length acquisition composed of 93 calibrated images taken around a sculpture (5616x3744pix, F=1000mm).
Figure 2: **Datasets.** We test our method on a classical photogrammetric aerial acquisition, two computer vision benchmarks (ETH3D, Temple) and a challenging long focal length scenario. Top: Camera poses (in green and red) and sparse 3D structure. Bottom: Triplet graphs where the blue edges correspond to known relative motions. In (d) during testing only blue edges are exploited (i.e., no loop), while in evaluation the trajectory's drift is computed using feature points common to images linked by the red edges.
**Comparisons with existing methods** We compare our method against the bundle adjustments within the incremental _SfM_ in MicMac [30] and the global _SfM_ in openMVG [27], the 5-Point BA [31], and our own implementation of IRLS motion averaging [7].
**Metrics.** As our bundle adjustment objective function implicitly minimises the features' reprojection error (also true for the BA implementations of the _SfMs_ we test against), we decide to use that metric as our only evaluation measure. Comparing absolute pose accuracies would involve choosing a reference pose estimation algorithm, which is known to induce a bias on the evaluation itself [5].
Moreover, in the long focal length dataset we benefit from the acquisition geometry forming a closed-loop to evaluate the trajectory's drift. During BA, the connections between the first and last few images of the acquisition are removed (i.e., no features in common and no relative relationships, see Fig. 2(d)). During evaluation, for a perfectly recovered trajectory, reprojection errors computed on features common to the beginning and the end of the acquisition should be close to zero. Nevertheless, pose errors accumulated along the trajectory incur a trajectory drift resulting in compromised precisions (see Tab. 2).
To assess the sensitivity of our method to outliers, we randomly infuse the relative rotations with outliers and observe their effect on the reprojection error across the bundle adjustment's iterations, as shown in Fig. 4.
The MicMac and openMVG _SfMs_ are complete pipelines, and singling out the runtime contribution of just the BA step is not straightforward. For that reason, we use the number of parameters per problem and the convergence rate as a proxy for runtime.
### Results and discussion
Feature reprojection errors on the Photogrammetric dataset, ETH3D planar_mono and Temple benchmarks are given in Tab. 1, while the loop closure error on the Long focal length dataset is shown in Tab. 2.
Table 1: **Reprojection errors**. We evaluate the precision of Our\({}_{BA}\) and compare it with competitive methods. \(\sigma_{init}\) and \(\sigma_{final}\) are the initial and final reprojection errors, \(Aver_{BA}\) corresponds to our implementation of IRLS [7], while \(\#param\) is the number of unknowns constituting the BA problem (\(k\equiv\times 10^{3}\)). The difference between (b) and (c) is in the size of the triplet graph, the latter being filtered to contain \(\approx 10\%\) of the initial count of relative motions. High residuals in (d) are due to the presence of outliers among the features. All methods except for openMVG were initialised with the same approximate global poses. Our\({}_{BA}\) performs as well as the BAs within incremental _SfMs_ and the light 5-Pts\({}_{BA}\); Aver\({}_{BA}\) performs worst.
Table 2: **Loop-closure error**. For the long focal length dataset we evaluate the precision of our method and compare it with competitive methods using the loop closure metric. This metric refers to the pixel reprojection error computed on features common to images linked by red edges in Fig. 2(d). \(\#params\) refers to the size of the BA problem (\(k\equiv\times 10^{3}\)). In the \(REF_{BA}\) we impose the closed loop and run bundle adjustment in MicMac, therefore we consider this result as our reference. Thanks to the rigorous propagation of the relative motions' stochastics, Our\({}_{BA}\) performs best among the _fast_ BAs (5-Pts\({}_{BA}\), Aver\({}_{BA}\)), and almost as well as the best-performing point-based BA in MicMac.
In terms of precision, our _pointless_ BA performs as well as the classical BAs and the 5-Point BA. It significantly outperforms the IRLS averaging (i.e., \(Aver_{BA}\)). This tendency repeats across all datasets. The trajectory loop-closure error in the challenging Long focal length dataset reveals the superiority of our _pointless_ BA over the 5-Point BA. It highlights the power of the hessian propagation which, by bringing the stochastics of the local bundle adjustment into the global adjustment, prevents large trajectory drifts.
We reduce the size of the BA problem by at least a factor of 4 with respect to the standard BA, and up to 40 times for the Photogrammetric dataset (5,545k vs 135k unknowns). This is thanks to the controlled acquisition pattern and the resulting optimality of the dataset's viewgraph containing a limited number of redundant triplets. Compared to the 5-Point BA, we halve the number of parameters. One can safely assume that reducing the triplet graphs for other datasets would proportionally increase their reduction factors.
As shown in Fig. 3 all tested methods except the \(\text{Aver}_{BA}\) follow similar convergence rates, yet \(\text{Ours}_{BA}\) with much fewer unknowns is the lightest among the best-converging. Finally, faced with outliers our hessian BA, weighted by the fitting residual error and the Huber loss function, shows only marginal deterioration of the reprojection metric (see Fig. 4).
**Inclusion of ground control points.** Although not presented in this study, our BA can be easily extended to include ground control points (i.e., GCPs or landmarks). To that end, the initial global solution is first transferred to the coordinate frame of the GCPs (i.e., the new global frame), and the initial 3D similarity transformations are changed correspondingly. Then, for each relative motion where a GCP is seen in at least two images, the global BA in Eq. (5) is extended to include the GCP's residual: \(\mathbf{r}_{GCP}=\rho_{GCP}||\mathbf{P}_{GCP}-d^{-1}\left(\mathbf{p}_{GCP}\right)||^{2}\), where \(\mathbf{P}_{GCP}\) and \(\mathbf{p}_{GCP}\) are the GCP's 3D coordinates in global and local frames, \(d^{-1}\) is the inverse 3D similarity transformation moving from the local to global frame from Eq. (4), and \(\rho_{GCP}\) is an appropriate weighting function.
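For illustration, the extra residual can be sketched as below (NumPy); the robust weighting \(\rho_{GCP}\) is omitted, and \(d^{-1}\) is simply the inverse of the 3D similarity in Eq. (4).

```python
import numpy as np

def inverse_similarity(p_local, lam, alpha, beta):
    """d^{-1}: move a 3D point from a relative motion's local frame to the global frame."""
    return alpha.T @ (p_local - beta) / lam

def gcp_residual(P_gcp_global, p_gcp_local, lam, alpha, beta):
    """Squared distance between the known GCP and its local reconstruction mapped globally."""
    diff = P_gcp_global - inverse_similarity(p_gcp_local, lam, alpha, beta)
    return float(diff @ diff)
```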
**Self-calibrating bundle adjustment.** Our method assumes calibrated cameras with precisely known intrinsic parameters, but it could be extended to self-calibration. We lay out this extension below, stipulating that we have not conducted experiments proving its practicality or effectiveness.
To refine the camera intrinsics in the final bundle adjustment, two key steps are required. First, the camera intrinsics must be included as parameters in the local bundle adjustment. Second, the Schur complement applied to the local hessian matrix in Eq. (2) must extract both the extrinsic and intrinsic parameters. This increases the size of the reduced camera matrix to at least \((6N+3)\times(6N+3)\), if the camera is shared among all images and has no distortions. In the global bundle adjustment, the local hessian matrices are accumulated as in Eq. (5) where the observations \(\mathbf{x}_{0}\) are complemented by the input initial intrinsics. The intrinsics thus appear as observed unknowns in our _pointless_ BA. Note that the local BA with camera intrinsics as unknowns should _free_ the intrinsics only in the very last iteration in
Figure 4: **Sensitivity to outliers experiment**. We infuse between 0 and 22\(\%\) of outliers within the relative rotations, and observe their impact on the final reprojection errors (expressed in logscale). As the portion of outliers grows the metrics deteriorate in all cases, however, \(\text{Our}_{BA}\) deteriorates at a lower pace. The \(+\) signifies that outliers are added to the initial triplet graph, i.e., the accumulated ratio of outliers might be slightly higher. Sensitivity tests are performed on Temple benchmark.
Figure 3: **Convergence experiment.** We evaluate the rate of convergence for all of the tested methods. Our method (\(\text{Ours}_{BA}\)) minimizes at a rate comparable to point-based BA in MicMac and the 5-\(\text{Pts}_{BA}\) across all datasets, while the version of IRLS motion averaging (\(\text{Aver}_{BA}\)) performs worst. Note that \(\text{Our}_{BA}\) is effectively the lightest among the best-converging methods (\(\text{MicMac}_{BA}\), 5-\(\text{Pts}_{BA}\)) because it engages much less unknowns (see Tab. 1). Reprojection errors are expressed in logscale.
which the hessian matrix \(h\) should not be inverted. This is for two reasons: (i) inverting the hessian of a 3-image block with unknown intrinsics would be very unstable and (ii) for shared camera intrinsics it violates the sharing property.
**Limitations.** For highly overlapping acquisitions, such as the video acquisition of ETH3D, viewgraph pre-selection is necessary and can be done, for instance, through skeletonization techniques [34]. Running our method on a full graph consisting of all possible relative relationships incurs a computational cost equal to or higher than that of the standard BA. The same limitation and the necessity to reduce the viewgraph would apply to crowd-sourced image collections. Note that randomly reducing the number of triplets by a factor of 10 (see Tab. 1(b) and (c)) had only a minimal impact on the reprojection error in our hessian-based BA.
## 5 Conclusion
We have presented a _Pointless_ Global Bundle Adjustment - a new way to optimise camera poses which disengages explicit feature points from the adjustment. Instead, our BA implicitly incorporates the feature points through rigorous propagation of the camera hessians defined in their relative frame into the global frame.
By examining the feature reprojection errors, trajectory drift and a runtime proxy metric, we demonstrated that our bundle adjustment remains as efficient as the _state-of-the-art_ motion averaging bundle adjustment while being competitive with traditional point-based bundle adjustments in terms of precision.
We have presented our method as an efficient approach to the final global bundle adjustment. However, we think of _pointless_ BA as more generic, and we argue that it can be integrated as an intermediary adjustment routine within any _SfM_ pipeline.
|
2303.07970 | Study with WhoSGlAd of the acoustic depth of the helium glitch across
the seismic HR diagram and its impact on the inferred helium abundance | The acoustic glitches' signature present in solar-like stars holds invaluable
information. Indeed, it is caused by a sharp variation in the sound speed,
therefore carrying localised information. One such glitch is the helium glitch
caused by the hydrogen and first and second partial helium ionisation region,
allowing us to constrain the surface helium abundance. However, the function
adjusted to the glitch signature depends non-linearly on the acoustic depth at
which it occurs, $\tau_{\textrm{He}}$. Retrieving the faint glitch signature and estimating
$\tau_{\textrm{He}}$ are difficult but crucial tasks to accurately measure the
glitch parameters and, ultimately, accurately infer the helium abundance.
In the present paper, we aim at providing a way to estimate
$\tau_{\textrm{He}}$ using precise seismic indicators, independent of stellar
modelling. Consequently, we aim at improving the WhoSGlAd (Whole Spectrum and
Glitches Adjustment) method by automatically providing a model independent
measure of the glitch's parameters.
We compute the evolution of $T_{\textrm{He}}$, a dimensionless form of the
acoustic depth, along a grid of models and adjust an empirical linear relation
between $T_{\textrm{He}}$ and the mean large separation and frequency ratio as
defined in WhoSGlAd. We further optimise over the value of this estimate to
ensure the stability and accuracy of the approach.
The proposed approach provides an excellent estimate of the acoustic depth
and allows us to swiftly retrieve the glitch signature of observed spectra. We
demonstrate that we can accurately model the helium abundance of four
Kepler targets by comparing model (both versions of WhoSGlAd) and literature
values. | Martin Farnir, Angelo Valentino, Marc-Antoine Dupret, Anne-Marie Broomhall | 2023-03-14T15:23:33Z | http://arxiv.org/abs/2303.07970v1 | # Study with WhoSGlAd of the acoustic depth of the helium glitch across the seismic HR diagram and its impact on the inferred helium abundance
###### Abstract
The acoustic glitches' signature present in solar-like stars holds invaluable information. Indeed, it is caused by a sharp variation in the sound speed, therefore carrying localised information. One such glitch is the helium glitch caused by the hydrogen and first and second partial helium ionisation region, allowing us to constrain the surface helium abundance. However, the function adjusted to the glitch signature depends non-linearly on the acoustic depth at which it occurs, \(\tau_{\text{He}}\). Retrieving the faint glitch signature and estimating \(\tau_{\text{He}}\) are difficult but crucial tasks to accurately measure the glitch parameters and, ultimately, accurately infer the helium abundance.
In the present paper, we aim at providing a way to estimate \(\tau_{\text{He}}\) using precise seismic indicators, independent of stellar modelling. Consequently, we aim at improving the WhoSGIAd (**W**hole **S**pectrum and **Gl**itches **A**djustment) method by automatically providing a model independent measure of the glitch's parameters.
We compute the evolution of \(T_{\text{He}}\), a dimensionless form of the acoustic depth, along a grid of models and adjust an empirical linear relation between \(T_{\text{He}}\) and the mean large separation and frequency ratio as defined in WhoSGIAd. We further optimise over the value of this estimate to ensure the stability and accuracy of the approach.
The proposed approach provides an excellent estimate of the acoustic depth and allows us to swiftly retrieve the glitch signature of observed spectra. We demonstrate that we can accurately model the helium abundance of four Kepler targets by comparing model (both versions of WhoSGIAd) and literature values.
keywords: asteroseismology - stars:oscillations - stars:solar-type - stars:abundances
## 1 Introduction
In recent years the detection and precise measurement of stellar oscillation modes has propelled forward the field of stellar physics. Indeed, thanks to recent space-based surveys (such as CoRoT and Kepler Baglin et al., 2009; Borucki et al., 2010) providing data of unprecedented quality, asteroseismology, the science of relating stellar oscillations to their structural origin, thrived. Furthermore, due to the excellent precision of the data at hand, it was made possible to detect extremely faint features and take advantage of these to constrain the stellar structure. Acoustic glitches are such features and present themselves as an oscillating signal in the observed frequencies (e.g. Houdek & Gough, 2007). Although it was predicted that such signatures could be detected for solar-like stars other than the Sun (Monteiro & Thompson, 1998), due to lack of precision in the data, they had only been observed in the solar case (with the first mention of such signatures as early as almost four decades ago Hill & Rosenwald, 1986; Vorontsov, 1988; Gough, 1990). As these glitches are caused by sharp variations in the stellar structure, they hold valuable and localised information. For example, the glitch caused by the second ionisation zone of helium carries information about the surface helium content. Indeed, Basu et al. (2004) demonstrated a positive correlation between the strength of the helium glitch signal and the envelope helium abundance in the solar case. Therefore, it allows us to lift the degeneracy between the stellar mass and helium abundance of low mass stars uncovered by Lebreton & Goupil (2014) that greatly reduces the precision of stellar models. As a consequence, many studies interested themselves in such glitches, for example Houdayer et al. (2021, 2022) theoretically related the properties of the ionisation region (hydrogen, first and second partial helium) to the glitch's signature in a model independent fashion. Other approaches (such as Monteiro, 2002; Basu et al., 2004; Mazumdar et al., 2014; Verma et al., 2014, 2017, 2022; Farnir et al., 2019) focused on providing means to retrieve the helium glitch signature and building accurate models reproducing this signature.
With upcoming missions such as PLATO (Rauer et al., 2014) striving to accurately characterise low-mass stars, the automated and swift retrieval of the glitch parameters is an essential milestone. Nevertheless, this task is arduous and may require advanced and rather slow techniques. The cause is the non-linear nature of the function that is adjusted to the observed frequencies (i.e. Houdek & Gough, 2007; Verma et al., 2014). To address these issues and provide a robust adjustment of the glitch, in order to accurately provide a constraint on the surface helium content, Farnir et al. (2019) provided a linearised approach, which has the advantage of being extremely fast (only a fraction of a second per star) and provides constraints to the stellar structure which show very little correlation: the WhoSGIAd1 (**W**hole **S**pectrum and **Gl**itches **A**djustment) method.
However, their technique requires an estimate for the acoustic depth at which the helium glitch occurs. This leads to several difficulties. Firstly, while physically motivated by previous studies, the exact glitch formulation used in Farnir et al. (2019) (and other similar ones) is determined empirically and does not directly relate to the properties of the glitch it aims at representing. Secondly, the actual definition of the acoustic depth of the helium glitch is somewhat arbitrary. For example, as illustrated in Verma et al. (2014) and Houdayer et al. (2021) (Figs. 1 and 3, respectively), the depression in the first adiabatic index \(\Gamma_{1}\), causing the helium glitch signature, is actually the composite effect of the hydrogen ionisation and the helium first and second ionisations. The first adiabatic index is defined as
Footnote 1: [https://github.com/Yuglut/WhoSG1Ad-python](https://github.com/Yuglut/WhoSG1Ad-python)
\[\Gamma_{1}\ \equiv\ \frac{d\ln P}{d\ln\rho}\bigg{|}_{S}\,, \tag{1}\]
with \(P\) the pressure, \(\rho\) the density, and \(S\) the entropy. Consequently, the \(\Gamma_{1}\) depression presents a broad feature and defining its exact position is somewhat arbitrary, which may impact the retrieved final glitch properties. Both Verma et al. (2014); Farnir et al. (2019) use the depth of the peak between the first and second helium ionisation zones. Additionally, the WhoSGIAd approach defined in Farnir et al. (2019) relies on stellar models to provide a value for this depth in order to preserve the linearity of the method. This hampers the ability of the method to be automated as preliminary stellar models have to be built before the helium glitch signature can be adjusted. Another consequence is that the adjusted glitch signature becomes somewhat model dependent. This is again something one aims to avoid as stellar models themselves present a large number of uncertainties (e.g. reference solar mixture, mixing prescription, opacities,...). Following on from Farnir et al. (2019), we aim in this paper at studying the evolution of the helium acoustic depth across the seismic HR diagram (defined using WhoSGIAd indicators) and the impact of different approaches for its determination on the glitch amplitude and its use as a seismic indicator. This is motivated by the fact that the second ionisation of helium is mostly determined by the local temperature and density, as one would expect from Saha's relation (Saha, 1920). We then provide a prescription to automatically estimate the helium glitch acoustic depth and assess the impact of this prescription on the inferred helium abundance, crucial to the accurate determination of stellar parameters. This new prescription renders Farnir et al. (2019)'s WhoSGIAd method completely automatic and makes it a swift candidate for the survey of the glitches of large samples of solar-like stars as is expected of the PLATO mission (Rauer et al., 2014).
This paper is organised as follows: In Sect. 2 we detail the motivation behind our approach and recall the basic principle behind the WhoSGIAd method. We pursue with our theoretical results on a grid of models in Sect. 3. We then apply the developed approach to a set of four observed targets in Sect. 4 and characterise both its accuracy and efficiency. Finally, we conclude our paper in Sect. 5.
## 2 Motivation
### WhoSGIAd fitting reminder
The peculiarity of the WhoSGIAd method lies in the definition of the fitting function. An orthonormal basis of functions is built over the vector space of frequencies. These functions are separated into a smooth - slowly varying trend as a function of frequency, as per the asymptotic theory (e.g. Tassoul, 1980) - and a glitch contribution, which are independent of one another. Indeed, by construction, the basis functions used to represent the glitch are totally orthonormal to those of the smooth part.2 This has the advantage of rendering the indicators defined over the glitch contribution completely independent of their smooth counterparts. The general representation of a fitted frequency of radial order \(n\) and spherical degree \(l\) is the following
Footnote 2: We note that it is possible, from a physical point of view, that the glitch functions may contain some information related to the smooth contribution and conversely, although they have been selected to minimise such effect. We merely mean that, by construction, the functions used to represent both contributions are mathematically independent of one another. It is clear that the oscillation frequencies of a star are not physically independent. However, if their _measurements_ are treated as independent probability variables, then our orthogonalisation ensures that the measured seismic indicators associated with the smooth component are statistically independent from those associated to the glitch(es).
\[\nu_{n,l,\text{fit}}=\sum_{k}a_{k}\,q_{k}\left(n\right), \tag{2}\]
with \(a_{k}\) the projected reference frequency over the basis function of index \(k\), \(q_{k}\), evaluated at \(n\). These orthonormal functions are obtained by applying the Gram-Schmidt algorithm to the polynomials of increasing degrees \(n^{k}\) for the smooth part and a parametrised oscillating component for the glitch (see Farnir et al., 2019, and Eq. 5 below). The projection is done for each spherical degree according to the scalar product defined in Farnir et al. (2019). The specificity of this definition of the scalar product is that it accounts for the observational uncertainties on the oscillation frequencies.
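As an illustration of this construction (and not the published WhoSGIAd code), the sketch below orthonormalises a set of basis vectors with respect to an uncertainty-weighted scalar product and projects the observed frequencies onto the result; the specific weighting \(1/\sigma_i^2\) is our assumption of the form used.

```python
import numpy as np

def wdot(u, v, sigma):
    """Scalar product weighted by the frequency uncertainties: sum_i u_i v_i / sigma_i^2."""
    return float(np.sum(u * v / sigma**2))

def gram_schmidt(basis, sigma):
    """Orthonormalise the (smooth + glitch) basis functions, given as rows of
    `basis`, with respect to the weighted scalar product above."""
    ortho = []
    for q in basis:
        q = np.array(q, dtype=float)
        for e in ortho:
            q = q - wdot(q, e, sigma) * e
        ortho.append(q / np.sqrt(wdot(q, q, sigma)))
    return np.array(ortho)

def project(nu_obs, ortho, sigma):
    """Coefficients a_k = <nu_obs, q_k>; by construction they are independent
    of one another and have unit uncertainty."""
    return np.array([wdot(nu_obs, q, sigma) for q in ortho])
```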
Because of the orthonormalisation, the \(a_{k}\) coefficients are completely independent of one another and of unit uncertainty. As a consequence, combining them appropriately allows us to construct indicators that are as little correlated as possible. Central to the present discussion is the helium glitch amplitude indicator, \(A_{\text{He}}\), which has the advantage of being completely independent of the indicators defined over the smooth part of the oscillation spectrum. In the present paper, we propose a revised definition of the amplitude defined in Farnir et al. (2019) that scales with the uncertainties on the observed frequencies
\[\mathcal{A}_{\text{He}}=\frac{A_{\text{He}}}{\sqrt{\sum_{i=1}^{N}1/\sigma_{i}^{2}}}, \tag{3}\]
with \(A_{\text{He}}\) the helium glitch amplitude as defined in Farnir et al. (2019) and \(\sigma_{i}\) the uncertainty on the i-th of the N observed frequencies. Nevertheless, to retrieve the helium glitch signature, fitting methods need to determine the acoustic depth at which the glitch occurs (see for example Monteiro, 2002; Basu et al., 2004; Houdek & Gough, 2007; Verma et al., 2014). It is defined as,
\[\tau_{\text{He}}=\int_{r_{\text{He}}}^{R_{*}}\frac{dr}{c(r)}, \tag{4}\]
where \(r\) is the radius of the considered layer of the star, \(R_{\star}\) the radius of the stellar surface, \(r_{\rm He}\) the radius of the helium glitch, and \(c\) (\(r\)) the local sound speed. The exact value of \(r_{\rm He}\) is somewhat arbitrary as the depression in \(\Gamma_{1}\) corresponds to the contribution of the hydrogen, first and second partial helium ionisation zones (See for example Figs. 1 and 3 in Verma et al., 2014; Houdayer et al., 2021). Determining its value therefore constitutes an uncertainty of glitch fitting approaches. The WhoSGIAd method is no exception as the glitch orthonormal basis elements are function of this acoustic depth. For clarity, we recall the basis elements used to describe the helium glitch (before orthonormalisation)
\[p_{\rm He,k,j}(\widetilde{n})\ =\ f_{j}\left(4\pi T_{\rm He}\widetilde{n} \right)\widetilde{n}^{-k}, \tag{5}\]
with \(k\ =\ (4,5)\), \(j\ =\ (1,2)\), \(f_{1}\ (\bullet)\ =\ \sin\left(\bullet\right)\), \(f_{2}\ (\bullet)\ =\ \cos\left(\bullet\right)\), \(\widetilde{n}\ =\ n+l/2\), and
\[T_{\rm He}=\tau_{\rm He}\Delta, \tag{6}\]
the dimensionless acoustic depth of the helium glitch where \(\Delta\) is the large frequency separation as defined in WhoSGIAd. This redefinition of the acoustic depth prevents the frequency function to be fitted from being implicit in frequency and further reduces non-linearities. This \(T_{\rm He}\) parameter remains the only non-linear parameter present in the WhoSGIAd formulation. Farnir et al. (2019, 2020) resolve this issue by keeping it fixed to a model value obtained from a partial modelling. While their method has proven efficient and accurate for the 16CygA and B system, the need for a \(T_{\rm He}\) value, which is model dependent, prevents their approach from being fully automated. This is the issue we aim to address in the present publication.
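To make Eqs. (4)-(6) concrete, a minimal numerical sketch is given below. The radius grid, sound-speed profile and the chosen \(r_{\rm He}\) are assumed to come from a stellar model; the code is only illustrative and is not the WhoSGIAd implementation.

```python
import numpy as np

def acoustic_depth(r, c, r_he):
    """tau_He = int_{r_He}^{R_*} dr / c(r)  (Eq. 4), evaluated with the
    trapezoidal rule on a model grid r (increasing radius), sound speed c."""
    mask = r >= r_he
    return np.trapz(1.0 / c[mask], r[mask])

def glitch_basis(n, l, T_he):
    """Helium-glitch basis elements p_{He,k,j}(n~) of Eq. (5), before
    orthonormalisation, with n~ = n + l/2."""
    n_t = np.asarray(n, dtype=float) + np.asarray(l, dtype=float) / 2.0
    cols = []
    for k in (4, 5):
        for f in (np.sin, np.cos):
            cols.append(f(4.0 * np.pi * T_he * n_t) * n_t ** (-k))
    return np.array(cols)

# Dimensionless depth of Eq. (6): T_He = tau_He * Delta,
# with Delta the WhoSGIAd large frequency separation.
```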
### Some intuition
The objective of the present paper is to relate the dimensionless helium glitch acoustic depth to observables that are easy to obtain. To do so, we may build some intuition from Saha's relation (Saha, 1920)
\[\frac{He^{++}\,n_{e}}{He^{+}}\ =\ \frac{g}{h^{3}}\ (2\pi m_{e}k_{B}T)^{\frac{3}{2}}\ e^{-\frac{\chi}{k_{B}T}}, \tag{7}\]
where \(g\) is the ratio of the statistical weights between the first and second ionisation states of helium, \(h\) Planck's constant, \(k_{B}\) Boltzmann's constant, \(m_{e}\) the electron mass, and \(\chi\) the helium second ionisation energy. This equation allows us to compute the number of fully ionised helium atoms \(He^{++}\), that of partially ionised helium atoms \(He^{+}\), and the number of free electrons \(n_{e}\), which actually conceals a dependency in density. While we assume here that helium is the only species to be partially ionised - hydrogen is fully ionised and metals are not -, Eq. 7 shows that both the temperature and density at the layer impact the relative number of helium atoms in their first and second states of ionisation. Therefore, we expect the acoustic depth of the glitch to be mostly determined by the profile in the stellar interior of these two quantities. The exact nature of this profile should vary with stellar parameters. Therefore, the first natural step of this work is to study the evolution of the dimensionless acoustic depth of the helium glitch (Eq. 6) across the HR diagram, where the effective temperature directly intervenes. Rather than using the classical \(T_{\rm eff}\) - \(L\) diagram, we take advantage of the precise seismic data and build a seismic HR diagram as in Christensen-Dalsgaard (1988) but using the \(\Delta_{0}\) and \(\dot{r}_{02}\) seismic indicators (as defined in Farnir et al., 2019). These correspond to the large separation of radial modes and the average small separation ratio between radial and quadrupolar modes.
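As a purely illustrative transcription of Eq. (7), the snippet below evaluates its right-hand side in SI units; the statistical-weight ratio \(g\) is left as a free parameter, and the helium second-ionisation energy (\(\chi\simeq 54.4\) eV) is taken from standard tables rather than from the present paper.

```python
import numpy as np
from scipy import constants as const

def saha_rhs(T, g=1.0, chi_eV=54.4):
    """Right-hand side of Eq. (7): (g / h^3) (2 pi m_e k_B T)^{3/2} exp(-chi / k_B T).
    T in kelvin; returns a number density in m^-3.  g and chi_eV are assumptions
    (54.4 eV is the standard He second-ionisation energy)."""
    chi = chi_eV * const.eV
    return (g / const.h**3) * (2.0 * np.pi * const.m_e * const.k * T) ** 1.5 \
        * np.exp(-chi / (const.k * T))
```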
## 3 Theoretical study of the helium acoustic depth over a grid of models and linear estimation
### Evolution across the seismic HR diagram
To motivate the use of a linear estimation of \(T_{\rm He}\), we show its evolution on the main sequence along a grid of models with masses ranging from \(0.9\ M_{\odot}\) to \(1.18\ M_{\odot}\). We selected this range to focus on models that do not have a convective core on the main sequence, which could introduce higher order contributions in the acoustic depth evolution. (We indeed observed for higher masses that the helium acoustic depth varies in a strongly non-linear fashion as a function of \(\Delta\) and \(\dot{r}_{02}\), as opposed to the lower masses.) All the models have been built using the CLES stellar evolution code (Scuflaire et al., 2008) as described in Farnir et al. (2019).
Figure 1 shows the evolution of the dimensionless acoustic depth, as a colour gradient, with \(\Delta_{0}\) and \(\dot{r}_{02}\) for a composition of \(X_{0}=0.68\) and \(Z_{0}=0.024\), which is typical of solar-like stars. In this figure, each track corresponds to an evolutionary track on the main sequence (starting at the ZAMS on the top right and finishing at the TAMS on the bottom left) for a given mass, increasing from right to left. We first observe that its evolution along the grid seems both monotonic and linear, with increasing values from top to bottom. Therefore, we adjust a linear relation in \(\Delta_{0}\) and \(\dot{r}_{02}\) of the form
\[T_{\rm He,lin}\simeq a\Delta_{0}+b\dot{r}_{02}+c, \tag{8}\]
where \(a\), \(b\), and \(c\) are the coefficients to be adjusted.
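A straightforward least-squares adjustment of Eq. (8) over the grid can be written as follows; the sketch assumes the grid values of \(\Delta_{0}\), \(\dot{r}_{02}\) and \(T_{\rm He}\) are available as arrays and is our own illustration, not the procedure used to produce the quoted coefficients.

```python
import numpy as np

def fit_T_he_relation(delta0, r02, T_he):
    """Least-squares fit of T_He ~ a*Delta_0 + b*r02 + c (Eq. 8) over a grid of models."""
    A = np.column_stack([delta0, r02, np.ones(len(delta0))])
    (a, b, c), *_ = np.linalg.lstsq(A, T_he, rcond=None)
    return a, b, c

def T_he_lin(a, b, c, delta0, r02):
    """Linear estimate T_He,lin of Eq. (8)."""
    return a * delta0 + b * r02 + c
```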
To validate our adjustment, we now compute the reduced differences in dimensionless acoustic depth between the fitted and exact (with the elected location of the helium glitch) values over the same grid. These are expressed as
\[\delta T_{\rm He}=\frac{T_{\rm He,model}-T_{\rm He,lin}}{T_{\rm He,model}}, \tag{9}\]
with the 'model' and 'lin' subscripts representing the model value obtained by integration (Eqs. 4 and 6) and the one obtained by the adjusted linear relation (Eq. 8), respectively. This is represented as a colour gradient in Fig. 2, where the difference is expressed as a percentage. We observe that the relation fares rather well, with a maximum
Figure 1: Evolution during the main sequence of the dimensionless helium glitch acoustic depth along a grid of models. The models have an initial composition of \(X_{0}=0.68\) and \(Z_{0}=0.024\) and masses ranging from \(0.90M_{\odot}\) to \(1.18M_{\odot}\) (\(0.02M_{\odot}\) step, right to left). The colour gradient represents the value of \(T_{\rm He}\).
discrepancy of at most 3 % (which corresponds to an absolute error of \(T_{\text{He,model}}-T_{\text{He,lin}}\simeq 0.0025\), about a tenth of the \(\sim 0.02\) span of the grid). For the sake of comparison, Farnir et al. (2019) had stated that, in 16CygA's case, a change in the dimensionless acoustic depth of 10 % had no significant impact on the measured glitch amplitude, meaning that the inferred helium abundance would remain untouched. This however has to be regarded with caution as it corresponds only to a specific star and the physical conditions can substantially vary from star to star. We also note in Fig. 2 that the difference does not vary monotonically along the grid, showing an hourglass shape with a zero crossing close to its center and maximum differences at the zero-age main sequence and terminal-age main sequence. This is probably a calibration effect and suggests that a higher order relation could improve the agreement. However, given the already satisfactory agreement, such more complex relations are unnecessary. Furthermore, we will later touch upon means to further improve our results (see Sect. 4).
### Impact of the composition
As one would expect, the stellar composition may vary between stars and we expect that it will impact the measured glitch acoustic depth. To test this hypothesis, we compute the relative difference in acoustic depth (Eq. 9) over grids of models with different chemical compositions. We consider here the composition pairs \((X_{0},Z_{0})=(0.68,0.012)\), \((0.74,0.012)\), and \((0.74,0.024)\) which should display large enough variations to observe an impact on the estimated helium glitch acoustic depth (e.g. typical ranges of values such as the one considered by Nsamba et al., 2021, Fig. 4, encompass the one considered here). We illustrate these results in Figs. 3 to 5 where we use the values of the coefficients fitted to our reference case (\(X_{0}=0.68\) and \(Z_{0}=0.024\), Figs. 1 and 2).
When the coefficients of Eq. 8 are refitted for each composition, they change appreciably, almost tripling in value. However, considering values typical of the middle of our grid, this change in fitted coefficients with composition only leads to an approximate difference of \(\sim\,0.3\%\) in \(T_{\text{He}}\) with respect to our reference case. We also add that, in the worst case (\(X_{0}=0.68,\,Z_{0}=0.012\), Fig. 3), only the models with a mass greater than \(1.12\,M_{\odot}\) present relative differences as high as \(\sim 15\%\). These actually correspond to models that preserve a convective core from their pre-main sequence phase, while stellar models of lower mass do not. This introduces non-linearities which may explain the large differences we observe in Fig. 3. Indeed, the large discrepancies only appear for these masses, while the other tracks are confined to a smaller range around zero. Removing tracks with a convective core from Fig. 3 indeed reduces the discrepancies, as displayed in Fig. 6, where the differences now span the \(-4\%\) to \(6\%\) range. Consequently, all the values computed for models without convective cores are within \(10\%\) of the actual value. This demonstrates that, on the main sequence and for models without a convective core, the linear estimate should provide results of reasonable quality.
## 4 Performance of different estimates with observed targets and impact on the inferred helium abundance
In the present section, we compare the results of three different approaches to estimate the helium glitch acoustic depth and assess their ability to reliably estimate the helium glitch amplitude, to in turn model the helium abundance. These three approaches are the linear formulation in \(\Delta\) and \(\dot{r}_{02}\) (Eq. 8), an optimised value via Brent's optimisation algorithm (minimising the differences between reference and fitted frequencies), and the value obtained by partial modelling of the target and computation of the integrated form, Eq. 4 (as formerly used with WhoSGIAd) - dubbed 'old' in the present paper. To do so, we consider four Kepler targets: 16CygA, 16CygB, KIC8006161, and KIC8394589. These are all solar-like stars within the Kepler LEGACY sample (Lund et al., 2017), which constitutes the most precise seismic data to this day. Except for 16CygA and B, for which we use the revised frequencies of Davies et al. (2015), we use the frequencies stated in Lund et al. (2017). From these frequencies, we ignore the ones with uncertainties greater than \(1\,\mu Hz\) as they are the most uncertain and may destabilise the search for models representative of WhoSGIAd indicators.
### Accuracy of the linear estimate
Before assessing whether the linear estimate of the helium acoustic depth allows us to retrieve an accurate helium abundance, we assess the accuracy of its value. To do so, we plot the evolution of the \(\chi^{2}\) function - evaluating the agreement between observed and fitted frequencies - with the value of the dimensionless acoustic depth. This function is defined as
\[\chi^{2}=\sum_{i=1}^{N}\frac{\left(\nu_{i,\text{obs}}-\nu_{i,\text{fit}} \right)^{2}}{\sigma_{i}^{2}}, \tag{10}\]
where \(\nu_{i}\) is the i-th of the N frequencies, the 'obs' and 'fit' subscripts corresponding to observed and WhoSGIAd fitted values, respectively, and \(\sigma_{i}\) the uncertainty of the i-th frequency. We show the two characteristic cases of 16CygB and KIC8394589 in Figs. 7 and 8, which constitute two extremes (the plots for the two remaining stars are given in Appendix A). In both figures, the horizontal axis corresponds to the value of the dimensionless acoustic depth, either as a relative difference from the linear estimate expressed as a percentage at the bottom, or as the exact value shown at the top. Because the bottom axis corresponds to a relative difference with respect to the linear estimate derived in the present paper, a zero value corresponds to this estimate, shown as the red dashed line for clarity. In both figures, we consider values in the range \(\left[0.5\,T_{\text{He,lin}},1.5\,T_{\text{He,lin}}\right]\), with the linear estimate denoted as \(T_{\text{He,lin}}\).
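Schematically, the optimised value discussed in the following is obtained by minimising this \(\chi^{2}\) as a function of \(T_{\rm He}\). In the sketch below, the function whosglad_fit is a placeholder for the linear adjustment performed at fixed \(T_{\rm He}\), and the bracketing interval mirrors the \(\pm 50\%\) range considered here; this is an illustration, not the actual implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def chi2(T_he, nu_obs, sigma, whosglad_fit):
    """Frequency chi^2 of Eq. (10) for a trial dimensionless acoustic depth T_He.
    whosglad_fit(T_he) is a placeholder returning the fitted frequencies at fixed T_He."""
    nu_fit = whosglad_fit(T_he)
    return float(np.sum((nu_obs - nu_fit) ** 2 / sigma**2))

def optimal_T_he(T_he_lin, nu_obs, sigma, whosglad_fit):
    """Bounded (Brent-type) minimisation of chi^2 around the linear estimate."""
    res = minimize_scalar(chi2, bounds=(0.5 * T_he_lin, 1.5 * T_he_lin),
                          args=(nu_obs, sigma, whosglad_fit), method="bounded")
    return res.x
```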
Focusing first on Fig. 7, we observe that the \(\chi^{2}\) is well behaved and presents a clear minimum. In addition, we note that this minimum is approximately \(5\%\) from the linear estimate. This means that the linear estimate provides a good guess of the optimal value. Figure 8 depicts a slightly more complicated picture for two reasons. First, the \(\chi^{2}\) landscape produces a less clear minimum, due to a flat region at the bottom of the \(\chi^{2}\) depression. Second, we observe now that this minimum lies at almost \(1.4\) times the linearly computed value. This could have a significant impact on the inferred helium abundance. This will be assessed in Sect. 4.3.
### Impact of the helium acoustic depth on the measured glitch amplitude
One of the most crucial end products of helium glitch fitting is the ability to precisely constrain the helium abundance of solar-like stars. As we have provided several estimates for \(T_{\rm He}\), which may in turn impact the helium glitch amplitude and, therefore the inferred helium abundance, it becomes necessary to compare these approaches and assess their accuracy when modelling the helium abundance. We first assess the impact of the estimated value of \(T_{\rm He}\) on the measured glitch amplitude, \(\mathcal{A}_{\rm He}\). Using the same set of benchmark stars as previously, we compute the evolution of the helium glitch amplitude over the same range of acoustic depth values as in Figs. 7 and 8. The results are presented in Figs. 9 and 10 (additional plots are presented in Appendix A). In these figures, the value corresponding to the optimum of \(\chi^{2}\) - as retrieved with Brent's optimisation procedure -, is shown by the continuous red vertical line. The meaning of the axes and vertical lines are the same as in Figs. 7 and 8, with the dotted line corresponding to the 'old' approach and the dashed line to the linear estimate presented in this paper. Individual estimates of the helium glitch acoustic depth are provided in Table 2. Additionally, for better visualisation purposes, we show a departure of one \(\sigma\) (\(\mathcal{A}_{\rm He}\)) with respect to the optimised value as the horizontal blue line. Observing these figures, it is striking that all the \(T_{\rm He}\) estimates provide a measurement of the glitch amplitude within one \(\sigma\) of the optimised value, which we would consider to be the better one. Only a large change in \(T_{\rm He}\) would lead to a significant change in \(\mathcal{A}_{\rm He}\). Consequently, these results demonstrate the relative insensitivity of our approach to the exact value of this somewhat arbitrary parameter - we recall that its exact definition is mostly a matter of convention as the feature from which the helium glitch originates is rather broad. From these considerations, we expect that a simple approach is best for the swiftness and automation of WhoSGlAd as it should not significantly impact the inferred helium abundance. Therefore, we recommend to use the value obtained by Brent's optimisation scheme as it is easy to implement, fast in execution, robust, and relatively model independent.
### Impact on the inferred helium abundance
In order to assess the impact of using the optimised value for \(T_{\rm He}\) on the inferred helium abundance, we build two stellar models for each of the four targets. The procedure is similar to that presented in Farnir et al. (2020). Considering the same set of reference physical
\begin{table}
\begin{tabular}{c c c c} \hline Id & model \(T_{\rm He}\) & fit \(T_{\rm He}\) & optimal \(T_{\rm He}\) \\ \hline
16CygA & 0.0756 & 0.0702 & 0.0839 \\
16CygB & 0.0751 & 0.0717 & 0.0753 \\
KIC8394589 & 0.0699 & 0.0661 & 0.0888 \\
KIC8006161 & 0.0731 & 0.0708 & 0.0657 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between different estimates of the dimensionless acoustic depth. ‘model’ values corresponds to the ones obtained through the older approach via a stellar model, ‘fit’ values are the results of using the linearly adjusted relation (Eq. 8), and ‘optimal’ values are obtained after the Brent optimisation step, corresponding to the \(\chi^{2}\) minimum.
Figure 8: Same as Fig. 7 in KIC8394589’s case.
Figure 7: Evolution of the agreement between fitted and observed frequencies of 16CygB, expressed as a \(\chi^{2}\) value, as a function of the helium glitch acoustic depth, expressed as the relative difference with respect to the linear estimate (in %). The \(\chi^{2}\) minimum, obtained via Brent’s minimisation procedure, is represented by the red vertical line. We also show the linear estimate as a vertical dashed line and the value obtained with the ‘old’ approach as a dotted one (due to its proximity with the continuous one, it is barely visible).
Figure 9: Evolution of the helium glitch amplitude of 16CygB with the variation of the helium glitch acoustic depth with respect to the linear estimate (Eq. 8). This variation is expressed as a relative difference in percent. The \(\chi^{2}\) minimum, obtained via Brent’s minimisation procedure, is represented by the red vertical line. We also show the linear estimate as a vertical dashed line and the value obtained with the ‘old’ approach as a dotted one (due to its proximity with the continuous one, it is barely visible). The blue horizontal line corresponds to a 1 \(\sigma\) variation in amplitude from the optimal value.
ingredients as their reference models, we use a Levenberg-Marquardt minimisation algorithm to find a CLES model that best matches the four observed seismic indicators: \(\Delta\), \(\dot{r}_{01}\), \(\dot{r}_{02}\) and \(\mathcal{A}_{\rm He}\). The free parameters of the procedure are the stellar age (t), mass (M), initial hydrogen abundance (\(X_{0}\)), and initial metals ratio (\((Z/X)_{0}\)). We consider that a model properly fits the observed data when the theoretical seismic indicators are within one \(\sigma\) of their observed value. In other words, a model is considered acceptable when \(\chi^{2}\leq 1\), with
\[\chi^{2}=\sum_{i=1}^{L}\frac{\left(\dot{r}_{i,\rm obs}-\dot{r}_{i,\rm th} \right)^{2}}{\sigma_{i}^{2}}, \tag{11}\]
where \(\dot{r}_{i}\) is the i-th of the L constraints, the 'obs' and 'th' subscripts corresponding to observed and theoretical values, respectively, and \(\sigma_{i}\) the uncertainty associated with the indicator.
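For reference, Eq. (11) and the acceptance criterion translate directly into a short helper; the sketch below is ours and merely restates the test.

```python
import numpy as np

def indicator_chi2(obs, theo, sigma):
    """chi^2 over the L seismic indicators (Eq. 11)."""
    obs, theo, sigma = map(np.asarray, (obs, theo, sigma))
    return float(np.sum((obs - theo) ** 2 / sigma**2))

# A candidate stellar model is accepted when indicator_chi2(obs, theo, sigma) <= 1.
```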
The only difference between the two models lies in the determination of the helium glitch amplitude. For one set of models, dubbed 'old', \(T_{\rm He}\) is obtained following Farnir et al. (2020)'s approach. That is, we first build a model that adjusts \(\Delta\), \(\dot{r}_{01}\), and \(\dot{r}_{02}\) and compute the integral form in Eq. 4 to provide an estimate of \(T_{\rm He}\). The amplitude of the helium glitch signature can then be measured and used as a constraint to produce the final model, yielding the helium abundance estimate. The motivation behind this approach was that these three indicators are completely independent of the helium glitch contribution expressed in the WhoSGIAd basis and should allow us to constrain the most part of the stellar structure. The second estimate, dubbed 'new', uses the optimised - via Brent's optimisation scheme - \(T_{\rm He}\) to retrieve \(\mathcal{A}_{\rm He}\) without needing a partial modelling of the star. The provided value is therefore completely model independent. Then, a model representative of the four indicators is built to estimate the helium abundance of the target.
The results are shown in Fig. 11 where we compute the difference in helium abundances between the 'old' and 'new' approaches for each of the four stars. We observe that the differences in values obtained with the two approaches never exceed \(0.002\) dex. For comparison, typical uncertainties found in the literature are in the range \(\sigma\left(Y_{0}\right)\in[0.01,0.05]\) (Farnir et al., 2020; Nsamba et al., 2021; Verma et al., 2022, for example, either using more sophisticated modelling approaches or accounting for the impact of the elected physical prescription in the stellar models). In our opinion, the uncertainties on the initial helium abundance provided by our local model search algorithm are not realistic as they are the product of the inverse Hessian matrix, computed through finite differences. Therefore, these are prone to imprecisions in the derivatives computation and we do not display them. To robustly estimate these uncertainties, more sophisticated modelling approaches (such as MCMC simulations) would be necessary. We also insist on the fact that the uncertainties on the individual seismic indicators used are the result of the propagation of the frequency uncertainties. In all the cases, we observe that the helium abundance estimate has barely changed in comparison to the 'old' approach and that the values remain within typical literature uncertainties of a zero difference (for the sake of completeness, stellar parameters for the four stars in the two cases are given in Appendix B). This is a direct consequence of the robustness of our approach and its relative insensitivity to the acoustic depth parameter, as shown in Sect. 4.2. Additionally, we observe that, in some instances, the models computed with the older approach are already compatible with the new measure of \(\mathcal{A}_{\rm He}\) and, as a consequence, the optimal model, and inferred helium abundance, has not changed. For comparison's sake, we compare the values of the initial helium abundance we infer with the optimised value of \(T_{\rm He}\) to the ones obtained by Verma et al. (2019) and Nsamba et al. (2021). This is presented in Table 3. To avoid an artificial spread of the values, we considered only the models computed with MESA (Paxton et al., 2011) in Verma et al. (2019)'s study and only the models computed with grid A from Nsamba et al. (2021). This prevents us from factoring in the impact of different physical prescriptions and evolution models within the same studies. We observe that, in all cases, our values agree within one \(\sigma\) of the ones retrieved by Verma et al. (2019). Compared to Nsamba et al. (2021)'s values, we find that they agree for KIC8006161 and KIC8394589, which have the largest uncertainties in \(Y_{0}\), and that they barely disagree for 16Cyg A and B. This might be due to the fact that Nsamba et al. (2021)'s results are on the low side of \(Y_{0}\) values and that they do not account for the helium glitch information in their fitting procedure; rather, they use the individual frequencies. Overall, this shows the validity of our refined approach.
## 5 Conclusion
The precise measurement of the helium glitch signature is an essential body of work in order to accurately estimate the helium abundance
Figure 11: Difference between the two inferred values for the initial helium abundance for the four stars considered in this paper. These two values use either the model value of \(T_{\rm He}\) or the Brent optimised one.
Figure 10: Same as Fig. 9 in KIC8394589’s case with the exception that the blue horizontal line corresponding to a one \(\sigma\) change in the optimised helium glitch amplitude is not visible as it sits outside of the range of acoustic depth values considered.
of low mass stars and lift the degeneracy between helium content and mass (Lebreton and Goupil, 2014), hindering our ability to retrieve precise stellar parameters. Nonetheless, due to its faint nature, its detection is a difficult task and numerous approaches have been proposed (e.g. Mazumdar et al., 2014; Verma et al., 2014; Farnir et al., 2019). While the most important piece of information carried by the glitch is its amplitude, as it correlates with the surface helium content (Basu et al., 2004; Verma et al., 2014; Farnir et al., 2019; Houdayer et al., 2021), its acoustic depth remains an important parameter as it is necessary to estimate it to then measure the amplitude. Additionally, this parameter appears non-linearly in the glitch's expression. In the present paper, we developed an approach to automatically estimate the helium glitch acoustic depth. With a simple minimisation step, we are able in a fraction of a second to automatically provide a robust and accurate value for the helium glitch acoustic depth, and therefore the helium glitch amplitude. We demonstrated for solar-like stars that our method is robust with respect to the exact definition of the helium glitch acoustic depth. The simple and fast approach we propose leads to measurements of the helium glitch amplitude that are consistent with the older approach implemented with WhoSGIAd (Farnir et al., 2019). Using four Kepler LEGACY targets, we further demonstrated that the helium abundance inferred using the revised glitch amplitude is consistent with both the older approach and the values presented in independent studies (Verma et al., 2019; Nsamba et al., 2021). As the precise retrieval of the helium glitch signature is crucial to the accurate measurement of the helium abundance of low-mass stars, the proposed method proves to be an excellent candidate as it allows us to fully automate the WhoSGIAd method by alleviating the need for a partial modelling of the considered target. Thanks to WhoSGIAd's precision and speed of execution, the model independent glitch signature adjustment is automatically carried out in less than a second. Furthermore, as we demonstrated, the model independent approach proposed here is compatible with other studies, offering many advantages. This opens the possibility to robustly analyse very large samples of data as can be expected from future missions such as PLATO. An implementation of the acoustic depth estimation presented in this paper within WhoSGIAd will be made available on WhoSGIAd's GitHub page ([https://github.com/Tuglut/WhoSGIAd-python](https://github.com/Tuglut/WhoSGIAd-python)) by the time of publication.
## Acknowledgements
M.F. and A.-M.B. acknowledge the support of STFC consolidated grant ST/T000252/1.
The authors would like to thank the referees for their constructive remarks and suggestions, significantly improving the present paper.
## Data Availability
The data underlying this article are available in the article and in its online supplementary material.
|
2302.06947 | Counting and Sequential Information Processing in Mechanical
Metamaterials | Materials with an irreversible response to cyclic driving exhibit an evolving
internal state which, in principle, encodes information on the driving history.
Here we realize irreversible metamaterials that count mechanical driving cycles
and store the result into easily interpretable internal states. We extend these
designs to aperiodic metamaterials which are sensitive to the order of
different driving magnitudes, and realize 'lock and key' metamaterials that
only reach a specific state for a given target driving sequence. Our strategy
is robust, scalable and extendable, and opens new routes towards smart sensing,
soft robotics and mechanical information processing. | Lennard J. Kwakernaak, Martin van Hecke | 2023-02-14T10:07:00Z | http://arxiv.org/abs/2302.06947v1 | # Counting and Sequential Information Processing in Mechanical Metamaterials
###### Abstract
Materials with an irreversible response to cyclic driving exhibit an evolving internal state which, in principle, encodes information on the driving history. Here we realize irreversible metamaterials that count mechanical driving cycles and store the result into easily interpretable internal states. We extend these designs to aperiodic metamaterials which are sensitive to the order of different driving magnitudes, and realize 'lock and key' metamaterials that only reach a specific state for a given target driving sequence. Our strategy is robust, scalable and extendable, and opens new routes towards smart sensing, soft robotics and mechanical information processing.
Counting a series of signals is an elementary process that can be materialized in simple electronic or neural networks [1]. Even the Venus flytrap can count, as it only snaps shut when touched twice, despite not having a brain [2]. While the ability to count is not commonly associated with materials, certain complex materials, from crumpled sheets to amorphous media, can exhibit memory effects where the state depends on the driving history [3; 4]. Under cyclic driving, their response may then feature subharmonic behavior [5; 6; 7; 8; 9; 10; 11; 12] or, as was recently shown, a transient where the system only settles in a periodic response after \(\tau>0\) driving cycles [13; 14; 15]. The latter response thus counts the number of driving cycles in principle, but in practice, the link between this number and the internal state is highly convoluted. Materials that would feature controlled counting could simplify the design of soft robotics and intelligent sensors, and more widely, open a route towards sequential information processing. However, we have no rational strategies to control the link between state and count or to realize in-material counting.
Here we introduce a general platform for metamaterials [16] that count mechanical compression cycles. Our metamaterials consist of unit cells that each feature a memory-beam (m-beam) that is either buckled left or right, which we represent with a binary value \(s_{i}=0\) or \(1\)[17] (Fig. 1a). The unit cells are designed to interact with their neighbors such that under cyclic compression any unit cell in the '1' state copies this state to its right neighbor (Fig. 1b-c). This leads to a mechanically clocked wave where the '1' state advances rightward, one unit cell per compression cycle. Hence, the collective state, \(S:=\left\{s_{1},s_{2},\dots\right\}\), evolves like in a cellular automaton [18], with repeated cyclical compression yielding simple predictable pathways.
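The idealized update rule can be written as a one-line cellular-automaton step. The sketch below is an abstraction of the mechanics (not a simulation of the beams) and reproduces, for the manually programmed initial state discussed further on, the \(\tau=3\) transient to the absorbing state.

```python
def compression_cycle(state):
    """One compression cycle (strain swept from eps_m up to eps_M and back):
    every unit cell in state '1' copies that state to its right neighbour."""
    new = list(state)
    for i, s in enumerate(state[:-1]):
        if s == 1:
            new[i + 1] = 1
    return new

state = [0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0]       # initial state {01000111010}
for cycle in range(1, 4):
    state = compression_cycle(state)
    print(cycle, state)                          # absorbing state reached after tau = 3 cycles
```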
We combine such beam counters to realize metamaterials which exhibit more complex forms of sequential information processing than counting, including the detection of compression cycles of multiple amplitudes, as well as their sequential order. Together, these establish a general platform for realizing targeted multi-step pathways in metamaterials and open a route towards sequential information processing _in materials_[19; 20; 21].
## I Unit cell and cyclic driving
We aim to realize metamaterials where state '1' spreads to the right when the compressive strain \(\varepsilon\) is cycled between \(\varepsilon_{m}\) and \(\varepsilon_{M}\) (Fig. 1).
Figure 1: (a) Schematic representation of the evolution of a ’beam counter’ metamaterial with \(n=4\) unit cells under two compression cycles. Each unit cell contains a buckled m-beam (memory beam) which encodes a single bit. When the strain \(\varepsilon:=\Delta/L\) is cycled between \(\varepsilon_{m}\) and \(\varepsilon_{M}\), the m-beams interact (grey symbols), so that the ’1’ state is copied to the right (triangle), leading to the step wise advancing of the ’1’ state to the right. (b) Geometry of a unit cell \(i\) (highlighted), containing a slender m-beam and a thicker, asymmetric s-beam (slitted beam) — lengths are non-dimensionalized by setting the beams rest-lengths to \(1\). (c) Evolution of beam pairs under increased compression from \(\varepsilon_{m}\) (left) to \(\varepsilon_{M}\) (right): when the spacing \(d_{i}\) is smaller than a critical distance \(D^{*}\), the buckled state of the slender beam is copied to the thick beam (top); when \(D_{i}>D^{*}\), the buckled state of the thick beam is copied to the slender beam (bottom).
We note that, in contrast to recent metamaterials which exhibit sequential shape changes under monotonic driving [22; 23; 24; 25; 26], we require a sequential response under cyclic driving. This necessitates unit cells that memorize their previous state, interact with their neighbors, and break left-right symmetry. We satisfy these requirements with unit cells \(i\) containing two beams (Fig. 1b). The slender m-beams encode states \(s_{i}=0\) or \(1\) in their left or right buckled configurations. We choose \(\varepsilon_{m}\) larger than their buckling strain so they retain their state. The thicker and non-trivially shaped s-beams facilitate interactions between the m-beams, and buckle at a strain larger than \(\varepsilon_{m}\) but smaller than \(\varepsilon_{M}\).
The detailed design involves a careful choice of the symmetry breaking beam shapes and their spacings. First, weakly symmetry breaking rounded corners at the ends of the m-beams control their buckling into a desired initial configuration \(S=\{100\dots\}\) -- this does not appreciably modify the evolution of the sample during compression cycles, yet allows resetting the beam counter by momentarily cycling the strain towards zero. Second, the s-beams feature similarly rounded corners that make them buckle left, and a slit which extends their reach when they snap to the right and the slit opens up [27]. As we show below, these symmetry breaking enhancements are crucial for their role in right-copying the '1'-state of the m-beams. Third, we use the beam spacings \(d\) and \(D\) to control the interactions between s- and m-beams. We found that when two buckled beams of unequal thickness are brought in contact, upon further compression they either both snap left or snap right -- the direction depends on whether their distance is smaller or larger than a critical distance \(D^{*}\). We choose \(d_{i}<D^{*}\) and \(D_{i}>D^{*}\) so that contact interactions between neighboring m- and s-beams favor rightward snapping of the beams (Fig. 1c).
## II Counting and controllable transients
We combine our unit cells to construct a 'beam counter' with \(n=11\) unit cells, using standard 3D printing and molding techniques (see Supplemental Material [28]; Fig. 2). We cycle the compression in a custom-built setup that allows accurate parallel compression of wide samples, and track the center locations of the middle of the m-beams (Fig. 2b). Ramping up the strain from zero to \(\varepsilon_{m}\), the system reaches the initial state \(\{1000000000\}\) (Fig. 2b). Repeated compression cycles show the step-by-step copying of the '1' state of the m-beams to the right, which involves rightward snapping of the appropriate m-beam just after \(\varepsilon\) has peaked (Fig. 2b). Hence, the state evolves as \(\{100\dots 0\}\rightarrow\{110\dots 0\}\rightarrow\cdots\rightarrow\{111\dots 1\}\) (Fig. 2b) (see Movie 1 in [28]). We characterize such 'domain wall' states consisting of a string of '1's followed by 0's by the number of 0's, \(\sigma\). The evolution of our beam counter under cyclic compression can thus be seen as counting down from \(\sigma=10\) to \(\sigma=0\). Our design is robust, can be scaled down, and can be operated in a
Figure 2: (a) Beam counting metamaterial with 11 units, with center region highlighted; see Movie 1 in [28] (\(t_{i}=0.040\), \(T_{i}=0.10\), \(d_{i}=0.13\) and \(D_{i}=0.15\)). Arrows indicate weak symmetry breaking of the m-beams that makes the system reach state \(\{1000000000\}\) when \(\varepsilon\) is increased from zero. (b) Space-time plot, tracing the center positions of each m-beam as a function of time under cyclic compression \(\varepsilon_{m}\nearrow\varepsilon_{M}\searrow\varepsilon_{m}\) (\(\varepsilon_{m}=0.026\), \(\varepsilon_{M}=0.099\)). Beams in state 0 (1) are highlighted in yellow (blue). (c) Evolution of the beam counter prepared in the initial state \(\{0100011010\}\) (central parts shown only; beam state colored as above).
Figure 3: Comparison of the evolution of two unit cells during a compression cycle. (a) Original design. (b) Design without slits, which does not copy the ’1’ state. Frames (aIV) and (bIV) are not at the same strain, but compare the states where the second m-beam just loses contact with one of its neighboring s-beams; note that the m-beam is then, respectively, to the right (a) and left (b) of the neutral line (dashed).
hand-held device (see Movie 2 in [28]).
The evolution from the natural initial state \(\{100\dots\}\) only features a limited set of states, which do not contain substrings like \(010\) or \(001\). To demonstrate that our metamaterial correctly copies 1-bits to the right, we use manual manipulation to program the metamaterial in the initial state \(\{01000111010\}\) -- this state contains all possible three-bit substrings. Its evolution shows that our metamaterial faithfully executes our target evolution (Fig. 2c; see Movie 3 in [28]). Moreover, we note that this initial state evolves to the absorbing state \(\{11\dots\}\) after only \(\tau=3\) cycles, as the largest number of 0's to the right of a 1 is equal to three. Here, the transient \(\tau\) is not a material property but a simple function of the state [13; 14].
A detailed inspection of the evolution of adjacent unit cells illustrates that bit-evolution takes place in two phases (Fig. 3). First, when \(\varepsilon\) is increased beyond a unit-cell dependent critical strain \(\varepsilon^{\dagger}\), the '1' state of \(m_{i}\) is copied to \(s_{i}\) (Fig. 3aI-aIII). During this first phase, the left s-beam snaps open to the right, and the m-beam becomes sandwiched between two s-beams (Fig. 3aIII). In the second phase, \(\varepsilon\) is lowered to \(\varepsilon_{m}\), and the sandwiched m-beam snaps right, after which all beams relax to their new configuration (Fig. 3aIII-aV). To illustrate how the slits facilitate the copying of the right-buckled state, we compare the sandwiched states for s-beams with and without slits (Fig. 3; see Movie 4 and Supplemental Information [28]). Without slits, the sandwiched m-beam is pushed left and first loses contact with the right beam; with slits, the m-beam is pushed right, first loses contact with the left beam, and eventually moves right (Fig. 3a-b). We stress that although the slits are essential in the current design, we also realized beam counting in an alternative design that does not feature slitted beams (see Movie 5 and Supplemental Information [28]).
## III Sequential Processing
To demonstrate information processing beyond simple counting, we combine multiple beam counters into aggregate metamaterials (Fig. 4). Our first goal is to realize metamaterials which discriminate and count driving cycles of different peak compressions \(\varepsilon_{M}\). Specifically, we combine three \(n=4\) beam counters labeled \(aaa,bbb\), and \(ccc\) which have respective critical thresholds \((\varepsilon_{a}^{\dagger},\varepsilon_{b}^{\dagger},\varepsilon_{c}^{\dagger})\approx(0.073(4),0.085(3),0.092(2))\), which are all controlled by the same global strain \(\varepsilon\) (Fig. 4a-b). We label the resulting metamaterial as \(aaa|bbb|ccc\), and characterize its state by the number of '0' beams in each counter, \(\{\sigma_{i}\}\). We define driving cycles of different magnitude, \(A,B,C\), as compression sweeps \(\varepsilon_{m}\nearrow\varepsilon_{M}^{A,B,C}\searrow\varepsilon_{m}\), with \(\left(\varepsilon_{M}^{A},\varepsilon_{M}^{B},\varepsilon_{M}^{C}\right)\approx(0.078,0.089,0.099)\), such that \(\varepsilon_{a}^{\dagger}<\varepsilon_{M}^{A}<\varepsilon_{b}^{\dagger}<\varepsilon_{M}^{B}<\varepsilon_{c}^{\dagger}<\varepsilon_{M}^{C}\). Starting out in the initial state \(\{\sigma_{i}\}=\{3,3,3\}\), a single driving cycle (\(A\), \(B\) or \(C\)) then advances one, two or all three counters, yielding three distinct states \(\{2,3,3\}\), \(\{2,2,3\}\), or \(\{2,2,2\}\) respectively. Hence, from the state we can uniquely infer the applied driving cycle.
Crucially, longer driving sequences are also encoded in the internal state. We denote sequential driving cycles as, e.g., \(BAC\), for which \(\{\sigma_{i}\}\) evolves as \(\{3,3,3\}\xrightarrow{B}\{2,2,3\}\xrightarrow{A}\{1,2,3\}\xrightarrow{C}\{0,1,2\}\) (Fig. 4b). These states all encode specific information, e.g., state \(\{1,2,3\}\) encodes one \(A\) and one \(B\) pulse, whereas \(\{0,1,2\}\) encodes a memory of one \(B\), one \(C\) and an arbitrary number of \(A\) pulses. We note that while the capacity of our metamaterial is limited by one or more counters reaching zero, it can be enlarged by increasing the length \(n\) of the counters. Furthermore, we note that our metamaterial precisely materializes the Park Bench model that has been introduced as a toy model to understand Multiple Transient Memories [29; 30]. Regardless, our strategy combining multiple beam counters allows us to distinguish and count different signals.
So far, our metamaterials are insensitive to the order of input signals, which limits their functionality to counting. However, combining unit cells with different thresholds in a single 'strip' realizes heterogeneous metamaterials whose response is sequence dependent and, e.g., discriminates driving cycles \(ABC\) from \(BAC\). We realize the heterogeneous metamaterial \(bac\) (Fig. 4c-e). Starting from state \(\sigma=3\), we can use the same logic as before to infer its evolution and we subsequently collect all possible pathways in a transition graph (Fig. 4c). In particular, we find that input \(BAC\) yields \(\sigma=0\) while all other three-character permutations of \(A\), \(B\) and \(C\) yield \(\sigma=1\) (Fig. 4d-e). This illustrates that the response of heterogeneous counters is sequence dependent.
Finally, by combining heterogeneous and homogeneous counters we realize an aggregate metamaterial that unambiguously detects a specific input 'key' string and thus acts as a sequential 'lock'. We note that state \(\sigma=0\) for counter \(bac\) is not unique to input \(BAC\), but can also be reached with input sequences such as \(BBC\) and \(CCC\) (Fig. 4c). Hence, to uniquely recognize a string \(BAC\), we combine the counting metamaterial \(aaa|bbb|ccc\) with the heterogeneous counter \(bac\) (Fig. 4b,d). Out of all three-character strings, \(BAC\) is the only one that yields the collective state \(\{\sigma\}=\{0,1,2,0\}\) (Fig. 4f). The experimental demonstration of the response of the \(aaa|bbb|ccc|bac\) machine to input \(BAC\) is shown in Fig. 4b,d, which correspond to a single experimental run where all four counters were actuated in parallel (see Movie 6 in [28]). We note that our strategy can trivially be extended to longer sequences or larger alphabets.
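The logic of this aggregate machine can be captured in a few lines. The sketch below uses the threshold and peak-strain values quoted above and our reading of the advancement rule (a cycle advances a counter only if its peak strain exceeds the threshold of the next cell in line); it reproduces the \(\{0,1,2,0\}\) state for input \(BAC\) but is an idealization, not a mechanical model.

```python
EPS_DAGGER = {"a": 0.073, "b": 0.085, "c": 0.092}   # unit-cell thresholds eps_dagger
EPS_MAX = {"A": 0.078, "B": 0.089, "C": 0.099}      # peak strains of driving cycles A, B, C

def drive(counters, sequence):
    """Apply a driving sequence (e.g. 'BAC') to a list of counters such as
    ['aaa', 'bbb', 'ccc', 'bac'].  sigma_i counts the cells of counter i that
    have not yet switched to '1'; a cycle advances a counter only when its peak
    strain exceeds the threshold of the next cell in line."""
    sigma = [len(c) for c in counters]
    for cycle in sequence:
        for i, cells in enumerate(counters):
            if sigma[i] > 0:
                next_cell = cells[len(cells) - sigma[i]]      # leftmost remaining cell
                if EPS_MAX[cycle] > EPS_DAGGER[next_cell]:
                    sigma[i] -= 1
    return sigma

print(drive(["aaa", "bbb", "ccc", "bac"], "BAC"))   # -> [0, 1, 2, 0]
```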
While the design above cannot distinguish input \(BAC\) from longer sequences such as \(ABAC\), we can detect such longer strings by extending the counter for the weakest signal: out of all possible input sequences, the metamaterial \(aaaa|bbb|ccc|bac\) only reaches state \(\{0,1,2,0\}\) for input \(BAC\), thus allowing us to uniquely filter and detect such a string. Finally, we note that designs featuring one heterogeneous counter alongside multiple homogeneous counters
are not optimal. Unique detection of, e.g., three-symbol sequences can be achieved with fewer than four counters; in addition, many machines recognize multiple distinct input sequences (see Supplemental Material [28]).
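This behaviour can be summarized in a short simulation sketch (illustrative only), assuming the idealized rule that, per driving cycle, at most the frontmost unflipped beam of a counter flips, and only if the cycle's peak compression exceeds that beam's critical threshold; the threshold and drive values are the nominal ones quoted above.

```python
# Sketch of the idealized counter rule: per driving cycle, at most the frontmost
# unflipped beam of a strip flips, and only if the cycle's peak compression
# exceeds that beam's critical threshold (nominal values from the text).
from itertools import product

THRESH = {'a': 0.073, 'b': 0.085, 'c': 0.092}   # critical thresholds for beam types a, b, c
DRIVE  = {'A': 0.078, 'B': 0.089, 'C': 0.099}   # peak compressions of cycles A, B, C

def run(strip, sequence):
    """Return the number of unflipped ('0') beams after a driving sequence."""
    sigma = len(strip)                           # all beams start unflipped
    for cycle in sequence:
        front = len(strip) - sigma               # index of the frontmost unflipped beam
        if sigma > 0 and DRIVE[cycle] > THRESH[strip[front]]:
            sigma -= 1                           # that beam flips to '1'
    return sigma

machine = ['aaa', 'bbb', 'ccc', 'bac']           # the aggregate aaa|bbb|ccc|bac

# Only the key string BAC reaches the collective state {0, 1, 2, 0}.
for seq in product('ABC', repeat=3):
    if [run(strip, seq) for strip in machine] == [0, 1, 2, 0]:
        print(''.join(seq))                      # prints: BAC
```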
## IV Outlook
Our platform allows us to realize metamaterials with predictable counting-like pathways and easily readable internal states under cyclical driving. These metamaterials act as sequential thresholding devices, and can be generalized to detect more driving magnitudes and longer sequences. Moreover, similar sequential behavior can be realized in other designs, e.g., without slits. In contrast to recent mechanical platforms that store mechanical bits [17] and perform Boolean logic [19; 21], our metamaterials perform sequential computations, which are much more powerful than combinational logic. Extending our update rules to more complex cases, including those where the new state depends on multiple neighbors, as well as to higher dimensions, opens routes to create systems that are Turing-complete, such as 'rule 110' or Conway's game of life [18; 32]. Such 'cellular automata materials' would allow massively parallel computations _in materia_.
## Acknowledgements
We thank H. Bense, M. Caelen, C. Coulais, D. Holmes, B. Dura Fauli, D. Kraft, C. Meulblok and M. Munck for fruitful discussions and J. Mesman for technical support.
|
2305.13453 | A Meta-learning based Generalizable Indoor Localization Model using
Channel State Information | Indoor localization has gained significant attention in recent years due to
its various applications in smart homes, industrial automation, and healthcare,
especially since more people rely on their wireless devices for location-based
services. Deep learning-based solutions have shown promising results in
accurately estimating the position of wireless devices in indoor environments
using wireless parameters such as Channel State Information (CSI) and Received
Signal Strength Indicator (RSSI). However, despite the success of deep
learning-based approaches in achieving high localization accuracy, these models
suffer from a lack of generalizability and can not be readily-deployed to new
environments or operate in dynamic environments without retraining. In this
paper, we propose meta-learning-based localization models to address the lack
of generalizability that persists in conventionally trained DL-based
localization models. Furthermore, since meta-learning algorithms require
diverse datasets from several different scenarios, which can be hard to collect
in the context of localization, we design and propose a new meta-learning
algorithm, TB-MAML (Task Biased Model Agnostic Meta Learning), intended to
further improve generalizability when the dataset is limited. Lastly, we
evaluate the performance of TB-MAML-based localization against conventionally
trained localization models and localization done using other meta-learning
algorithms. | Ali Owfi, ChunChih Lin, Linke Guo, Fatemeh Afghah, Jonathan Ashdown, Kurt Turck | 2023-05-22T19:54:59Z | http://arxiv.org/abs/2305.13453v2 | # A Meta-learning based Generalizable Indoor Localization Model using Channel State Information
###### Abstract
Indoor localization has gained significant attention in recent years due to its various applications in smart homes, industrial automation, and healthcare, especially since more people rely on their wireless devices for location-based services. Deep learning-based solutions have shown promising results in accurately estimating the position of wireless devices in indoor environments using wireless parameters such as Channel State Information (CSI) and Received Signal Strength Indicator (RSSI). However, despite the success of deep learning-based approaches in achieving high localization accuracy, these models suffer from a lack of generalizability and cannot be readily deployed to new environments or operate in dynamic environments without retraining. In this paper, we propose meta-learning-based localization models to address the lack of generalizability that persists in conventionally trained DL-based localization models. Furthermore, since meta-learning algorithms require diverse datasets from several different scenarios, which can be hard to collect in the context of localization, we design and propose a new meta-learning algorithm, TB-MAML (Task Biased Model Agnostic Meta Learning), intended to further improve generalizability when the dataset is limited. Lastly, we evaluate the performance of TB-MAML-based localization against conventionally trained localization models and localization done using other meta-learning algorithms.
Wireless Indoor Localization, Channel State Information (CSI), Meta-Learning
## I Introduction
Contrary to outdoor localization, where line-of-sight (LOS) is present in most instances, there are a lot of challenges in indoor localization, such as the presence of physical barriers, multipath effect, and the complexity of indoor environments. These challenges have been widely studied, and through recent works, data-driven and Machine Learning (ML) approaches have shown promising results for indoor localization [1]. Traditional indoor localization methods, such as geometric-based approaches (multilateration, trilateration, and triangulation) or fingerprinting, rely on manual calibration, which can be time-consuming and labor-intensive. Moreover, these methods tend to be less accurate than data-driven methods, especially in complex indoor environments with obstacles and signal interference.
Many technologies have been studied as a medium for wireless-based indoor localization, such as ultrasonic, radio frequency identification (RFID), ultra-wideband (UWB), Bluetooth, and WiFi [2]. Out of the proposed technologies, Wi-Fi is often preferred for indoor localization due to its widespread availability, low cost, and ease of implementation. Moreover, Wi-Fi signals also have a relatively large coverage area, meaning fewer access points are needed to cover a given indoor space.
Most proposed Wi-Fi-based indoor localization models either use Received Signal Strength Indicator (RSSI) or Channel State Information (CSI), as both these parameters can provide valuable information regarding the location of wireless devices. RSSI is simple and very easy to obtain as it does not require any special hardware to acquire. However, it is very volatile, and its information is coarse because RSSI is simply the strength of the received wireless signal. In contrast, CSI provides information about the channel characteristics between a device and an access point. CSI can provide more detailed information about the wireless signal, including phase and amplitude in different sub-channels. Although CSI is more stable than RSSI, it is also volatile and susceptible to any environmental changes.
Even though these parameters are not perfect, many data-driven localization models have been proposed that incorporate one of the mentioned parameters, a mixture of them, or a processed version of them in their training dataset, and perform relatively well on the respective testing dataset [3]. The issue with these models is that they have been trained on a train-set collected from a specific location and at a specific time, and due to the high volatility of the mentioned parameters, the underlying distribution that the data-driven model has learned from the given train-set is certain to change when the environment changes or even with time. This means that the learned information for a specific location and time is nearly ineffective for other locations or the same location at a different time. For these conventionally trained ML models to perform well in new environments, they have to go through a complete process of training, which makes these models not be readily-deployable for new locations. Moreover, a complete training process can be very hard or even not feasible in some instances due to the limitations on resources, time, and new datasets. All these mentioned reasons render conventionally
trained ML models impractical as a scalable solution for indoor localization.
This paper aims to solve the aforementioned issues with conventionally trained indoor localization models. We propose a generalizable indoor localization model using meta-learning, which can utilize the knowledge gained from training on multiple datasets collected in different environments and apply it to new unseen environments with very little fine-tuning. To this end, we have collected CSI data in 33 different locations, with the data in each location constituting a separate task. We then evaluate the generalizability of the proposed meta-learning-based localization model and other benchmark methods by training on a set of the collected tasks and testing against a set of unseen tasks. Meta-learning algorithms require a sizeable number of training tasks, which are time-consuming and challenging to collect in the context of indoor localization. Thus, we propose a novel, data-efficient meta-learning algorithm, Task Biased Model Agnostic Meta Learning (TB-MAML), based on Model Agnostic Meta Learning (MAML) [4], to further improve generalizability even with relatively limited datasets. Lastly, we compare the generalizability of the TB-MAML-based localization model with other meta-learning-based localization models in terms of the number of tasks used for training.
The rest of this paper is structured as follows: Section II discusses the previous works on wireless indoor localization. Section III gives a brief introduction to meta-learning and MAML, followed by a description of our proposed meta-learning algorithm and the overall design of our indoor localization model. Section IV describes the dataset we have collected, explains the experiments we used to evaluate our proposed model, provides the evaluations, and discusses them. Finally, section V concludes the paper.
## II Related Work
Many of the earlier works focused on using Received Signal Strength Indicator (RSSI) as a measurement to determine the location of wireless devices [5, 6]. In [7], RSSI values for multiple reference points within an indoor perimeter are measured and stored. In the online phase, the RSSI values from three indoor APs are compared against the stored RSSI dataset based on the Euclidean distance. A weighted average is then calculated using the similarity of the new RSSI readings and the stored reference points. Horus [8] is another localization scheme that employs a probabilistic approach and utilizes RSSI data. In Horus, location-clustering techniques are implemented to reduce the computational requirements of the algorithm. In [9], the authors built a two-stage localization system based on K-Nearest Neighbors (KNN). In the first stage, their algorithm aims to identify the type of environment, and in the second stage, localization is performed using KNN. They utilized RSSI alongside a hybrid feature vector of Channel Transfer Function (CTF) and Frequency Coherence Function (FCF). They concluded that a model using multiple or hybrid features outperforms RSSI-only approaches.
While RSSI is simple and very easy to obtain, the information it carries about the channel is coarse as it only has one signal strength reading for each packet. As an alternative and a more reliable source of information, Channel State Information (CSI) can be used for localization [10]. CSI measures the amplitude and phase of the received signal at each subcarrier, providing detailed information about the channel characteristics.
DeepFi [11] proposes a Deep Neural Networks (DNN) model for indoor localization that uses CSI amplitude for its input. A greedy learning algorithm is used to train the model to reduce the computational complexity. Finally, in the online localization phase, DeepFi uses a probabilistic method based on the radial basis function to estimate the target's location. Evaluations indicate that DeepFi outperforms traditional statistical localization schemes such as HORUS [8] and FIFS [10].
ConFi [12] is the first localization paper that utilizes Convolutional Neural Networks (CNN). As CNNs are powerful tools for inferring information from images, ConFi arranges CSI amplitude data to create CSI feature images. The created feature images are then fed to a CNN with three convolutional and two fully connected layers. ConFi treats localization as a classification problem, where inputs are localized based on several specified reference points. Their evaluations show that ConFi outperforms other conventional data-driven localization methods, demonstrating that CNN-based localization is a viable option.
In CiFi [13], CSI phase data was used as a medium to calculate the angle of arrival (AoA). They used the Intel 5300 network interface card with three antennas to collect the CSI data. Based on the measured CSI phase data for every two adjacent antennas, the phase difference was obtained, from which AoA can be calculated. As AoA is not as random as raw CSI phase data, it was then fed as an input to the CNN-based localization model they proposed. Their results show that CiFi can compete with other established localization methods, such as DeepFi, suggesting that CSI phase data can also be effective for localization.
One fundamental issue with most of the mentioned localization models is the lack of generalizability and adaptability to new or dynamic environments, as these models have to be retrained when the environment changes to perform well. This dramatically hinders their applicability to real-world scenarios. To address this issue, a few recent works have utilized transfer learning and domain adaptation.
Transloc [14] is a knowledge transfer framework for indoor localization, which derives a cross-domain mapping to transfer the specific knowledge of one domain to another and then creates a homogeneous feature space. This enables the localization model to perform well when the environment changes, using only a limited amount of new training data from the new environment. To increase robustness against environmental changes, Fidora [15] augments the data with a variational autoencoder to add diversity and then employs a domain-adaptive classifier to adjust the localization model to the new data.
In a recently published work, authors of [16] utilized meta-learning for indoor localization to increase the generalizability of DL-based localization models. [16] proposes a localization framework based on MAML [4] as opposed to conventional DL-based localization models. The results presented in [16] are based on simulated RSSI data. Some parameters used to generate the simulated data, such as the room size, number of reference points, and noise level, differed for each scenario,
the parameters being set by pre-determined settings for training and testing scenarios separately. This was done to increase the diversity of scenarios. However, RSSI is highly dependent on many factors, such as obstacles, obstructions, and positioning, which simulations cannot fully capture; hence, the generated scenarios may not be realistically diverse. In the context of meta-learning, a lack of sufficiently diverse training scenarios may lead to meta-overfitting, in which the model memorizes the learning process for a handful of scenarios and does not reach generalizability for unseen scenarios.
## III Methodology
### _Meta-Learning_
Meta-learning, also known as "learning to learn," is a sub-field of machine learning that focuses on developing algorithms capable of quickly adapting to new tasks with limited data. In conventional deep learning, models are designed to perform well on a specific task with a fixed objective over a dataset divided into training and testing sets. However, meta-learning aims to improve a learning model's performance by training it to learn the learning process itself, enabling it to adapt to new tasks with a very small dataset and consequently in a shorter amount of time. In meta-learning, multiple tasks are divided into training tasks and testing tasks, and each task consists of a distinct objective, a support set (training set), and a query set (test set). In the inner loop (also referred to as the adaptation phase), the meta-learning model adapts to each task by training on the corresponding support set, followed by computing the loss function for that task on the query set. It should be noted that the outer objective function utilized in meta-learning models for overall learning is not the same as the objective function used for each task during the inner loop.
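For illustration only (the names, dimensions, and synthetic data below are placeholders, not the datasets or code used in this work), tasks with their own support and query sets can be organized as follows:

```python
# Illustrative organization of meta-learning tasks: each task carries its own
# support (adaptation) set and query (evaluation) set.
from dataclasses import dataclass
import numpy as np

@dataclass
class Task:
    support_x: np.ndarray   # features used in the inner loop
    support_y: np.ndarray   # 2-D reference-point coordinates
    query_x: np.ndarray     # held-out samples of the same scenario
    query_y: np.ndarray

def make_dummy_task(seed, n_support=5, n_query=20, dim=90):
    rng = np.random.default_rng(seed)
    xs = rng.normal(size=(n_support + n_query, dim)).astype(np.float32)
    ys = rng.uniform(0, 180, size=(n_support + n_query, 2)).astype(np.float32)
    return Task(xs[:n_support], ys[:n_support], xs[n_support:], ys[n_support:])

tasks = [make_dummy_task(seed) for seed in range(33)]
meta_train_tasks, meta_test_tasks = tasks[:25], tasks[25:]  # example split
```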
### _Model-Agnostic Meta-Learning (MAML)_
Among the many proposed meta-learning algorithms, MAML [4] is arguably the most popular algorithm. One reason for this popularity is that, as its name suggests, MAML is model agnostic, meaning that it can be applied to any differentiable model regardless of its architecture or specific learning objective. MAML aims to determine an initial set of parameters for the inner model, such that adapting to new tasks can be done as quickly as possible using the computed initial set of parameters. Formally, MAML considers an inner model \(f\) with a set of parameters \(\theta\) denoted by \(f_{\theta}\).
During the inner loop, for each task \(\mathcal{T}_{i}\), the model adapts to task \(\mathcal{T}_{i}\) by training on the corresponding support set and, respectively, updating model parameters \(\theta\) based on the inner objective function to compute \(\theta^{\prime}_{i}\). The following equation shows the adaptation phase of a single gradient step, but it can be extended to cases where multiple gradient steps are taken, as well.
\[\theta^{{}^{\prime}}_{i}=\theta-\alpha\Delta_{\theta}\mathcal{L}_{\mathcal{T}_ {i}}(f_{\theta}) \tag{1}\]
where \(\alpha\) is the step size.
The outer objective function used in the outer loop is defined as below:
\[\min_{\theta}\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}\left(f_{\theta^{\prime}_{i}}\right)=\sum_{\mathcal{T}_{i}\sim p(\mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}\left(f_{\theta-\alpha\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{i}}\left(f_{\theta}\right)}\right) \tag{2}\]
where the meta-objective is optimized with respect to the initial set of parameters \(\theta\) used to adapt to each task, while the loss is evaluated at the adapted parameters \(\theta^{\prime}_{i}\). The outer loop optimization rule is as follows:
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\sum_{\mathcal{T}_{i}\sim p( \mathcal{T})}\mathcal{L}_{\mathcal{T}_{i}}\left(f_{\theta^{\prime}_{i}}\right) \tag{3}\]
where \(\beta\) is a hyper-parameter known as _meta-step size_.
For all training tasks, the inner loop is performed, and then \(\theta\) is updated during the outer loop as shown in (3). In contrast, only the inner loop is performed for the testing tasks to see how well the model can adapt to an unseen task using a limited support set.
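For concreteness, a minimal sketch of the MAML update of Eqs. (1)-(3) is given below; the linear model, task sampler, and hyper-parameter values are illustrative placeholders rather than the architecture or settings used in this paper.

```python
# Minimal MAML sketch for Eqs. (1)-(3); model, data, and hyper-parameters are
# placeholders for illustration only.
import torch

torch.manual_seed(0)
dim, alpha, beta = 90, 0.01, 0.001
theta = [torch.zeros(dim, 2, requires_grad=True),    # weight matrix
         torch.zeros(2, requires_grad=True)]         # bias

def forward(params, x):
    w, b = params
    return x @ w + b

def loss_fn(params, x, y):
    return torch.mean((forward(params, x) - y) ** 2)

def sample_task(n=10):
    """Placeholder task sampler: random features and 2-D target positions."""
    x, y = torch.randn(2 * n, dim), torch.rand(2 * n, 2)
    return (x[:n], y[:n]), (x[n:], y[n:])             # (support set, query set)

meta_opt = torch.optim.SGD(theta, lr=beta)
for step in range(100):
    meta_opt.zero_grad()
    for _ in range(4):                                # a batch of tasks
        (xs, ys), (xq, yq) = sample_task()
        # Inner loop, Eq. (1): one gradient step on the support set.
        grads = torch.autograd.grad(loss_fn(theta, xs, ys), theta, create_graph=True)
        theta_prime = [p - alpha * g for p, g in zip(theta, grads)]
        # Outer loop, Eqs. (2)-(3): the query loss of the adapted parameters is
        # differentiated back to the shared initialization theta.
        loss_fn(theta_prime, xq, yq).backward()
    meta_opt.step()
```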
### _Task Biased Model Agnostic Meta Learning (TB-MAML)_
In this section, we would like to propose TB-MAML, a novel meta-learning algorithm based on MAML. TB-MAML is designed for cases with a limited number of training tasks for the meta-training process. In conventional deep learning, not having enough data samples leads to overfitting, memorization of the data samples, and consequently, not learning the underlying distribution from which the data was sampled. A similar concept called _Meta-overfitting_ exists in the context of meta-learning. Consider a distribution over all tasks \(\mathcal{P}(\mathcal{T})\) and a limited set of tasks \(\mathcal{T}\) that do not wholly represent the distribution \(\mathcal{P}(\mathcal{T})\). Suppose a meta-learning model just uses the tasks \(\mathcal{T}\) for the meta-training process. In that case, it will meta-overfit to the tasks in \(\mathcal{T}\), meaning that it will not learn to adapt quickly to all the tasks drawn from the distribution \(\mathcal{P}(\mathcal{T})\), but just the tasks in \(\mathcal{T}\). TB-MAML is designed to learn the underlying distribution \(\mathcal{P}(\mathcal{T})\) even in cases where the set of training tasks \(\mathcal{T}\) available to us is limited. In the context of localization, each task requires a training set and a test set for multiple reference points in a location. Since the process of collecting data for multiple reference points per each task is time-consuming, gathering a large enough number
Fig. 1: Schematic of the proposed TB-MAML algorithm.
of indoor localization tasks is not an easy feat. To provide a sense of comparison, the Omniglot dataset, a standard toy dataset in the meta-learning literature, has 1623 classes. If we define each task as a 10-way classification, we will have \(\binom{1623}{10}\) different tasks at our disposal which we can split into meta-training and meta-testing tasks. For this reason, TB-MAML is particularly valuable in the context of indoor localization, as it is designed for improved generalizability in circumstances where the number of tasks is limited.
TB-MAML defines an importance vector over the available meta-training tasks to identify the tasks that push the model more toward generalizability, or in other words, the tasks that provide better information regarding the learning process of all the other tasks in \(\mathcal{P}(\mathcal{T})\). TB-MAML is biased towards the more important tasks as it emphasizes them during the learning process, hence the name, Task Biased Model Agnostic Meta Learning.
To calculate the importance vector, we first select task \(i\) from the meta-training tasks. We train our inner model with the training set of task \(i\). In a case of \(n\)-shot learning, for each task \(j\) in the meta-training tasks where \(i\neq j\), we further train the inner model with the support set of task \(j\) and then test the model against the query set of task \(j\), resulting in the loss \(\mathcal{L}_{i}(\theta_{ij})\). We denote the average of all these losses as \(\mathcal{L}_{i}\), which is a measurement of how well a model trained for task \(i\) can adapt to unseen tasks. By calculating the average loss \(\mathcal{L}_{i}\) for all tasks, we form the vector \([\mathcal{L}_{1},...,\mathcal{L}_{n}]\). By normalizing this vector between values (-1,1) and then inverting the values, we derive the importance vector \([u_{1},...,u_{n}]\).
During the outer loop (steps 6 and 7 in Fig. 1), once TB-MAML has adapted to task \(j\) in the inner loop using the corresponding support set, it updates \(\theta\) based on the importance of task \(j\). More formally:
\[\theta\leftarrow\theta-(\beta+\gamma u_{j})\nabla_{\theta}\mathcal{L}_{\mathcal{T}_{j}}\left(f_{\theta^{\prime}_{j}}\right) \tag{4}\]
where \(u_{j}\) is the importance of task \(j\) and \(\gamma\) is a hyperparameter that adjusts the intensity of the importance vector.
The entire process of TB-MAML is summarized in Algorithm 1, and a schematic is provided in Fig. 1 for illustration. In step 1, the importance vector is computed from the available training tasks. In step 2, the inner model is initialized with weights \(\theta\), task \(\mathcal{T}_{i}\) is sampled, and the corresponding support set is fed to the inner model. Steps 3 and 4 represent the inner loop, where the model adapts to task \(\mathcal{T}_{i}\). In step 5, the query set of \(\mathcal{T}_{i}\) is given to the model, the outer loop is then performed (steps 6 and 7), and the inner model's initialization weights \(\theta\) are updated. After sufficient iterations, when convergence is reached, the meta-testing phase starts (steps 9-13). The steps taken in this phase are similar to the ones taken in the meta-training phase, with the difference that the outer loop is not performed.
```
\(\mathcal{P}(\mathcal{T})\): Distribution over training tasks \(U=[u_{1},...,u_{n}]\): Importance vector for training tasks \(\alpha,\beta,\gamma,\): inner step size, outer step size, and importance vector intensity Randomly initialize inner model's weights \(\theta\) while not converged do Sample meta-training task \(T_{i}\sim\mathcal{P}(\mathcal{T})\) \(\theta^{{}^{\prime}}_{i}\leftarrow\theta\) for all inner loop iterations do Using support set \(\mathcal{D}_{i}\) compute loss \(\mathcal{L}_{\mathcal{T}_{i}}\) Update \(\theta^{{}^{\prime}}_{i}\leftarrow\theta^{{}^{\prime}}_{i}-\alpha\Delta_{\theta^ {{}^{\prime}}_{i}}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta^{{}^{\prime}}_{i}})\) endfor Using query set \(\mathcal{D}^{\prime}_{i}\) compute loss \(\mathcal{L}_{\mathcal{T}_{i}}\) Update \(\theta\leftarrow\theta-(\beta+\gamma u_{i})\nabla_{\theta}\mathcal{L}_{ \mathcal{T}_{i}}\left(f_{\theta^{{}^{\prime}}_{i}}\right)\) endwhile
```
**Algorithm 1** TB-MAML
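The sketch below illustrates step 1 of Fig. 1 and the biased update of Eq. (4) with a toy linear inner model trained by plain gradient descent; the data, the hyper-parameter values, and the reading of "inverting" as negating the normalized losses are assumptions made only for this illustration.

```python
# Illustrative sketch of the TB-MAML importance vector (step 1 in Fig. 1) and
# the biased meta step size of Eq. (4). The toy tasks, linear inner model, and
# the interpretation of "inverting" as negation are placeholders/assumptions.
import numpy as np

dim, alpha, beta, gamma = 8, 0.05, 0.01, 0.005

def make_task(seed, n=20):
    rng = np.random.default_rng(seed)
    w_true = rng.normal(size=(dim, 2))
    x = rng.normal(size=(n, dim))
    y = x @ w_true + 0.1 * rng.normal(size=(n, 2))
    return (x[:10], y[:10]), (x[10:], y[10:])          # (support, query)

def adapt(theta, x, y, steps=10):
    for _ in range(steps):                             # plain gradient descent
        theta = theta - alpha * x.T @ (x @ theta - y) / len(x)
    return theta

def mse(theta, x, y):
    return float(np.mean((x @ theta - y) ** 2))

tasks = [make_task(seed) for seed in range(6)]

# Step 1: average cross-task query loss L_i for every training task i ...
avg_loss = []
for i, (sup_i, _) in enumerate(tasks):
    theta_i = adapt(np.zeros((dim, 2)), *sup_i)
    losses = [mse(adapt(theta_i, *sup_j), *qry_j)
              for j, (sup_j, qry_j) in enumerate(tasks) if j != i]
    avg_loss.append(np.mean(losses))

# ... then normalize to (-1, 1) and invert (negate) to obtain the importance u.
avg_loss = np.asarray(avg_loss)
u = -(2 * (avg_loss - avg_loss.min()) / (avg_loss.max() - avg_loss.min()) - 1)

# Eq. (4): more important tasks receive a larger effective meta step size.
effective_meta_lr = beta + gamma * u
print(np.round(u, 2), np.round(effective_meta_lr, 4))
```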
## IV Evaluations
### _Dataset_
For the purpose of testing the generalizability and adaptability of the discussed localization models, a dataset consisting of multiple different scenarios was required. In total, we collected 33 scenarios, each scenario resulting in a different task. All 33 scenarios were collected in different indoor locations such as rooms, laboratories, corridors, and auditoriums, to diversify the overall dataset as much as possible. A few example locations can be seen in Fig. 2(b). Each scenario consisted of 12 reference points, arranged in a 3 by 4 grid with a grid size of 60 cm. Fig. 2(a) shows the positioning of the reference points in test scenarios. We collected CSI data for all reference points using two Intel 5300 network interface cards, one as a receiver and one as a transmitter. For every reference point, we transmitted Wi-Fi 802.11n packets with 20 MHz bandwidth on the 5 GHz frequency band. The transmitter sends 40 bursts, each of which includes 100 packets. To counter instantaneous interference or fluctuations in the environment, there is a 1-second pause between consecutive bursts. The transmitter uses only one antenna for transmission, while the receiver uses all three antennas for receiving. In 802.11n, 52 subcarriers carry information and are used for calculating the CSI data. The Intel 5300 card follows a grouping method that reduces the size of the CSI report field to 30. Hence, each CSI sample had a size of \(3\times 30\). We calculate and normalize only the amplitude of the CSI data before feeding it into the network.
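For illustration, a preprocessing sketch consistent with the description above is given below; the synthetic complex array stands in for the CSI reported by the Intel 5300 tool, and the per-sample standardization is one plausible normalization choice, not necessarily the exact one used in this work.

```python
# Illustrative CSI preprocessing: complex 3 (antennas) x 30 (subcarriers) CSI
# -> amplitude -> per-sample normalization. Data and normalization are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_packets = 100                                  # one burst of packets
csi = rng.normal(size=(n_packets, 3, 30)) + 1j * rng.normal(size=(n_packets, 3, 30))

amplitude = np.abs(csi)                          # keep amplitude, drop phase
features = amplitude.reshape(n_packets, -1)      # flatten to 90-dim vectors

# Standardize each sample before feeding it to the network.
features = (features - features.mean(axis=1, keepdims=True)) \
           / (features.std(axis=1, keepdims=True) + 1e-8)
print(features.shape)                            # (100, 90)
```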
### _Generalizability Analysis of Conventional DL-based Localization Models_
Before providing the results for the proposed meta-learning models, we would like to emphasize the lack of generalizability in conventional DL localization models. Fig. 3(a) depicts the error of a conventionally trained DL localization model trained on one task and then tested against another one. The architecture of the used DL model is described in Table I. The value in cell \((i,j)\) is the distance error of the localization model trained for scenario \(i\) and then tested against scenario \(j\). For cleaner visualization purposes, only the first 10 scenarios are considered in the heatmap. As can be seen from the figure, the distance error on the main diagonal is very low (when the model was trained for scenario \(i\) and tested against \(i\)), but for the other cases the distance error is quite high, pointing to the lack of generalizability of the conventionally trained localization model. The mean distance error in this plot is 95.98 cm.
In Fig. 3(b), we repeat the experiment of Fig. 3(a), but 5 new data samples per reference point from scenario \(j\) are provided to the localization model to train on. With a mean distance error of 63.45 cm, we can observe that the overall distance error is reduced, as expected, in comparison with Fig. 3(a). However, the distance error is still very high when compared to the main diagonal of the heatmap, pointing to the lack of adaptability of the conventionally trained model.
### _Localization Accuracy Analysis_
To evaluate the generalizability of our proposed TB-MAML-based localization model, we consider several benchmark algorithms in our experiments. In the first benchmark, referred to as conventional learning, a localization model without prior training has to train on a few new samples from the unseen environments. In transfer learning, we feed the full training dataset of one of the scenarios to the localization model, followed by a few new samples from the unseen target environments. We then employ MAML, First Order Model Agnostic Meta Learning (FOMAML), and our proposed meta-learning model, TB-MAML, as cases of meta-learning-based localization. It has to be noted that for all benchmarks, results are based on localization accuracies from unseen scenarios. All algorithms have been executed multiple times with different training and testing scenarios, and the results are averaged over the runs to reduce randomness. To have a fair comparison, the same inner model structure, described in Table I, has been used for all cases.
Figure 4 shows the localization errors of the compared localization models in terms of the cumulative distribution function (CDF), in multiple cases with different numbers of new samples from the new scenarios. As visible from the figures, TB-MAML localization outperforms other benchmarks throughout all few-shot scenarios, followed by MAML. We can further observe that FOMAML-based localization is more accurate than a conventionally trained model, but slightly less accurate than transfer-learning-based localization. Since FOMAML is a computationally efficient first-order approximation of MAML and, therefore, a less accurate meta-learning algorithm, this observation is not unexpected. In the 5-shot case, 59 percent of distance errors for TB-MAML were below 50 cm, while the corresponding percentages for MAML, transfer learning, FOMAML, and conventional learning were 45, 38, 22, and 18 percent respectively. Figure 5 depicts a boxplot of the distance errors for the same experiments. Again, it can be observed that TB-MAML localization outperforms other benchmarks in terms of the average distance error, followed by the MAML localization model.
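For reference, the CDF curves and the "fraction of errors below 50 cm" figures quoted above can be computed from per-sample distance errors as in the sketch below (the error values here are synthetic).

```python
# Illustrative computation of an empirical CDF of distance errors; the error
# samples below are synthetic placeholders.
import numpy as np

def empirical_cdf(errors_cm):
    e = np.sort(np.asarray(errors_cm))
    return e, np.arange(1, len(e) + 1) / len(e)

errors = np.random.default_rng(0).gamma(shape=2.0, scale=30.0, size=500)
x, f = empirical_cdf(errors)                      # points of the CDF curve
below_50 = float(np.mean(errors < 50.0))          # e.g., the 50 cm statistic
print(f"fraction of errors below 50 cm: {below_50:.2f}")
```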
### _Limited Number of Tasks Analysis_
In another experiment, we compared the accuracy of the mentioned meta-learning-based localization models with our proposed TB-MAML-based localization model in scenarios with different numbers of training tasks. Figure 6 illustrates the
Fig. 3: Localization distance errors of a conventional DL-based localization model trained on scenario \(i\) and tested against scenario \(j\). In Figure (a) no additional training samples from the testing scenario were provided to the model, whereas in Figure (b), five additional samples per reference point from the testing scenario were given to the model for further training. For cleaner visualization purposes, only the first 10 scenarios are considered in the figures.
Fig. 2: Experiment Settings.
results for this experiment. As expected, we can observe that the distance error of all compared meta-learning-based algorithms increases as the number of training tasks decreases. We can also observe that TB-MAML outperforms the other benchmark localization algorithms throughout all scenarios with different numbers of training tasks. Moreover, we can see that TB-MAML is less affected in comparison when the number of training tasks is small (e.g., five training tasks), as TB-MAML is designed for situations where the number of training tasks is limited.
## V Conclusion
In this paper, we addressed the lack of generalizability of conventionally trained indoor localization models by proposing meta-learning-based localization. Moreover, we designed a new meta-learning algorithm, TB-MAML, specialized to reach better generalizability when the number of scenarios available for training a meta-learning model is limited. This characteristic of TB-MAML is desired in the context of indoor localization, as collecting a large enough set of diverse scenarios is difficult and time-consuming. Through extensive experimental results using real data collected from 33 different locations, we showed that meta-learning-based localization models clearly outperform conventionally trained localization models in terms of generalizability. Furthermore, in another experiment comparing meta-learning-based localization models, we showed that TB-MAML-based localization reaches better generalizability even in cases with an extremely limited number of available training scenarios.
|
2304.04206 | Some aspects of $k$-ideals of semirings | The aim of this paper is to study some distinguished classes of $k$-ideals of
semirings, which include $k$-prime, $k$-semiprime, $k$-radical,
$k$-irreducible, and $k$-strongly irreducible ideals. We discuss some of the
properties of $k$-ideals and their products, intersections, and ideal quotients
under semiring homomorphisms. | Themba Dube, Amartya Goswami | 2023-04-09T10:22:50Z | http://arxiv.org/abs/2304.04206v1 | # Some aspects of \(k\)-ideals of semirings
###### Abstract.
The aim of this paper is to study some distinguished classes of \(k\)-ideals of semirings, which include \(k\)-prime, \(k\)-semiprime, \(k\)-radical, \(k\)-irreducible, and \(k\)-strongly irreducible ideals. We discuss some of the properties of \(k\)-ideals and their products, intersections, and ideal quotients under semiring homomorphisms.
Key words and phrases:semiring, \(k\)-ideal, \(k\)-closure operation, weakly Noetherian semiring 2020 Mathematics Subject Classification: 16Y60
## 1. Introduction
As algebraic structures, semirings are a natural generalization of rings, and it is appropriate to ask which properties of rings can be extended to semirings. One may expect that semirings can always be embedded in rings, but [20] gave examples of semirings that cannot be embedded in rings. Just as ideals play a crucial role in studying the structure of rings, the same is true for semirings. However, the ideals of semirings are significantly different in nature from those of rings. The lack of subtraction in semirings means that many results on rings have no counterparts for semirings. The notion of \(k\)-ideal introduced in [10] is an attempt to compensate for this lack of subtractivity. Since the introduction of \(k\)-ideals, several studies have been made (see [1, 1, 13, 14, 15, 16, 17, 18, 19, 20, 21]).
In this paper, we shall touch upon some aspects of \(k\)-ideals that have not been addressed in the literature. The aim of this paper and its sequel [10] is to contribute further to the theory of \(k\)-ideals by using results from the above-mentioned works and extending results from rings and semirings. To this end, we have lifted results from [11, 12] on ideals of semirings, and from [1, 13, 15, 16] on ideals of rings.
The result that plays a key role in studying distinguished classes (call one such as X) of \(k\)-ideals is the following "exchange principle":
\[k\text{-}\text{X ideal}=k\text{-ideal}+\text{X ideal}. \tag{1.1}\]
For example, to prove the properties of \(k\)-prime ideals (see Definition 3.4), it will be sufficient to prove them for prime \(k\)-ideals. It turns out that all the distinguished classes of \(k\)-ideals that we have considered here satisfy the above exchange principle. The other tool that has been extensively used in the proofs is the alternative formulation of \(k\)-ideals in terms of \(k\)-closure operations (see Definition 2.4).
We shall now briefly describe the content of the paper. In §2, we recall the definition of the \(k\)-closure operation and gather some of its properties. We give a definition of \(k\)-ideals in terms of the \(k\)-closure operation and study some operations on \(k\)-ideals. The purpose of §3 is to study some distinguished classes of \(k\)-ideals: \(k\)-prime, \(k\)-semiprime, and \(k\)-radical ideals in §3.1, whereas in §3.2, we discuss \(k\)-irreducible and \(k\)-strongly irreducible ideals. Finally, in §4, we introduce the notions of \(k\)-contractions and \(k\)-extensions of \(k\)-ideals. We conclude this paper with remarks on some further work.
## 2. \(k\)-ideals
A (commutative) _semiring_ is a system \((R,+,0,\cdot,1)\) such that \((R,+,0)\) is a commutative monoid, \((R,\cdot,1)\) is a commutative monoid, \(0\cdot x=0=x\cdot 0\) for all \(x\in R\), and \(\cdot\) distributes over \(+\). We shall write \(xy\) for \(x\cdot y\). By a semiring, in this paper, we mean a commutative semiring with multiplicative identity element \(1\). A _semiring homomorphism_\(\phi\colon R\to R^{\prime}\) is a map such that \(\phi(x+y)=\phi(x)+\phi(y)\), \(\phi(xy)=\phi(x)\phi(y)\), \(\phi(0)=0\), and \(\phi(1)=1\) for all \(x\), \(y\in R\). An _ideal_\(I\) of a semiring \(R\) is an additive submonoid of \(R\) such that \(rx\in I\) for all \(x\in I\) and \(r\in R\). An ideal \(I\) is called _proper_ if \(I\neq R\). We also use the symbol \(0\) to denote the zero ideal of a semiring \(R\). If \(S\) is a nonempty subset of a semiring \(R\), then \(\langle S\rangle\) will denote the ideal generated by \(S\).
**Definition 2.1** ([1]).: A \(k\)_-ideal_ (or _subtractive ideal_) \(I\) of a semiring \(R\) is an ideal of \(R\) such that \(x\), \(x+y\in I\) implies \(y\in I\)1.
Footnote 1: Note the typo in [1]: In the definition of a \(k\)-ideal, ‘\(y\in S\)’ should be ‘\(y\in\Gamma\).
Surely, the zero ideal is a \(k\)-ideal and is contained in every \(k\)-ideal of a semiring \(R\). We denote by \(\mathcal{I}(R)\) (respectively by \(\mathcal{I}_{k}(R)\)) the set of all ideals (respectively all \(k\)-ideals) of \(R\). A \(k\)_-proper_ ideal is a proper \(k\)-ideal of \(R\). A \(k\)-ideal of a semiring can also be defined in terms of a closure operation on \(\mathcal{I}(R)\).
**Definition 2.2** ([1]).: Let \(R\) be a semiring. The \(k\)_-closure_ operation on \(\mathcal{I}(R)\) is defined by
\[\mathcal{C}_{k}(I)=\{r\in R\mid r+x\in I\text{ for some }x\in I\}. \tag{2.1}\]
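To illustrate the \(k\)-closure operation, consider the semiring \(\mathds{N}\): the ideal \(I=\mathds{N}\setminus\{1\}\) is not a \(k\)-ideal, since \(2\in I\) and \(2+1=3\in I\) while \(1\notin I\), and accordingly
\[\mathcal{C}_{k}(\mathds{N}\setminus\{1\})=\mathds{N},\]
because \(r+2\in\mathds{N}\setminus\{1\}\) for every \(r\in\mathds{N}\); on the other hand, \(\mathcal{C}_{k}(2\mathds{N})=2\mathds{N}\), since \(2\mathds{N}\) is already a \(k\)-ideal.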
The next lemma gathers the properties of closure operations needed in the sequel.
**Lemma 2.3**.: _In the following, \(I\), \(\{I_{\lambda}\}_{\lambda\in\Lambda}\), and \(J\) are ideals of a semiring \(R\)._
1. \(\mathcal{C}_{k}(I)\) _is the smallest_ \(k\)_-ideal containing_ \(I\)_._
2. \(\mathcal{C}_{k}(0)=0\)_._
3. \(\mathcal{C}_{k}(R)=R\)_._
4. \(\mathcal{C}_{k}(\mathcal{C}_{k}(I))=\mathcal{C}_{k}(I)\)_._
5. \(I\subseteq J\) _implies_ \(\mathcal{C}_{k}(I)\subseteq\mathcal{C}_{k}(J)\)_._
6. \(\mathcal{C}_{k}(\langle I\cup J\rangle)\supseteq\mathcal{C}_{k}(I)\cup \mathcal{C}_{k}(J)\)_._
7. \(\mathcal{C}_{k}(\bigcap_{\lambda\in\Lambda}I_{\lambda})=\bigcap_{\lambda\in\Lambda}\mathcal{C}_{k}(I_{\lambda})\)_._
8. \(I\) _is a_ \(k\)_-ideal if and only if_ \(I=\mathcal{C}_{k}(I)\)_._
9. \(\mathcal{C}_{k}(IJ)\supseteq\mathcal{C}_{k}(I)\,\mathcal{C}_{k}(J)\)_._
Proof.: (1) [1, Proposition 3.1].
(2)-(7) Straightforward.
(8) [1, Lemma 2.2].
(9) Let \(r\in\mathcal{C}_{k}(I)\,\mathcal{C}_{k}(J)\). Then there exist \(r^{\prime}\in\mathcal{C}_{k}(I)\) and \(r^{\prime\prime}\in\mathcal{C}_{k}(J)\) such that \(r^{\prime}+i\in I\), \(r^{\prime\prime}+j\in J\), and \(r=r^{\prime}r^{\prime\prime}\), where \(i\in I\) and \(j\in J\). Notice that
\[r^{\prime}r^{\prime\prime}+r^{\prime}j+r^{\prime\prime}i+ij=(r^{\prime}+i)(r^ {\prime\prime}+j)\in IJ.\]
Moreover, \((r^{\prime}j+r^{\prime\prime}i+ij)+ij=(r^{\prime}+i)j+(r^{\prime\prime}+j)i\in IJ\), so \(r^{\prime}j+r^{\prime\prime}i+ij\in\mathcal{C}_{k}(IJ)\). Since \(r^{\prime}r^{\prime\prime}+(r^{\prime}j+r^{\prime\prime}i+ij)\in IJ\subseteq\mathcal{C}_{k}(IJ)\) and \(\mathcal{C}_{k}(IJ)\) is a \(k\)-ideal, we conclude that \(r=r^{\prime}r^{\prime\prime}\in\mathcal{C}_{k}(IJ)\), and this completes the proof.
Thanks to Lemma 2.3(8), we now have an alternative definition of a \(k\)-ideal.
**Definition 2.4**.: An ideal \(I\) of a semiring \(R\) is called a \(k\)_-ideal_ if \(\mathcal{C}_{k}(I)=I\).
From Lemma 2.3(1), it follows that the \(k\)-closure is indeed a map \(\mathcal{C}_{k}\colon\mathcal{I}(R)\to\mathcal{I}_{k}(R)\) defined by (2.1). Our next goal is to study various operations on \(k\)-ideals of a semiring. It is easy to see that if \(\{I_{\lambda}\}_{\lambda\in\Lambda}\) is a family of \(k\)-ideals, then their _intersection_\(\bigcap_{\lambda\in\Lambda}I_{\lambda}\) is also a \(k\)-ideal. However, the sum of two \(k\)-ideals of a semiring need not be a \(k\)-ideal2, but it is so in a lattice ordered semiring (_cf._[12, Corollary 21.22]). If \(I\) and \(J\) are \(k\)-ideals of a semiring \(R\), define their _product_ as \(IJ=\mathcal{C}_{k}\left(\langle\{xy\mid x\in I,y\in J\}\rangle\right)\). The following lemma gives us the expected relation between products and intersections of \(k\)-ideals. The proof is straightforward.
Footnote 2: [12, Example 6.19]: \(2\mathds{N}\) and \(3\mathds{N}\) are \(k\)-ideals of the semiring \(\mathds{N}\); however, \(2\mathds{N}+3\mathds{N}=\mathds{N}\setminus\{1\}\) is not a \(k\)-ideal of \(\mathds{N}\).
**Lemma 2.5**.: _For any two \(k\)-ideals \(I\) and \(J\) of a semiring \(R\), we have \(IJ\subseteq I\cap J\)._
If \(X\) is a subset of a semiring \(R\), then the _annihilator_ of \(X\) is defined by
\[\operatorname{Ann}_{R}(X)=\{r\in R\mid rx=0\text{ for all }x\in X\}.\]
The following result is proved in [12, Example 6.10].
**Proposition 2.6**.: _If \(X\) is a nonempty subset of a semiring \(R\) and \(X\neq\{0\},\) then \(\operatorname{Ann}_{R}(X)\) is a \(k\)-ideal._
Suppose that \(I\) and \(J\) are ideals of a semiring \(R\). Then the _ideal quotient_ of \(I\) over \(J\) is defined by
\[(I:J)=\{r\in R\mid rJ\subseteq I\}.\]
The following proposition and its corollary give examples of \(k\)-ideals constructed from ideal quotients.
**Proposition 2.7**.: _If \(I\) is a \(k\)-ideal and \(J\) is an ideal of a semiring \(R\), then \((I:J)\) is a \(k\)-ideal of \(R\)._
Proof.: Let \(r\), \(r+r^{\prime}\in(I\colon J)\). Then, for every \(j\in J\), we have \(rj\in I\) and \(rj+r^{\prime}j=(r+r^{\prime})j\in I\). Since \(I\) is a \(k\)-ideal, \(r^{\prime}j\in I\) for every \(j\in J\), whence \(r^{\prime}\in(I\colon J)\).
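For instance, in the semiring \(\mathds{N}\), the \(k\)-ideal \(I=4\mathds{N}\) and the ideal \(J=2\mathds{N}\) give
\[(4\mathds{N}:2\mathds{N})=\{r\in\mathds{N}\mid 2rn\in 4\mathds{N}\text{ for all }n\in\mathds{N}\}=2\mathds{N},\]
which is again a \(k\)-ideal of \(\mathds{N}\), in accordance with Proposition 2.7.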
**Corollary 2.8**.: _Suppose that \(J\), \(\{J_{\lambda}\}_{\lambda\in\Lambda}\), \(K\) are ideals and \(I\), \(\{I_{\omega}\}_{\omega\in\Omega}\) are \(k\)-ideals of a semiring \(R\). Then \((I\colon J)\), \(((I\colon J)\colon K)\), \((I\colon JK)\), \(((I\colon K)\colon J)\), \((\bigcap_{\omega}I_{\omega}\colon J)\), \(\bigcap_{\omega}(I_{\omega}\colon J)\), \((I\colon\sum_{\lambda}J_{\lambda})\), and \(\bigcap_{\lambda}(I\colon J_{\lambda})\) are all \(k\)-ideals of \(R\)._
The lattice of all ideals of a ring is modular, whereas the same is not true for a semiring. However, we have the following result announced in [12]. A proof of it can be found in [12, Proposition 6.38].
**Proposition 2.9**.: _For every semiring \(R\), \(\mathcal{I}_{k}(R)\) is a modular lattice._
## 3. Some classes of \(k\)-ideals
The purpose of this section is to study some distinguished classes of \(k\)-ideals of semirings. These classes of \(k\)-ideals are obtained by "restricting" the usual definitions of the corresponding classes of ideals to \(k\)-ideals.
**Definition 3.1**.: A proper \(k\)-ideal of a semiring \(R\) is called \(k\)_-maximal_ if it is not properly contained in another proper \(k\)-ideal.
The following result from [1, Corollary 2.2] guarantees that the set of \(k\)-maximal ideals in a semiring is nonempty.
**Lemma 3.2**.: _Every proper \(k\)-ideal of a semiring \(R\) is contained in a \(k\)-maximal ideal of \(R\)._
As pointed out in the introduction, the exchange principle (1.1) holds for the classes of \(k\)-ideals that we study; the following proposition is the very first instance of it, for \(k\)-maximal ideals.
**Proposition 3.3**.: _An ideal \(M\) of a semiring \(R\) is \(k\)-maximal if and only if it is a \(k\)-ideal and a maximal ideal of \(R\)._
Proof.: It is obvious that every \(k\)-ideal of \(R\) which is also a maximal ideal is \(k\)-maximal. Suppose that \(M\) is a \(k\)-maximal ideal and \(M\subsetneq I\subsetneq R\) for some \(I\in\mathcal{I}(R)\setminus\mathcal{I}_{k}(R)\). This implies that
\[M=\mathcal{C}_{k}\left(M\right)\subseteq I\subset\mathcal{C}_{k}\left(I \right)\subseteq\mathcal{C}_{k}(R)=R,\]
where the first and last equalities follow respectively from (8) and (3) of Lemma 2.3, and the first and the last strict inclusions follow from Lemma 2.3(5). The middle strict inclusion follows from the hypothesis \(I\in\mathcal{I}(R)\setminus\mathcal{I}_{k}(R)\) and Lemma 2.3(8). Since \(M\) is \(k\)-maximal, \(M\subsetneq\mathcal{C}_{k}\left(I\right)\subsetneq R\) leads to a contradiction.
### \(k\)-prime and \(k\)-semiprime ideals
In noncommutative semirings, the notions of prime and semiprime ideals play significant roles. In this section, we shall extend those types of ideals to \(k\)-prime and \(k\)-semiprime ideals. The notion of a radical of an ideal will be replaced by \(k\)-radicals.
**Definition 3.4**.: A \(k\)-proper ideal \(P\) of a semiring \(R\) is called \(k\)_-prime_ if \(IJ\subseteq P\) implies either \(I\subseteq P\) or \(J\subseteq P\) for all \(I,J\in\mathcal{I}_{k}\left(R\right)\). We denote by \(\operatorname{spec}_{k}\left(R\right)\) the set of all \(k\)-prime ideals of \(R\).
The exchange principle (1.1) for \(k\)-prime ideals is proved in [11, Proposition 3.5].
**Proposition 3.5**.: _An ideal \(P\) of a semiring \(R\) is \(k\)-prime if and only if \(P\) is a prime ideal as well as a \(k\)-ideal of \(R\)._
As in rings (with identity), we also have the following result for \(\operatorname{spec}_{k}\left(R\right)\).
**Proposition 3.6**.: _Every nonzero semiring contains a minimal \(k\)-prime ideal._
Proof.: Suppose that \(R\) is a nonzero semiring. It follows from Lemma 3.2 that \(R\) has a \(k\)-maximal ideal, which by [14, Theorem 3.4 ] is also a prime \(k\)-ideal and hence is a \(k\)-prime ideal by [11, Proposition 3.5]. Therefore, the set \(\operatorname{spec}_{k}\left(R\right)\) is nonempty. The claim now follows from a routine application of Zorn's lemma.
**Proposition 3.7**.: _For a \(k\)-prime ideal \(P\) of a semiring \(R\), the following are equivalent._
1. \(xy\in P\) _implies either_ \(x\in P\) _or_ \(y\in P\) _for all_ \(x,y\in R\)_._
2. \(IJ\subseteq P\) _implies either_ \(I\subseteq P\) _or_ \(J\subseteq P\) _for all_ \(I,J\in\mathcal{I}_{k}\left(R\right)\)_._
Proof.: Let \(I\nsubseteq P\) and \(j\in J\). If \(i\in I\setminus P\), then \(ij\in IJ\subseteq P\). But \(i\notin P\). By hypothesis, this implies that \(j\in P\), or in other words, \(J\subseteq P\). Conversely, suppose that \(xy\in P\) and \(x\notin P\). From [11, Proposition 3.5], it follows that \(\langle xy\rangle_{k}=\langle x\rangle_{k}\langle y\rangle_{k}\), where \(\langle x\rangle_{k}\) is the smallest \(k\)-ideal of \(R\) containing \(x\). Therefore, \(\langle x\rangle_{k}\langle y\rangle_{k}\subseteq P\) and \(\langle x\rangle_{k}\nsubseteq P\). Since \(P\) is \(k\)-prime, we must have \(\langle y\rangle_{k}\subseteq P\), whence \(y\in P\).
The following proposition is lifted from rings, and the proof is identical to the ring-theoretic version of it, which can be found in [1, Lemma 3.19].
**Proposition 3.8** (Prime avoidance lemma).: _Let \(I\) be a subset of a semiring \(R\) that is stable under addition and multiplication, and \(P_{1},\ldots,P_{n}\) be \(k\)-ideals such that \(P_{3},\ldots,P_{n}\) are \(k\)-prime ideals. If \(I\nsubseteq P_{j}\) for all \(j\), then there is an \(x\in I\) such that \(x\notin P_{j}\) for all \(j\)._
Recall from [14, Definition 2.5] that a semiring \(R\) is called _weakly Noetherian_ if every ascending chain of \(k\)-ideals of \(R\) is ultimately stationary. If \(R\) is such that every ideal of it is a \(k\)-ideal, then [13, Proposition 7.17] shows \(R\) is weakly Noetherian if and only if every \(k\)-prime ideal of a semiring \(R\) is finitely generated. Moreover, in a weakly Noetherian semiring \(R\), [14, Corollary 6.6] says that the radical \(\mathcal{R}_{k}\left(I\right)\) of a \(k\)-ideal
\(I\) can be represented as a finite intersection of \(k\)-prime ideals. In a weakly Noetherian semiring, we have another result, but for minimal \(k\)-prime ideals.
**Proposition 3.9**.: _If \(R\) is a weakly Noetherian semiring, then the set of minimal \(k\)-prime ideals of a semiring \(R\) is finite._
Proof.: We give a topological proof. If \(R\) is weakly Noetherian, then the topological space (endowed with Zariski topology) \(\operatorname{spec}_{k}(R)\) is also Noetherian, and thus \(\operatorname{spec}_{k}(R)\) has finitely many irreducible components. Now every irreducible closed subset of \(\operatorname{spec}_{k}(R)\) is of the form
\[\operatorname{V}(P)=\{Q\in\operatorname{spec}_{k}(R)\mid P\subseteq Q\},\]
where \(P\) is a minimal \(k\)-prime ideal. Thus \(\operatorname{V}(P)\) is an irreducible component if and only if \(P\) is a minimal \(k\)-prime ideal. Hence, the number of minimal \(k\)-prime ideals of \(R\) is finite.
We shall now define the notion of \(k\)-semiprime ideals. It will be shown in Theorem 3.18 that \(k\)-semiprime ideals are equivalent to \(k\)-radical ideals (see Definition 3.12).
**Definition 3.10**.: A \(k\)-proper ideal \(P\) of a semiring \(R\) is called \(k\)_-semiprime_ if \(I^{2}\subseteq P\) implies \(I\subseteq P\), for all \(k\)-ideals \(I\) of \(R\).
Similar to \(k\)-prime ideals, we also have an exchange principle for \(k\)-semiprime ideals.
**Proposition 3.11**.: _An ideal \(Q\) of a semiring \(R\) is \(k\)-semiprime if and only if \(Q\) is a \(k\)-ideal and a semiprime ideal of \(R\)._
Proof.: Observe that if \(Q\) is a \(k\)-ideal and also a semiprime ideal of a semiring \(R\), then by Definition 3.10, \(Q\) is \(k\)-semiprime. Conversely, let \(Q\) be a \(k\)-semiprime ideal and \(I^{2}\subseteq Q\) for some \(I\in\mathcal{I}(R)\). Then
\[(\mathcal{C}_{k}(I))(\mathcal{C}_{k}(I))\subseteq\mathcal{C}_{k}(I^{2}) \subseteq\mathcal{C}_{k}(Q)=Q,\]
where the first and second inclusions respectively follow from Lemma 2.3(9) and Lemma 2.3(5). Since \(Q\) is \(k\)-semiprime, applying Lemma 2.3(1), we obtain that \(I\subseteq Q\).
In rings, it is well known that the definition of the radical of an ideal \(I\) can be formulated as the intersection of all prime ideals that contain \(I\). If we restrict \(I\) to being a \(k\)-ideal and prime ideals to being \(k\)-primes, then we obtain the notion of a \(k\)-radical.
**Definition 3.12** ([4]).: The \(k\)_-radical of a \(k\)-ideal \(I\)_ of a semiring \(R\) is defined by
\[\mathcal{R}_{k}(I)=\bigcap_{I\subseteq P}\{P\mid P\in\operatorname{spec}_{k}(R )\}. \tag{3.1}\]
If \(\mathcal{R}_{k}(I)=I\), then \(I\) is called a \(k\)_-radical ideal_.
In the following lemma we gather some elementary properties of \(k\)-radicals.
**Lemma 3.13**.: _In the following, \(I\) and \(J\) are \(k\)-ideals of a semiring \(R\)._
1. \(\mathcal{R}_{k}(I)\) _is a_ \(k\)_-ideal containing_ \(I\)_._
2. \(\mathcal{R}_{k}(\mathcal{R}_{k}(I))=\mathcal{R}_{k}(I)\)_._
3. \(\mathcal{R}_{k}(IJ)=\mathcal{R}_{k}(I\cap J)=\mathcal{R}_{k}(I)\cap\mathcal{R} _{k}(J)\)_._
4. \(\mathcal{R}_{k}(I)=R\) _if and only if_ \(I=R\)
Proof.: (1) The fact that \(I\subseteq\mathcal{R}_{k}(I)\) follows from Definition 3.12. To show \(\mathcal{R}_{k}(I)\) is a \(k\)-ideal, let \(x,x+y\in\mathcal{R}_{k}(I)\). This implies \(x,x+y\in P\) for all \(k\)-prime ideals that contain \(I\). Since each such \(P\) is a \(k\)-ideal, \(y\in P\). Hence \(y\in\mathcal{R}_{k}(I)\).
(2) Since by (1), \(\mathcal{R}_{k}(I)\) is a \(k\)-ideal, applying Lemma 2.3(1) gives \(\mathcal{R}_{k}(I)\subseteq\mathcal{R}_{k}(\mathcal{R}_{k}(I))\). Since by (1), \(I\subseteq\mathcal{R}_{k}(I)\), by Definition 3.12, we have \(\mathcal{R}_{k}(I)\supseteq\mathcal{R}_{k}(\mathcal{R}_{k}(I))\).
(3) Follows from Lemma 2.3(5) and Lemma 2.5.
(4) Follows from Definition 3.12.
Next, we wish to prove the equivalence between \(k\)-semiprime ideals and \(k\)-radical ideals of a semiring. This equivalence is well known to hold between semiprime ideals and radical ideals of (noncommutative) rings and semirings. In the noncommutative case, we require the notions of \(m\)-systems and \(n\)-systems of rings and semirings, whereas in the context of \(k\)-ideals of commutative semirings, only multiplicatively closed subsets are sufficient. To obtain the equivalence, we first proceed through a series of lemmas.
**Lemma 3.14**.: _A \(k\)-ideal \(P\) of a semiring \(R\) is \(k\)-prime if and only if \(R\setminus P\) is a multiplicatively closed subset of \(R\)._
Proof.: Suppose that \(P\) is \(k\)-prime and \(x\), \(y\in R\setminus P\). Then by Proposition 3.7, \(xy\not\in P\), and hence \(xy\in R\setminus P\). Conversely, let \(x\), \(y\not\in P\). This implies that \(x\), \(y\in R\setminus P\). Since \(R\setminus P\) is multiplicatively closed, \(xy\in R\setminus P\), and hence \(xy\not\in P\), proving that \(P\) is prime. Therefore, by Proposition 3.7, \(P\) is \(k\)-prime.
**Lemma 3.15**.: _Let \(S\) be a multiplicatively closed subset of a semiring \(R\). Suppose that \(P\) is a \(k\)-ideal which is maximal with respect to the property that \(P\cap S=\emptyset\). Then \(P\) is \(k\)-prime._
Proof.: Let \(x\not\in P\) and \(y\not\in P\), but \(\langle x\rangle\langle y\rangle\subseteq P\). By the hypothesis on \(P\), there exist \(s\), \(s^{\prime}\in S\) such that \(s\in\langle x\rangle+P\) and \(s^{\prime}\in\langle y\rangle+P\). This implies that
\[ss^{\prime}\in(\langle x\rangle+P)(\langle y\rangle+P)=\langle x\rangle \langle y\rangle+P\subseteq P,\]
a contradiction. Hence \(P\) is prime and by Proposition 3.7, \(P\) is a \(k\)-prime ideal.
**Lemma 3.16**.: _The \(k\)-radical \(\mathcal{R}_{k}(I)\) of a \(k\)-ideal \(I\) is equal to the set_
\[T=\{r\in R\mid\text{every multiplicatively closed subset containing $r$ intersects $I$}\}.\]
Proof.: Suppose that \(r\in T\) and \(P\in\operatorname{spec}_{k}(R)\) such that \(I\subseteq P\). Then by Lemma 3.14, \(R\setminus P\) is a multiplicatively closed subset of \(R\); since \((R\setminus P)\cap I=\emptyset\) and \(r\in T\), we must have \(r\not\in R\setminus P\). Hence \(r\in P\). Conversely, let \(r\not\in T\). This implies that there exists a multiplicatively closed subset \(S\) of \(R\) such that \(r\in S\) and \(S\cap I=\emptyset\). By Zorn's lemma, there exists a \(k\)-ideal \(P\) containing \(I\) and maximal with respect to the property that \(P\cap S=\emptyset\). By Lemma 3.15, \(P\) is a prime ideal and by Proposition 3.7, \(P\) is a \(k\)-prime ideal with \(r\not\in P\).
**Lemma 3.17**.: _Let \(I\) be a \(k\)-semiprime ideal of a semiring \(R\) and \(x\in R\setminus I\). Then there exists a multiplicatively closed subset \(S\) of \(R\) such that \(x\in S\subseteq R\setminus I\)._
Proof.: Define the elements of \(S=\{x_{1},x_{2},\ldots,x_{n},\ldots\}\) inductively as follows: \(x_{1}:=x\); \(x_{2}:=x_{1}x_{1}\); \(\ldots\); \(x_{n}:=x_{n-1}x_{n-1}\); \(\ldots\). Obviously \(x\in S\) and it is also easy to see that \(x_{i}\), \(x_{j}\in S\) implies that \(x_{i}x_{j}\in S\).
**Theorem 3.18**.: _For any \(k\)-ideal \(I\) of a semiring \(R\), the following are equivalent._
1. \(I\) _is_ \(k\)_-semiprime._
2. \(I\) _is an intersection of_ \(k\)_-prime ideals of_ \(R\)_._
3. \(I\) _is_ \(k\)_-radical._
Proof.: From Definition 3.12, it follows that \((3)\Rightarrow(2)\). Since every \(k\)-prime ideal is \(k\)-semiprime and an intersection of \(k\)-semiprime ideals is again \(k\)-semiprime, \((2)\Rightarrow(1)\) follows. What remains is to show that \((1)\Rightarrow(3)\), and for that, it is sufficient to show \(\mathcal{R}_{k}(I)\subseteq I\). Suppose that \(x\not\in I\). Then \(x\in R\setminus I\) and by Lemma 3.17, there exists a multiplicatively closed subset \(S\) of \(R\) such that \(x\in S\subseteq R\setminus I\). But \(S\cap I=\emptyset\) and hence by Lemma 3.16, \(x\not\in\mathcal{R}_{k}(I)\).
**Corollary 3.19**.: _If \(I\in\mathcal{I}_{k}(R)\), then \(\mathcal{R}_{k}(I)\) is the smallest \(k\)-semiprime ideal of \(R\) containing \(I\)._
### \(k\)-irreducible and \(k\)-strongly irreducible ideals
Strongly irreducible ideals were introduced in [10] for commutative rings under the name primitive ideals. In [1, p. 301, Exercise 34], the same ideals are called quasi-prime. The term "strongly irreducible" was first used for noncommutative rings in [1]. In the context of semirings, a study of these ideals can be found in [1]. In this section, we introduce the \(k\)-irreducible and \(k\)-strongly irreducible ideals of semirings and show some relations with the \(k\)-prime and \(k\)-semiprime ideals.
**Definition 3.20**.: Let \(R\) be a semiring.
1. A \(k\)-ideal \(I\) of \(R\) is called \(k\)_-irreducible_ if \(J\cap J^{\prime}=I\) implies that either \(J=I\) or \(J^{\prime}=I\) for all \(J,J^{\prime}\in\mathcal{I}_{k}(R)\).
2. A \(k\)-ideal \(I\) of \(R\) is called \(k\)_-strongly irreducible_ if \(J\cap J^{\prime}\subseteq I\) implies that either \(J\subseteq I\) or \(J^{\prime}\subseteq I\) for all \(J,J^{\prime}\in\mathcal{I}_{k}(R)\).
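To fix ideas, consider the semiring \(\mathbb{N}\) of natural numbers, where one readily checks that every \(k\)-ideal is of the form \(d\mathbb{N}\) and that \(a\mathbb{N}\cap b\mathbb{N}=\operatorname{lcm}(a,b)\,\mathbb{N}\). For \(d\geqslant 2\), the ideal \(d\mathbb{N}\) is \(k\)-irreducible (equivalently, \(k\)-strongly irreducible) precisely when \(d\) is a prime power: for instance, \(4\mathbb{N}\) is \(k\)-strongly irreducible but not \(k\)-prime, whereas \(6\mathbb{N}=2\mathbb{N}\cap 3\mathbb{N}\) is not \(k\)-irreducible.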
It is obvious that every \(k\)-strongly irreducible ideal is \(k\)-irreducible, and it follows from Lemma 2.5 that every \(k\)-prime ideal is \(k\)-strongly irreducible. We now expect the exchange principle to hold for \(k\)-irreducible and \(k\)-strongly irreducible ideals, and here is that result.
**Proposition 3.21**.: _An ideal \(L\) of a semiring \(R\) is \(k\)-irreducible (\(k\)-strongly irreducible) if and only if \(L\) is irreducible (strongly irreducible) as well as a \(k\)-ideal of \(R\)._
Proof.: We give a proof for \(k\)-strongly irreducible ideals, that for \(k\)-irreducible ideals requiring only a trivial change of terminology. Suppose that \(L\) is a \(k\)-strongly irreducible ideal and \(I\), \(J\) are ideals of \(R\) such that \(I\cap J\subseteq L\). This implies
\[\mathcal{C}_{k}(I)\cap\mathcal{C}_{k}(J)=\mathcal{C}_{k}(I\cap J)\subseteq \mathcal{C}_{k}(L)=L,\]
where, the first equality follows from Lemma 2.3(7) and the inclusion from Lemma 2.3(5). By hypothesis, this implies that either \(\mathcal{C}_{k}(I)\subseteq L\) or \(\mathcal{C}_{k}(J)\subseteq L\). Since by Lemma 2.3(1), \(I\subseteq\mathcal{C}_{k}(I)\) for all \(I\in\mathcal{I}(R)\), we have the desired claim. The proof of the converse statement is obvious.
It is known (see [1, Proposition 7.33]) that a strongly irreducible ideal of a semiring has the following equivalent 'elementwise' property: If \(a\), \(b\in R\) satisfy \(\langle a\rangle\cap\langle b\rangle\subseteq I\), then either \(a\in I\) or \(b\in I\). The next proposition shows that a similar result holds for \(k\)-strongly irreducible ideals.
**Proposition 3.22**.: _A \(k\)-ideal \(I\) of a semiring \(R\) is \(k\)-strongly irreducible if and only if for all \(a\), \(b\in R\), \(\mathcal{C}_{k}(\langle a\rangle)\cap\mathcal{C}_{k}(\langle b\rangle)\subseteq I\) implies either \(a\in I\) or \(b\in I\)._
Proof.: Suppose that \(I\) is a \(k\)-strongly irreducible ideal of \(R\) and let \(a\), \(b\in R\) be such that \(\langle a\rangle\cap\langle b\rangle\subseteq I\); note that this holds, in particular, whenever \(\mathcal{C}_{k}(\langle a\rangle)\cap\mathcal{C}_{k}(\langle b\rangle)\subseteq I\). This implies

\[\mathcal{C}_{k}(\langle a\rangle\cap\langle b\rangle)=\mathcal{C}_{k}(\langle a\rangle)\cap\mathcal{C}_{k}(\langle b\rangle)\subseteq\mathcal{C}_{k}(I)=I,\]

where the first equality follows from Lemma 2.3(7). Since \(I\) is \(k\)-strongly irreducible, we must have either \(a\in\langle a\rangle\subseteq\mathcal{C}_{k}(\langle a\rangle)\subseteq I\) or \(b\in\langle b\rangle\subseteq\mathcal{C}_{k}(\langle b\rangle)\subseteq I\). To show the converse, suppose that \(I\) is a \(k\)-ideal which is not \(k\)-strongly irreducible. Then there are \(k\)-ideals \(J\) and \(K\) satisfying \(J\cap K\subseteq I\), but \(J\nsubseteq I\) and \(K\nsubseteq I\). Choose \(j\in J\setminus I\) and \(k\in K\setminus I\). Since \(J\) and \(K\) are \(k\)-ideals, \(\mathcal{C}_{k}(\langle j\rangle)\cap\mathcal{C}_{k}(\langle k\rangle)\subseteq J\cap K\subseteq I\), whereas \(j\notin I\) and \(k\notin I\), a contradiction.
In Theorem 3.18, we have seen that a \(k\)-radical ideal can be expressed as the intersection of \(k\)-prime ideals containing it. We shall now see that any proper \(k\)-ideal can be represented in a similar fashion in terms of \(k\)-irreducible ideals. But first, a lemma.
**Lemma 3.23**.: _Suppose that \(R\) is a semiring. Let \(0\neq x\in R\) and \(I\) be a \(k\)-proper ideal of \(R\) such that \(x\notin I\). Then there exists a \(k\)-irreducible ideal \(J\) of \(R\) such that \(I\subseteq J\) and \(x\notin J\)._
Proof.: Consider the set of all \(k\)-ideals \(J^{\prime}\) of \(R\) with \(x\notin J^{\prime}\supseteq I\); it is nonempty, since it contains \(I\). If \(\{J_{\lambda}\}_{\lambda\in\Lambda}\) is a chain in this set, then
\[x\notin\bigcup_{\lambda\in\Lambda}J_{\lambda}\supseteq I.\]
and the union, being again a \(k\)-ideal, is an upper bound of the chain in this set. By Zorn's lemma, the set has a maximal element \(J\). Suppose that \(J=J_{1}\cap J_{2}\) for \(k\)-ideals \(J_{1}\), \(J_{2}\) with \(J_{1}\neq J\neq J_{2}\). By the maximality of \(J\), we must have \(x\in J_{1}\) and \(x\in J_{2}\), and hence \(x\in J_{1}\cap J_{2}=J\), a contradiction. Therefore, \(J\) is the required \(k\)-irreducible ideal.
**Proposition 3.24**.: _If \(I\) is a \(k\)-proper ideal of a semiring \(R\), then \(I=\bigcap_{I\subseteq J}J\), where the intersection runs over the \(k\)-irreducible ideals \(J\) of \(R\) containing \(I\)._
Proof.: By Lemma 3.23, there exists a \(k\)-irreducible ideal \(J\) of \(R\) such that \(I\subseteq J\). Suppose that
\[J^{\prime}=\bigcap_{I\subseteq J}\{J\ |\ J\ \text{is $k$-irreducible}\}.\]
Then \(I\subseteq J^{\prime}\). We claim that \(J^{\prime}=I\). If \(J^{\prime}\neq I\), then there exists an \(x\in J^{\prime}\setminus I\), and by Lemma 3.23, there exists a \(k\)-irreducible ideal \(J^{\prime\prime}\) such that \(x\notin J^{\prime\prime}\supseteq I\), a contradiction.
It is well known that in a weakly Noetherian semiring, \(k\)-radicals have a representation as a finite intersection of \(k\)-prime ideals. In terms of \(k\)-irreducible ideals, the following proposition extends that result to any \(k\)-ideal. Note that this proposition also holds for irreducible ideals of Noetherian commutative rings.
**Proposition 3.25**.: _Let \(R\) be a weakly Noetherian semiring. Then every \(k\)-ideal of \(R\) can be represented as an intersection of a finite number of \(k\)-irreducible ideals of \(R\)._
Proof.: Suppose that
\[\mathcal{F}=\left\{J\in\mathcal{I}_{k}(R)\ |\ J\neq\bigcap_{i=1}^{n}L_{i},\ L_{i} \ \text{is $k$-irreducible}\right\}.\]
It is sufficient to show that \(\mathcal{F}=\emptyset\). Suppose, on the contrary, that \(\mathcal{F}\neq\emptyset\). Since \(R\) is weakly Noetherian, \(\mathcal{F}\) has a maximal element \(I\). Since \(I\in\mathcal{F}\), it is not a finite intersection of \(k\)-irreducible ideals of \(R\). This implies that \(I\) is not \(k\)-irreducible. Hence, there are \(k\)-ideals \(J\) and \(K\) such that \(J\supsetneq I\), \(K\supsetneq I\), and \(I=J\cap K\). Since \(I\) is a maximal element of \(\mathcal{F}\), we must have \(J\), \(K\notin\mathcal{F}\). Therefore, \(J\) and \(K\) are finite intersections of \(k\)-irreducible ideals of \(R\), which subsequently implies that \(I\) is also a finite intersection of \(k\)-irreducible ideals of \(R\), a contradiction.
As promised at the beginning of this section, the following result shows relations between prime-type and irreducible-type \(k\)-ideals.
**Proposition 3.26**.: _Let \(R\) be a semiring. A \(k\)-ideal \(P\) of \(R\) is \(k\)-prime if and only if it is \(k\)-semiprime and \(k\)-strongly irreducible._
Proof.: Let \(P\) be a \(k\)-prime ideal of a semiring \(R\). Then by Proposition 3.5, \(P\) is a \(k\)-ideal and a prime ideal of a semiring \(R\). This implies that \(P\) is \(k\)-semiprime by Proposition 3.11. From Lemma 2.5 and Proposition
3.21, it follows that \(P\) is also \(k\)-strongly irreducible. Conversely, let \(P\) be a \(k\)-semiprime and \(k\)-strongly irreducible ideal. Suppose that \(I\), \(J\in\mathcal{I}_{k}(R)\) satisfy \(IJ\subseteq P\). Then
\[(I\cap J)^{2}\subseteq IJ\subseteq P.\]
Since \(P\) is \(k\)-semiprime, this implies that \(I\cap J\subseteq P\). But \(P\) is also \(k\)-strongly irreducible, which implies that either \(I\subseteq P\) or \(J\subseteq P\).
For commutative rings, it is known (see [1, Theorem 2.1(ii)]) that every proper ideal is contained in a minimal strongly irreducible ideal. Incidentally, the same holds for \(k\)-strongly irreducible ideals of semirings.
**Proposition 3.27**.: _Every \(k\)-proper ideal of a semiring is contained in a minimal \(k\)-strongly irreducible ideal._
Proof.: Let \(I\) be a \(k\)-proper ideal of a semiring \(R\). Consider the partially ordered set \((\mathcal{E},\subseteq)\), where
\[\mathcal{E}=\{J\mid I\subseteq J,\ J\text{ is $k$-strongly irreducible}\}.\]
Since every maximal ideal of a semiring \(R\) is strongly irreducible, by Proposition 3.3 and Proposition 3.21, every \(k\)-maximal ideal is \(k\)-strongly irreducible, and by Lemma 3.2, every proper \(k\)-ideal is contained in a \(k\)-maximal ideal. Therefore the set \(\mathcal{E}\) is nonempty. Moreover, the intersection of a chain in \(\mathcal{E}\) is again a \(k\)-strongly irreducible \(k\)-ideal containing \(I\), so every chain in \(\mathcal{E}\) has a lower bound in \(\mathcal{E}\). By Zorn's lemma, \(\mathcal{E}\) has a minimal element, which is our desired minimal \(k\)-strongly irreducible ideal.
The following result shows when all \(k\)-ideals of a semiring are \(k\)-strongly irreducible, and its proof is obvious.
**Proposition 3.28**.: _Every \(k\)-ideal of a semiring \(R\) is \(k\)-strongly irreducible if and only if \(\mathcal{I}_{k}(R)\) is totally ordered._
We conclude this section with a theorem on arithmetical semirings, where \(k\)-irreducibility and \(k\)-strongly irreducibility coincide.
**Theorem 3.29**.: _In an arithmetic semiring \(R\), a \(k\)-ideal is \(k\)-irreducible if and only if it is \(k\)-strongly irreducible. Conversely, if every \(k\)-irreducible ideal of a semiring \(R\) is \(k\)-strongly irreducible, then \(R\) is arithmetic._
Proof.: It has been shown in [15, Theorem 3] that in an arithmetic semiring, the irreducible and the strongly irreducible ideals coincide. Thanks to Proposition 3.21, we then have our claim. For the converse, [15, Theorem 7] says that if irreducibility implies strong irreducibility, then the semiring is arithmetic. Once again, applying Proposition 3.21 to this result, we get the converse.
**Corollary 3.30**.: _In an arithmetic semiring, any \(k\)-ideal is the intersection of all \(k\)-strongly irreducible ideals containing it._
## 4. \(k\)-extensions and \(k\)-contractions
The aim of this final section is to study the properties of \(k\)-ideals and their products, intersections, and ideal quotients under semiring homomorphisms. These properties are '\(k\)-idealic' extensions of their (commutative) ring-theoretic versions (see [1, Proposition 1.17 and Exercise 1.18]).
**Definition 4.1**.: Suppose that \(\phi\colon R\to R^{\prime}\) is a semiring homomorphism.
1. If \(J\) is a \(k\)-ideal of \(R^{\prime}\), then the \(k\)_-contraction of_\(J\), denoted by \(J^{c}\), is defined by \(\phi^{-1}(J)\).
2. If \(I\) is a \(k\)-ideal of \(R\), then the \(k\)_-extension of_\(I\), denoted by \(I^{e}\), is defined by \(\mathcal{C}_{k}\left(\langle\phi(I)\rangle\right)\).
**Theorem 4.2**.: _Let \(\phi\colon R\to R^{\prime}\) be a semiring homomorphism. For \(k\)-ideals \(I\), \(I_{1}\), and \(I_{2}\) of \(R\), and \(k\)-ideals \(J\), \(J_{1}\), and \(J_{2}\) of \(R^{\prime}\), the following hold._
1. \(J^{c}\) _is a_ \(k\)_-ideal of_ \(R\)_._
2. _The kernel_ \(\ker\phi\) _is a_ \(k\)_-ideal of_ \(R\)_._
3. \(I^{e}\) _is a_ \(k\)_-ideal of_ \(R^{\prime}\)_._
4. (a)__\(I\subseteq I^{ec}\)__(b)__\(J\supseteq J^{ce}\)__(c)__\(J^{c}=J^{ecec}\)__(d)__\(I^{e}=I^{ece}\)_._
5. _There is a bijection between the sets_ \(\{I\mid I^{ec}=I\}\) _and_ \(\{J\mid J^{ce}=J\}\)_._
6. (a)__\((I_{1}\cap I_{2})^{e}\subseteq I_{1}^{e}\cap I_{2}^{e}\)__(b)__\((I_{1}I_{2})^{e}=I_{1}^{e}I_{2}^{e}\)__(c)__\((I_{1}:I_{2})^{e}\subseteq(I_{1}^{e}:I_{2}^{e})\)__(d)__\((\mathcal{R}_{k}(I))^{e}\subseteq\mathcal{R}_{k}(I^{e})\)_.
7. (a)__\((J_{1}\cap J_{2})^{c}=J_{1}^{c}\cap J_{2}^{c}\)__(b)__\((J_{1}J_{2})^{c}\supseteq J_{1}^{c}J_{2}^{c}\)__(c)__\((J_{1}:J_{2})^{c}\subseteq(J_{1}^{c}:J_{2}^{c})\)__(d)__\((\mathcal{R}_{k}(J))^{c}\supseteq\mathcal{R}_{k}(J^{c})\)_.
8. \(\mathcal{R}_{k}(J)^{c}\subseteq\mathcal{R}_{k}(J^{c})\)_._
Proof.: (1) [G99, Proposition 10.11].
(2) Since \(0\) is a \(k\)-ideal of \(R^{\prime}\), the claim follows from (1).
(3) This follows from Lemma 2.3(4).
(4) For (4.a), observe that \(\phi(I)\subseteq\mathcal{C}_{k}(\langle\phi(I)\rangle)\), whence
\[I\subseteq\phi(I)^{c}\subseteq\mathcal{C}_{k}(\langle\phi(I)\rangle)^{c}.\]
To prove (4.b), let \(r^{\prime}\in J^{ce}=\mathcal{C}_{k}(\langle\phi(J^{c})\rangle)\). Then there exists an \(x\in\langle\phi(J^{c})\rangle\) such that \(r^{\prime}+x\in\langle\phi(J^{c})\rangle\). Now, \(x\in\langle\phi(J^{c})\rangle\) implies \(x=\sum_{i}r^{\prime}_{i}\phi(y_{i})\), for \(r^{\prime}_{i}\in R^{\prime}\) and \(y_{i}\in J^{c}\), which subsequently implies \(\phi(y_{i})\in J\) and hence \(x\in J\). With a similar argument, we have \(r^{\prime}+x\in J\). Since \(J\) is a \(k\)-ideal, this means \(r^{\prime}\in J\), as required. Using (4.a) and (4.b), we have the proofs of (4.c) and (4.d).
(5) By considering the maps \(I\mapsto I^{e}\) and \(J\mapsto J^{c}\), the proof follows.
(6) To obtain (6.a), let \(r^{\prime}\in(I_{1}\cap I_{2})^{e}=\mathcal{C}_{k}(\langle\phi(I_{1}\cap I_{2}) \rangle)\). This implies that \(r^{\prime}+x\in\langle\phi(I_{1}\cap I_{2})\rangle\) for some \(x\in\langle\phi(I_{1}\cap I_{2})\rangle\), and hence \(x=\sum_{i}r^{\prime}_{i}\phi(x_{i})\), where \(x_{i}\in I_{1}\cap I_{2}\) and \(r^{\prime}_{i}\in R^{\prime}\). This shows that
\[\phi(x_{i})\in\mathcal{C}_{k}(\langle\phi(I_{1})\rangle)\cap\mathcal{C}_{k}( \langle\phi(I_{2})\rangle),\]
and hence \(x\in I_{1}^{e}\cap I_{2}^{e}\). Similarly, we obtain \(r^{\prime}+x\in I_{1}^{e}\cap I_{2}^{e}\). Since \(I_{1}^{e}\cap I_{2}^{e}\) is a \(k\)-ideal (since so are \(I_{1}^{e}\), \(I_{2}^{e}\), and hence their intersection), this implies that \(r^{\prime}\in I_{1}^{e}\cap I_{2}^{e}\). The proof of (6.b) is similar to (6.a), where the main difference is the use of the homomorphism property: \(\phi(I_{1}I_{2})=\phi(I_{1})\phi(I_{2})\).
To show (6.c), suppose that \(r^{\prime}\in(I_{1}:I_{2})^{e}=\mathcal{C}_{k}(\langle\phi(I_{1}:I_{2}) \rangle)\). From the definition of \(\mathcal{C}_{k}\), we have \(r^{\prime}+x\in\langle\phi(I_{1}:I_{2})\rangle\) for some \(x\in\langle\phi(I_{1}:I_{2})\rangle\). But this means \(x=\sum_{i}r^{\prime}_{i}\phi(r_{i})\), where \(r^{\prime}_{i}\in R^{\prime}\) and \(r_{i}\in(I_{1}:I_{2})\). From this we obtain \(\phi(r_{i})\in(\phi(I_{1}):\phi(I_{2}))\). Therefore,
\[x\in\left(\mathcal{C}_{k}(\langle\phi(I_{1})\rangle):\mathcal{C}_{k}(\langle \phi(I_{2})\rangle)\right).\]
Similarly, we can show that \(r^{\prime}+x\in(I_{1}^{e}:I_{2}^{e})\). Since by Proposition 2.7, \((I_{1}^{e}:I_{2}^{e})\) is a \(k\)-ideal, we must have \(r^{\prime}\in(I_{1}^{e}:I_{2}^{e})\), which proves the claim.
Finally, to obtain (6.d), following the above-mentioned method, we get an \(x\in\langle\phi(\mathcal{R}_{k}(I))\rangle\), which implies \(x=\sum_{i}r^{\prime}_{i}\phi(x_{i})\), where \(r^{\prime}_{i}\in R^{\prime}\) and \(x_{i}\in\mathcal{R}_{k}(I)\), and hence \(x_{i}\in P\) for all \(P\supseteq I\). This implies \(\phi(x_{i})\in\phi(P)\supseteq\phi(I)\) for all \(P\supseteq I\). The rest of the argument is similar to the above.
(7) The proofs of all the identities are analogous to that of (6).
(8) Let \(y\in\mathcal{R}_{k}(J)^{c}\), so that \(\phi(y)\in\mathcal{R}_{k}(J)\). If \(S\) is a multiplicatively closed subset of \(R\) containing \(y\), then \(\phi(S)\) is a multiplicatively closed subset of \(R^{\prime}\) containing \(\phi(y)\), and hence, by Lemma 3.16, \(\phi(S)\cap J\neq\emptyset\). Choosing \(s\in S\) with \(\phi(s)\in J\) gives \(s\in S\cap J^{c}\). Thus every multiplicatively closed subset of \(R\) containing \(y\) intersects \(J^{c}\), and again by Lemma 3.16, \(y\in\mathcal{R}_{k}(J^{c})\).
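A simple example illustrating part (4.a): let \(\phi\colon\mathbb{N}\to\mathbb{B}\) be the homomorphism onto the Boolean semiring \(\mathbb{B}=\{0,1\}\) (where \(1+1=1\)) given by \(\phi(0)=0\) and \(\phi(n)=1\) for \(n\geqslant 1\). For the \(k\)-ideal \(I=2\mathbb{N}\) of \(\mathbb{N}\) one finds \(I^{e}=\mathbb{B}\), and hence \(I^{ec}=\phi^{-1}(\mathbb{B})=\mathbb{N}\supsetneq I\), so the inclusion in (4.a) may be proper.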
**Concluding Remarks 4.3**.: We conclude the paper with the following remarks:
\(\bullet\) In the sequel of this paper, in [1], we shall explore the behaviour of various types of \(k\)-ideals under quotients and localizations of semirings. Moreover, in that paper, we aim to study \(k\)-ideals in special types of semirings, namely, idempotent semirings, Gel'fand semirings, Boolean semirings, and lattice-ordered semirings.
\(\bullet\) In [1], a comprehensive study of modules over a semiring has been done, and also the notion of a subtractive semimodule has been introduced. Similar to the distinguished classes of \(k\)-ideals that we have considered here, one can introduce the same for subtractive semimodules and study their properties.
\(\bullet\) Using the \(k\)-closure operation, we may endow \(\mathcal{I}(R)\) with a topology generated by a subbasis of closed sets and study the topological properties of the corresponding spaces. This work has been initiated in [1]. However, a similar study can also be done for \(h\)-closure operations. Since there is a nice relationship between \(k\)-closure and \(h\)-closure operations (see [1]), it will be interesting to find topological connections between the respective spaces.
|
2302.07481 | Dynamics of Rapidly Rotating Bose-Einstein Quantum Droplets | This work theoretically investigates the stationary
properties and the dynamics of the rotating quantum liquid droplets confined
in a two-dimensional symmetric anharmonic trap. Mimicking the quantum Hall
systems, the modified Gross-Pitaevskii equation with the inclusion of the
Lee-Huang-Yang nonlinear interaction is analytically solved, and the role of
the Landau-level mixing effect is addressed. Via controlling
the nonlinear interaction and the rotation speed, the rotating quantum droplet
with multiply quantized vortex can be created, and the preference of the
energetically favored quantum states can be distinguished in the phase diagram.
To better interpret the underlying physics of the phase singularities, a brief
comparison of the rotating quantum droplet and the optical vortex is made. The
investigation of the long-term evolution of the rotating quantum droplets
confirms the stability of the quantum states. At certain rotation speeds, the
multi-periodic trajectories and breathings provide evidence of the emergence of
the collective excitation of the surface mode in the vortex state. For quantum
droplets carrying multiply quantized vortex, the microscopic snapshots of the
rotation field adjusted current density distribution show that the combined
nonlinear interaction and the anharmonic trapping potential can provide the
restoring force to lead the quantum droplet to a regular and stable revolution
and reach the dynamic equilibrium, revealing the signature of the generation of
superfluids in the new kind of low-dimensional quantum liquids. | Szu-Cheng Cheng, Yu-Wen Wang, Wen-Hsuan Kuan | 2023-02-15T05:53:18Z | http://arxiv.org/abs/2302.07481v4 | # Dynamics of Rapidly Rotating Bose-Einstein Quantum Droplets
###### Abstract
This work theoretically investigates the dynamics of trapped rapidly rotating Bose-Einstein droplets governed by the modified Gross-Pitaevskii equation with the inclusion of the Lee-Huang-Yang nonlinear interaction. Mimicking the quantum Hall systems, the stationary properties of droplets are obtained by minimizing the energy functional established based on the Laughlin-like wavefunction including Landau-level mixing. By tuning the particle-particle interaction and rotation speed, the preference for the formation of the center-of-mass state, vortex state, and off-centered vortex state can be distinguished in the phase diagram. Under fast rotations, the highly-spiral phase portraits reveal that the emergence of huge vortices with high angular momentum would stabilize the droplets against centrifugal depletion. By solving the Euler-Lagrange equations, the periodicity and stability of each phase's breathing and trajectory during long-term evolution are analyzed. The superposition of gauge-induced azimuthal and linear flows results in the generation of nonuniform persistent currents of multiply quantized circulations and reveals the signature of partially coherent superfluids with the nonlinear-modulated self-bound effect.
## I Introduction
Using laser cooling and trapping techniques, experimentalists can create atomic quantum gases at extremely low temperatures. Today, we know that the phase transition for indistinguishable particles is purely the consequence of quantum statistical effects. According to the Bose-Einstein statistics, the grand canonical ensemble theory reveals the emergence of the quantum phase transition as the chemical potential approaches zero, and by calculating the phase space filling one can further define the critical temperature. Having the same quantum configuration and sharing almost the same energy state, the macroscopic collection of cold atoms with a nonzero off-diagonal long-range order forms the Bose-Einstein condensates (BECs) [1; 2; 3]. With the mean-field (MF) approximation, the dynamics of the BECs with weak interatomic interactions can be adequately described through the application of the Gross-Pitaevskii equation (GPE). Near the Feshbach resonance, where the elastic scattering can be dramatically altered by an external field [4; 5; 6], a quasibound molecule can tunnel across a potential energy barrier and resonantly couple with the free states of the colliding atoms. Therefore, the possibility of tuning the magnitude and sign of the scattering length with external magnetic fields provides new perspectives on the manipulation of BECs [1; 7; 8].
Concerning the two-body interactions between a monatomic ensemble, an equilibrium between attractive interatomic forces and short-range repulsion due to the van der Waals forces will lead to the formation of a liquid in the cooling processes instead of a gas. Because of the high density and incompressibility, the attempts by usual liquids to enter the quantum regime are prohibited. However, with the inclusion of the Lee-Huang-Yang (LHY) correction to the ground state energy of a homogeneous weakly repulsive Bose gas [9], a new type of quantum liquid which emerges in the ultracold and extremely dilute atomic systems, violating the van der Waals model, has recently been observed in the two-component Bose mixtures [10; 11; 12] and single-component dipolar condensates[13; 14; 15; 16]. Above the particle number threshold, the gas-to-liquid phase transition takes place when the instabilities arising from attractive mean-field interaction \(\propto n^{2}\) are compensated for by the high-order quantum many-body effects, \(\propto n^{5/2}\)[17]. By tuning the interatomic interactions, the realization of the dilute and weakly interacting self-bound liquid droplets provides direct evidence of the beyond MF effects [18].
Being a paradigm for quantum liquids, the superior coherence of the Bose superfluids is demonstrated by calculating the one-body density matrix and the correlation function between any two particles. To describe them, the introduction of the order parameter along with the low-lying elementary excitations obtained from the solutions to the Bogoliubov equations manifests the essence of the quantum fluctuations in the quantum phase transitions. For spinless atomic gases, the gradient of the phase of the condensate wavefunction defines the superfluid velocity, which is irrotational unless there is a singularity embedded in the phase of the order parameter such that the Onsager-Feynman quantization condition is satisfied. Through laser-stirred phase imprinting, the generation of vortex cores [19; 20], vortex rings [21; 22], and even vortex lattices [23; 24] in rapidly rotating dilute-gas BECs has been experimentally
realized. In tightly trapped BECs, the formation of multiple quantized vortices and long-lived vortex aggregations has also been observed [25; 26], showing the crucial influence of the Coriolis forces in the rotating systems.
Contrary to the gaseous BECs, the quantum droplets with embedded vorticity are metastable in the single-component dipolar condensate [27]. However, it was reported that stable solutions can be theoretically found for the 2D quantum droplets (QDs) with hidden and explicit vorticity [28] and the 3D binary condensates with contact and LHY-amended interactions [29]. It has also been experimentally shown that the application of optical lattices helps to stabilize zero-vorticity and vortical solitons and even multipeak modes for 1D solitons [30]. These studies show that since the nonlinear competitions depict the domain boundaries for the vortex rings of a spinning droplet, the atom number for systems where the finite size effects are relevant is critical to the formation of the low-energy modes. The presence of the lattice potential restricts the axisymmetric solutions and breaks the spatial uniformity, isotropy, and the conservation of the linear and angular momenta.
Just as the neutral superfluids can be induced by rotation, charged superfluids can be initiated by a magnetic field. In 2D electron systems, the observation of quantum Hall (QH) effects reveals remarkable macroscopic quantum phenomena related to topological investigations. For example, vortices, the known stable topological solitons in quantum field theory, are realized as coherent states describing collective excitations of the basic field. The objective of this work is to investigate the dynamics of 2D rapidly rotating quantum droplets within quadratic plus quartic trapping potentials. Since the radial confinements are reduced by the centrifugal potential, the Coriolis force experienced by the droplets in the rotating frame appears equivalent to the Lorentz force on a charged particle. As the analogy between the QH effects and the rapidly rotating BECs has been precisely recognized, it would be interesting to see whether the artificial Lorentz forces can also be engineered for the new type of quantum liquid such that the generation of exotic phases of multiple topological charges and the Landau-level mixing (LLM) effects can be observed, and the stability of the quantum states can be explored in this system. This would make it possible to simulate the QH-type effects in a controlled manner with the nonuniform quantum liquids. The combined effect of interactions and quantum statistics eventually determines the features of the many-body ground state. Such rotating systems, therefore, allow us to study artificial orbital magnetism in quantum liquids.
## II Theoretical Model
To form a 2D quantum droplet in the system of dilute Bose-Bose mixtures with densities \(n_{\uparrow}\) and \(n_{\downarrow}\), a weakly attractive interaction for the interspecies and a weakly repulsive interaction for the intraspecies are required. It has been proven that by writing the short-range interaction in terms of the coupling constant \(\tilde{g}_{\sigma\sigma^{\prime}}\) and the scattering length \(a_{\sigma\sigma^{\prime}}\) via \(\tilde{g}_{\sigma\sigma^{\prime}}=(4\pi\hbar^{2}/M)/\ln(4e^{-2\gamma}/a_{\sigma\sigma^{\prime}}^{2}\Delta)\), where \(\gamma\) is the Euler constant and \(\Delta\) is chosen such that \(\tilde{g}_{\uparrow\downarrow}^{2}=\tilde{g}_{\uparrow\uparrow}\tilde{g}_{\downarrow\downarrow}\), the weakly interacting regime beyond the MF approximation can be theoretically approached [31; 32] using the scattering \(t\) matrix [33]. For the symmetric case \(a_{\uparrow\uparrow}=a_{\downarrow\downarrow}=a\) and \(n_{\uparrow}=n_{\downarrow}=n\), the calculation of the energy density of a uniform liquid within the assumption of a macroscopic condensate population gives \(E_{2D}=(4\kappa\hbar^{2}n^{2}/M)\ln(e^{2\gamma+1/2}\kappa aa_{\uparrow\downarrow}n)\), in which \(\kappa=2\pi/\ln(a_{\uparrow\downarrow}/a)\). It can be further simplified to \(E_{2D}=(2\kappa^{2}\hbar^{2}n^{2}/\pi M)\ln(n/en_{0})\) by introducing the equilibrium density \(n_{0}=e^{-2\gamma-3/2}/\kappa aa_{\uparrow\downarrow}\) for each component. As a result, after the energy density \(E_{2D}\) including all allowed scattering paths is calculated, the liquid phase properties in free space can be obtained by minimizing the grand potential density at zero pressure.
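The free-space liquid density can be recovered numerically from the energy density above. The following minimal sketch (with the prefactor \(2\kappa^{2}\hbar^{2}/\pi M\) set to one; it is an illustrative check, not the authors' code) confirms that the energy per particle built from \(E_{2D}\) is minimized at \(n=n_{0}\).

```python
import numpy as np
from scipy.optimize import minimize_scalar

n0 = 1.0  # equilibrium density in arbitrary units

def energy_density(n):
    # E_2D up to the constant prefactor 2*kappa^2*hbar^2/(pi*M)
    return n**2 * np.log(n / (np.e * n0))

res = minimize_scalar(lambda n: energy_density(n) / n,
                      bounds=(1e-3, 10.0), method="bounded")
print(res.x)  # close to 1.0, i.e. the minimum sits at n = n0
```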
With the inclusion of a modified LHY potential, the time-dependent GPE for a quartic anharmonically-trapped rotating QD viewed in the rotating frame of reference can be written as
\[i\hbar\frac{\partial\Psi}{\partial t} = \frac{1}{2M}(-i\hbar\vec{\nabla}-M\Omega\hat{z}\times\vec{r})^{2 }\Psi+\left[\frac{1}{2}M(\omega_{0}^{2}-\Omega^{2})r^{2}+\frac{1}{4}\lambda^{ \prime}r^{4}\right]\Psi \tag{1}\] \[+ \frac{4\sigma\kappa\hbar^{2}n_{0}}{M}|\Psi|^{2}\ln(|\Psi|^{2}/n_ {0})\,\Psi\]
in which \(\sigma\) is a tunable constant for the nonlinear coefficient adjustment with the three-dimensional experimental data in a quasi-2D system. The first term of Eq. (1) precisely shows that the rotating droplet moves as a charge \(-e\) in the \(xy\) plane subjected to a synthesized magnetic field \(B\hat{z}\) with a vector potential \((e/c)\vec{A}=M\Omega\hat{z}\times\vec{r}\), giving the cyclotron frequency \(\Omega=eB/2Mc\). Setting the effective magnetic length equal to unity and applying the Rodrigues definition of the associated Laguerre polynomial
\[L_{n}^{k}(t)=\frac{1}{n!}e^{t}t^{-k}\frac{d^{n}}{dt^{n}}(e^{-t}t^{n+k}), \tag{2}\]
the eigenfunctions of the free part of the Hamiltonian
\[\chi_{n,m}(\vec{r})=\frac{(-1)^{n}}{\sqrt{2\pi}}\sqrt{\frac{n!}{2^{m}(m+n)!}}\,r^{ m}e^{-r^{2}/4}e^{-im\phi}L_{n}^{m}\left(\frac{r^{2}}{2}\right), \tag{3}\]
can be derived, where integers \(n\) and \(m\) correspond to the Landau level index and the degenerate states within a Landau level, respectively. For low-dimensional uniform liquids, it is found that the typical length scale on which \(\Psi\) changes is in order of the healing length, and the surface tension is crucial to the finite size effects on the droplet's energy and surface mode spectrum. To generalize the investigation of nonuniform liquids, a strong confinement is imposed with the sum of quadratic and quartic components, which manages to compensate for the centrifugal repulsion and stabilize the deformed droplet under fast rotations. Moreover, as the centrifugal potential \(-M\Omega^{2}r^{2}/2\) becomes influential at rapid rotations, the combined trapping potential leads to an effective Mexican hat trap. The atomic liquid becomes similar to the complex Schrodinger field interacting with the electromagnetic field in the Higgs-type potential, where the magnetic flux is squeezed into quantized vortices as is well-known in superconductors. This similarity may thus bring the topological relevance between vortex solitons and rotating droplets.
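The eigenfunctions of Eq. (3) are convenient to evaluate with standard special-function routines. The short sketch below (with the effective magnetic length set to unity, as in the text) numerically verifies their normalization; it is an illustrative check rather than part of the original computation.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre

def chi_abs2_angular(n, m, r):
    """2*pi*|chi_{n,m}(r)|^2, i.e. the squared modulus already integrated over the angle."""
    norm = factorial(n) / (2**m * factorial(m + n))
    L = eval_genlaguerre(n, m, r**2 / 2)
    return norm * r**(2 * m) * np.exp(-r**2 / 2) * L**2

r = np.linspace(0.0, 40.0, 40001)
dr = r[1] - r[0]
for n, m in [(0, 0), (0, 3), (2, 1)]:
    # radial integral with the r dr measure; each value should be close to 1
    print(n, m, np.sum(chi_abs2_angular(n, m, r) * r) * dr)
```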
Since the attractive inter-particle interactions mix different \((n,m)\) states, the lowest-Landau-level approach alone would be insufficient to describe the rotating droplets, especially for strongly confined atoms, because the density will not be thinned out in the \(xy\) plane as was observed in the vortex lattices. Mimicking the QH systems, we can obtain the stationary properties of the droplet matter waves governed by Eq. (1) by minimizing the energy functional \(E[\Psi,\Psi^{*}]\) constructed using the Laughlin-like wavefunctions with LLM:
\[\Psi=C_{m}(\mathcal{Z}-\mathcal{Z}_{0})^{m}\exp\left[-\frac{(\vec{r}-\vec{R})^ {2}}{4\rho^{2}}-\frac{i}{2l_{z}^{2}}\hat{z}\cdot(\vec{r}\times\vec{R})\right] \exp\left[i\frac{\alpha}{4}(\vec{r}-\vec{R})^{2}\right]\exp\left[i\frac{\vec{ p}}{2\hbar}\cdot(\vec{r}-\vec{R})\right], \tag{4}\]
where \(\rho\), in units of the length \(l_{z}\), characterizes the width of the wavepacket and \(\vec{r}=x\hat{x}+y\hat{y}\) is the position of the particle in the droplet, whereas \(\vec{R}=X_{0}\hat{x}+Y_{0}\hat{y}\) denotes the center-of-mass (CM) position, and \(\alpha\) and \(\vec{p}\) are the conjugate curvature coefficient and momentum representing the inherent MF expansion and the relative repulsion of the wavepacket, respectively. The inclusion of an additional phase arising from rotation ensures gauge invariance. In the absence of dissipation due to three-body collisions, the particle number is conserved, and we obtain the wavefunction coefficient \(C_{m}=\sqrt{N2^{m+1}/\pi m!}/\rho^{m+1}\). For convenience, we map the system onto a 2D complex plane by introducing the dimensionless complex coordinates \(\mathcal{Z}_{0}=\frac{1}{2}(X_{0}+iY_{0})\), \(\mathcal{Z}=\frac{1}{2}(x+iy)\), and momentum \(\mathcal{P}=\frac{1}{2}(p_{x}+ip_{y})\). In this manner, the wavefunction can be recast as
\[\Psi =C_{m}(\mathcal{Z}-\mathcal{Z}_{0})^{m}\exp\left[-\frac{1}{\rho^ {2}}|\mathcal{Z}-\mathcal{Z}_{0}|^{2}\right]\exp\left[\mathcal{Z}^{*} \mathcal{Z}_{0}-\mathcal{Z}\mathcal{Z}_{0}^{*}\right]\exp\left[i\alpha| \mathcal{Z}-\mathcal{Z}_{0}|^{2}\right]\] \[\times\exp\left[i\mathcal{P}^{*}(\mathcal{Z}-\mathcal{Z}_{0})+i \mathcal{P}(\mathcal{Z}^{*}-\mathcal{Z}_{0}^{*})\right]. \tag{5}\]
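Density profiles and phase portraits of the type discussed in Sec. III follow from evaluating this ansatz on a grid. The sketch below does so in the dimensionless units \(l_{z}=1\); the parameter values (\(N\), \(m\), \(\rho\), \(\vec{R}\), \(\alpha\), \(\vec{p}\)) are arbitrary illustrative choices, not optimized variational parameters.

```python
import numpy as np
from math import factorial

def droplet_psi(x, y, N=60, m=3, rho=2.0, R=(0.0, 0.0), alpha=0.0, p=(0.0, 0.0)):
    # Variational ansatz of Eq. (5) with l_z = 1
    Z = 0.5 * (x + 1j * y)
    Z0 = 0.5 * (R[0] + 1j * R[1])
    P = 0.5 * (p[0] + 1j * p[1])
    Cm = np.sqrt(N * 2**(m + 1) / (np.pi * factorial(m))) / rho**(m + 1)
    return (Cm * (Z - Z0)**m
            * np.exp(-np.abs(Z - Z0)**2 / rho**2)
            * np.exp(np.conj(Z) * Z0 - Z * np.conj(Z0))          # rotation-induced phase
            * np.exp(1j * alpha * np.abs(Z - Z0)**2)              # curvature (expansion) phase
            * np.exp(1j * np.conj(P) * (Z - Z0) + 1j * P * (np.conj(Z) - np.conj(Z0))))

x = y = np.linspace(-10, 10, 401)
X, Y = np.meshgrid(x, y)
psi = droplet_psi(X, Y)
density, phase = np.abs(psi)**2, np.angle(psi)   # inputs for Fig. 1-type plots
print(density.max(), phase.min(), phase.max())
```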
To investigate the QD's dynamics and the configuration stability, we employ Hamilton's principle of least action
\[\delta S[\Psi^{*},\Psi]=\int dt\int d\vec{r}\;\delta\mathcal{L}(\Psi,\Psi^{* },\ldots)=0, \tag{6}\]
based on the stationary condition for any Lagrangian density \(\mathcal{L}\) constructed with the wavefunction, its complex conjugate, and their derivatives given by
\[\mathcal{L}=\frac{i\hbar}{2}\left(\Psi^{*}\frac{\partial}{\partial t}\Psi-\Psi\frac{\partial}{\partial t}\Psi^{*}\right)-\Psi^{*}H\Psi. \tag{7}\]
Within the dimensionless formulation, the spatial integration of the Lagrangian density involving the time derivative terms gives
\[\int\frac{i}{2}\left(\Psi^{*}\frac{\partial\Psi}{\partial t}-\Psi\frac{\partial\Psi^{*}}{\partial t}\right)\,d\vec{r}=-iN(\mathcal{Z}_{0}^{*}\dot{\mathcal{Z}}_{0}-\mathcal{Z}_{0}\dot{\mathcal{Z}}_{0}^{*})-\frac{N}{2}(m+1)\dot{\alpha}\rho^{2}+N(\mathcal{P}^{*}\dot{\mathcal{Z}}_{0}+\mathcal{P}\dot{\mathcal{Z}}_{0}^{*}). \tag{8}\]
Based on the transformation relations \(\partial/\partial\mathcal{Z}=\partial/\partial x-i\partial/\partial y\) and \(\partial/\partial\mathcal{Z}^{*}=\partial/\partial x+i\partial/\partial y\), we define the complex creation and annihilation operators \(a=-i\left(\mathcal{Z}+\partial/\partial\mathcal{Z}^{*}\right)/\sqrt{2}\) and \(a^{\dagger}=i\left(\mathcal{Z}^{*}-\partial/\partial\mathcal{Z}\right)/\sqrt{2}\) such that the differential operators \(D_{x}=\partial/\partial x+iy/2=i(a+a^{\dagger})/\sqrt{2}\) and \(D_{y}=\partial/\partial y-ix/2=i(a-a^{\dagger})/\sqrt{2}\) can also be defined as well. With
these complex variables, it is easy to reduce the kinetic energy operator \(K=-(D_{x}^{2}+D_{y}^{2})\) to an effective number operator \(2a^{\dagger}a+1\) such that
\[\int\Psi^{*}K\Psi\,d\vec{r}=\frac{N(m+1)}{2}\left(\rho^{2}+\frac{1}{\rho^{2}}+ \alpha^{2}\rho^{2}\right)-mN+N|\mathcal{P}|^{2}. \tag{9}\]
Similarly, for the effective trapping potential operator, we have
\[\int\Psi^{*}V_{eff}\Psi\,d\vec{r} =N\left(\omega_{0}^{2}-1\right)\left(\frac{m+1}{2}\rho^{2}+| \mathcal{Z}_{0}|^{2}\right)\] \[+N\lambda\left[\frac{1}{4}(m+2)(m+1)\rho^{4}+|\mathcal{Z}_{0}|^{ 4}+2(m+1)\rho^{2}|\mathcal{Z}_{0}|^{2}\right]. \tag{10}\]
Evaluating the angular momentum carried by the droplet,
\[\int\Psi^{*}L_{z}\Psi\,d\vec{r}=\left[i(\mathcal{P}^{*}\mathcal{Z}_{0}- \mathcal{Z}_{0}^{*}\mathcal{P})+m+2|\mathcal{Z}_{0}|^{2}\right]N, \tag{11}\]
reveals three different mechanisms: the wavepacket repulsion dynamics, quantum phase imprinting, and the rigid revolution. The interatomic energy term,
\[\int|\Psi|^{4}\ln\left(\frac{|\Psi|^{2}}{\sqrt{e}}\right)\,d\vec{r} =\frac{N^{2}}{\pi(m!)^{2}2^{2m+2}}\frac{1}{\rho^{2}}\left[(2m)!\ln \left(\frac{|C_{m}|^{2}}{4^{m}\sqrt{e}}\right)-\frac{(m+1)!}{2}\right. \tag{12}\] \[\left.+m\Gamma^{\prime}(2m+1)+(2m)(2m)!\ln\rho\right], \tag{13}\]
captures the self-bound feature by balancing the spatial attraction and repulsion. Finally, the Lagrange function per atom is derived as
\[\mathfrak{L} =-i(\mathcal{Z}_{0}^{*}\dot{\mathcal{Z}}_{0}-\mathcal{Z}_{0}\dot{\mathcal{Z}}_{0}^{*})-\frac{1}{2}(m+1)\dot{\alpha}\rho^{2}+(\mathcal{P}^{*}\dot{\mathcal{Z}}_{0}+\mathcal{P}\dot{\mathcal{Z}}_{0}^{*})-\frac{m+1}{2}\left(\frac{1}{\rho^{2}}+\alpha^{2}\rho^{2}\right)\] \[-\omega_{0}^{2}\left(\frac{m+1}{2}\rho^{2}+|\mathcal{Z}_{0}|^{2}\right)-\lambda\left[\frac{1}{4}(m+2)(m+1)\rho^{4}+|\mathcal{Z}_{0}|^{4}+2(m+1)\rho^{2}|\mathcal{Z}_{0}|^{2}\right]\] \[-\frac{N}{\pi(m!)^{2}2^{2m+2}}\frac{1}{\rho^{2}}\left[(2m)!\ln\left(\frac{|C_{m}|^{2}}{4^{m}\sqrt{e}}\right)-\frac{(m+1)!}{2}+m\Gamma^{\prime}(2m+1)+(2m)(2m)!\ln\rho\right]\] \[+\Omega\left[i(\mathcal{P}^{*}\mathcal{Z}_{0}-\mathcal{Z}_{0}^{*}\mathcal{P})+m+2|\mathcal{Z}_{0}|^{2}\right]-i(\mathcal{P}^{*}\mathcal{Z}_{0}-\mathcal{Z}_{0}^{*}\mathcal{P})-(|\mathcal{Z}_{0}|^{2}+|\mathcal{P}|^{2}), \tag{14}\]
through which the Euler-Lagrange equations and the equations of motion of the corresponding characteristic parameters can be obtained, i.e.,
\[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\mathcal{P}}^{*}}\right)-\frac{\partial L}{\partial\mathcal{P}^{*}}=0\quad\text{gives}\quad\dot{\mathcal{Z}}_{0}=\mathcal{P}-i\left(\Omega-1\right)\mathcal{Z}_{0} \tag{15}\] \[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\mathcal{Z}}_{0}^{*}}\right)-\frac{\partial L}{\partial\mathcal{Z}_{0}^{*}}=0\quad\text{gives}\quad\dot{\mathcal{P}}=-2i\dot{\mathcal{Z}}_{0}-\left(\omega_{0}^{2}-1\right)\mathcal{Z}_{0}-2\lambda|\mathcal{Z}_{0}|^{2}\mathcal{Z}_{0}-i\left(\Omega-1\right)\mathcal{P}+2\left(\Omega-1\right)\mathcal{Z}_{0}-2(m+1)\lambda\rho^{2}\mathcal{Z}_{0} \tag{16}\] \[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\alpha}}\right)-\frac{\partial L}{\partial\alpha}=0\quad\text{gives}\quad\dot{\rho}=\alpha\rho \tag{17}\] \[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\rho}}\right)-\frac{\partial L}{\partial\rho}=0\quad\text{gives}\quad\dot{\alpha}=-\frac{1}{N(m+1)\rho}\frac{\partial E}{\partial\rho} \tag{18}\]
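Equations (15)-(18) close once \(\partial E/\partial\rho\) is supplied from the energy functional. The sketch below integrates them with a standard ODE solver; \(\partial E/\partial\rho\) is left as a user-supplied placeholder (set to zero here), and the parameter values are illustrative assumptions only, so the run merely demonstrates the structure of the coupled system.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, Omega, omega0, lam, N = 1, 1.4, 1.0, 0.05, 60.0   # assumed example parameters

def dE_drho(rho, z0):
    return 0.0   # placeholder for the rho-derivative of the energy functional

def rhs(t, y):
    z0 = y[0] + 1j * y[1]          # center-of-mass coordinate Z0
    p = y[2] + 1j * y[3]           # conjugate momentum P
    rho, alpha = y[4], y[5]
    z0dot = p - 1j * (Omega - 1.0) * z0                                   # Eq. (15)
    pdot = (-2j * z0dot - (omega0**2 - 1.0) * z0 - 2.0 * lam * abs(z0)**2 * z0
            - 1j * (Omega - 1.0) * p + 2.0 * (Omega - 1.0) * z0
            - 2.0 * (m + 1) * lam * rho**2 * z0)                          # Eq. (16)
    rhodot = alpha * rho                                                  # Eq. (17)
    alphadot = -dE_drho(rho, z0) / (N * (m + 1) * rho)                    # Eq. (18)
    return [z0dot.real, z0dot.imag, pdot.real, pdot.imag, rhodot, alphadot]

y0 = [0.1, 0.0, 0.0, 0.0, 1.0, 0.0]   # small initial displacement, width rho = 1
sol = solve_ivp(rhs, (0.0, 50.0), y0, max_step=0.01)
print(sol.y[:, -1])
```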
## III Results and discussion
Despite the fact that the 3D scattering lengths \(a^{3D}\) and \(a^{3D}_{\uparrow\downarrow}\) provided by Refs. [12] and [34] can be chosen for suitably describing the liquid-like droplet regimes, it is a totally different scenario for 2D systems. To fit the simulation results of Ref. [31], the 2D scattering lengths \(a\) and \(a_{\uparrow\downarrow}\) would take the form of \(a=l_{z}\exp[-\sqrt{\pi/2}\,l_{z}/a^{3D}]\) and \(a_{\uparrow\downarrow}=l_{z}\exp[-\sqrt{\pi/2}\,l_{z}/a^{3D}_{\uparrow\downarrow}]\), respectively, where \(l_{z}=\sqrt{\hbar/2M\omega_{z}}\) is the oscillator length in the strong confinement direction.
To simulate a symmetric mixture of a quasi-2D \({}^{39}\)K self-bound oblate droplet [10], we choose \(\omega_{z}=2\pi\times 400\,\mathrm{Hz}\), \(a_{\uparrow\downarrow}^{3D}=-1800\,a_{0}\), and \(a^{3D}=1100\,a_{0}\) such that the weakly interacting regime is ensured by the inequality \(1/\ln(a_{\uparrow\downarrow}/a)=0.05\ll 1\). In addition, by applying the Bogoliubov theory and the diffusion Monte Carlo simulation for \(n_{0}a^{2}=5\times 10^{-10}\), the requirement of dilute liquids \(na^{2}\ll 1\) around the equilibrium density \(n_{0}=2.5\times 10^{14}\,\mathrm{m}^{-2}\) is also satisfied. As the chemical potential and the healing length can be approximately represented by \(\mu\sim-n\hbar^{2}/M\ln^{2}(a_{\uparrow\downarrow}/a)\) and \(\xi\sim\hbar/\sqrt{M|\mu|}\), respectively, and the latter represents the vortex core size of the rotating QD as well, they are approximately \(-0.412\) and \(2.2\) at \(n=1.1n_{0}\) in the dimensionless scales. With these parameters, the particle number scale \(\tilde{N}=n_{0}l_{z}^{2}\) is about \(81\). For the energy scale \(\tilde{E}=\hbar\omega_{z}\sim 1.65\,\mathrm{peV}\), the calculation of the ground state energy of this work shows that a rotating droplet of hundreds of atoms can be stably sustained at extremely low temperatures of about tens of nanokelvin, indicating that the time scale for tracing the droplet's dynamics before collapse approaches \(\tau=1/\omega_{z}=0.4\,\mathrm{ms}\).
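The dimensionless scales quoted above can be reproduced from the stated experimental parameters. The following order-of-magnitude check uses CODATA constants; it is a sketch for orientation, not the authors' calculation.

```python
import numpy as np
from scipy.constants import hbar, k as kB, eV, physical_constants

a0 = physical_constants['Bohr radius'][0]
u = physical_constants['atomic mass constant'][0]
M = 39.0 * u                          # approximate 39K mass
omega_z = 2 * np.pi * 400.0           # strong-confinement frequency (rad/s)

l_z = np.sqrt(hbar / (2 * M * omega_z))             # oscillator length
a3d, a3d_ud = 1100 * a0, -1800 * a0                 # 3D scattering lengths
a = l_z * np.exp(-np.sqrt(np.pi / 2) * l_z / a3d)   # 2D intraspecies length
a_ud = l_z * np.exp(-np.sqrt(np.pi / 2) * l_z / a3d_ud)

n0 = 2.5e14                                         # equilibrium density (m^-2)
print('1/ln(a_ud/a) =', 1 / np.log(a_ud / a))       # weak-interaction parameter ~0.05
print('N_tilde = n0*l_z^2 =', n0 * l_z**2)          # particle-number scale ~81
print('E_tilde (peV) =', hbar * omega_z / eV * 1e12)
print('E_tilde/kB (nK) =', hbar * omega_z / kB * 1e9)
print('tau (ms) =', 1e3 / omega_z)
```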
### Density Profile and Phase Portrait
The influences of the confinement, nonlinear interaction, and rotational speed on the QD's configuration are investigated first. To obtain the stationary properties, the QD is assumed to be stirred adiabatically to ensure that equilibrium at a certain \(\Omega\) can be reached via the re-thermalization processes. However, to maintain bounded QDs at extremely high rotational speed, the emergence of strong attractions between squeezed atoms becomes inevitable in the fast expansion of the core size due to the very large orbital angular momentum (OAM) imposition on the vortex (VX) states. As a result, the QDs may teeter on the edge of collapse in the presence of the three-body collisions. To avoid this situation, we therefore suggest maximizing the total angular momentum of the VX state in energy minimization. Along with the lowest Landau level approximation, the exploitation of this constraint is thus useful to estimate the upper bound of the topological charge and the deflection distance given by \(m_{max}=(\Omega-\epsilon/2-1)/\lambda-1\) and \(R=2\sqrt{\Omega-\epsilon/2-1-(m+1)\lambda\rho^{2}}/\sqrt{\lambda}\), respectively. With this restriction, the limiting fraction of the cross-sectional area occupied by the vortex cores estimated in the Thomas-Fermi regime for dilute atomic gases [35] can also be validly applied to QDs.
Figure 1 shows the density profiles and phase portraits for (a) \(N=20\), (b) \(N=60\), and (c) \(N=100\), corresponding to \(\Omega=0.6,1.4,1.8\) (top to bottom) and initial momentum \(p_{x}=0,p_{y}=0\) (middle), \(p_{x}=1.414,p_{y}=2.449\) (right). At slow rotations, it is found that when the particle number is low, the CM state with \(m=0\) and \(R=0\) is the energetically favorable ground state, even when \(\Omega\) is slightly larger than one, as shown in Fig. 1(a1). When the particle number increases, the repulsive LHY terms play the decisive role in the formation of the VX states with nonzero \(m\), such as the doughnuts shown in Figs. 1(b1) and (c1) that both have \(m=1\). As we increase the rotational speed to \(\Omega=1.4\), Figs. 1(a2), (b2), and (c2) show the possibility of the formation of giant vortices with \(m\geq 10\) and large cores due to strong centrifugal forces. On the other hand, at even faster rotations with \(\Omega=1.8\), the system prefers to form off-center vortex (OCVX) states with nonzero \(R\) instead, such as shown in Figs. 1(a3), (b3), and (c3). For these deflected QDs, the cores of the vortices enlarge with the increase in the particle number, but are lower than those of the aforementioned VX states. Within weakly nonlinear interactions, this phenomenon can be attributed to the counterbalance between the centrifugal force and the strong compression provided by the anharmonic potential.
Figure 1: (color online) Density profiles and phase portraits for (a) \(N=20\), (b) \(N=60\), and (c) \(N=100\), corresponding to \(\Omega=0.6,1.4,1.8\) (top to bottom) and initial momentum \(p_{x}=0,p_{y}=0\) (middle), \(p_{x}=1.414,p_{y}=2.449\) (right).
In the middle and right columns, the color lines depict the spatial phase distribution in the cases \(\vec{p}(0)=0\) and \(\vec{p}(0)\neq 0\). For the CM state, the phase portraits in (a4) and (a7) display the wavefronts of plane waves, whereas for VX states the radial patterns in (b4), (c4), (a5), (b5), and (c5) indicate the number of \(2\pi\) phase windings, which just equals the corresponding topological charge number. However, the launch of a finite initial momentum produces strain on the surface of the QD, thus imprinting an additional phase upon its wavefunction and skewing the radial wavefronts, such as shown in (b7), (c7), (a8), (b8), and (c8). For the OCVX states, the emergence of the fork-like phase distributions, presented in (b6), (b9), (c6), and (c9), due to the displacement of the ring center reveals the spiral phase, analogous to the fork-like patterns observed in the grating diffraction of optical vortices, as shown in Fig. 2. Both QDs and optical vortices share a number of key features. In the paraxial approximation, the solution of the Helmholtz equation of a cylindrically symmetric optical vortex is the Laguerre-Gaussian function,
\[\begin{split} LG_{p\ell}(r,\phi,z)=&\sqrt{\frac{2 p!}{\pi(p+|\ell|)!}}\frac{1}{w(z)}\left[\frac{\sqrt{2}r}{w(z)}\right]^{|\ell|} \exp\left[\frac{-r^{2}}{w^{2}(z)}\right]L_{p}^{|\ell|}\left(\frac{2r^{2}}{w^{ 2}(z)}\right)\exp[i\ell\phi]\\ &\exp\left[\frac{ik_{0}r^{2}z}{2\left(z^{2}+z_{R}^{2}\right)} \right]\exp\left[-i(2p+|\ell|+1)\tan^{-1}\left(\frac{z}{z_{R}}\right)\right], \end{split} \tag{19}\]
where \(w(z)=w_{0}\sqrt{1+\left(z/z_{R}\right)^{2}}\) is the beam width with \(w_{0}\) being the beam waist, \(z_{R}\) the Rayleigh range, and \((2p+|l|+1)\tan^{-1}\left(z/z_{R}\right)\) the Gouy phase corresponding to the angular index \(l\) and the radial index \(p\). With the implementation of the spatial light modulator (SLM), we can easily generate high-order optical vortices and access their phase information. As demonstrated in Fig. 2(a), the density profile of the \(LG_{010}\) mode with intensity singularity is the first-order diffraction at the focal length, whereas the phase singularities in (b) and (c) are revealed by the interference of the \(LG_{010}\) mode with a Gaussian spherical wave and a Gaussian plane wave, respectively. Since many similarities between the simulated and experimental results of the two systems can be identified, the optical vortex can be thought of as the classical counterpart of the rotating QD. The emergence of the intensity and phase singularities in rotating QDs provides evidence of the controllable generation of nonzero OAM upon atoms, and further the possibility of synthesizing orbital magnetism in quantum liquids. Therefore, as Figs. 1(a3), (a6), and (a9) show, when the particle number is not high enough to stably support the formation of a multiply charged vortex, the rotation-induced phase singularity in the tiny vortex core could be diminished as the translational effect is involved, resulting in the hindering of the persistent current generation and destruction of the superfluidity. This feature, which can be verified later in the phase diagram, thus suggests a critical particle number \(N_{c}\) for the formation of a stable QD vortex in the weakly interacting and dilute liquids, below which there can be no extrinsic magnetic dipole moment nor observation of paramagnetism.
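For a side-by-side comparison with the rotating-droplet portraits, the field of Eq. (19) can be generated numerically as well. The sketch below specializes to \(p=0\) (so the associated Laguerre polynomial reduces to unity) and uses assumed beam parameters \(w_{0}\) and \(k_{0}\).

```python
import numpy as np
from math import factorial

def lg_mode(r, phi, z, p=0, ell=10, w0=1.0, k0=2 * np.pi):
    # Laguerre-Gaussian field of Eq. (19), specialized to p = 0 (L_0^{|l|} = 1)
    zR = k0 * w0**2 / 2                      # Rayleigh range
    w = w0 * np.sqrt(1 + (z / zR)**2)
    amp = (np.sqrt(2 * factorial(p) / (np.pi * factorial(p + abs(ell)))) / w
           * (np.sqrt(2) * r / w)**abs(ell) * np.exp(-r**2 / w**2))
    gouy = (2 * p + abs(ell) + 1) * np.arctan(z / zR)
    phase = ell * phi + k0 * r**2 * z / (2 * (z**2 + zR**2)) - gouy
    return amp * np.exp(1j * phase)

r = np.linspace(0, 5, 200)
field = lg_mode(r, 0.0, 0.5)
print(np.abs(field).max())   # ring-shaped intensity with a central (intensity) singularity
```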
### Phase Diagram
Figure 2: (color online) (a) Generation of the optical vortex of \(LG_{010}\) mode with intensity singularity using SLM. The phase singularities can be observed from the interference of the \(LG_{010}\) mode with (b) a Gaussian spherical wave and (c) a Gaussian plane wave
Figure 3(a) shows the \(N-\Omega\) phase diagram of the system's ground state with the inclusion of the maximum circulation restriction, in which (i) to (iv) depict the regions for CM, VX, OCVX, and off-center-of-mass (OCM) states, respectively. In Fig. 3(b), the blue and red curves show the expectation value of the single-particle OAM \(\langle L_{z}\rangle=m+2|Z_{0}|^{2}\) as a function of \(\Omega\) with and without following the restriction of maximum circulation, which can be distinguished by the appearance of an extended plateau. The inset also shows the energy variation as a function of \(\Omega\) for the \(m=0\) CM state and \(m=1,3,\ldots,11\) VX states. With the increase of \(\Omega\), the phase transition from the CM state to the VX state with large multiple singularities and lower energies occurs to maintain the QD's stability. Across the boundary of (ii) and (iii), the phase transition from the VX state to the OCVX state occurs. The QDs attempt to remain stable at extremely high rotational speed \(\Omega\) by abruptly lowering their rotational inertia about the CM of the atoms via shrinking the vortex core size, while raising the rotational kinetic energy by configuring themselves as off-axis vortices. Otherwise, the formation of VX states with huge cores far beyond the theoretical limit and squeezed atomic distribution, such as the density profile shown in Fig. 3(b), where \(\Omega=1.8\), would effectively bear an inter-attractive interaction, and thus is only metastable and fragile against three-body collisions. As a comparison, stable OCM states can be formed in the few-particle and fast-rotation regime (iv). However, when the particle number is low, the QDs of ultra-dilute liquids can be stably sustained if they are energetically more favorable than the atomic cloud subjected to an effective 2D MF interaction given by
\[V^{2D}(\vec{r})=\frac{\sqrt{8\pi}\hbar^{2}a_{s}}{Ml_{z}}\delta^{2D}(\vec{r}). \tag{20}\]
The presence of OCM states near \(\Omega=1\) in Fig. 3(a) is a signature of dynamic instability: a gas-liquid phase transition can be induced by particle fluctuations.
It should be noticed that for certain \(N\) and \(\Omega\), the stability condition
\[\frac{\partial^{2}E}{\partial\rho^{2}}\frac{\partial^{2}E}{\partial R^{2}}- \left(\frac{\partial^{2}E}{\partial\rho\partial R}\right)^{2}>0 \tag{21}\]
must be satisfied when determining the QD's ground state. Therefore, while the CM states in the region (i) are found to violate the inequality, the system would try to maintain its stability by removing the atoms from the dense peak to pinning on the trap center, resulting in the creation of the embedded vortex with multiple circulations when initiating a rotation. Although it was reported that the multiple singly quantized vortex clusters can be created in the QDs at slow rotations [36], they are metastable and cause the deformation of the QDs.
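The determinant condition of Eq. (21) is straightforward to evaluate by finite differences for any candidate energy surface \(E(\rho,R)\). In the sketch below, the quadratic 'toy_energy' is only a stand-in for the variational energy of Sec. II.

```python
import numpy as np

def satisfies_eq21(E, rho, R, h=1e-4):
    # second derivatives of E(rho, R) by central finite differences
    d2_rho = (E(rho + h, R) - 2 * E(rho, R) + E(rho - h, R)) / h**2
    d2_R = (E(rho, R + h) - 2 * E(rho, R) + E(rho, R - h)) / h**2
    d2_mix = (E(rho + h, R + h) - E(rho + h, R - h)
              - E(rho - h, R + h) + E(rho - h, R - h)) / (4 * h**2)
    return d2_rho * d2_R - d2_mix**2 > 0   # the stability condition of Eq. (21)

toy_energy = lambda rho, R: (rho - 2.0)**2 + 0.5 * R**2 + 0.1 * rho * R
print(satisfies_eq21(toy_energy, 2.0, 0.0))   # True for this convex stand-in
```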
### Periodicity and Stability
We can trace the QD's dynamics by solving the multiple coupled equations of motion given by the Euler-Lagrange equation
\[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{q}}\right)-\left(\frac{ \partial L}{\partial q}\right)=0 \tag{22}\]
for the characteristic parameters \(q=\rho,\alpha,\vec{R}\), and \(\vec{P}\). Because of the lack of rotational kinetic energy, the perturbed CM state shows its instability in the short-term evolution and eventually crashes when it escapes from the trap confinement (Fig. 4(a)). With the increase in rotational speed, the dominant VX state shows its stability by adjusting itself to the lowest-Landau level (Fig. 4(b)). At rapid rotations, Fig. 4(c) shows that OCVX maintains stability by lowering the OAM but shrinking itself to steadily precess against the perturbation.
Figure 3: (color online) (a) The \(N-\Omega\) phase diagram, in which (i) to (iv) depict the regions for CM, VX, OCVX, and OCM states, respectively. For \(N=60\), the blue and red curves in (b) show the expectation value of the OAM \(\langle L_{z}\rangle\) as a function of \(\Omega\) with and without following the restriction of maximum circulation. At \(\Omega=1.8\), the density profile of the VX state with a huge core reveals a metastable contrast of a stable OCM state formed in the few particle regimes. The inset shows the energy variation as a function of \(\Omega\) for the \(m=0\) CM state and \(m=1,3,\dots,11\) VX states.
At \(\Omega=1.0\), the externally-applied effective centripetal force for an orbital motion vanishes, leaving a nonzero Coriolis force induced by the velocity variation in the zonal direction that launches a self-curing rotational motion for the QD. As shown in Fig. 4(d), the quasi-periodic trajectories and breathings provide evidence of the emergence of the collective excitation of the surface mode in the VX state. In the presence of the anharmonic trapping and the nonlinear effects, the orbit of the QD does not strictly close upon itself after a finite number of oscillations, and neither does it open. Instead, as the lobes reveal, the evolution of the QD shows a multiple-periodic motion between the turning points. As for a periodic motion driven by a central-force field, the angle swept by the radius vector over one radial period is still a rational fraction of \(2\pi\), namely \(2\pi(a/b)\): after \(b\) periods, the radius vector of the QD will have made \(a\) complete revolutions and will have returned to its original position.
For strongly interacting cases, the vortices are much smaller than the system size and the radius of curvature of the density profile. The results of homogeneous systems then apply to support the formation of vortex arrays. As a result, for a fixed angular momentum, the wave functions are written as the sum of noninteracting particle states of different angular momenta rather than the ansatz as Eq. (4) presents. On the other hand, in the region containing multiple quantized vortices, the vortex cores are large and the vortex density is suppressed such that the individual vortices become indiscernible. Accordingly, although the multiple quantized vortices carrying high topological charges are not thermodynamically stable and energetically favorable in the homogeneous superfluids or harmonic-confined systems, the periodicity and stability analyses in this work demonstrate that the vortex configurations in anharmonically-trapped rotating QDs can be stably supported.
Figure 4: (color online) Trajectory \(\vec{R}\) and \(\vec{P}\), width \(\rho\), and conjugate curvature coefficient \(\alpha\) for (a)-(c) \(N=20\) and \(\Omega=0.6\), \(1.4\), and \(1.8\), (d) \(N=100\) and \(\Omega=1.0\).
### Persistent Currents
Rotation at angular velocity \(\Omega\) can be regarded as a perturbation to the nonrotating system Hamiltonian:
\[-\vec{\Omega}\cdot\vec{L}=-M\sum_{\mathbf{q}}\vec{j}_{\mathbf{q}}^{\dagger}\cdot\vec{A}_{\mathbf{q}}, \tag{23}\]
where \(\vec{j}_{\mathbf{q}}^{\dagger}\) is the particle current density fluctuation and \(\vec{A}_{\mathbf{q}}\) is the Fourier transform of the transverse artificial vector potential. With the application of linear response theory, this fluctuation will give rise to the mass current in the condensate frame. For low-energy scattering around \(\mathbf{q}=0\), the current density in the coordinate representation can be described in terms of the condensate wavefunction,
\[\vec{j}=\frac{\hbar}{2Mi}\left[\psi^{*}(\vec{r})\vec{\nabla}\psi(\vec{r})-\psi( \vec{r})\vec{\nabla}\psi^{*}(\vec{r})\right], \tag{24}\]
The substitution of a general order parameter \(\psi(\vec{r})=f(\vec{r})e^{i\phi(\vec{r})}\) in Eq. (24) gives \(\vec{j}=(\hbar/M)\vec{\nabla}\phi(\vec{r})|\psi(\vec{r})|^{2}\), which implies that the motion of the condensate governed by the velocity field \(\vec{v}(\vec{r})=(\hbar/M)\vec{\nabla}\phi(\vec{r})\) is a potential flow and is irrotational unless there exist some singularities in the local phase \(\phi(\vec{r})\). As previously mentioned, the VX and OCVX states that carry nonzero OAM are similar to the optical vortices of Laguerre-Gaussian modes with phase singularity and intensity singularity in the optical field. Moreover, while the quantum number \(m\) of the topological charge is found equal to the electromagnetic field angular momentum flux density per photon with respect to the transverse position, the circulation in the vortex beams is specified to be fluid-like. In analogy with the photon beams, the emergence of embedded singularities in the neutral atoms thus provides the opportunity to examine the coherence property and the superfluidity that are associated with the two-body reduced density matrix via investigating the persistent current (PC) generation in the QDs.
Described by the wave function of Eq. (5), the dimensionless total current density of the QD consists of four parts:
\[\vec{j}=g_{m}(\vec{r},\vec{R})\left[4m(\vec{r}-\vec{R})^{-2}\left[-\left(y-Y_{0}\right)\hat{x}+\left(x-X_{0}\right)\hat{y}\right]+\left[\alpha(x-X_{0})+p_{x}-Y_{0}\right]\hat{x}+\left[\alpha(y-Y_{0})+p_{y}+X_{0}\right]\hat{y}\right], \tag{25}\]
in which \(g_{m}(\vec{r},\vec{R})=4^{-m}\,C_{m}^{2}\,(\vec{r}-\vec{R})^{2m}\exp[-(\vec{r}-\vec{R})^{2}/2\rho^{2}]\). As expected, Eq. (25) reveals that the generation of PCs can only be observed in the quantum states specified with nonzero circulation. The phase shifts due to the MF expansion and the relative repulsion of the wavepacket make no contribution to the generation of the PCs, but would alter the flow of the ordinary fluids. In Fig. 5 we demonstrate the snapshots of the persistent current density distribution at \(t=0\) for \(N=100\) and (a) \(\Omega=0.6\), (b) \(\Omega=1.4\), and (c) \(\Omega=1.8\). For each column, the upper panel corresponds to \(\vec{p}(0)=0\), and the lower one corresponds to \(\vec{p}(0)\neq 0\). At slow rotation speeds, the directional swirling of the dense atomic flow is a result of the Coriolis force effect. In experiments of vortex creation for the two-component dilute BEC gases, where the self-interaction of one component is different from the other, the Hamiltonian is assumed to be invariant under simultaneous rotation of all the hyperfine states. In that work, the generation of annular PCs is attributed to the lack of the common kind of topological stability found in a single-component system such as He-4. The reasoning, however, does not apply to the generation of asymmetric PCs in this work, since the scattering strength imbalance among different species has already been considered in the effective LHY nonlinear interaction, and therefore there is no spin current generation in this work. The nonuniform distribution of the PCs of multiple topological charges is mainly attributed to the superposition of the gauge-induced azimuthal and linear flows, and at \(t>0\), the nonhomogeneity can be enhanced or reduced according to the wavepacket breathing. Together with the nonlinear-modulated self-bound effect, the PC is established as a partially coherent flow with mixed ideal and rigid-like fluids. As a result, the imbalance in vorticity will produce a shear force in the atomic cloud, providing strong support for driving the complex revolutions demonstrated in Fig. 4.
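Current-density maps of the kind shown in Fig. 5 can be generated directly from Eq. (25). The parameter values in the sketch below are illustrative assumptions, not the ones used for the figure.

```python
import numpy as np
from math import factorial

def current_density(x, y, N=100, m=3, rho=2.0, R=(1.0, 0.0), alpha=0.0, p=(0.0, 0.0)):
    # Dimensionless current density of Eq. (25) on a Cartesian grid
    X0, Y0 = R
    px, py = p
    dx, dy = x - X0, y - Y0
    r2 = dx**2 + dy**2 + 1e-12                      # avoid the singular core point
    Cm2 = N * 2**(m + 1) / (np.pi * factorial(m)) / rho**(2 * (m + 1))
    g = 4.0**(-m) * Cm2 * r2**m * np.exp(-r2 / (2 * rho**2))
    jx = g * (4 * m * (-dy) / r2 + alpha * dx + px - Y0)
    jy = g * (4 * m * dx / r2 + alpha * dy + py + X0)
    return jx, jy

x = y = np.linspace(-8, 8, 161)
X, Y = np.meshgrid(x, y)
jx, jy = current_density(X, Y)                      # ready for a quiver plot
print(np.max(np.hypot(jx, jy)))
```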
## IV Conclusion
This work investigated the dynamics of trapped rotating QDs with the nonlinear LHY interaction. The stationary properties were studied using the variational method by minimizing the energy functional based on a Laughlin-like wavefunction with the LLM. We explored the density profile, phase portrait, and phase diagram of the CM state, VX state, OCVX state, and OCM state. We also analyzed the periodicity and stability of the QD's breathing and trajectory. The rotating QD can be thought of as the quantum counterpart of the optical vortex. The emergence of the intensity and phase singularities in rotating QDs provides evidence of the controllable generation of nonzero OAM upon atoms, and of the possibility of synthesizing orbital magnetism in quantum liquids. With the increase in the rotational speed,
the VX states tend to occupy the lowest Landau level to maintain their stability, which is consistent with the QH phenomenon. At rapid rotations, the OCVX state with lower \(m\) and smaller core size becomes energetically favorable, showing that it can precess steadily against the perturbation. At \(\Omega=1.0\), the externally applied effective centripetal force for an orbital motion vanishes, leaving a nonzero Coriolis force induced by the velocity variation in the zonal direction that launches a self-curing rotational motion for the QD. The quasi-periodic trajectories and breathings provide evidence for the emergence of the collective excitation of the surface mode in the VX state. As a signature of superfluids, the generation of nonuniform PCs of multiple topological charges is attributed to the superposition of the gauge-induced azimuthal and linear flows and reveals the effect of nonlinear modulation. Instead of forming metastable vortex lattices, our work verifies that QDs with multiple topological charges can be stably supported in the anharmonically confined, rapidly rotating system.
## V Acknowledgement
We thank the Ministry of Science and Technology, Taiwan for partial financial support under grants MOST 110-2221-E-845-004 and MOST 111-2112-M-034-001.
|
2304.03569 | Dynamics of a solitonic vortex in an anisotropically trapped superfluid | We analytically study the dynamics of a solitonic vortex (SV) in a superfluid
confined in a non-axisymmetric harmonic trap. The study provides a framework
for analyzing the role of the trap anisotropy in the oscillation of SVs
observed in recent experiments on atomic Bose and Fermi superfluids. The
emergence of common and statistics-dependent features is traced in a unified
approach to both types of fluid. Our description, built in the hydrodynamic
formalism, is based on a Lagrangian approach which incorporates the vortex
location as dynamical parameters of a variational ansatz. Previous operative
Hamiltonian pictures are recovered through a canonically traced procedure. Our
results improve the understanding of the experimental findings. Some of the
observed features are shown to be specific to the tri-axial anisotropy of the
trap. In particular, we characterize the nontrivial dependence of the
oscillation frequency on the trapping transversal to the vortical line. The
study reveals also the crucial role played by the nonlinear character of the
dynamics in the observed oscillation: for the considered experimental
conditions, the frequency, and, in turn, the effective inertial mass of the
vortex, are found to significantly depend on the amplitude of the generated
motion. It is also uncovered how the coupling with collective modes of the
fluid induces a non-negligible shift in the oscillation frequency. The
appearance of fine-structure features in the SV trajectory is predicted. | J. M. Gomez Llorente, J. Plata | 2023-04-07T10:04:57Z | http://arxiv.org/abs/2304.03569v1 | # Dynamics of a solitonic vortex in an anisotropically trapped superfluid
###### Abstract
We analytically study the dynamics of a solitonic vortex (SV) in a superfluid confined in a non-axisymmetric harmonic trap. The study provides a framework for analyzing the role of the trap anisotropy in the oscillation of SVs observed in recent experiments on atomic Bose and Fermi superfluids. The emergence of common and statistics-dependent features is traced in a unified approach to both types of fluid. Our description, built in the hydrodynamic formalism, is based on a Lagrangian approach which incorporates the vortex location as dynamical parameters of a variational ansatz. Previous operative Hamiltonian pictures are recovered through a canonically traced procedure. Our results improve the understanding of the experimental findings. Some of the observed features are shown to be specific to the tri-axial anisotropy of the trap. In particular, we characterize the nontrivial dependence of the oscillation frequency on the trapping transversal to the vortical line. The study reveals also the crucial role played by the nonlinear character of the dynamics in the observed oscillation: for the considered experimental conditions, the frequency, and, in turn, the effective inertial mass of the vortex, are found to significantly depend on the amplitude of the generated motion. It is also uncovered how the coupling with collective modes of the fluid induces a non-negligible shift in the oscillation frequency. The appearance of _fine-structure_ features in the SV _trajectory_ is predicted.
## I Introduction
The dynamics of solitonic excitations in trapped Bose-Einstein condensates (BECs) have been the subject of intense research in the last decades [1; 2; 3; 4; 5; 6; 7; 8; 9]. A central objective of the theoretical work has been the characterization of the effect of trapping on structures identified in uniform environments. Significant advances have been made: different soliton-like solutions, with topological forms depending on the confining characteristics, have been found for the Gross-Pitaevskii (GP) equation. Moreover, the occurrence of dynamical or energetic instability, which depends on the trapping conditions, in particular on the effective dimensionality, has been traced. Instability-induced decay sequences connecting diverse types of solutions have been described [10; 11; 12; 13; 14]. For specific confining properties, planar dark solitons (PDSs), vortex rings (VRs), and SVs are present in a decay cascade initiated via _snake_ instability [15; 16; 14]. Parallel advances have taken place in the experimental area: various techniques have been implemented for the direct observation and the controlled generation of the structures [1; 2; 5; 6; 8; 17; 18]. The research, initially focused on bosonic fluids, has been extended to superfluid Fermi gases [19; 20; 21; 22; 23; 24], where the distinctive aspects of coherence and interactions imply facing additional, fundamental and technical, problems. In particular, the evaluation of the potential role of the fluid statistics in the appearance of differential characteristics of the soliton-like excitations is required. Here, we deal with some recently uncovered aspects of this problem: experiments realized by different groups have revealed nontrivial features of the dynamics of SVs which seem to be common to fermionic and bosonic superfluids. In those experiments, originally intended for the controlled production of PDSs in a fermionic superfluid in the BEC-BCS crossover [21; 22; 23] and in a BEC [25; 18][26], oscillating long-lived SVs were detected. In fact, the presence of SVs was inferred from the characterization of the observed oscillatory motion, specifically, from tracing unexpectedly large values of the effective inertial masses. Subsequently, the conclusive identification of the structures as SVs was achieved in the fermionic case through the implementation of direct tomographic imaging. Tomographic techniques also allowed observing instability cascades, where, in agreement with the predictions, PDSs were found to decay into VRs, which, in turn, evolved into SVs [22]. The SV character of the structures was also corroborated in one of the bosonic implementations: observed in free-expansion images, a twisted planar density depletion around the vortex line and phase dislocations in the interference pattern were identified as distinctive
SV signatures [18]. Afterwards, a stroboscopic technique was used to monitor the real-time dynamics [27]. The analyses of the experiments have incorporated numerical simulations based on the GP equation and hydrodynamical approaches [18, 28],[21]. With them, some of the experimental findings have been approximately reproduced. Numerical results were also presented to support the applicability of the technique of control implemented in [26]. Despite those achievements, additional work on the understanding of the experimental results seems necessary. We will focus on three issues that require further clarification. First, an accurate characterization of the role of the trap anisotropy is needed: as emphasized in [26], the lack of models that incorporate the triaxial anisotropy present in some of the practical setups implies that no precise reference values of the oscillation frequencies are available (the associated analyses were based on the approximate applicability of models set up for axisymmetric trapping to the actual anisotropic confinement). Second, removing the limitations of the linear approximation employed in some of the descriptions is essential: the magnitude of the observed amplitudes demands the nonlinear character of the dynamics to be explicitly taken into account. It is worth stressing that, in the nonlinear regime, since the oscillation frequency, and, in turn, the (effective) inertial mass are amplitude dependent, their measured values cannot be identified as intrinsic characteristics of the structures. Third, to achieve a detailed characterization of the system dynamics, some restrictions of the applied formalism must be overcome: although the use of effective Hamiltonian approaches built from approximate expressions for the free energy of the SV has served to understand salient aspects of the dynamics, a more complete description, where second-order effects can be included, is required. To deal with those issues, we generalize the approach presented in [29] to describe the precession of vortex lines in BECs. In this seminal work, the ansatz for the condensate wave function incorporates the phase of a quantum vortex line and the ground-state Thomas-Fermi density. Additionally, the vortex core is modeled by considering a zero-density region around the vortex line with a width given by the healing length. Corrections to such an ansatz will be estimated in the present work. Our results will show the basic approach to be rather accurate. We build a general framework where the evolution of SVs in Bose or Fermi superfluids can be analyzed. Focusing on the dynamics subsequent to the SV formation, we will proceed by setting up a variational scheme where the vortex position will be incorporated as dynamical parameters of the ansatz. In this approach, diverse characteristics of the setups, like different regimes of trap anisotropy or a broad range
of oscillation amplitudes, can be addressed. To account for second-order effects, the trial _wave-function_ will be generalized along two lines. First, we will assess the role of additional degrees of freedom of the vortex motion associated to (potentially realizable) sets of initial conditions. Subsequently, the coupling with collective modes of the fluid will be evaluated. The resulting framework will enable us to improve the agreement with the experimental results and predict the appearance of nontrivial _fine-structure_ features.
The outline of the paper is as follows. In Sec. II, we present our model system. The variational method used to characterize the dynamics is introduced in Sec. III. As a proof of consistency, we connect with previous operative approaches by presenting a completely traced application of the Hamiltonian formalism. In Sec. IV, the general dynamical equations are particularized to the cases of a BEC and of a superfluid Fermi gas of atoms in the BEC-BCS crossover [30; 31; 32]. Additional information on the dynamics, extracted from the generalization of the variational ansatz, is discussed in Sec. V. Some details of the application of our approach to the considered experiments are given in Sec. VI. Finally, the general conclusions are summarized in Sec. VII.
## II The model system
We consider an atomic Bose or Fermi superfluid characterized by an order parameter \(\Psi(\mathbf{r},t)=\sqrt{\rho(\mathbf{r},t)}e^{iS(\mathbf{r},t)}\) [\(\rho(\mathbf{r},t)\) and \(S(\mathbf{r},t)\) are respectively the density and phase of the fluid] which is assumed to obey the nonlinear Schrödinger (NLS) equation
\[i\hbar\frac{\partial\Psi(\mathbf{r},t)}{\partial t}=\left[-\frac{\hbar^{2}}{2 M}\mathbf{\nabla}^{2}+V_{ex}(\mathbf{r})+\mu[\rho(\mathbf{r},t)]\right]\Psi( \mathbf{r},t). \tag{1}\]
The identification of the parameters in this equation depends on the bosonic or fermionic character of the fluid. For a Bose fluid, \(M\) denotes the mass \(m_{A}\) of a condensate atom, and \(\mu[\rho(\mathbf{r},t)]\) accounts for the interaction term, \(g\rho\), \(g\) being the coupling strength. Eq. (1) corresponds then to the GP equation [30]. On the other hand, for a fermionic fluid in the BEC-BCS crossover, \(M\) stands for the mass of a pair of atoms (\(M=2m_{A}\)) and the nonlinear term incorporates the equation of state of the fluid which expresses the chemical potential \(\mu\) as a function of the density. Moreover, assuming the applicability of the polytropic approximation [33; 34; 35], the nonlinear term is written as \(\mu[\rho(\mathbf{r},t)]=C\rho(\mathbf{r},t)^{\gamma}\), where the polytropic index \(\gamma\) is a characteristic of the (interaction-dependent) fluid regime, and \(C\) is a
constant, which is usually expressed in terms of reference values for the chemical potential and density. On the BEC side of the crossover, the polytropic index takes the value \(\gamma=1\) [Eq. (1) can again be identified with the GP equation]. Additionally, in the unitary regime [21; 31; 32], the dynamics can be expected to be modeled by taking \(\gamma=2/3\). Since no formal differences exist between the description of the bosonic case and that of the fermionic superfluid in the (molecular) BEC regime, we will account for them in a unified way.
Emulating the practical setups referred to above, a confining nonaxisymmetric harmonic potential \(V_{ex}(\mathbf{r})\) is considered, i.e.,
\[V_{ex}(x,y,z)=\frac{1}{2}\left(k_{x}x^{2}+k_{y}y^{2}+k_{z}z^{2}\right), \tag{2}\]
where \(k_{i}\) (\(i\equiv x,y,z\)) denote the force constants of the trap, the corresponding frequencies being \(\omega_{i}=\sqrt{\frac{k_{i}}{m_{A}}}\). As in the experimental arrangements [21; 26], we consider cigar-shaped traps with transversal anisotropy; specifically, it is assumed that \(k_{y}>k_{x}\gg k_{z}\). The SVs were observed to be oriented along the shortest confining direction (the \(OY\) axis), which was argued to be a consequence of energetic instability. We will tackle this issue further on. Additionally, we will show that the trap frequencies that directly affect the observed precession frequencies are those transversal to the SV orientation, i.e., \(\omega_{x}\) and \(\omega_{z}\).
To obtain stationary solutions to Eq. (1), we make the change
\[\Psi(\mathbf{r},t)=\sqrt{\rho(\mathbf{r})}e^{-\frac{i}{\hbar}\bar{\mu}t}, \tag{3}\]
where \(\bar{\mu}\) stands for the effective (bulk) chemical potential. In the resulting (stationary) equation, the Thomas-Fermi (TF) approximation, which corresponds to neglecting the kinetic-energy term, implies making
\[V_{ex}(\mathbf{r})+\mu[\rho(\mathbf{r})]=\bar{\mu}. \tag{4}\]
Then, taking into account the polytropic approximation to \(\mu[\rho(\mathbf{r})]\), the stationary fluid density in the TF regime can be written as
\[\rho(\mathbf{r})=\left|\Psi(\mathbf{r})\right|^{2} = C^{-1/\gamma}\left[\bar{\mu}-V_{ex}(\mathbf{r})\right]^{1/\gamma} \tag{5}\] \[= \rho_{0}\left[1-\frac{V_{ex}(\mathbf{r})}{\bar{\mu}}\right]^{1/ \gamma},\]
for \(\bar{\mu}-V_{ex}({\bf r})\geq 0\), and, \(\rho({\bf r})=0\), otherwise. [We have written \(\rho({\bf 0})\equiv\rho_{0}\).] The TF radii \(R_{i}\), (\(i\equiv x,y,z\)), are given by
\[R_{i}=\sqrt{\frac{2\bar{\mu}}{k_{i}}}, \tag{6}\]
\(\bar{\mu}\) being obtained from normalization.
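As an illustration of this normalization step, the minimal sketch below integrates the TF density of Eq. (5) numerically for the bosonic case (\(\gamma=1\)) and solves \(N(\bar{\mu})=N\) by bracketing; the atom number, scattering length and trap frequencies are arbitrary values assumed only for the example.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative bosonic parameters (gamma = 1), SI units; these values are
# assumptions of this example, not the experimental ones.
hbar = 1.054571817e-34
m_A  = 3.82e-26                                   # atomic mass [kg]
a_s  = 2.75e-9                                    # s-wave scattering length [m]
g    = 4.0 * np.pi * hbar**2 * a_s / m_A          # coupling strength
N    = 1.0e6                                      # atom number
k    = m_A * (2.0 * np.pi * np.array([150.0, 220.0, 15.0]))**2   # force constants

def atom_number(mu, n=100):
    """Integrate the TF density of Eq. (5) (gamma = 1) over the trap volume."""
    R = np.sqrt(2.0 * mu / k)                     # TF radii, Eq. (6)
    X, Y, Z = np.meshgrid(*[np.linspace(-1.0, 1.0, n)] * 3, indexing="ij")
    dens = np.clip(mu * (1.0 - X**2 - Y**2 - Z**2) / g, 0.0, None)
    return dens.sum() * np.prod(R) * (2.0 / (n - 1))**3

# Normalization fixes the bulk chemical potential: solve N(mu_bar) = N,
# with mu_bar/h bracketed (in kHz) to keep the root-finding well scaled.
h = 2.0 * np.pi * hbar
mu_bar = h * 1e3 * brentq(lambda u: atom_number(h * 1e3 * u) - N, 1e-2, 1e3)
print("mu_bar/h [kHz]:", mu_bar / h / 1e3)
print("TF radii [um] :", np.sqrt(2.0 * mu_bar / k) * 1e6)
```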
The requirements for the validity of the present approach must be emphasized. First, the hydrodynamic description is expected to be valid at length scales much larger than the healing length. Second, the above NLS equation [Eq. (1)] is applicable provided that a local density approximation to the equation of state of the fluid is feasible. Those conditions are fulfilled in the mentioned experiments.
## III Description of the dynamics through a variational Lagrangian approach
### General dynamical equations
The NLS equation given by Eq. (1) with \(\mu[\rho({\bf r},t)]=C\rho({\bf r},t)^{\gamma}\) can be derived from the Lagrangian density
\[\mathcal{L}[\Psi]=i\frac{\hbar}{2}\left(\Psi^{*}\frac{\partial\Psi}{\partial t }-\Psi\frac{\partial\Psi^{*}}{\partial t}\right)-\frac{\hbar^{2}}{2M}\left| \boldsymbol{\nabla}\Psi\right|^{2}-V_{ex}\left|\Psi\right|^{2}-\frac{C}{ \gamma+1}\left|\Psi\right|^{2(\gamma+1)} \tag{7}\]
Indeed, the NLS equation is the Euler-Lagrange equation that is obtained by imposing the action
\[\mathscr{S}[\Psi]=\int_{t_{1}}^{t_{2}}dt\int d\mathbf{r}\mathcal{L}[\Psi] \tag{8}\]
to be stationary against infinitesimal variations \(\delta\Psi\) and \(\delta\Psi^{*}\) which fulfill \(\delta\Psi({\bf r},t_{1})=\delta\Psi({\bf r},t_{2})=0,\,\forall{\bf r}\). This Lagrangian formalism provides us with an appropriate framework for setting up a variational method [36]. First, an ansatz \(\Psi({\bf r},t;{\bf u})\) is proposed for the _wave function_. (A generic notation, \({\bf u}\), is used for the variational parameters.) Then, by introducing the ansatz into Eq. (7), and integrating, a Lagrangian function is obtained, i.e.,
\[L({\bf u},{\bf\dot{u}},t)=\int d\mathbf{r}\mathcal{L}\left[\Psi({\bf r},t;{ \bf u}),\Psi^{*}({\bf r},t;{\bf u})\right]. \tag{9}\]
Finally, the effective Lagrange's equations
\[\frac{d}{dt}\left(\frac{\partial L}{\partial\dot{\mathbf{u}}}\right)-\frac{ \partial L}{\partial\mathbf{u}}=0, \tag{10}\]
give the dynamics of the variational parameters, and, consequently, the time evolution of \(\Psi(\mathbf{r},t;u)\).
In order to describe the dynamics of the considered solitonic vortex, assumed to be aligned along the \(y\) direction, we use the ansatz
\[\Psi(\mathbf{r},t;x_{0},z_{0})=\left|\Psi(\mathbf{r})\right|e^{iS(x,z;t;x_{0}, z_{0})}, \tag{11}\]
where the variational parameters \(x_{0}\) and \(z_{0}\) correspond to the time-dependent location \(\mathbf{r_{0}}(t)=[x_{0}(t),z_{0}(t)]\) of the SV which we intend to characterize. For the phase profile, which must account for the circulating flow around the vortex, we write
\[S(x,z,t;x_{0},z_{0}) = \arctan\left(\frac{x-x_{0}}{z-z_{0}}\right)-\frac{\bar{\mu}t}{ \hbar} \tag{12}\] \[\equiv S_{v}(x,z;x_{0},z_{0})-\frac{\bar{\mu}t}{\hbar}.\]
Additionally, for \(\left|\Psi(\mathbf{r})\right|\), we take the background Thomas-Fermi (TF) expression, given through Eq. (5): as a first-order approximation, it is assumed here that the modification of the background density due to the presence of the vortex has a minor effect on the characterization of the parameter dynamics. Second-order effects will be evaluated in Section V, where we will use a more elaborate ansatz which incorporates changes in the density correlated with the phase proposal and accounts for the potential interplay of the vortex motion with collective modes of the fluid. These changes will also allow a more precise evaluation of the effects of the vortex core.
The dynamics of the parameters \(x_{0}\) and \(z_{0}\) are governed by the Lagrangian function obtained from Eq. (9), i.e.,
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0})=\int d\mathbf{r}[-\hbar\rho\frac{ \partial(S_{v}-\frac{\bar{\mu}t}{\hbar})}{\partial t}-\frac{\hbar^{2}}{2M}( \left|\mathbf{\nabla}\rho^{1/2}\right|^{2}+\rho\left|\mathbf{\nabla}S\right|^{2})-V_{ ex}\rho-\frac{C\rho^{\gamma+1}}{\gamma+1}] \tag{13}\]
where we have taken into account that
\[i\frac{\hbar}{2}\left(\Psi^{*}\frac{\partial\Psi}{\partial t}-\Psi\frac{ \partial\Psi^{*}}{\partial t}\right)=-\hbar\rho\frac{\partial S}{\partial t}, \tag{14}\]
and,
\[\left|\mathbf{\nabla}\Psi\right|^{2}=\left|\mathbf{\nabla}\rho^{1/2}+i\rho^{1/2}\mathbf{ \nabla}S\right|^{2}. \tag{15}\]
Given the form of the ansatz [see Eq. (11)], which incorporates the variational parameters only through the phase \(S_{v}(x,z;x_{0},z_{0})\), it is apparent that the terms associated with \(\bar{\mu}\), \(\left|\mathbf{\nabla}\rho^{1/2}\right|^{2}\), \(V_{ex}\rho\), and \(\frac{C}{\gamma+1}\rho^{\gamma+1}\) in Eq. (13) are not relevant to the effective Lagrange's equations as they do not introduce dependence on the parameters. (Furthermore, since the term \(\left|\mathbf{\nabla}\rho^{1/2}\right|^{2}\) is neglected in the considered TF approximation, it will still be ignored in the forthcoming generalization of the ansatz, in spite of the parameters being incorporated then, not only through the phase, but also via the density.) Hence, the Lagrangian function can be effectively reduced to
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0})=\int d\mathbf{r}\rho\left[-\hbar\frac{ \partial S_{v}}{\partial t}-\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S_{v}\right| ^{2}\right]. \tag{16}\]
The integration must exclude the region around the solitonic-vortex core (a cylinder of radius equal to the healing length), where the true condensate density can be accurately approximated by zero. This approximation, which has been used in all previous theoretical works on confined vortex lines, regularizes the integral. Using the specific functional form proposed for the phase in Eq. (12), we rewrite Eq. (16) as
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0}) = \int d\mathbf{r}\rho(\mathbf{r})\frac{\hbar\left[\dot{x}_{0}(z-z_ {0})-\dot{z}_{0}(x-x_{0})\right]-\frac{\hbar^{2}}{2M}}{(x-x_{0})^{2}+(z-z_{0}) ^{2}} \tag{17}\] \[\equiv f_{x}(x_{0},z_{0})\dot{x}_{0}+f_{z}(x_{0},z_{0})\dot{z}_{0}+F(x_ {0},z_{0}),\]
where, for convenience in the later application of the Hamiltonian formalism, we have introduced the functions
\[f_{x}(x_{0},z_{0}) = \hbar\int d\mathbf{r}\rho(\mathbf{r})\frac{(z-z_{0})}{(x-x_{0})^ {2}+(z-z_{0})^{2}}, \tag{18}\] \[f_{z}(x_{0},z_{0}) = -\hbar\int d\mathbf{r}\rho(\mathbf{r})\frac{(x-x_{0})}{(x-x_{0}) ^{2}+(z-z_{0})^{2}},\] (19) \[F(x_{0},z_{0}) = -\frac{\hbar^{2}}{2M}\int d\mathbf{r}\rho(\mathbf{r})\frac{1}{(x -x_{0})^{2}+(z-z_{0})^{2}}, \tag{20}\]
which have been evaluated using an approximate method of sequential integration applicable in the regime of strong anisotropy corresponding to the referred experiments, i.e., for \(\omega_{x}\gg\)
\(\omega_{z}\). Here, we recall that, in the realization of [21], which corresponded to a fermionic superfluid in the BEC-BCS crossover, the displacement of the center of the (cigar-shaped) trap due to gravitational effects led to a small difference between the values of the trap frequencies in the directions perpendicular to the longest trap axis. Consequently, the system was not axially symmetric, the anisotropy transversal to the vortical line being considerable. Also, the trap used in the experimental realization corresponding to an atomic BEC [26] was operated in a regime of strong anisotropy. We have obtained for the above integrals
\[f_{x}(x_{0},z_{0}) \simeq 0, \tag{21}\] \[f_{z}(x_{0},z_{0}) \simeq -\pi\hbar\int_{-x_{0}}^{x_{0}}\rho_{2D}(x,z_{0})dx,\] (22) \[F(x_{0},z_{0}) \simeq -\frac{\pi\hbar^{2}}{M}\rho_{2D}(x_{0},z_{0})\ln\left(\frac{R_{x} }{\xi}\right). \tag{23}\]
where \(R_{x}\) is the TF radius in the \(x\) direction [see Eq. (6)] and \(\xi\) represents the size of the vortex core, which, for both bosonic and fermionic fluids, can be approximated as \(\xi=\hbar/\sqrt{2M\bar{\mu}}\) within the TF picture. Note that \(\xi\) corresponds to the standard form of the healing length in the bosonic case. Moreover, we have used the column density along the vortex orientation, i.e.,
\[\rho_{2D}(x,z)=\int_{-y_{L}(x,z)}^{y_{L}(x,z)}\rho(x,y,z)dy. \tag{24}\]
where,
\[y_{L}(x,z)=+\frac{1}{k_{y}^{1/2}}\left[2\bar{\mu}-\left(k_{x}x^{2}+k_{z}z^{2} \right)\right]^{1/2} \tag{25}\]
is the limit value of the \(y-\)coordinate as a function of the other two variables. We have employed (alternative) methods of integration of general applicability to precisely define the range of validity of the obtained Lagrangian function. Namely, although exact values of the integrals present in Eqs. (18) and (19) have not been explicitly obtained, they can be shown to lead to the same Lagrangian function as the expressions given by Eqs. (21) and (22). Then, it follows that the approximation implemented to obtain \(f_{x}(x_{0},z_{0})\) and \(f_{z}(x_{0},z_{0})\) does not restrict the applicability of the description. Additionally, in order to improve the accuracy of Eq. (23), we have gone to the next precision order: we have calculated the contribution of the zero-order terms (i.e., the terms where the factor \(R_{x}/\xi\) is not present). That contribution will be incorporated through a numerical factor, the _effective zero-order parameter_\(c_{\gamma}^{(0)}\), in
the argument of the logarithmic function in Eq. (23), namely, we will write \(\ln(c_{\gamma}^{(0)}R_{x}/\xi)\). In Sec. V, we will see that the generalization of the description implies dealing with additional zero-order terms, which will be accounted for by appropriately modifying \(c_{\gamma}^{(0)}\). The final value of that parameter, which entirely incorporates the zero-order terms of the different extensions of the model, will be given in Sec. VI, when the specific application of the study to the experimental setups is discussed.
Using the explicit forms of \(f_{x}(x_{0},z_{0})\), \(f_{z}(x_{0},z_{0})\), and \(F(x_{0},z_{0})\), the Lagrangian function is written as
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0}) = -\pi\hbar\dot{z}_{0}\int_{-x_{0}}^{x_{0}}\rho_{2D}(x,z_{0})dx-\pi \frac{\hbar^{2}}{M}\rho_{2D}(x_{0},z_{0})\ln\left(c_{\gamma}^{(0)}\frac{R_{x} }{\xi}\right), \tag{26}\]
and, from Lagrange's equations, one obtains that the evolution of the vortex location is given by
\[\dot{x}_{0}=\frac{\hbar}{2M}\frac{\frac{\partial\rho_{2D}(x_{0},z_{0})}{ \partial z_{0}}}{\rho_{2D}(x_{0},z_{0})}\ln\left(c_{\gamma}^{(0)}\frac{R_{x} }{\xi}\right) \tag{27}\]
\[\dot{z}_{0}=-\frac{\hbar}{2M}\frac{\frac{\partial\rho_{2D}(x_{0},z_{0})}{ \partial x_{0}}}{\rho_{2D}(x_{0},z_{0})}\ln\left(c_{\gamma}^{(0)}\frac{R_{x}} {\xi}\right). \tag{28}\]
Here, it is apparent that much of the information on the dynamics is incorporated into the column density. It is via \(\rho_{2D}(x_{0},z_{0})\) that the anisotropy of the trap and the nonlinearity of the problem enter the equations. Moreover, in the present approach, the differences between the dynamics of the SV in bosonic and fermionic superfluids emerge mainly from the different form of the column density in each case. Later on, the specific functional form of \(\rho_{2D}(x_{0},z_{0})\) will be introduced and the characteristic frequency of the oscillation \(\Omega_{p}\) will be obtained. We will see that the presence of the quotient \((\partial\rho_{2D}/\partial z_{0})/\rho_{2D}\) [or \((\partial\rho_{2D}/\partial x_{0})/\rho_{2D}\)] in the above equations implies that \(\Omega_{p}\) does not explicitly depend on the trap frequency along the SV direction \(\omega_{y}\).
Some considerations on dimensionality are pertinent. The dynamical system formed by the set of variational parameters has only one degree of freedom: since the Lagrangian presents a linear dependence on the generalized velocities, the dynamics are given by two first-order equations. In consequence, only two initial conditions, e.g., the vortex coordinates \(x_{0}(t=0)\), \(z_{0}(t=0)\), are required. Note that the dimensionality constraints derive from the form chosen for the variational ansatz. A reduction in the set of generalized coordinates will be implemented in the next subsection.
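This structural point can be checked mechanically. The sketch below builds a Lagrangian with the velocity-linear form of Eq. (17), using arbitrary placeholder functions for \(f_{x}\), \(f_{z}\) and \(F\) (assumptions of the example, not the actual integrals), and verifies that no second time derivatives survive in the Euler-Lagrange equations.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols("t")
x0, z0 = sp.Function("x0")(t), sp.Function("z0")(t)

# Placeholder smooth functions standing in for f_x, f_z and F of Eqs. (18)-(20);
# their detailed form is irrelevant to the order of the resulting equations.
f_x = x0 * z0
f_z = sp.exp(-(x0**2 + z0**2))
F = sp.cos(x0) * z0**2

# Lagrangian with the structure of Eq. (17): linear in the generalized velocities
L = f_x * x0.diff(t) + f_z * z0.diff(t) + F

eqs = euler_equations(L, [x0, z0], t)
for eq in eqs:
    # Only first time derivatives survive: two first-order equations,
    # hence two initial conditions (the vortex coordinates) fix the motion.
    assert not eq.lhs.has(x0.diff(t, 2)) and not eq.lhs.has(z0.diff(t, 2))
    print(sp.simplify(eq.lhs), "= 0")
```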
### The Hamiltonian formalism
An operative Hamiltonian picture set up from an approximate expression for the free energy of the SV was presented in [21] and subsequently used in [11]. (The same technique had been applied to analyze the dynamics of a vortex ring in [37]; see also [38; 39] for alternative approaches.) To establish the connection with those descriptions, we give now a detailed account of the application of the Hamiltonian formalism to our model system.
Since building the Hamiltonian function from a redundant set of generalized coordinates can lead to inconsistencies, we turn to implement a dimensionality reduction prior to the derivation of Hamilton's equations. In order to present a general procedure, explicit expressions for the functions \(f_{x}(x_{0},z_{0})\), \(f_{z}(x_{0},z_{0})\), and \(F(x_{0},z_{0})\) will not be used. We start by rewriting the Lagrangian function that governs the dynamics: to the expression of \(L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0})\) given by Eq. (17), we add the total time derivative of a function \(G(x_{0},z_{0})\), which will be adjusted in order to eliminate one of the generalized velocities. (Without loss of generality we will remove \(\dot{x}_{0}\).) Hence, the _new_ Lagrangian function is written as
\[\tilde{L}(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0}) = f_{x}(x_{0},z_{0})\dot{x}_{0}+f_{z}(x_{0},z_{0})\dot{z}_{0}+F(x_ {0},z_{0})+\frac{d}{dt}G(x_{0},z_{0}) \tag{29}\] \[= f_{x}(x_{0},z_{0})\dot{x}_{0}+f_{z}(x_{0},z_{0})\dot{z}_{0}+F(x_ {0},z_{0})+\frac{\partial G}{\partial x_{0}}\dot{x}_{0}+\frac{\partial G}{ \partial z_{0}}\dot{z}_{0},\]
and, to eliminate \(\dot{x}_{0}\), we impose
\[\frac{\partial G}{\partial x_{0}}=-f_{x}(x_{0},z_{0}) \tag{30}\]
Then, \(G(x_{0},z_{0})\) is obtained as
\[G(x_{0},z_{0})=-\int f_{x}(x_{0},z_{0})dx_{0}, \tag{31}\]
and the Lagrangian function is converted into
\[\tilde{L}(x_{0},z_{0};\dot{z}_{0}) = \left[f_{z}(x_{0},z_{0})+\frac{\partial G}{\partial z_{0}}\right] \dot{z}_{0}+F(x_{0},z_{0}).\]
It follows that the canonical conjugate momentum of \(z_{0}\) is given by
\[p_{z_{0}}=\frac{\partial\tilde{L}}{\partial\dot{z}_{0}}=f_{z}(x_{0},z_{0})+ \frac{\partial G}{\partial z_{0}}, \tag{32}\]
and the Hamiltonian function is straightforwardly set up through a Legendre transformation. Namely,
\[H(z_{0},p_{z_{0}})=p_{z_{0}}\dot{z}_{0}-\tilde{L}=-F\left[z_{0},x_{0}(z_{0},p_{z_{ 0}})\right] \tag{33}\]
[We have written \(x_{0}(z_{0},p_{z_{0}})\) from Eq. (32): there are only two canonical conjugate variables, \(z_{0}\) and \(p_{z_{0}}\).] The expression obtained for \(H(z_{0},p_{z_{0}})\), particularized to the axisymmetric scenario, matches the effective Hamiltonian function operatively built from the free energy in previous approaches [21][11]. Actually, in those descriptions, the energy was evaluated from modeling the SV _wave-function_ in a form analogous to the ansatz proposed in our variational method. No _ad hoc_ introduction of the conjugate momentum is required in our approach: a completely canonical procedure is followed.
Hamilton's equations are given by the expressions
\[\dot{z}_{0}=\frac{\partial H}{\partial p_{z_{0}}}=-\frac{\partial F}{ \partial x_{0}}\frac{\partial x_{0}}{\partial p_{z_{0}}}, \tag{34}\]
\[\dot{p}_{z_{0}}=-\frac{\partial H}{\partial z_{0}}=\left[\frac{\partial F}{ \partial z_{0}}\right]_{x_{0}}+\frac{\partial F}{\partial x_{0}}\frac{ \partial x_{0}}{\partial z_{0}}, \tag{35}\]
which, after minor algebra, are shown to consistently reproduce the dynamical equations obtained via the Lagrangian formalism [Eqs. (27) and (28)]. This alternative view of the evolution of the parameters can be convenient for further studies where analogies with other dynamical systems can be established via canonical transformations.
At this point it is worth recalling that, in the referred experiments [26], [21], the SVs were found to be oriented along the shortest axis of the trap. Some insight into this finding can be achieved by analyzing the expression of the Hamiltonian [Eq. (33)] for a generic orientation of the vortex. It is shown that the lowest energy of the system corresponds indeed to the vortical line oriented along the shortest radius. Consequently, one can conjecture that there must be a damping mechanism which leads to the occurrence of that minimum-energy orientation. (Dissipation effects on related structures were studied in [40; 41; 42; 43; 44].) Closely connected with this aspect of the dynamics is the characterization of the initial conditions for the analyzed process. In fact, this is an open question: the possibility of preparing the initial state is limited as the SVs seem to appear as the (uncontrolled) final product of decay sequences which start with PDSs. Further restrictions on the initial conditions are present
if, as conjectured, the SVs experience a damping process leading to the minimum-energy orientation. Actually, from the potential effects of decay and damping, one can reasonably expect the emergence of a constrained scenario for the effective preparation of the SVs. The pertinence of a simple set of two initial conditions, e.g., the coordinates of the vortical line, as required in the above approach, seems to be corroborated by the general agreement of our basic picture with the experimental results. On the other hand, a potential realization where both, the initial positions and velocities of the structures, could be independently fixed would require an approach with a more elaborate ansatz where the dimensional reduction outlined in the above paragraphs would not be feasible. We will deal with this issue in Sec. V.
## IV The role of the fluid statistics in the vortex dynamics
In our picture, the differential characteristics of the SV dynamics in bosonic and fermionic superfluids are rooted in the corresponding different values of the polytropic index of the applied NLS equation. Furthermore, given the form of the ansatz used in the variational method, \(\gamma\) enters the description through the background density \(\rho({\bf r})\), more specifically, via the column density \(\rho_{2D}(x,z)\) and the effective factor \(c_{\gamma}^{(0)}\) in Eqs. (27) and (28). To make explicit the differences between the bosonic and fermionic cases, we evaluate the precise functional form of that density: using the scaled variables
\[X=\frac{x}{R_{x}},\quad Y=\frac{y}{R_{y}},\quad Z=\frac{z}{R_{z}}, \tag{36}\]
the expression of the TF density in the bosonic case (i.e., for, both, bosonic atoms and fermionic atoms in the BEC regime) is written as
\[\rho_{B}(X,Y,Z)=\rho_{0}\left(1-X^{2}-Y^{2}-Z^{2}\right). \tag{37}\]
On the other hand, in the fermionic case at the unitarity regime, the TF density reads
\[\rho_{F}(X,Y,Z)=\rho_{0}\left(1-X^{2}-Y^{2}-Z^{2}\right)^{3/2}. \tag{38}\]
Both expressions are applicable in the range defined by \(1\geq X^{2}+Y^{2}+Z^{2}\), the density being zero outside that range. The respective column densities, \(\rho_{2D,B}(X,Z)\) and \(\rho_{2D,F}(X,Z)\), are in turn given by
\[\rho_{2D,B}(X,Z)=\frac{4}{3}R_{y}\rho_{0}\left(1-X^{2}-Z^{2}\right)^{3/2}, \tag{39}\]
and
\[\rho_{2D,F}(X,Z)=\frac{3}{8}\pi R_{y}\rho_{0}\left(1-X^{2}-Z^{2}\right)^{2}. \tag{40}\]
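As a quick numerical consistency check (an illustration only), the sketch below integrates the scaled TF profiles of Eqs. (37) and (38) along \(Y\) and compares the result with the closed forms of Eqs. (39) and (40).

```python
import numpy as np
from scipy.integrate import quad

def col_density(X, Z, gamma):
    """Integrate the scaled TF profile along Y; result in units of R_y * rho_0."""
    u = 1.0 - X**2 - Z**2
    if u <= 0.0:
        return 0.0
    YL = np.sqrt(u)                                   # integration limit, Eq. (25)
    val, _ = quad(lambda Y: (u - Y**2) ** (1.0 / gamma), -YL, YL)
    return val

X, Z = 0.3, 0.4
u = 1.0 - X**2 - Z**2
print(col_density(X, Z, 1.0),      4.0 / 3.0 * u**1.5)        # Eq. (39), bosonic
print(col_density(X, Z, 2.0/3.0),  3.0 * np.pi / 8.0 * u**2)  # Eq. (40), unitary Fermi
```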
Introducing those expressions into Eqs. (27) and (28), we obtain the evolution of the vortex location, which can be expressed in compact form, for both \(\gamma=1\) and \(\gamma=2/3\), as
\[\dot{X}_{0}=-\frac{2\gamma^{-1}+1}{4}\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x} k_{z}}}{M}\ln\left(c_{\gamma}^{(0)}\frac{R_{x}}{\xi}\right)\frac{Z_{0}}{1-X_{0}^ {2}-Z_{0}^{2}} \tag{41}\]
\[\dot{Z}_{0}=\frac{2\gamma^{-1}+1}{4}\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x} k_{z}}}{M}\ln\left(c_{\gamma}^{(0)}\frac{R_{x}}{\xi}\right)\frac{X_{0}}{1-X_{0}^ {2}-Z_{0}^{2}} \tag{42}\]
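The substitution can also be verified symbolically. The sketch below, restricted to the bosonic case and with the logarithmic factor kept as a constant symbol, differentiates the column density of Eq. (39) as prescribed by Eq. (27) and confirms that the scaled form of Eq. (41) is recovered.

```python
import sympy as sp

# Symbolic check (bosonic case, gamma = 1) that inserting the column density of
# Eq. (39) into Eq. (27) reproduces the scaled Eq. (41); 'lnf' stands for the
# logarithmic factor ln(c * R_x / xi), kept as a constant symbol.
hbar, M, mu, kx, kz, lnf = sp.symbols("hbar M mu k_x k_z lnf", positive=True)
x0, z0 = sp.symbols("x_0 z_0", real=True)

Rx, Rz = sp.sqrt(2 * mu / kx), sp.sqrt(2 * mu / kz)            # TF radii, Eq. (6)
rho2D = (1 - x0**2 / Rx**2 - z0**2 / Rz**2) ** sp.Rational(3, 2)  # Eq. (39), up to a constant

xdot = hbar / (2 * M) * sp.diff(rho2D, z0) / rho2D * lnf       # Eq. (27)
X0, Z0 = x0 / Rx, z0 / Rz                                      # scaled coordinates, Eq. (36)
rhs_41 = -sp.Rational(3, 4) * hbar / mu * sp.sqrt(kx * kz) / M \
         * lnf * Z0 / (1 - X0**2 - Z0**2)

print(sp.simplify(xdot / Rx - rhs_41))   # 0: Eq. (41) with (2/gamma + 1)/4 = 3/4
```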
From these equations, it is readily shown that the amplitude of the SV motion, given by
\[A_{xz}=\sqrt{X_{0}^{2}+Z_{0}^{2}}, \tag{43}\]
is a conserved magnitude satisfying \(0\leq A_{xz}\leq 1\). Indeed, the vortex location describes the elliptical trajectory defined by the equation
\[\frac{x_{0}^{2}}{R_{x}^{2}}+\frac{z_{0}^{2}}{R_{z}^{2}}=A_{xz}^{2}, \tag{44}\]
which actually corresponds to the most conspicuous of the experimental features. (Perturbative corrections to this description, which will be presented in the next section, will allow us to predict the emergence of fine-structure characteristics.) Moreover, we can combine Eqs. (41) and (42) to obtain
\[\ddot{X}_{0}+\Omega_{p}^{2}X_{0}=0, \tag{45}\]
and
\[\ddot{Z}_{0}+\Omega_{p}^{2}Z_{0}=0, \tag{46}\]
where the characteristic frequency of the vortex oscillation is
\[\Omega_{p}=\frac{2\gamma^{-1}+1}{4}\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x}k_ {z}}}{M(1-A_{xz}^{2})}\ln\left(c_{\gamma}^{(0)}\frac{R_{x}}{\xi}\right). \tag{47}\]
In order to compare with the experimental results, it is convenient to express \(\Omega_{p}\) as a function of the trap frequencies \(\omega_{i}=\sqrt{\frac{k_{i}}{m_{A}}}\), (\(i\equiv x,y,z\)). For a Bose gas of atoms, since \(M=m_{A}\) and \(\gamma=1\), we find
\[\Omega_{p,B}=\frac{3}{4}\frac{\hbar}{\bar{\mu}}\frac{\omega_{x}\omega_{z}}{(1-A _{xz}^{2})}\ln\left(c_{\gamma=1}^{(0)}\frac{R_{x}}{\xi}\right). \tag{48}\]
In contrast, for a Fermi superfluid, taking into account that \(M=2m_{A}\), one obtains
\[\Omega_{p,F}=\frac{2\gamma^{-1}+1}{8}\frac{\hbar}{\bar{\mu}}\frac{\omega_{x} \omega_{z}}{(1-A_{xz}^{2})}\ln\left(c_{\gamma}^{(0)}\frac{R_{x}}{\xi}\right). \tag{49}\]
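To make the role of the amplitude concrete, the following sketch integrates Eqs. (41) and (42) numerically, taking a unit value for the common dimensionless prefactor (an assumption of the example), and checks that the amplitude of Eq. (43) is conserved and that the resulting period matches the prediction of Eq. (47).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dimensionless precession rate K standing for the common prefactor of
# Eqs. (41)-(42); its unit value here is an illustrative assumption.
K = 1.0

def rhs(t, s):
    X0, Z0 = s
    denom = 1.0 - X0**2 - Z0**2
    return [-K * Z0 / denom, K * X0 / denom]

A0 = 0.6                                    # initial scaled amplitude, Eq. (43)
T_pred = 2.0 * np.pi * (1.0 - A0**2) / K    # period implied by Eq. (47)
sol = solve_ivp(rhs, (0.0, 1.5 * T_pred), [A0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

ts = np.linspace(0.0, 1.5 * T_pred, 20001)
X0, Z0 = sol.sol(ts)
print("amplitude spread:", np.ptp(np.hypot(X0, Z0)))       # Eq. (43) is conserved
up = ts[1:][(Z0[:-1] < 0.0) & (Z0[1:] >= 0.0)]             # first full revolution
print("period, Eq. (47) vs numerical:", T_pred, up[0])
```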
Some preliminary clues to clarify the experimental findings can be extracted from the above picture:
i) The lack of models strictly applicable to a triaxial anisotropic confinement was a handicap in the early interpretation of the experimental results. In fact, former analyses were based on assuming the approximate applicability of theoretical results known for an axisymmetric trapping. From the initial interpretation of the findings as reflecting the effect of the transversal trapping on the reduced mono-dimensional motion of the observed structure, the oscillation frequency was conjectured to depend on the trap frequencies transversal to the longest radius of the (cigar-shaped) trap. In order to reproduce the observed features, an effective mean value of the two transversal frequencies, tentatively defined in different forms, was used in the expression known for the axisymmetric setting [26]. Moreover, an effective transversal radius \(R_{t}\) was incorporated in the logarithmic factor present in the functional form of the frequency, i.e., \(\ln\left(R_{t}/\xi\right)\). Those limitations of the analysis are removed by the present study. Our results conclusively show that it is the trap anisotropy transversal to the vortex line that affects the precession frequency: as shown in Eqs. (48) and (49), \(\Omega_{p}\) depends on both \(\omega_{x}\) and \(\omega_{z}\). Moreover, as we have previously stated, \(\Omega_{p}\) does not explicitly depend on the trap frequency corresponding to the direction of the vortical line: \(\omega_{y}\) enters Eqs. (48) and (49) only through the bulk chemical potential. Our study lifts also the ambiguity relative to the argument of the logarithmic function: it is the radius \(R_{x}\) corresponding to the shortest transversal direction to the vortex line that enters that argument.
ii) Because of the nonlinear character of the dynamics, the oscillation frequency depends on the amplitude \(A_{xz}\). In fact, for the amplitudes detected in the experiments, a linear approximation, i.e., taking \(1-A_{xz}^{2}=1-X_{0}^{2}-Z_{0}^{2}\simeq 1\), is not feasible, and the specific value
of the factor \(1-A_{xz}^{2}\) must be taken into account to reproduce the measured frequencies.
A comment on the use of an effective inertial mass in the present context is in order. The introduction of that concept in the study of planar solitons [45] allowed deriving a compact expression for the period of the soliton in terms of the period of the (elongated) trap. In the considered regime of small-amplitude oscillations, the inertial mass is an intrinsic characteristic of the (trapped) solitonic structure. However, in its application to the (two-dimensional) dynamics of the SV made in [21], the inertial mass becomes dependent on the vortex position via the column density. The present analysis shows that there is an additional dependence on the SV position associated to the nonlinearity of the dynamics.
iii) No qualitative differences in the SV dynamics between the bosonic and fermionic cases are predicted within the variational framework used. The only differential effect is a numerical factor determined by the value of the polytropic index \(\gamma\) in the characteristic frequency of oscillation \(\varOmega_{p}\) and the logarithmic factor \(c\). The common global properties simply derive from the assumed superfluid character of both Bose and Fermi gases.
## V Generalization of the approach
In this Section, the above description will be generalized by increasing the flexibility of the variational ansatz. Two lines will be followed. First, we will use a trial _wave-function_ where the vortex-location parameters will be incorporated, not only through the phase, as in the previous approach, but also via the functional form of the density. This will be shown to imply dealing with additional degrees of freedom in the characterization of the vortex dynamics. In the second line, an ansatz which can account for the role of the condensate degrees of freedom will be employed. Although both extensions can be studied simultaneously, we will deal with them consecutively in order to concentrate on their differential implications. For simplicity, only the analysis of SV dynamics in the bosonic superfluid will be presented. For the fermionic case, which can be straightforwardly studied following the same procedure, only the final results will be given.
### Effects associated to vortex-induced variations in the fluid density
Here, we use the connection between phase and density given by the Euler-like equation of the hydrodynamic formalism [30] to derive the functional form of the density from the form proposed for the phase. Specifically, in the trial _wave-function_, written now as \(\Psi=\tilde{\rho}^{1/2}e^{iS}\), our proposal for the phase is
\[S(\mathbf{r},t;x_{0},z_{0})=S_{v}(x,z;x_{0},z_{0})-\frac{1}{\hbar}\bar{\mu}t+\delta, \tag{50}\]
which still incorporates the _vorticity-conveying_ function \(S_{v}(x,z;x_{0},z_{0})\), given by Eq. (12), and the term associated with the bulk chemical potential \(\bar{\mu}\). So there are no differences with the previous proposal except for the presence of the additional parameter \(\delta(t)\), needed now to guarantee the normalization of the trial _wave-function_ as modifications in the density are introduced. Indeed, the form of the density, \(\tilde{\rho}\), is no longer assumed to be that of the background \(\rho(\mathbf{r})=g^{-1}\left[\bar{\mu}-V_{ex}(\mathbf{r})\right]\). Now, \(\tilde{\rho}\) is left as a free field in the Lagrangian density, being subsequently fixed by its own Euler-Lagrange equation of motion. This amounts to using the precise connection between density and phase given by the Euler-like (hydrodynamic) equation, i.e.,
\[\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S\right|^{2}+V_{ex}+\hbar\frac{\partial( S_{v}-\frac{\bar{\mu}t}{\hbar}+\delta)}{\partial t}+g\tilde{\rho}=0. \tag{51}\]
Accordingly, the density is obtained in terms of the phase as
\[\tilde{\rho} = -\frac{1}{g}\left[\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S\right| ^{2}+V_{ex}+\hbar\frac{\partial S_{v}}{\partial t}-\bar{\mu}+\hbar\dot{\delta}\right] \tag{52}\] \[= \rho-\frac{1}{g}\left[\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S \right|^{2}+\hbar\frac{\partial S_{v}}{\partial t}+\hbar\dot{\delta}\right].\]
This expression is incorporated now into our variational scheme. To derive the Lagrangian function, we first introduce into Eq. (13) the form of the nonlinear term corresponding to the considered bosonic case. It is worth emphasizing that a more complete characterization of the vortex core is given in this approach: the kinetic term, \(\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S\right|^{2}\), explicitly accounts for the density reduction in the core. The size of the zero-density region is found to correspond to the healing length, as was assumed in the previous simpler model. Actually, the predictions of the present approach confirm the applicability of the basic model as a first-order approximation.
Then, using the link between phase and density given by Eq. (51), the Lagrangian function is written in the compact form
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0})=\frac{g}{2}\int d\mathbf{r}\tilde{\rho}^{ 2}. \tag{53}\]
Finally, by inserting in this equation the density as given by Eq. (52), and retaining only the dominant terms, we arrive at
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0}) = \frac{g}{2}\int d\mathbf{r}\rho^{2}+ \tag{54}\] \[\int d\mathbf{r}\rho\left[-\hbar\frac{\partial S_{v}}{\partial t }-\frac{\hbar^{2}}{2M}\left|\boldsymbol{\nabla}S_{v}\right|^{2}\right]+\] \[\frac{1}{2g}\int d\mathbf{r}\left(\hbar\frac{\partial S_{v}}{ \partial t}\right)^{2}+\] \[\frac{1}{2g}\int d\mathbf{r}\left(\frac{\hbar^{2}}{2M}\left| \boldsymbol{\nabla}S\right|^{2}\right)^{2}\]
The magnitude of the terms left out in the above equation can be shown to be much smaller than that of the ones kept. We have also omitted the part that accounts for the dynamics of \(\delta(t)\), which is uncoupled from the rest of the parameters and is trivially solved in the regime that will eventually be considered. The term \(\frac{g}{2}\int d\mathbf{r}\rho^{2}\) in the above equation can be ignored as it does not contain the variational parameters. The rest of the integrals are evaluated to logarithmic accuracy, including zero-order terms, to give
\[L(x_{0},z_{0};\dot{x}_{0},\dot{z}_{0}) = -\pi\hbar\dot{z}_{0}\int_{-x_{0}}^{x_{0}}\rho_{2D}(x,z_{0})dx- \pi\frac{\hbar^{2}}{M}\rho_{2D}(x_{0},z_{0})\ln\left(c_{\gamma=1}^{(1)}\frac{ R_{x}}{\xi}\right)+ \tag{55}\] \[\pi\frac{\hbar^{2}}{g}\sqrt{\frac{2\tilde{\mu}}{M\omega_{y}^{2}} }\ln\left(c_{\gamma=1}^{(2)}\frac{R_{x}}{\xi}\right)(\dot{x}_{0}^{2}+\dot{z}_ {0}^{2}).\]
The first line contains the Lagrangian function used in the former approximation. Specific to the modification of the ansatz is the change in the effective zero-order parameter, i.e., \(c_{\gamma=1}^{(0)}\to c_{\gamma=1}^{(1)}\), which is made to account for the contribution of the last integral in Eq. (54), entirely given by zero-order terms. Also emergent is the quadratic function of the generalized velocities present in the second line. (The parameter \(c_{\gamma=1}^{(2)}\) is required there.) The magnitude of the changes, which can be approximately evaluated using the dynamical equations of the previous order of approximation, i.e., Eqs. (41) and (42), is shown to be much
smaller than that of the former Lagrangian. As a consequence, the effects incorporated by the implemented modification of the trial _wave-function_ can be estimated to correspond to a perturbation of the formerly characterized scenario. Particularly relevant to the consistency of the description is the explicit inclusion of the vortex core in the ansatz used in the modified scenario. The minor effect of this correction justifies the approach followed in previous theoretical works. Namely, the use of an ansatz where the vortex core is modeled by an exclusion region in the background Thomas-Fermi density with size determined by the healing length correctly accounts for the most conspicuous experimental features.
Some more specific conclusions follow:
i) Since the Lagrangian presents a quadratic dependence on the generalized velocities, the dynamics are described now in terms of two second-order equations. Hence, no dimensional reduction can be implemented: both the initial positions and velocities are needed to integrate the equations. This approach can then be relevant to potential experimental arrangements where the independent variation of that set of initial conditions could be feasible.
ii) As can be shown from the dynamical equations, the amplitude \(A_{xz}=\sqrt{X_{0}^{2}+Z_{0}^{2}}\) is no longer a conserved magnitude. Indeed, the amplitude is found to oscillate: Fig. 1 illustrates how the elliptic trajectories of the vortex location found in the previous approach are now modulated by a term oscillating with a frequency larger than the precession frequency.
iii) Useful insight into the mechanisms responsible for the dynamics is given by a linear approximation. One of the normal modes can be basically traced to a linearized version of the model system of the previous order of approximation. Its frequency, i.e., the (linear) counterpart of the precession frequency formerly obtained, is approximately given by
\[\Omega_{p,B}=\frac{3}{4}\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x}k_{z}}}{M} \ln\left(c_{\gamma=1}^{(1)}\frac{R_{x}}{\xi}\right), \tag{56}\]
and
\[\Omega_{p,F}=\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x}k_{z}}}{M}\ln\left(c_{ \gamma=2/3}^{(1)}\frac{R_{x}}{\xi}\right), \tag{57}\]
for the respective bosonic and fermionic cases. The other mode, which we term _the modulation mode_, is specific to the elements incorporated via variations in the density. Its frequency \(\Omega_{m}\), which is higher than the precession frequency \(\Omega_{p}\), has been obtained through an analytical adiabatic approximation. Specifically, we have found for \(\Omega_{m}\) the following (bosonic and fermionic) expressions
\[\Omega_{m,B}=\frac{8}{3}\frac{\mu}{\hbar}\left(\left[2\ln\left(4\frac{R_{x}}{ \xi}\right)-3\right]\left[2\ln\left(4\frac{R_{x}}{\xi}\right)-1-4\sqrt{\frac{k_ {z}}{k_{x}}}\right]\right)^{-1/2} \tag{58}\]
and
\[\Omega_{m,F}=2\frac{\mu}{\hbar}\left(\left[2\ln\left(2\frac{R_{x}}{\xi}\right) -2\right]\left[2\ln\left(2\frac{R_{x}}{\xi}\right)-4\sqrt{\frac{k_{z}}{k_{x}}} \right]\right)^{-1/2}, \tag{59}\]
whose validity has been checked through numerical calculations. For generic initial conditions, the global dynamics can be viewed as corresponding to a combination of the two (component) modes of the system. The results of the former description are recovered provided that the initial conditions fulfill Eqs. (41) and (42): only the precession mode is generated then.
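For orientation, the sketch below evaluates Eqs. (56) and (58) for a set of illustrative bosonic parameters (the chemical potential, the trap frequencies and the order-one value taken for \(c_{\gamma=1}^{(1)}\) are assumptions of the example, not the experimental values), showing that the modulation mode is considerably faster than the precession.

```python
import numpy as np

# Illustrative bosonic parameters (assumptions of this example, not the
# experimental values): chemical potential and transverse/axial trap frequencies.
hbar = 1.054571817e-34
mu   = 2.0 * np.pi * hbar * 2.0e3                     # mu/h = 2 kHz
wx, wz = 2.0 * np.pi * 150.0, 2.0 * np.pi * 15.0      # rad/s
c1 = 1.0                                              # c^(1) assumed to be of order one

Rx_over_xi = 2.0 * mu / (hbar * wx)   # from Eq. (6) and xi = hbar/sqrt(2 M mu_bar), M = m_A

# Linearized precession frequency, Eq. (56): sqrt(kx*kz)/M = wx*wz for M = m_A
Omega_p = 0.75 * (hbar * wx * wz / mu) * np.log(c1 * Rx_over_xi)

# Modulation-mode frequency, Eq. (58); note sqrt(kz/kx) = wz/wx
log4 = np.log(4.0 * Rx_over_xi)
Omega_m = (8.0 / 3.0) * (mu / hbar) / np.sqrt(
    (2.0 * log4 - 3.0) * (2.0 * log4 - 1.0 - 4.0 * wz / wx))

print("Omega_p/2pi [Hz]:", Omega_p / (2.0 * np.pi))
print("Omega_m/2pi [Hz]:", Omega_m / (2.0 * np.pi))
print("Omega_m/Omega_p :", Omega_m / Omega_p)
```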
iv) The present extension of the approach can actually be regarded as a proof of consistency. The pertinence of improving the primary ansatz by modifying the form of the density according to the precise constraints imposed by the hydrodynamic formalism is clear. Since the former order of approximation has been shown to account for salient features of the dynamics, its robustness against a consistent modification of the ansatz can be expected. The second-order character of the obtained corrections confirms that argument. Following the same line of reasoning, the physical character of the found second-order effects can be presumed. However, their detection implies dealing with technical difficulties: the observation of the additional (larger) frequency \(\Omega_{m}\) requires higher experimental resolution. One cannot disregard that the (uncontrolled) conditions for the emergence of the SV structures might correspond to the inhibition of the second mode. Moreover, we should take into account that the potential resonance of that mode with high-frequency collective excitations of the condensate could activate a damping mechanism which can preclude its observation. The analysis of the robustness of the second mode against dissipation effects is left for future work.
### The effect of the condensate motion on the SV dynamics
Now, we turn to analyze how the characterization of the vortex precession is modified when degrees of freedom of the background fluid are taken into account. The general procedure is illustrated by incorporating the dipole and quadrupole modes into the description. Appropriate to our objectives is the use of a variational ansatz \(\Psi=\tilde{\rho}^{1/2}e^{iS}\) with the phase being given by
\[S({\bf r},t;x_{0},z_{0};a_{j},b_{jk})=S_{v}(x,z;x_{0},z_{0})-\frac{1}{\hbar} \bar{\mu}t+\delta+\sum_{j,k=x,z}a_{j}x_{j}+b_{jk}x_{j}x_{k}, \tag{60}\]
where, in addition to the vortex location and the normalization term \(\delta(t)\), we have included the set of parameters \(a_{j}(t)\) and \(b_{jk}(t)\), which will allow dealing with the dipole and quadrupole modes of the fluid. (Only the modes contained in the plane perpendicular to the SV are considered.) Apart from entering the phase, those parameters are present in the form of the density \(\tilde{\rho}\), which is derived, as indicated in the previous subsection, through the Euler-like (hydrodynamic) equation that connects phase and density. Accordingly, we obtain
\[\tilde{\rho} = -\frac{1}{g}\left[\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S\right|^{2}+V_{ex}+\hbar\frac{\partial S_{v}}{\partial t}-\bar{\mu}+\hbar\dot{\delta}+\hbar\sum_{j,k=x,z}\left(\dot{a}_{j}x_{j}+\dot{b}_{jk}x_{j}x_{k}\right)\right] \tag{61}\] \[= \rho-\frac{1}{g}\left[\frac{\hbar^{2}}{2M}\left|\mathbf{\nabla}S\right|^{2}+\hbar\frac{\partial S_{v}}{\partial t}+\hbar\dot{\delta}+\hbar\sum_{j,k=x,z}\left(\dot{a}_{j}x_{j}+\dot{b}_{jk}x_{j}x_{k}\right)\right],\]
where it is apparent that the terms \(\dot{a}_{j}x_{j}\) account for displacements of the center of the condensate, and the terms \(\dot{b}_{jk}x_{j}x_{k}\) incorporate changes in the radii and reorientation of the axes. One should notice the parallelism of this procedure with the method introduced in Ref. [46] to analyze the effect of modulations of the trap frequencies on the fundamental state of a BEC. In that method, it is the form of the density that is explicitly proposed, since physically supported conjectures can be made on it, the phase being subsequently derived from the Euler-like hydrodynamic equation. In contrast, in the present case, since the vorticity is the characteristic of the structure that is actually known, it is convenient to start the proposal by modeling the phase.

Figure 1: An illustration of a SV trajectory as predicted by the first extension of our basic approach. A perturbative high-frequency modulation of the formerly obtained elliptical trajectory is observed.
We proceed as before, using Eq. (13) to build the Lagrangian function from the proposed ansatz, and, later on, to obtain the Euler-Lagrange equations for the set of variational parameters. In order to have a first global picture of the dynamics, we have worked with a linearized version of the set of coupled equations. Within this regime, all the integrals have been obtained analytically beyond logarithmic accuracy to include zero-order terms. The main implications of the coupling of vortex and fluid coordinates are summarized in the following points, where, to simplify the discussion, we will not refer to the modulation mode identified in the previous subsection.
i) The vortex precession can be tracked down in one of the emerging normal modes. For the considered experimental conditions, given that the inertia of the condensate is much larger than that of the SV structure, the mixing of the former precession mode with the intrinsic condensate modes, incorporated via the parameters \(a_{j}(t)\) and \(b_{jk}(t)\), is negligible. Indeed, the _new version_ of the precession mode basically corresponds to the motion of the vortex relative to the condensate. In contrast, there is a non-negligible displacement of the mode frequency with respect to the formerly obtained \(\Omega_{p}\). That shift is specifically rooted in the coupling with the parameters \(a_{j}(t)\). Since its magnitude corresponds to a contribution of zero order in the quotient \(R_{x}/\xi\), it can be incorporated into the functional form of \(\Omega_{p}\) by modifying the effective zero-order parameter, which will be denoted now \(c_{\gamma}^{(3)}\). Accordingly, the expressions of the precession frequencies corresponding respectively to the bosonic and fermionic cases are written as
\[\Omega_{p,B}=\frac{3}{4}\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x}k_{z}}}{M}\ln \left(c_{\gamma=1}^{(3)}\frac{R_{x}}{\xi}\right), \tag{62}\]
\[\Omega_{p,F}=\frac{\hbar}{\bar{\mu}}\frac{\sqrt{k_{x}k_{z}}}{M}\ln\left(c_{ \gamma=2/3}^{(3)}\frac{R_{x}}{\xi}\right). \tag{63}\]
ii) Among the resulting normal modes, one can also identify the dipole and quadrupole modes of the condensate. The effect of the vortex on those modes has been studied in
previous work [47; 48]. In agreement with the results of those studies, we observe that the presence of the vortex affects the frequencies of the quadrupole modes but leaves practically unchanged the frequencies of the dipole modes. The implications of a stronger mixing of the vortex dynamics and the condensate motion were analyzed in [49], where the setup characteristics correspond to a reduction in the magnitude of the inertia of the condensate relative to that of the vortex.
The results of the two studied extensions of the approach configure a picture in which the corrections to the previously presented (primary) description have a perturbative character.
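As a purely numerical illustration of Eqs. (62) and (63), the sketch below evaluates the precession frequency for placeholder values of \(\hbar/\bar{\mu}\), \(\sqrt{k_{x}k_{z}}/M\) and \(R_{x}/\xi\); the zero-order parameters 1.262 and 0.876 are the ones quoted in Sec. VI, while all other numbers are hypothetical inputs, not the experimental ones.

```python
import numpy as np

def precession_frequency(hbar_over_mu, sqrt_kxkz_over_M, Rx_over_xi, c3, prefactor):
    """Direct transcription of Eqs. (62)-(63):
    Omega_p = prefactor * (hbar/mu_bar) * sqrt(kx*kz)/M * ln(c3 * Rx/xi)."""
    return prefactor * hbar_over_mu * sqrt_kxkz_over_M * np.log(c3 * Rx_over_xi)

# Hypothetical placeholder inputs (illustrative only):
hbar_over_mu = 1.0 / (2 * np.pi * 2.0e3)                       # hbar/mu_bar, in seconds
sqrt_kxkz_over_M = (2 * np.pi * 100.0) * (2 * np.pi * 20.0)    # sqrt(kx*kz)/M, in s^-2
Rx_over_xi = 30.0

Omega_B = precession_frequency(hbar_over_mu, sqrt_kxkz_over_M, Rx_over_xi, 1.262, 0.75)  # Eq. (62)
Omega_F = precession_frequency(hbar_over_mu, sqrt_kxkz_over_M, Rx_over_xi, 0.876, 1.0)   # Eq. (63)
print(Omega_B, Omega_F)   # precession frequencies in rad/s
```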
## VI Application to the experiments
In order to illustrate how the study can be applied to emulate the results of the two considered experiments, we have incorporated into our approach the two sets of force constants used in the practical setups. Additionally, as amplitudes comparable to the TF radii were reached in both experiments, we have considered a broad range of amplitudes for the vortex motion.
Figs. 2 and 3 depict our findings for the (fermionic) system studied in [21]. In both figures, the precession period of the SV is displayed as a function of the chemical potential \(\bar{\mu}\). (The precession period is expressed in units of the period \(T_{z}=2\pi/\omega_{z}\).) Whereas the results of Fig. 2 correspond to a linear regime, i.e., they are applicable when the amplitude is sufficiently small for the approximation \(1-A_{xz}^{2}=1-X_{0}^{2}-Z_{0}^{2}\simeq 1\) to be valid, Fig. 3 incorporates nonlinear effects associated to increasing values of the amplitude. Moreover, in order to illustrate how the predictions on the SV behavior change as the sequence of extensions of the basic approach is applied, partial results, corresponding to the different stages in our model, are presented in Fig. 2. Namely, the dotted line reflects our primary picture, where the period is given by Eq. (49), with the effective zero-order parameter taking the value \(c_{\gamma}^{(0)}=1\). This description already improves the results of early analyses through the inclusion of anisotropy effects. Actually, at this stage, the main characteristics of the experimental curves are approximately reproduced. Still, observable corrections are obtained through the inclusion of additional effects. The dashed line, which corresponds to the first extension of the basic model, i.e., to the use of an ansatz where the precise connection between phase and density is incorporated, reflects a non-negligible modification of the period. A larger
additional shift (continuous line) is observed when the second extension of the model is applied, i.e., when, in addition to the previous system components, the coupling with the collective modes of the condensate is taken into account. The values \(c_{\gamma=1}^{(3)}=1.262\) and \(c_{\gamma=2/3}^{(3)}=0.876\), derived in our theoretical framework, were used to operatively incorporate the contribution of zero-order terms at this level. In the two considered regimes of the fermionic fluid, i.e., on the BEC side and in the unitary regime, the agreement with the experimental results improves as the description is generalized. Even so, it is the inclusion of nonlinearity in our framework that constitutes the dominant correction to the primary picture. As shown in Fig. 3, as larger amplitudes are reached, the period decreases significantly, approaching the experimental results presented in [21]. (Note that the open circles correspond to experimental results extracted from Fig. 3 of [21].) It is also evident that although the agreement is good in the whole range considered for the chemical potential, the stronger dispersion of the experimental results in the unitarity regime makes the comparison less conclusive in that region. Actually, the need for a more detailed modeling of the system in that range might be conjectured. Namely, a more precise approximation to the size of the vortex core can be pertinent. Along the same lines, one must take into account that the polytropic approximation to the equation of state of the fluid, although appropriate to account for some significant aspects of the dynamics, cannot be expected to give a complete description of the system.
Additional arguments on the importance of including nonlinearity in the analyses can be extracted from Fig. 4, where the precession frequency corresponding to the setup of [26] is represented as a function of the amplitude. Actually, the reproduction of the experimental
Figure 3: The precession period of the SV in a fermionic fluid as a function of the chemical potential for different amplitudes. The curves shift down as the amplitude \(A_{xz}\) takes larger values. (\(A_{xz}=0\), \(0.1\), \(0.2\), \(0.3\), \(0.4\), \(0.5\), from top to bottom.) The open circles correspond to experimental data extracted from Fig. 3 of [21]. (Same units as in Fig. 2.)
Figure 2: The precession period of the SV in a fermionic fluid as a function of the bulk chemical potential as given by the different approaches developed in the study. (The amplitudes are small enough to guarantee the applicability of a linear approximation.) The dotted line incorporates the results obtained through the basic approach. The dashed line corresponds to the first extension of the model. The continuous line represents the results of the complete (linear) description. The precession period is expressed in units of the period \(T_{z}=2\pi/\omega_{z}\) corresponding to the smallest of the trap frequencies, i.e., \(\omega_{z}\). Additionally, the chemical potential is written in units of \(\hbar\omega_{\perp}\equiv\hbar\omega_{x}\). The figure illustrates the effect of the logarithmic factor \(c_{\gamma}^{(i)}\) in each case. (The left and right regions respectively correspond to the BEC and unitary regimes.)
results (see the spectral analysis presented in Fig. 6 of [26]) demands the introduction of nonlinear corrections into the model. We recall that, in the early evaluation of the experiments of [26], the presence of SVs was merely conjectured since no confirmation through direct observation techniques was feasible. The agreement of our predictions with the experimental results supports the conjecture that the observed structures are actually SVs.
## VII Concluding Remarks
Our study provides a theoretical framework for analyzing the dynamics of SVs in trapped superfluids which extends former approaches and allows clarifying recent experimental results. The unified approach applied to SVs in bosonic and fermionic superfluids has served to identify the common characteristics of the dynamics as simply rooted in the superfluid character of both systems. It has been shown that, in the regime where the hydrodynamic description is applicable, i.e., at scales much larger than the healing length, the only differential aspect of the fluid statistics is a numerical factor, dependent on the polytropic index, in the expression of the oscillation frequency. The general correspondence of our results with those of the experiments confirms the utility of the polytropic approximation to the equation of state of the fermionic fluid as an operative method to uncover basic aspects of the dynamics.
With respect to previous analyses of the considered experiments, the study contains specific advances in understanding the effect of the trap anisotropy, the relevance of nonlinearity to the SV precession, and the implications of the coupling with collective modes of the fluid. Indeed, the incorporation of a non-axisymmetric trap in the applied model has served to
Figure 4: The precession frequency corresponding to the setup of [26] as a function of the relative amplitude \(A_{xz}\).
trace the nontrivial dependence of the oscillation frequency on the anisotropy transversal to the vortical line. Moreover, nonlinearity has been found to be a central component of the dynamics emergent in the implemented setups. We have shown that the operatively defined inertial mass, apart from incorporating characteristics of the structure and trapping, is amplitude dependent. Additionally, the study has uncovered how the oscillation frequency is shifted by the coupling with collective modes of the fluid. The inclusion of those system components into our model implies a generalization of former descriptions which has led us to obtain precise values for the precession frequency, and, in turn, significantly improve the agreement with the experimental results. Our whole approach enhances the ground for the design of strategies of control.
Apart from accounting for features observed in the experiments, our analysis predicts the existence of fine details in the SV dynamics associated to potentially realizable experimental conditions. Indeed, the use of a variational ansatz where interrelated proposals for the phase and density are consistently incorporated has revealed the presence of a _fine structure_ in the previously identified SV trajectories. The emergence of those fine details requires the implementation of specific initial conditions. Although technically demanding, their observation can be expected to be feasible given the significant advances achieved in the control of the considered systems. The modified ansatz has also served to give a more complete characterization of the vortex core. From our results, the validity of the basic modeling of the core is confirmed.
Some comments on potential extensions of the study are in order. No border effects have been incorporated into our approach: straight vortex lines have been considered. This simplification can be overcome through an appropriate modification of the functional form of the phase in the variational ansatz. The inclusion of dissipation effects, which can have practical implications on the controlled preparation of the systems and on the robustness of the predicted _fine structure_ of the dynamics, is also pending. Moreover, we point out that the present study, focused on the dynamics subsequent to the SVs formation, does not complete the explanation of the experimental results. In fact, the characterization of the whole decay sequence, starting from PDS and ending with SVs is still required. In this line, the inclusion of additional structures in the decay process, like the intermediate solitonic form predicted in [11] and detected in [22], can be of great interest. Finally, it is worth stressing that alternative theoretical approaches which go beyond the hydrodynamic
description of the fermionic system are needed to characterize the vortex dynamics in the BCS regime.
###### Acknowledgements.
One of us (JMGL) acknowledges the support of the Spanish Ministerio de Economia y Competitividad and the European Regional Development Fund (Grant No. PID2019-105225GB-I00).
|
2306.15572 | Generating Elementary Integrable Expressions | There has been an increasing number of applications of machine learning to
the field of Computer Algebra in recent years, including to the prominent
sub-field of Symbolic Integration. However, machine learning models require an
abundance of data for them to be successful and there exist few benchmarks on
the scale required. While methods to generate new data already exist, they are
flawed in several ways which may lead to bias in machine learning models
trained upon them. In this paper, we describe how to use the Risch Algorithm
for symbolic integration to create a dataset of elementary integrable
expressions. Further, we show that data generated this way alleviates some of
the flaws found in earlier methods. | Rashid Barket, Matthew England, Jürgen Gerhard | 2023-06-27T15:48:40Z | http://arxiv.org/abs/2306.15572v1 | # Generating Elementary Integrable Expressions
###### Abstract
There has been an increasing number of applications of machine learning to the field of Computer Algebra in recent years, including to the prominent sub-field of Symbolic Integration. However, machine learning models require an abundance of data for them to be successful and there exist few benchmarks on the scale required. While methods to generate new data already exist, they are flawed in several ways which may lead to bias in machine learning models trained upon them. In this paper, we describe how to use the Risch Algorithm for symbolic integration to create a dataset of elementary integrable expressions. Further, we show that data generated this way alleviates some of the flaws found in earlier methods.
Keywords:Computer Algebra Symbolic Integration Machine Learning Data Generation.
## 1 Introduction
### Machine Learning and Computer Algebra
A key feature of a Computer Algebra System (CAS) is its exactness: when prompted for a calculation, a CAS is expected to return the exact answer (or no answer if the calculation is not feasible), as opposed to an approximation to an answer. Due to this constraint, it seems as though Machine Learning (ML) and Computer Algebra do not work well together due to the probabilistic nature of ML: no matter how well-trained an ML model is, it can never guarantee perfect predictions. However, rather than trying to use ML to predict a calculation in place of a CAS, we can instead use ML in conjunction with a CAS to help optimize and/or select the symbolic computation algorithms implemented within. Such a combination of ML and symbolic computation preserves the unique selling point of a CAS. The earliest examples of such ML for CAS optimisation known to the authors are: Huang et al. [3], which used a support vector machine to choose the variable ordering for cylindrical algebraic decomposition; and Kuipers et al. [5], which used a Monte-Carlo tree search to find the representation of polynomials that is most efficient to evaluate.
### Symbolic Integration Meta-Algorithms
Our interest is the integrate function of a CAS, which takes an integrand and produces an integral (either definite or indefinite). In most CASs, and certainly in Maple where the authors focus their work, the integrate function is essentially a meta-algorithm: it accepts a mathematical expression as an input, does some pre-processing on the expression, and then passes the processed problem to one of a selection of available sub-algorithms. In Maple, the function will try a list of such sub-algorithms in turn until one is found that can integrate the expression, in some cases first querying a guard as to whether that sub-algorithm is applicable to the input in question. If none of these methods work, the function simply returns the input back as an unevaluated integral (implying that Maple cannot integrate it).
Currently, as of Maple 2023, these sub-algorithms for int are tried in the same pre-set order for every input, and the function outputs the answer of the first sub-algorithm that succeeds. There are currently 11 sub-algorithms to choose from. The list of sub-algorithms is available on the Maple help page3 for the function.
Footnote 3: www.maplesoft.com/support/help/maple/view.aspx?path=int%2methods+
The first use of ML is to improve the integrate function's efficiency. A similar approach was taken by Simpson et al. [9] for the resultant function (see Definition 3 later). After applying a neural network to classify which algorithm (of four possible choices) to use, the authors test their model on a random sample of several thousand inputs. Maple's existing meta-algorithm took 37,783 seconds to finish its computations, whereas the sub-algorithm choices from the neural network took only 12,097 seconds, a significant improvement with a 68% decrease in runtime. There were also gains against Mathematica with a 49% decrease in runtime. We hope to achieve similar results with the integrate function.
The second motivation to use ML is in optimizing the output. To gain a better understanding of this, consider what happens in Figure 1 when you integrate the function \(f(x)=x\sin(x)\) in Maple and ask it to try all possible sub-algorithms. When \(f(x)\) is integrated, there are three successful outputs that come from three different sub-algorithms. Each output is expressed differently, but all are mathematically correct and equivalent. We wish to choose the simplest output, which in this case is \(\int f(x)=\sin(x)-x\cos(x)\).
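To illustrate what "choosing the simplest output" can mean in practice, the sketch below ranks mathematically equivalent antiderivatives of \(x\sin(x)\) by a structural complexity measure. SymPy and its count_ops metric are used here purely as a stand-in for the CAS and the ML-based selection described above.

```python
import sympy as sp

x = sp.symbols('x')

# Three equivalent antiderivatives of x*sin(x), written in different ways.
candidates = [
    sp.sin(x) - x*sp.cos(x),
    sp.sin(x) - x*sp.cos(x) + sp.sin(x)**2 + sp.cos(x)**2 - 1,   # hides a Pythagorean identity
    2*sp.sin(x/2)*sp.cos(x/2) - x*sp.cos(x),                     # double-angle form of sin(x)
]

# Sanity check: each candidate differentiates back to x*sin(x).
assert all(sp.simplify(sp.diff(F, x) - x*sp.sin(x)) == 0 for F in candidates)

# Pick the structurally simplest expression by operation count.
best = min(candidates, key=sp.count_ops)
print(best)   # sin(x) - x*cos(x)
```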
### Motivation
The goal of the data generation method described in this paper is to produce many integrable expressions on which to train an ML model. There is not enough benchmark/real-world data to train a model on, which is why such data generation methods are needed. Some data generation methods do already exist: Lample & Charton [6] propose three methods for developing integrable expressions: FWD, BWD, and IBP (described in detail in Section 2.1). These methods have drawbacks which the data generation method we propose will address.
The FWD method, which generates a random expression and calculates its integral, tends to produce short integrands and long integrals. Furthermore, a randomly generated expression will typically not have an elementary integral. This is especially evident for longer randomly generated expressions and/or expressions with denominators. This means the FWD method will take numerous attempts before finding a valid (integrand, integral) pair. The BWD method, which generates an expression and calculates its derivative, has the opposite problem of long integrands and short integrals. The IBP, or integration by parts, method produces expressions that are too similar (meaning that the expressions only differ by their coefficients), as discussed in Section 2.1. Moreover, a dataset of known (integrand, integral) pairs is needed for this method to work in the first place.
We propose generating (integrand, integral) pairs based on the Risch Algorithm. For one, the method will always produce an elementary integrable expression, something FWD cannot guarantee. This data generation method also does not have the issue of very different lengths between the integrands and integrals, because of the various parameters available in the method, alleviating the length issues of the FWD and BWD methods. Lastly, this method does not require a dataset of known integrals and also does not produce expressions too similar to the rest of the dataset, which IBP suffers from. Data generation based on the Risch algorithm produces a variety of non-trivial, unique expressions that current data generation methods do not offer. Current methods and the new method presented here are discussed further in Sections 2.1 and 5.
### Contributions and Plan
This paper will focus on how to generate sufficient data to enable our planned application of ML. In Section 2, we overview the existing methods of data generation for the problem that we found in the literature, explaining why they are not suitable alone for our needs. Then in Section 3, we review the classical Risch algorithm, which will be the basis of our new data generation method introduced in Section 4, which identifies constructive conditions for an integrand to be elementary integrable. We finish in Section 5 with a discussion on the advantages
Figure 1: The output of \(\int x\sin(x)\) from each successful sub-algorithm. The main output chosen in this case is the shortest expression chosen by an ML model, from sub-algorithm 2.
of this approach over the existing methods and what future steps still need to be undertaken.
## 2 Existing Datasets and Data Generation Methods
An important aspect of a successful ML model is that it is generalisable. That is, the model should perform well on all inputs it receives and not just inputs that look very similar to the training data. There are existing datasets and data generation methods for symbolic integration. However, each comes with its own sets of limitations that prevent an ML model trained on them to generalise well on all real-world data.
### Deep Learning For Symbolic Mathematics
In their paper (with the same name as this subsection), Lample and Charton [6] experiment with using deep learning to perform the tasks of symbolic integration and solving ordinary differential equations directly. To achieve this, they used a seq2seq model (a neural network architecture used in natural language processing for mapping one sequence of tokens, usually words, to another), in the form of a transformer4.
Footnote 4: the same model which is the basis for ChatGPT
There are different classes of integrals that can be output, based on their complexity.
Definition 1 (Elementary Function): A function that is defined as the sum, product, root, or composition of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions (and their inverses) is considered elementary.
An expression that, when integrated, produces an elementary function is said to be _elementary integrable_. Most expressions one encounters in a first-year calculus class will be elementary integrable. An example of an expression that is not elementary integrable is \(f(x)=\frac{1}{\log x}\). When \(f(x)\) is integrated, the result usually produced is \(\mathrm{li}(x)\), a non-elementary function known as the Logarithmic Integral special function5.
Footnote 5: [https://dlmf.nist.gov/6.2](https://dlmf.nist.gov/6.2)
The authors of [6] created a novel way of generating data to train a transformer. Expressions are viewed as trees, where the internal nodes are operators or function names (\(+\), \(sin\), etc.), and the leaves are constants and variables as exemplified in Figure 2. An algorithm is developed to generate trees of varying length so that these expressions can be used for training the model. They added structure to the trees in the form of restriction on internal nodes and leaves such that every random tree created is a valid symbolic expression.
They treated this as a supervised learning problem and devised the following three methods to turn such symbolic expressions into labelled training pairs (a small code sketch of the first two follows the list):
* FWD: Integrate an expression \(f\) through a CAS to get \(F\) and add the pair \((f,F)\) to the dataset.
* BWD: Differentiate an expression \(f\) to get \(f^{\prime}\) and add the pair \((f^{\prime},f)\) to the dataset.
* IBP: Given two expressions \(f\) and \(g\), calculate \(f^{\prime}\) and \(g^{\prime}\). If \(\int f^{\prime}g\) is known then the following holds (integration-by-parts): \[\int fg^{\prime}=fg-\int f^{\prime}g.\] Thus we add the pair \((fg^{\prime},\,fg-\int f^{\prime}g)\) to the dataset.
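The following minimal sketch imitates the FWD and BWD recipes with SymPy; the small pool of building blocks stands in for the random tree generator of [6], and the FWD branch shows the kind of filter needed to discard unevaluated or non-elementary results (the filter used here is a rough assumption, not the exact criterion used in that work).

```python
import random
import sympy as sp
from sympy import Integral, li, erf, Ei, Si, Ci

x = sp.symbols('x')
BLOCKS = [x, x**2, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x), 1/x]

def random_expression(n_factors=2):
    """Toy stand-in for a random expression generator: a product of random blocks."""
    return sp.Mul(*random.sample(BLOCKS, n_factors))

def bwd_pair():
    """BWD: differentiate a random expression F, giving the pair (F', F)."""
    F = random_expression()
    return (sp.diff(F, x), F)

def fwd_pair():
    """FWD: integrate a random expression f; reject unevaluated or non-elementary results."""
    f = random_expression()
    F = sp.integrate(f, x)
    if F.has(Integral, li, erf, Ei, Si, Ci):   # crude elementarity filter (assumption)
        return None
    return (f, F)

random.seed(0)
print(bwd_pair())
print(fwd_pair())   # may be None when no elementary integral is found
```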
While these three methods can generate plenty of elementary integrable expressions, they come with many limitations that can cause an ML model to overfit on the training data. For both the FWD and BWD methods, they tend to create expressions with patterns in the length. For FWD, the integrand is on average shorter than the resulting integral. BWD suffers from the opposite problem: long integrands and short integrals. Individually, these cause problems when training the transformer as the model is fitted too closely to these patterns, leading to overfitting. For example, the results from Lample & Charton show that when a model is trained on only FWD data and tested on BWD data, it only achieves an accuracy of 17.2%, and similar results are shown for training on BWD and testing on FWD. They of course train the model on all three data generation methods, but it is not clear if this addresses all the overfitting or simply encodes both sets of patterns.
Furthermore, these data generation methods suffer from producing expressions that are far too similar between the training and testing data. Piotrowski et al. [7] perform a simple analysis of substituting all coefficients with a symbolic CONST token. They examine how many expressions show up in the training set that are also the same in the testing set modulo constant and sign. For the FWD, BWD, and IBP methods, the percentage of unique data was 35%, 75%
Figure 2: Tree representation for \(3x^{2}+cos(2x)-1\) and \(2+3\times(5+2)\) from [6]. With some restrictions as to how the trees are constructed, there is a one-to-one mapping between an expression and its tree.
and 24%, respectively. A key principle of machine learning is that the testing data should be independent of the training data but this casts doubt on whether this is possible through the partition of a dataset containing such similar examples. This may be considered an example of ML "_data leakage_". Data leakage is a significant issue in machine learning. It happens when the training data we use contains the information that the model is trying to predict. This can result in unpredictable and poor predictions once the model is deployed.
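The coefficient-masking comparison of [7] can be reproduced in spirit with a few lines of SymPy: every numeric atom is replaced by a single CONST placeholder and the resulting expressions are compared structurally. This is only a sketch of the idea, not the exact procedure used in that paper.

```python
import sympy as sp

x = sp.symbols('x')
CONST = sp.Symbol('CONST')

def mask_constants(expr):
    """Replace every numeric atom by CONST, so e.g. 3*x**2 and -7*x**2 collide."""
    return expr.replace(lambda a: a.is_Number, lambda a: CONST)

dataset = [3*x**2 + sp.sin(2*x), -7*x**2 + sp.sin(5*x), x*sp.exp(x)]
masked = {sp.srepr(mask_constants(e)) for e in dataset}
print(f"{len(masked)} structurally distinct expressions out of {len(dataset)}")   # 2 out of 3
```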
### Other Existing Datasets
Currently, there are not that many (public) benchmark datasets in the field of symbolic integration, or indeed Computer Algebra more broadly. Maplesoft has an in-house test suite of integrable functions that they use to ensure software quality is maintained when making changes to int. There are 47,745 examples in the Maple test suite. Of these, only 8,174 had elementary integrands with elementary integrals which we currently study. We provide some information from the remaining (integrand, integral) pairs in Table 1.
This number of examples would not be sufficient to train a deep learning model; for reference, Lample and Charton [6] have access to 88 million examples in _Deep Learning for Symbolic Mathematics_. One great property of the Maple dataset is that it was partly developed as a continuous response to feature requests and bug reports that users would make when using int in Maple. Thus, it can be said to represent the range of examples of interest to Maple users. Using this dataset to evaluate any trained models would help provide evidence that the model generalizes well for our planned use.
Rich et al. [8] developed a Rule-Based Integrator, more commonly known as RUBI. RUBI integrates an expression by applying a collection of symbolic integration rules in a systematic way. Along with RUBI, the authors have compiled
\begin{table}
\begin{tabular}{l|c|c} & Integrand & Integral \\ \hline Average Number of Operands & 2.59 & 6.52 \\ \hline Largest Number of Operands & 16 & 300 \\ \hline Is a Polynomial & 996 & 1221 \\ \hline Average Polynomial Degree & 1.80 & 2.79 \\ \hline Largest Polynomial Degree & 199 & 200 \\ \hline Contains Exponentials & 932 & 1072 \\ \hline Contains Logarithms & 756 & 3136 \\ \hline Contains Trig or Arctrig functions & 2080 & 2512 \\ \hline Contains Radicals & 2024 & 2274 \\ \hline Contains Complex Numbers & 558 & 685 \\ \end{tabular}
\end{table}
Table 1: A summary of the (integrand, integral) pairs in the Maple test suite (total 8174). We only kept functions with elementary integrands which had elementary integrals
a dataset of 72,000 integration problems. There are 9 different main categories of functions in the dataset, with many examples coming from various textbooks and papers. Similar to the Maple test suite, this dataset would be good for evaluating a model but, due to its size, it would not be sufficient for training one, at least not a deep learning based model. We thus devote the rest of the paper to describing a new method.
## 3 The Risch Algorithm
The data generation method in this paper is based on the Risch algorithm. Explaining the entire Risch algorithm would require us to introduce a lot of theory before even getting to the algorithm itself. Instead, we focus on the key parts of the algorithm to help the reader get an intuitive understanding of how it works, and refer to [2] or [4, Ch. 11, 12] for a more detailed explanation.
For the Risch algorithm to work, we allow elementary extensions over a differential field \(K\). A differential field is a field with the derivative operator \(D\) such that \(D(a+b)=D(a)+D(b)\) and \(D(ab)=aD(b)+bD(a)\). A constant \(c\) is defined as \(Dc=0\). We usually write the derivative \(Da=a^{\prime}\).
Let \(G\) be an extension field of a differential field \(F\). For an element \(\theta\in G\), we say that \(G\) is an elementary extension of \(F\) if \(\theta\) is one of the following:
1. **logarithmic**: \(\theta\) = \(\log(u)\), \(u\in F\).
2. **exponential**: \(\theta\) = \(e^{u}\), \(u\in F\).
3. **algebraic**: \(\exists p\in F\) such that \(p(\theta)=0\).
An arbitrary number of extensions is allowed. Rather than using \(G\) to represent the extension, we instead denote \(F_{n-1}=K(\theta_{1},\cdots,\theta_{n-1})\) as the previous differential field and \(F_{n}=F_{n-1}(\theta_{n})\) as the current elementary extension. Typically, we have \(K=\mathbb{Q}(x)\) as the base differential field.
This paper will focus solely on logarithmic and exponential extensions. We now introduce Liouville's theorem that states exactly what the form of the integral will be, if it exists.
Theorem 3.1 (Liouville's Theorem: Thm 5.5.1 in [2]): _Let \(K\) be a differential field and \(f\in K\). Let \(E\) be an elementary extension of \(K\). If \(\int f\in E\) exists, then there are \(v_{0},\cdots,v_{m}\in K\) and constants \(c_{0},\cdots,c_{m}\in K\) such that_
\[\int f=v_{0}+\sum_{i=0}^{m}c_{i}\log(v_{i})\]
Liouville's Theorem gives an explicit representation for the integral of \(f\) if it is elementary integrable. The Risch algorithm and the subsequent algorithms for computing an integral are based on Liouville's Theorem. The Risch algorithm will divide the input into two different parts. Then, the integral for both parts will take the form of Theorem 3.1.
**Risch Algorithm (Chapter 12 in [4]):** Let \(F_{n}=F_{n-1}(\theta_{n})\) be a differential field of characteristic \(0\) where \(\theta_{n}\) is elementary over \(F_{n-1}\), and \(\theta_{i}^{\prime}\neq 0,1\leq i\leq n\). For any rational function \(f=g/b\) with respect to \(\theta_{n}\), you can divide the numerator with remainder \(g=Pb+R\) where \(\deg_{\theta_{n}}(R)<\deg_{\theta_{n}}(b)\), and have \(f=P+\frac{R}{b}\). If \(f\) is elementary integrable, it follows that \(\int f=\int P+\int\frac{R}{b}\). We call \(P\) the polynomial part and \(\frac{R}{b}\) the rational part. We study these two parts for the rest of the section and then develop ways to generate elementary integrable expressions from both these parts in Section 4.
### The Rational Part
Suppose we wish to integrate \(\frac{R}{b}\), \(R,b\in F=K(x)(\theta_{1},\cdots,\theta_{n})\). There are two algorithms used to compute this integral: Hermite Reduction and the Trager-Rothstein (TR) method. Which algorithm is used depends on whether the denominator \(b\) is square-free or not.
Definition 2 (Square-free): We say \(a\in K[x]\) is square-free if \(a\) has no repeated factors i.e. \(\nexists b\in K[x]\) such that \(\deg(b)>0\) and \(b^{2}|a\). Equivalently, \(\gcd(a,a^{\prime})=1\)
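In a CAS the gcd criterion of Definition 2 is a one-line test; for instance, in SymPy (used here only for illustration):

```python
import sympy as sp

x = sp.symbols('x')

def is_square_free(a):
    """a is square-free iff gcd(a, a') does not involve x (Definition 2)."""
    return not sp.gcd(a, sp.diff(a, x)).has(x)

print(is_square_free(x**2 + 1))                 # True
print(is_square_free((x**2 + 1)**2 * (x + 2)))  # False: the factor x**2 + 1 is repeated
```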
When our denominator is not square-free, we use Hermite Reduction.
Theorem 3.1 (Hermite Reduction: Thm 5.3.1 in [2]): _Suppose we want to integrate \(\int\frac{R}{b}\), where \(R\),\(b\in F[\theta]\) and \(\deg_{\theta}(R)<\deg_{\theta}(b)\). Use the square-free factorization \(b=b_{1}b_{2}^{2}\cdots b_{k}^{k}\) where \(b_{i}\) is square-free. Let \(T=b/b_{k}^{k}\). Let \(\sigma\) and \(\tau\) be the solutions to the diophantine equation_
\[\sigma b_{k}^{\prime}T+\tau b_{k}=R.\]
_Then Hermite reduction tells us that_
\[\int\frac{R}{b}=\frac{-\sigma/(k-1)}{b_{k}^{k-1}}+\int\frac{\tau+\frac{\sigma^{\prime}}{k-1}T}{\frac{b}{b_{k}}}.\]
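A short SymPy sketch of a single Hermite reduction step is given below for the base case \(\theta=x\) (so everything lives in \(\mathbb{Q}(x)\)); it solves the diophantine equation with an extended gcd and then verifies the identity of the theorem by differentiation. This is an illustration only, not Maple's implementation.

```python
import sympy as sp
from sympy import gcdex, quo, rem

x = sp.symbols('x')

def hermite_step(R, T, bk, k):
    """One Hermite reduction step for R / (T*bk**k), with bk square-free,
    gcd(T, bk) = 1 and k >= 2.  Returns (reduced_term, new_integrand)."""
    bkp = sp.diff(bk, x)
    s, t, h = gcdex(bkp*T, bk, x)              # s*(bk'*T) + t*bk = h (a nonzero constant)
    sigma = rem(sp.expand(s*R/h), bk, x)       # then sigma*bk'*T + tau*bk = R
    tau = quo(sp.expand(R - sigma*bkp*T), bk, x)
    reduced = -sigma/((k - 1)*bk**(k - 1))
    new_integrand = (tau + sp.diff(sigma, x)*T/(k - 1))/(T*bk**(k - 1))
    return reduced, new_integrand

# Example: b = (x+2)*(x**2+1)**2, i.e. T = x+2, bk = x**2+1, k = 2.
T, bk, k = x + 2, x**2 + 1, 2
R = 3*x**4 + x + 1
reduced, new_integrand = hermite_step(R, T, bk, k)

# Check the identity  R/b = d/dx(reduced) + new_integrand  exactly.
print(sp.simplify(R/(T*bk**k) - sp.diff(reduced, x) - new_integrand))   # 0
```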
The main part to notice is that the resulting integral on the right-hand side of the equation has a denominator that is at least one degree less than the input denominator (because we divide \(b\) by \(b_{k}\), its factor of highest multiplicity). This reduction is applied recursively until the denominator of the remaining integral is square-free. When this point is reached, the TR-method is used on the remaining integral. This method makes use of the following tool from computational algebra.
Definition 3 (Resultant): Suppose we have the following two polynomials with roots \(\alpha_{i}\) and \(\beta_{j},\alpha_{m}\neq 0\neq\beta_{n}\):
\[A=a_{0}+\cdots+a_{m}x^{m}=a_{m}\prod_{i=1}^{m}(x-\alpha_{i})\] \[B=b_{0}+\cdots+b_{n}x^{n}=b_{n}\prod_{j=1}^{n}(x-\beta_{j})\]
_Then their resultant is defined as \(res_{x}(A,B)=(-1)^{mn}b_{n}^{m}a_{m}^{n}\prod\limits_{j=1}^{n}\prod\limits_{i=1}^{ m}(\beta_{j}-\alpha_{i})\)_
_This implies that_
1. \(res(A,B)=\pm res(B,A)\)__
2. \(res(A,BC)=res(A,B)res(A,C)\)__
_for all nonzero polynomials \(A,B,C\)._
Note that the resultant can be calculated without finding the roots of each polynomial by using Sylvester's Matrix described on page 285 of [4].
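For instance, the multiplicativity property above is easy to check numerically with SymPy's resultant function (used here only as an illustration):

```python
import sympy as sp

x = sp.symbols('x')
A = x**2 + 3*x + 1
B = 2*x - 5
C = x**3 + x + 7

# Property 2 of Definition 3: res(A, B*C) = res(A, B) * res(A, C)
lhs = sp.resultant(A, B*C, x)
rhs = sp.resultant(A, B, x) * sp.resultant(A, C, x)
print(lhs, rhs, lhs == rhs)   # equal
```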
Given an integral with square free denominator \(\int\frac{R}{b}\), we define the Trager-Rothstein resultant polynomial (TR-resultant) as \(\operatorname{res}_{\theta}(R-zb^{\prime},b)\). We will forego the details of the rest of the algorithm and focus on a key theorem involving the TR-resultant polynomial.
Theorem 3.1 (Thm 12.7 in [4]): _Suppose we are integrating \(\int\frac{R(x)}{b(x)}\), where \(R(x)\), \(b(x)\in F[x]\) and \(b(x)\) is square-free. Then we have that \(\int\frac{R(x)}{b(x)}\) is elementary integrable if and only if all the roots in \(z\) of the TR-resultant are constants._
Theorem 3.2 is the key theorem that tells us whether a rational expression will be elementary integrable or not, either in application to itself if the denominator is square free, or in application to the final integral from Hermite reduction if not. This theorem will also be the key theorem to create the data generation method for rational expressions.
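A minimal SymPy sketch of this criterion for a single logarithmic extension \(\theta=\log(x)\) follows (the symbol t stands for \(\theta\), and the denominator is assumed square-free in t); it is an illustration of the theorem rather than of any production integrator.

```python
import sympy as sp

x, t, z = sp.symbols('x t z')        # t plays the role of theta = log(x)

def tr_roots(R, b):
    """Roots in z of the TR-resultant res_t(R - z*b', b), where b' is d/dx with
    dtheta/dx = 1/x.  Clearing the x-denominator only rescales the resultant,
    so the z-roots are unchanged."""
    bprime = sp.diff(b, x) + sp.diff(b, t)/x
    num, _ = sp.fraction(sp.together(R - z*bprime))
    return sp.solve(sp.resultant(num, b, t), z)

b = t**2 + t                          # theta*(theta + 1), square-free in t

roots = tr_roots(1/x, b)              # integrand 1/(x*log(x)*(log(x)+1))
print(roots, all(not r.has(x) for r in roots))    # roots +-1, constant   -> elementary
roots = tr_roots(sp.S(1), b)          # integrand 1/(log(x)*(log(x)+1))
print(roots, all(not r.has(x) for r in roots))    # roots +-x, not constant -> not elementary
```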
### The Polynomial Part
Suppose we are integrating \(P\), a polynomial in \(F[\theta]\). We again only focus on logarithmic and exponential extensions from our field. There are two different procedures to integrate \(P\) based on if the extension is logarithmic or exponential.
#### 3.2.1 Logarithmic extension:
Let \(P=p_{0}+p_{1}\theta+\cdots+p_{m}\theta^{m}\) where \(\theta=\log(u)\), \(u,p_{i}\in F_{n-1}\). It can then be shown that
\[\int p_{0}+\cdots+p_{m}\theta^{m}=q_{0}+\cdots+q_{m+1}\theta^{m+1}+\sum_{i=1}^ {k}c_{i}\log(v_{i}), \tag{1}\]
where \(q_{m+1}\in K,q_{i}\in F_{n-1}(1\leq i\leq m),c_{j}\in K,v_{j}\in F_{n-1}(1\leq j \leq k)\). The idea behind integrating \(P\) is to differentiate Equation (1) and then equate the coefficients of like powers of \(\theta\) to solve for each \(q_{i}\). The details of this can be found in [4, page 540].
**Exponential extension:** The exponential case is similar to the logarithmic case, however a couple of adjustments need to be made. The first adjustment is that polynomial exponents are allowed to be negative for exponential extensions. Thus, \(P=p_{-l}\theta^{-l}+\cdots+p_{0}+\cdots+p_{m}\theta^{m}\) and Equation (1) becomes:
\[\int p_{-l}\theta^{-l}+\cdots+p_{0}+\cdots+p_{m}\theta^{m}=q_{-l}\theta^{-l}+ \cdots+q_{0}+\cdots+q_{m}\theta^{m}+\sum_{i=1}^{k}c_{i}\log(v_{i}). \tag{2}\]
Note that in Equation (2), the answer has a highest degree of \(m\) instead of \(m+1\). The steps for equating like powers of \(\theta\) differ between Equation (1) and (2), and we will see an example of this difference soon in Section 4.1.
## 4 Data Generation based on the Risch Algorithm
In order to generate elementary integrable expressions, we will do what the Risch algorithm does as an initial step: generate polynomial expressions and rational expressions separately. Polynomial expressions and rational expressions can then be combined together through the additive property of integrals. We first focus our attention on the simpler case: the polynomial part. Then, we will show how to generate rational expressions.
### Polynomial Integrable Expressions
Generating polynomial expressions (in \(\theta\)) that are elementary integrable requires choosing the coefficients \(q_{i}\) from Equation (1) or (2) ourselves. We differentiate the equation and equate coefficients of like powers of \(\theta\), resulting in a system of differential equations. The randomly chosen \(q_{i}\)'s are substituted into this system to generate the integrable expression.
It turns out that this is no better than just using the BWD method, i.e., we select a random polynomial in \(\theta\) with random coefficients in \(F_{n-1}\) and take its derivative. This is not as general as it could be; one would also have to generate a random integrable expression in the smaller field \(F_{n-1}\). For the sake of simplicity, we omit this step here, which could be done recursively or by using the BWD method. We provide a small example of the BWD method for polynomials in \(\theta\) to show how the data is generated.
Example 1: Suppose we want to generate a degree 2 polynomial in \(\mathbb{Q}(x)[\theta]\) where \(\theta=\ln(\frac{1}{x})\). The coefficients in \(\theta\) must be in the previous field \(\mathbb{Q}(x)\). For simplicity, the logarithms in Equation (1) are omitted. The following coefficients are generated randomly:
* \(q_{0}=-7+8x+\frac{2}{x}\)
* \(q_{1}=-5+4x-\frac{6}{x}\)
* \(q_{2}=1+2x\)
which results in the polynomial
\[P=(1+2x)\ln\biggl{(}\frac{1}{x}\biggr{)}^{2}+\biggl{(}-5+4x-\frac{6}{x}\biggr{)} \ln\biggl{(}\frac{1}{x}\biggr{)}-7+8x+\frac{2}{x}.\]
When differentiated, we get
\[P^{\prime}=2\ln\biggl{(}\frac{1}{x}\biggr{)}^{2}+\biggl{(}-\frac{2\left(1+2x \right)}{x}+4+\frac{6}{x^{2}}\biggr{)}\ln\biggl{(}\frac{1}{x}\biggr{)}-\frac{-5 +4x-\frac{6}{x}}{x}+8-\frac{2}{x^{2}}\]
and the pair \((P^{\prime},P)\) is added to our dataset.
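The pair of Example 1 can be rebuilt and checked mechanically; the short SymPy sketch below (SymPy is only a stand-in for the CAS) reconstructs \(P\), differentiates it, and confirms the result matches the printed \(P^{\prime}\).

```python
import sympy as sp

x = sp.symbols('x')
theta = sp.log(1/x)

q0 = -7 + 8*x + 2/x
q1 = -5 + 4*x - 6/x
q2 = 1 + 2*x
P = q2*theta**2 + q1*theta + q0

# The derivative printed in Example 1:
printed = (2*theta**2
           + (-2*(1 + 2*x)/x + 4 + 6/x**2)*theta
           - (-5 + 4*x - 6/x)/x + 8 - 2/x**2)

print(sp.simplify(sp.diff(P, x) - printed))   # 0, so (P', P) is a valid data pair
```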
### Rational Integrable Expressions
As we will see in a moment, generating rational integrable expressions is more complex than the polynomial case. We will introduce some strategies to generate integrable expressions with square-free denominators (using the TR-method) as well as non square-free denominators (using a combination of Hermite reduction and the TR-method). Note that most of the examples shown here will be using the extension \(\theta=\log(u)\) as this is the harder case to solve. However, extensions with \(\theta=e^{u}\) will also appear in the dataset produced.
#### 4.2.1 Square-Free Denominators:
In the normal use of the TR-method, the input is a rational elementary function \(\frac{R}{b}\) such that \(\deg_{\theta}(R)<\deg_{\theta}(b)\) and \(b\) is square-free. The method then outputs the elementary integral of \(\frac{R}{b}\), or fails if Theorem 3 does not hold. Our goal is to discover polynomials \(R,b\in F[\theta]\) such that \(\frac{R}{b}\) is guaranteed to be elementary integrable. The main idea behind the process is to fulfill the conditions of Theorem 3 so that we know for sure that the expression is elementary integrable. To accomplish this, the general outline is as follows.
1. Randomly generate the denominator \(b\) in its square-free factorization, and keep that fixed.
2. Create a partial fraction decomposition where the denominators are all factors of \(b\), and the numerators are polynomials in \(\theta\) of degree 1 less than the denominator, with symbolic coefficients.
3. Compute the TR-resultant.
4. The symbolic coefficients of \(R\) must be chosen in a way that ensures the roots of the resultant are constant. 1. If the resultant only has factors of degree 2 or less, solve directly for the roots and set each root equal to a constant. 2. Otherwise, the resultant has irreducible factors of degree 3 or higher. Divide the resultant by the leading coefficient to make it monic. Then, the symbolic coefficients must be chosen in such a way that each coefficient of this is constant.
We first put our input into partial fraction form with symbolic coefficients because when the resultant is calculated, the TR-resultant factors in a way similar to how \(b\) factors (See Definition 3). We can see this with the following example.
Example 2: Let \(b=\theta^{4}-2\theta^{2}-2\theta^{3}-2\theta-3\) where \(\theta=\log(x)\), \(F=\mathbb{Q}(x)(\log(x))\) and we have only done a single extension so \(n=1\). We wish to discover a class of numerators \(R\) so that \(\frac{R}{b}\) integrates.
* Note that \(b\) factors into \(b=(\theta+1)(\theta-3)(\theta^{2}+1)\).
* We create the partial fraction representation of our input: \(\frac{a(x)}{\theta+1}+\frac{b(x)}{\theta-3}+\frac{c(x)\theta+d(x)}{\theta^{2 }+1}\), where \(a,b,c,d\in F_{n-1}=\mathbb{Q}(x)\).
* The factored form of the TR-resultant of \(\frac{R}{b}\) is \(-(a(x)x-z)(b(x)x-z)(c(x)^{2}x^{2}-4c(x)xz+d(x)^{2}x-2d(x)xz+xz^{2}+4z^{2})\).
* Recall that by Theorem 3.1, we need the roots of the resultant to be constant. Setting each factor of the resultant equal to a constant and solving for the symbolic coefficients, we get that \(a(x)=\frac{C_{1}}{x},b(x)=\frac{C_{2}}{x},c(x)=\frac{C_{3}}{x}\), and \(d(x)=\frac{C_{4}}{x}\) for any \(C_{1},C_{2},C_{3},C_{4}\in\mathbb{Q}\).
* Therefore, \(\frac{R}{b}=\frac{C_{1}}{x(\theta+1)}+\frac{C_{2}}{x(\theta-3)}+\frac{C_{3} \theta+C_{4}}{x(\theta^{2}+1)}\) is elementary integrable for any choice of those constants. We find that: \[\int\frac{R}{b}=\frac{C_{3}\log\Bigl{(}\log(x)^{2}+1\Bigr{)}}{2}+ C_{4}\arctan(\log(x))\\ +C_{1}\log(\log(x)+1)+C_{2}\log(\log(x)-3)\,.\]
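The outcome of Example 2 is easy to confirm by differentiation; the following SymPy check shows that the stated antiderivative differentiates back to \(R/b\) for arbitrary constants.

```python
import sympy as sp

x, C1, C2, C3, C4 = sp.symbols('x C1 C2 C3 C4')
theta = sp.log(x)

Rb = (C1/(x*(theta + 1)) + C2/(x*(theta - 3))
      + (C3*theta + C4)/(x*(theta**2 + 1)))

F = (C3*sp.log(theta**2 + 1)/2 + C4*sp.atan(theta)
     + C1*sp.log(theta + 1) + C2*sp.log(theta - 3))

print(sp.simplify(sp.diff(F, x) - Rb))   # 0
```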
In Example 2, take note that the factored form of the resultant is similar to the factored form of the denominator \(b\): that is, the degree in \(z\) of each factor of the resultant is the same as the degree in \(\theta\) of each factor of \(b\). As well, each symbolic coefficient in the numerator of each partial fraction were also the same unknowns that show up in each factor of the resultant.
Example 2 only had linear and quadratic irreducible factors. These are quite easy to solve by just isolating the unknown or using the quadratic formula. In general, degree 3 and higher irreducible factors in the resultant will be much harder to solve. Trying to solve for the roots of an irreducible degree 3 resultant means using the Cardano formula, which produces huge answers for the roots. We find that when trying to equate any of the roots to a constant and solving for the conditions on \(R\) as in Example 2, the expression size blows up and the solution starts to involve many radicals. Since radicals do not lie within our field, the symbolic coefficients then need to be chosen in a way such that the radicals disappear, which adds an extra layer of complexity. The formula size is even worse in degree 4, and for higher degrees no such formula in surds even exists. So instead, when the resultant has factors of degree higher than two, we look at two alternative options: assume the numerator to be of a specific form, or analyse the resultant qualitatively to figure out the conditions on the numerator. We show the former with the following example.
Example 3: Suppose \(\theta=\ln(x)\), \(F=\mathbb{Q}(x)(\ln(x))\) and \(b=x(\theta^{3}-x)\). Note that \(b\) is square-free in \(F\). The first step is to create a partial fraction decomposition with denominator \(b\) and symbolic coefficients for the numerator. Let
\[\frac{R}{b}=\frac{a(x)\theta^{2}+b(x)\theta+c(x)}{x(\theta^{3}-x)}.\]
The TR-resultant is computed as
\[\left(-x^{3}-27x^{2}\right)z^{3}+\left(27x^{2}a(x)+9x^{2}b(x)+3x^ {2}c(x)\right)z^{2}\] \[\qquad+\left(-9x^{2}a(x)^{2}-3x^{2}a(x)\,b(x)-9xb(x)\,c(x)-3xc(x) ^{2}\right)z\] \[\qquad+a(x)^{3}\,x^{2}+3a(x)\,b(x)\,c(x)\,x-b(x)^{3}\,x+c(x)^{3}\,.\]
Finding the solution to the roots explicitly produces huge expressions for \(a(x),b(x)\) and \(c(x)\) and involve radicals outside our field. Instead, we assume the form of the symbolic coefficients to find a set of solutions. We will assume they are quadratic polynomials (an arbitrary choice). Let
* \(a(x)=a_{2}x^{2}+a_{1}x+a_{0}\),
* \(b(x)=b_{2}x^{2}+b_{1}x+b_{0}\),
* \(c(x)=c_{2}x^{2}+c_{1}x+c_{0}\),
for \(a_{i},b_{i},c_{i}\in\mathbb{Q},0\leq i\leq 2\). Since the resultant is cubic in \(z\), it will have three roots. First, substitute the assumed form of the three coefficients into the resultant. Note the leading coefficient of the resultant is \((-x^{3}-27x^{2})\). Then, let our resultant be equal to
\[(-x^{3}-27x^{2})(z-r_{0})(z-r_{1})(z-r_{2}),r_{1},r_{2},r_{3}\in\mathbb{Q}.\]
Consider the equation formed by setting the TR-resultant computed earlier equal to the form just above. Let us move the terms to one side so we have an expression equal to 0. We may now solve for each coefficient of \(z\) to be 0 giving the following solution
\[\{a_{0} =3c_{1},\,a_{1}=0,\,a_{2}=0,b_{0}=0,\,b_{1}=0,\,b_{2}=0,\] \[c_{0} =0,c_{1}\ =\ c_{1},\,c_{2}\ =\ 0,\,r_{1}=c_{1},\,r_{2}=c_{1},r_{3}=c_{1}\}.\]
This can now be substituted into \(R\) to produce
\[\int\tfrac{R}{b}=\int\tfrac{3c_{1}\ln(x)^{2}+c_{1}x}{x(\ln(x)^{3}+x)}=c_{1}\ln\Bigl{(}\ln(x)^{3}+x\Bigr{)}.\]
In Example 3, we assumed a particular form for the symbolic coefficients to find a solution. This is a quick way to find a set of solutions, however this does not mean we have found all the solutions like with the linear and quadratic cases. Instead, we should try to fulfill the conditions of 4(b). That is, the symbolic coefficients are chosen in a way such that all of the coefficients of the TR-resultant are constant. To see why this is true, we give an informal proof.
Let the TR-resultant be \(f\in K[x][z]\). We can assume \(f\) is monic because if it were not, we would divide out the leading coefficient from the resultant to make \(f\) monic. Let \(F\) be the algebraic closure of \(K(x)\), so that \(f\in F[z]\). Factor \(f\) over \(F\) to get \(f=\prod_{i}(z-a_{i}),a_{i}\in F\). Each \(a_{i}\) is a root of \(f\). If we want the roots \(a_{i}\) to be constant, they should belong to the algebraic closure \(\bar{K}\). In that case, the coefficients of \(f\) should also belong to \(\bar{K}\), because they are polynomials in the \(a_{i}\), and they belong to \(K[x]\) because of how we defined \(f\). Thus, they belong to \(\bar{K}\cap K[x]\), which is \(K\). Therefore, \(f\) must have constant coefficients for the roots to be constant.
#### 3.2.2 Non Square-Free Denominators:
When computing the elementary integral of a rational function \(\frac{R}{b}\), the first step is to check whether \(b\) is square-free or not. Similarly, which technique is used to generate an elementary integrable expression depends on whether the fixed denominator \(b\) starts as square-free or not. Let us now assume \(b\) is not square-free, so the TR-method cannot be used directly. We first set up the problem just as in the square-free case: put \(b\) in partial fraction form and set symbolic coefficients for each partial fraction. The difference is that before, we would invoke the TR-method immediately; however, \(b\) is not square-free yet. Thus, we use Theorem 2.2, Hermite Reduction, recursively until we get a resulting integral whose denominator is square-free. Then, we use Theorem 3.2 just as before to find the conditions on \(R\) that make the whole expression \(\frac{R}{b}\) elementary integrable. The main benefit of non-square-free denominators is that there are more degrees of freedom in choosing the symbolic coefficients compared to the square-free case. This is shown with the example below.
Example 4: Let \(\theta=\log(x)\) and \(F=\mathbb{Q}(x)(\log(x))\). Let
\[b=\theta^{3}+2x\theta^{2}+x^{2}\theta+\theta^{2}+2x\theta+x^{2}.\]
We wish to find all \(R\in F\) such that \(\frac{R}{b}\) is elementary integrable. As with Theorem 2.2, we first compute the square-free factorization of \(b\) to find \(b=(\theta+1)(\theta+x)^{2}\). The partial fraction representation in this case will be
\[\frac{R}{b}=\frac{a(x)}{(\theta+1)}+\frac{b(x)}{(\theta+x)}+\frac{c(x)}{( \theta+x)^{2}}\]
and we wish to find \(a,b,c\in F_{n-1}\) that makes the entire expression elementary integrable. Since \(b\) is not square-free, one iteration of Hermite Reduction is done to produce:
\[\int\frac{R}{b}=-\frac{c(x)\,x}{(1+x)\,(\theta+x)}\\ +\int\frac{(a(x)+b(x))\,\theta+a(x)\,x+b(x)+\left(\frac{(\frac{d }{dx}c(x))x}{1+x}+\frac{c(x)}{1+x}-\frac{c(x)x}{(1+x)^{2}}\right)(\theta+1)}{ (\theta+1)\,(\theta+x)}.\]
Let us focus on the resulting integral: the denominator is \((\theta+1)(\theta+x)\) which is now square-free. Thus, Hermite Reduction is no longer needed and instead,
the TR-method is used on it. When the resultant is calculated and the roots of the TR-resultant are solved for (so that Theorem 3.1 is true), we get that the distinct roots are:
\[\left\{xa(x)\,,\ \frac{x\left(\left(\frac{d}{dx}c(x)\right)x^{2}+b(x)x^{2}+\left(\frac{d}{dx}c(x)\right)x+2b(x)x+b(x)+c(x)\right)}{x^{3}+3x^{2}+3x+1}\right\}.\]
Setting the first root to a constant is trivial to solve: \(a(x)=\frac{C_{1}}{x},C_{1}\in\mathbb{Q}\). The second root condition contains the unknowns \(b(x)\) and \(c(x)\). This can also be set equal to a constant and then solved for \(b(x)\) obtaining
\[b(x)=\frac{C_{2}\left(1+x\right)^{3}-\left(\frac{d}{dx}c(x)\right)x^{2}\left(1+x\right)-c(x)\,x}{x\left(1+x\right)^{2}},\qquad C_{2}\in\mathbb{Q}.\]
This means \(c(x)\) can be any function from \(F_{n-1}\). Let us demonstrate this by trying some values that are arbitrarily chosen:
* \(C_{1}=2\implies a(x)=\frac{2}{x}\)
* \(C_{2}=4\) and \(c(x)=x^{2}+\frac{1}{5x}\implies b(x)=\frac{-10x^{4}+5x^{3}+60x^{2}+61x+20}{5 x(1+x)^{2}}\)
* \(\frac{R}{b}=\frac{2}{x(\ln(x)+1)}+\frac{-10x^{4}+5x^{3}+60x^{2}+61x+20}{5x(1+x )^{2}(\ln(x)+x)}+\frac{x^{2}+\frac{1}{5x}}{(\ln(x)+x)^{2}}\)
* Then when we integrate \(\frac{R}{b}\), we get: \[\int\frac{R}{b}=-\frac{5x^{3}+1}{5(1+x)(\ln(x)+x)}+2\ln(\ln(x)+1)+4\ln(\ln(x)+x)\]
Example 4 gives us a much stronger freedom of choice because unlike with the square-free case, we actually get that our coefficient \(c(x)\) can be _any_ function in \(F_{n-1}\). This effectively means that we have three choices of freedom: one for \(a(x)\) (the choice of the constant \(C_{1}\)), one for \(b(x)\) (the choice of \(C_{2}\)), and one for \(c(x)\) (any expression in the previous field). In contrast, the only choices of freedom we had in the square-free case were the constants. Additionally, Example 4 had one functional degree of freedom \(c(x)\) since one factor from the denominator \(b\) was quadratic. In general, we will have more functional degrees of freedom for higher degree factors in the denominator.
## 5 Discussion
The Risch algorithm is an integral part of any CAS (pun intended). We have described a data generation method that uses the Risch algorithm to create expressions guaranteed to be elementary integrable. To understand the benefit of this data generation method, we create a simple dataset of 10,000 (integrand, integral) pairs. To compare against our dataset, we take a sample of 10,000 data points from each of the FWD, BWD, and IBP datasets. Of the 10,000 we created, a third comes from generating polynomial expressions as in Section 4.1, another third comes from generating rational expressions as in Section 4.2, and the final third comes from combining the two (similar to how the Risch algorithm separates the two parts from each other).
### Risch Data Generation Benefits
One criticism of the data generation method in [6] was that there were patterns in how the expressions are made, specifically in the FWD and BWD datasets. Recall from Section 2 that the BWD method produced long integrands and short integrals, whereas the FWD had the opposite problem. We take a closer look by examining the lengths of the integrands and integrals in their testing datasets. Note that the authors represent the mathematical expressions in prefix (or normal Polish) notation. The length is then just the number of tokens in this representation. The lengths of the (integrand, integral) pairs are shown for all three data methods in Figure 3.
Based on Figure 3, we can see quite a difference in lengths for the FWD and BWD methods. Suppose we consider an (integrand, integral) pair close in length if the absolute difference between the lengths of the integrand and the integral is less than 10. For the FWD and BWD methods, only 29% and 9% of pairs were considered close, respectively. The IBP and Risch methods do considerably better at generating close pairs, with 65% and 86% of pairs being considered close
Figure 3: Lengths of the Integrands and Integrals from the three test datasets in [6] as well as our generated dataset.
respectively. As mentioned earlier in Section 2, the presence of these patterns means that there is a risk of bias in an ML model trained on such data. Recall also from Section 2 how much of the data only differed by the choice of constants in the expression, making IBP a weaker generation method.
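The length statistics above are token counts of the prefix (Polish) representation; a minimal way to reproduce such counts from SymPy expression trees is sketched below (an approximation of the tokenisation in [6], not their exact scheme).

```python
import sympy as sp

x = sp.symbols('x')

def prefix_tokens(expr):
    """Pre-order traversal of the expression tree: one token per operator or leaf."""
    if not expr.args:                  # leaf: a symbol or a number
        return [str(expr)]
    tokens = [expr.func.__name__]      # operator / function name
    for arg in expr.args:
        tokens += prefix_tokens(arg)
    return tokens

def is_close_pair(integrand, integral, tol=10):
    return abs(len(prefix_tokens(integrand)) - len(prefix_tokens(integral))) < tol

f = x*sp.sin(x)
F = sp.sin(x) - x*sp.cos(x)
print(len(prefix_tokens(f)), len(prefix_tokens(F)))   # 4 and 8 tokens
print(is_close_pair(f, F))                            # True
```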
However, because of the choices of freedom we have in making our integrable expressions from the Risch algorithm, we can alleviate the two problems shown. This is true for the polynomial expressions, the rational expressions, and a combination of the two. The only patterns present in our dataset are those required for the expression to be elementary integrable.
With the dataset generated, Figure 3d shows the lengths, in prefix notation, of the (integrand, integral) pairs produced through the Risch algorithm. Figure 3d shows that the lengths of the integrands and integrals are much more evenly distributed, fixing the problem of the FWD and BWD datasets. Recall that the FWD method is also not able to generate (integrand, integral) pairs often, leading to a slow data generation method. Our method guarantees integrands that are elementary integrable 100% of the time, making it more efficient. Furthermore, we perform the same analysis of examining the dataset by substituting the integer coefficients with a CONST token, and find that 97% of the data remains unique. The reason it did not reach 100% is due to data generated in Section 4.2, the rational square-free case. The choices of freedom in this case are usually only the choice of the constant. Some randomly generated denominators happened to be the same by chance, and since the solutions only differ by a constant, they end up being the same when replaced with a CONST token. If wanted, these can be removed from the dataset.
### Future Work
We have presented a novel method of creating elementary integrable functions. However, there is much work that could still be done. Bronstein [2], when first introducing the Risch algorithm, separates the algorithm into four different cases: logarithmic transcendental, exponential transcendental, pure algebraic and mixed algebraic / transcendental cases. So far, we have only explored the first two cases. It would be beneficial to understand the latter two cases as radicals are something that should not be excluded from the dataset. To understand the latter two cases, one can read [1] or [10]. As with the present paper, the idea would be to find the conditions in the polynomial and rational cases that make the entire expression elementary integrable.
Furthermore, the current data generation method proposed can be further explored in a number of ways. For one, towers of extensions (i.e. \(F_{n},n\geq 2\)) have only been considered for polynomial expressions thus far. This can also be done with the rational expression generation method to create a greater variety of elementary integrable expressions. Also, working with irreducible cubic and higher degree polynomials (in \(\theta\)) for the rational case should further be examined. We have shown that when we assume the form of the numerator (Example 3), we can find solutions. However, it would be desirable to find _all_ possible numerators that make the entire expression integrable. The key to this would be examining
the TR-resultant and instead of explicitly solving for the roots, qualitatively analysing the resultant and figuring out the conditions of the generic coefficients would help overcome the computational cost of explicitly solving for the solution as discussed at the end of Section 4.2.
**Acknowledgements:** The authors would like to thank James Davenport and Gregory Sankaran for helpful discussion on conditions around constant roots of polynomials. They would also like to thank John May for help understanding Maple's integration command and testing data and the anonymous reviewers for their comments which improved the paper.
Matthew England is supported by EPSRC Project EP/T015748/1, _Pushing Back the Doubly-Exponential Wall of Cylindrical Algebraic Decomposition_ (DEWCAD). Rashid Barket is supported by a scholarship provided by Maplesoft and Coventry University.
|
2307.02327 | Equivariant graph neural network interatomic potential for Green-Kubo
thermal conductivity in phase change materials | Thermal conductivity is a fundamental material property that plays an
essential role in technology, but its accurate evaluation presents a challenge
for theory. In this work, we demonstrate the application of $E(3)$-equivariant
neural network interatomic potentials within Green-Kubo formalism to determine
the lattice thermal conductivity in amorphous and crystalline materials. We
apply this method to study the thermal conductivity of germanium telluride
(GeTe) as a prototypical phase change material. A single deep learning
interatomic potential is able to describe the phase transitions between the
amorphous, rhombohedral and cubic phases, with critical temperatures in good
agreement with experiments. Furthermore, this approach accurately captures the
pronounced anharmonicity that is present in GeTe, enabling precise calculations
of the thermal conductivity. In contrast, the Boltzmann transport equation
including only three-phonon processes tends to overestimate the thermal
conductivity by approximately a factor of 2 in the crystalline phases. | Sung-Ho Lee, Jing Li, Valerio Olevano, Benoit Sklénard | 2023-07-05T14:37:34Z | http://arxiv.org/abs/2307.02327v2 | # Equivariant graph neural network interatomic potential for Green-Kubo thermal conductivity in phase change materials
###### Abstract
Thermal conductivity is a fundamental material property that plays an essential role in technology, but its accurate evaluation presents a challenge for theory. In this letter, we demonstrate the application of E(3)-equivariant neural network interatomic potentials within Green-Kubo formalism to determine the lattice thermal conductivity in amorphous and crystalline materials. We apply this method to study the thermal conductivity of germanium telluride (GeTe) as a prototypical phase change material. A single deep learning interatomic potential is able to describe the phase transitions between the amorphous, rhombohedral and cubic phases, with critical temperatures in good agreement with experiments. Furthermore, this approach accurately captures the pronounced anharmonicity present in GeTe, enabling precise calculations of thermal conductivity. In contrast, the Boltzmann transport equation tends to overestimate it by approximately a factor of two in the crystalline phases.
Thermal conductivity is an intrinsic material property with deep implications in technology since it determines thermal management in the design of electronic devices [1; 2], and specifies the figure of merit in thermoelectric devices [3; 4]. Lattice vibrations, i.e. phonons, dominate heat transport in semiconductors and insulators. Much effort has been devoted to accurate calculations of lattice thermal conductivities from a microscopic perspective. The Boltzmann transport equation (BTE) [5; 6; 7], non-equilibrium Green function (NEGF) theory [8; 9; 10], and the Green-Kubo formula (GK) [11; 12; 13] are the three major approaches to lattice thermal conductivity calculations. BTE evaluates the response of phonon occupation to a temperature gradient, typically including three-phonon scattering processes, which limits its application to weakly anharmonic crystalline materials. NEGF treats phonons quantum mechanically and takes into account contact-channel interface scatterings and phonon anharmonicity by self-energies. However, it is computationally expensive [8]. GK provides the lattice thermal conductivity from the heat flux in an equilibrium molecular dynamics (MD) simulation, accounting for anharmonic effects to all orders [14]. Furthermore, recent developments extend GK to low temperatures [12; 13], which makes it a robust approach for a wide range of temperatures and materials. GK theory provides a unified approach to compute the lattice thermal conductivity in ordered and disordered solids. For harmonic amorphous systems, thermal transport can be described by the Allen and Feldman (AF) theory [15]. However, it has been shown that AF theory may be inadequate when anharmonic effects become important [16; 17].
The MD simulation in the GK approach requires a relatively long simulation time (up to a few nanoseconds) for adequate statistical sampling and an accurate description of interactions among atoms. Such long simulation times are affordable for MD with empirical force fields, but at the price of reduced accuracy and universality. _Ab initio_ MD has better accuracy but is too computationally expensive for large systems or long MD simulations. Extrapolation schemes have been proposed [18] to reduce the computational cost, but they are unsuitable for disordered solids.
In recent years, machine learning (ML) has emerged as a viable alternative for tasks that _ab initio_ methods have faced challenges with. In particular, machine learning interatomic potentials (MLIPs) have been successful in predicting energies, forces and stress tensors orders of magnitude faster than first-principles methods, while retaining their accuracy. Thermal transport GK calculations have been reported with MLIPs relying on descriptor-based approaches, such as Behler-Parrinello neural networks (NN) or kernel-based methods [19; 20; 21]. Graph NN (GNN) interatomic potentials based on message passing architectures (MPNN) [22; 23; 24; 25] have been proposed as an alternative to hand-crafted descriptors, whereby structures are encoded as a graph with atoms represented as nodes that are connected by edges. In initial models, the information at nodes and edges of the GNN was made _invariant_ with respect to the Euclidean group \(E(3)\) (i.e. the group of translations, rotations and inversions in Euclidean space), and the atomic representations were limited to scalar interatomic distances [22]. Such models have since been generally superseded by MPNN architectures built on convolution operations that are _equivariant_ with respect to the E(3) group. In equivariant approaches, isometric transformations on the relative atomic displacement vector inputs are propagated through the network to correspondingly transform the outputs. Equivariant approaches have been shown to achieve substantially improved data efficiency and unprecedented accuracy compared to their invariant counterparts [23; 24; 25]. In MPNNs, many-body interactions are captured by iteratively propagating information along the graph at each layer in the network. This has the effect of extending the local receptive field of an atom to significantly beyond the cutoff radius, which renders parallelization impractical [26]. Recently, a strictly local equivariant neural network approach has been proposed to address this drawback [26]. In this architecture, information is stored as a per-pair quantity, and instead of nodes exchanging information with their neighbours via edges, a convolution operation acts on the cutoff sphere in the form of a set of invariant (scalar) latent features and a set of equivariant (tensor) latent features that interact at each layer.
In this letter, we demonstrate that the strictly local E(3)-equivariant NN can be employed to compute the temperature-dependent thermal conductivity of germanium telluride (GeTe) in various phases using GK theory. GeTe is a chalcogenide material employed in many technological applications, such as phase change nonvolatile memory storage [27; 28], thermoelectricity [3; 29; 30] and spintronics [31; 32; 33]. It undergoes a ferroelectric phase transition from the low temperature rhombohedral \(\alpha\)-GeTe (spacegroup \(R3m\)) to a cubic \(\beta\)-GeTe (spacegroup \(Fm\bar{3}m\)) at a Curie temperature of \(T_{c}\approx 650-700\) K [34; 35; 36]. Amorphous GeTe also plays an important role in technological applications. Therefore, GeTe is an ideal prototype phase change material for the study of lattice thermal conductivity using GK theory.
The thermal conductivity tensor within GK theory is defined as:
\[\kappa_{\alpha\beta}(T)=\frac{1}{k_{\text{B}}T^{2}V}\lim_{\tau\to\infty}\int_ {0}^{\tau}dt\,\langle j_{\alpha}(t)\cdot j_{\beta}(0)\rangle_{T}, \tag{1}\]
where \(k_{\text{B}}\) is the Boltzmann constant, \(T\) the temperature, \(V\) the volume, \(j_{\alpha}(t)\) the \(\alpha\)-th Cartesian component of the macroscopic heat flux, and \(\langle j_{\alpha}(t)\cdot j_{\beta}(0)\rangle_{T}\) the heat flux autocorrelation function (HFACF), with the symbol \(\langle\cdot\rangle_{T}\) denoting ensemble average over time and over independent MD trajectories.
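As an illustration of how Eq. (1) is evaluated in practice, the sketch below (NumPy; not the authors' implementation) computes the HFACF from a discretized heat-flux time series and its running integral, whose plateau gives \(\kappa_{\alpha\beta}\). Array shapes, variable names and the unit handling are assumptions made for the example.

```python
import numpy as np

def green_kubo_kappa(j, dt, T, V, t_corr, kB=8.617333262e-5):
    """Running Green-Kubo integral of Eq. (1).

    j      : (n_steps, 3) macroscopic heat flux along an equilibrium MD trajectory
    dt     : sampling interval of the heat flux
    T, V   : temperature and cell volume
    t_corr : maximum correlation time used in the integral (truncation of tau)
    kB     : Boltzmann constant in the chosen unit system (here eV/K, illustrative)
    """
    n_corr = int(t_corr / dt)
    n_steps = len(j)
    # HFACF <j_alpha(t) j_beta(0)>, averaged over time origins
    hfacf = np.empty((n_corr, 3, 3))
    for lag in range(n_corr):
        hfacf[lag] = np.einsum('ta,tb->ab', j[lag:n_steps], j[:n_steps - lag]) / (n_steps - lag)
    # cumulative trapezoidal integration over tau -> running kappa_{alpha beta}(tau)
    running = np.cumsum(0.5 * (hfacf[1:] + hfacf[:-1]) * dt, axis=0)
    # the plateau of `running` is the conductivity tensor
    # (conversion to W m^-1 K^-1 is omitted here)
    return running / (kB * T**2 * V)
```

In practice the plateau is read off before the noisy tail of the autocorrelation dominates, and the result is further averaged over independent MD trajectories, as described below.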
The total heat flux of a system of \(N\) atoms is defined as
\[\mathbf{j}(t)=\sum_{i=1}^{N}\frac{d}{dt}\left(\mathbf{r_{i}}E_{i}\right), \tag{2}\]
where \(E_{i}=m_{i}\mathbf{v}_{i}^{2}/2+U_{i}\) is the total energy (i.e. kinetic and potential energy) of atom \(i\) with mass \(m_{i}\), velocity \(\mathbf{v}_{i}\) and position \(\mathbf{r_{i}}\). In MLIPs, the partitioning \(E=\sum_{i}E_{i}\) of the total energy of the system into atomic contributions \(E_{i}\) allows the total heat flux of a periodic system to be expressed as [11]:
\[\mathbf{j}(t)=\sum_{i=1}^{N}\mathbf{v}_{i}E_{i}-\sum_{i=1}^{N}\sum_{j\neq i}\mathbf{r}_{ ij}\left(\frac{\partial U_{i}}{\partial\mathbf{r}_{ij}}\cdot\mathbf{v}_{j}\right) \tag{3}\]
where the sum over \(j\) runs over the atoms that are within the cutoff radius \(r_{c}\) of atom \(i\) defined for the MLIP. We implemented the calculation of Eq. (3) in the LAMMPS code [37]. The term \(\partial U_{i}/\partial\mathbf{r}_{ij}\) is obtained by automatic differentiation of atomic energies \(U_{i}\) computed by the MLIP. It was also used for the calculation of the virial tensor [11; 38], which is required to perform simulations in the isothermal-isobaric (NpT) ensemble.
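For concreteness, a minimal NumPy transcription of Eq. (3) for a single snapshot is sketched below; it is not the LAMMPS implementation mentioned above, and the array layout (a flat pair list with precomputed \(\partial U_{i}/\partial\mathbf{r}_{ij}\)) is an assumption made for illustration.

```python
import numpy as np

def heat_flux_snapshot(E, v, pairs, r_ij, dU_drij):
    """Evaluate Eq. (3) for one MD snapshot.

    E        : (N,)   per-atom total energies E_i
    v        : (N, 3) atomic velocities
    pairs    : (P, 2) indices (i, j) of atoms j inside the cutoff sphere of atom i
    r_ij     : (P, 3) displacement vectors r_ij
    dU_drij  : (P, 3) derivatives dU_i/dr_ij, e.g. from automatic differentiation
               of the atomic energies predicted by the MLIP
    """
    # convective term: sum_i v_i E_i
    j_conv = (E[:, None] * v).sum(axis=0)
    # potential term: - sum_i sum_{j != i} r_ij (dU_i/dr_ij . v_j)
    dot = np.einsum('pa,pa->p', dU_drij, v[pairs[:, 1]])
    j_pot = -(dot[:, None] * r_ij).sum(axis=0)
    return j_conv + j_pot
```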
To generate the reference dataset to train the MLIP, _ab initio_ MD simulations based on density functional theory (DFT) were performed with temperatures ranging from 100 K to 2500 K using the VASP code [39; 40]. The generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE) [41] was used for the exchange-correlation energy and Grimme's D3 dispersion correction [42] was applied. The supercells contained 192 and 216 atoms for the initial rhombohedral and cubic structures, respectively. Then, 6000 structures in total were taken from the MD trajectories and recomputed to obtain more accurate energy, forces, and stress tensors. We used an energy cutoff of 400 eV and a \(2\times 2\times 2\) \(k\)-mesh to sample the Brillouin zone. The equivariant NN model was trained on energy, forces and stress using the Allegro package [26]. The mean absolute errors (MAE) and root mean squared errors (RMSE) on the predicted energies, forces and stress tensors on the test dataset are 0.90 meV/atom, 29.87 meV/Å, 0.28 meV/Å\({}^{3}\) and 1.07 meV/atom, 42.97 meV/Å, 0.37 meV/Å\({}^{3}\), respectively (see Suppl. Mat. for more information on the training procedure and dataset partitioning).
To further validate the MLIP, the equilibrium geometries of crystalline GeTe were optimized using the MLIP. For \(\alpha\)-GeTe, the lattice parameter was \(a=4.42\) Å and the angle \(\alpha=57.13^{\circ}\), close to DFT results of \(a=4.41\) Å and \(\alpha=57.42^{\circ}\). Similarly, for \(\beta\)-GeTe, the MLIP yields \(a=4.24\) Å, in excellent agreement with the lattice parameter from DFT of \(a=4.23\) Å.
Moreover, the phonon dispersion from the MLIP is in excellent agreement with DFT for both \(\alpha\) and \(\beta\)-GeTe, as shown in Fig. 1. In particular, our model describes optical phonons well, which is usually challenging for MLIPs [19; 43]. Imaginary soft phonon modes in cubic GeTe are also well described by the MLIP, which is essential to capture the phase transition [44; 45]. These phonon dispersions were computed using the finite displacement method implemented in Phonopy [46] with \(3\times 3\times 3\) and \(5\times 5\times 2\) supercells of the conventional unit cells for cubic and rhombohedral phases, respectively. For the DFT calculations, we used the same settings as those used to generate the reference dataset. LO-TO splitting was not included in our calculations as long-range Coulomb interactions tend to be screened by free carriers in real samples [47].
We investigated the lattice dynamics of GeTe through MD simulations across the \(\alpha\to\beta\) phase transition with
our MLIP. For each temperature, GeTe supercells were first equilibrated for at least 200 ps in the NpT ensemble at ambient pressure with a 2 fs timestep in order to obtain the averaged temperature-dependent structural parameters shown in Fig. 2. The rhombohedral lattice parameter \(a\) and angle \(\alpha\) reach cubic values at \(T\approx 650\) K, in good agreement with experimental data.
By employing the temperature-dependent effective-potential (TDEP) method [48; 49; 50], the temperature-dependent interatomic force constants (IFCs) were extracted from a 600 ps MD simulation in the microcanonical ensemble, after equilibrating the system in the NVT ensemble using the structural parameters depicted in Fig. 2. By utilizing these IFCs, we computed phonon spectra as a function of temperature (refer to the Suppl. Mat. for more detailed information). Fig. 3 presents the evolution of the longitudinal and transverse optical phonon modes (\(\Gamma_{6}\) and \(\Gamma_{4}\), respectively) as a function of temperature. The softening of these two modes up to the Curie temperature is corroborated by previous theoretical studies [44; 45] and is comparable to experiments [51; 47; 52]. Beyond 650 K, the optical phonons merge, indicating the transition to the cubic phase where optical phonons exhibit three-fold degeneracy.
To compute the GK thermal conductivity of cubic, rhombohedral and amorphous GeTe, MD simulations with the MLIP were performed at different temperatures. The amorphous GeTe structure was generated using a melt-quench process (see Suppl. Mat.). The heat flux was calculated during MD simulations in the microcanonical ensemble and the ensemble average was performed over independent trajectories of at least 1 ns after equilibration in the NpT ensemble. After testing the convergence with respect to system size (see Suppl. Mat.), we used supercells containing 360 atoms for the rhombohedral phase and 512 atoms for the amorphous and cubic phases.
Figure 1: Comparison of phonon dispersions computed with DFT and with the MLIP of (a) \(\alpha\)-GeTe and (b) \(\beta\)-GeTe
Figure 3: Temperature evolution of A\({}_{1}\) and E optical phonon modes computed with the TDEP method and compared against experimental data from Ref. [51; 47; 52].
Figure 2: Evolution of (a) the lattice parameter \(a\) and (b) the angle \(\alpha\) as a function of temperature in the NpT MD simulations of crystalline GeTe, compared against experimental data from Ref. [34; 35; 36]. Simulated lattice parameters in (a) were shifted by \(-0.1\) Å.
Although cubic GeTe is metastable below \(T_{c}\), GK is able to determine its lattice thermal conductivity as it becomes dynamically stable at \(T\geq 300\) K (see finite temperature phonon spectra in Suppl. Mat.). Rhombohedral GeTe shows a higher thermal conductivity than cubic GeTe before 650 K (see Fig. 4) after which the two curves merge, reflecting the \(\alpha\rightarrow\beta\) phase transition.
The comparison against experiments is challenging because experimental values of lattice thermal conductivities of crystalline GeTe show a large dispersion. There are two reasons for this. First, thermal conductivity comprises a lattice contribution and an electronic contribution. Therefore, experimental lattice thermal conductivity is an indirect measurement, which is obtained by removing the electronic contribution, typically evaluated using the Wiedemann-Franz law, which introduces an additional approximation through the Lorenz number. Second, the sample quality varies. Extrinsic scatterings due to defects may alter the thermal conductivity measurements. For example, an extra phonon-vacancy scattering has to be included in order to recover a good agreement with experimental data [54, 55]. Despite the significant experimental variations mentioned above, the calculated GK thermal conductivity values are found to fall within the range of experimental values.
The GK lattice thermal conductivity for the amorphous phase (solid green line) is in excellent agreement with the experimental data of Ref. [54] (green squares). This can be regarded as a direct comparison with the experiment since the electronic contribution to the thermal conductivity was found to be negligible in amorphous GeTe [56]. A previous study obtained a similar value of \(0.27\pm 0.05\) W\(\cdot\)m\({}^{-1}\cdot\)K\({}^{-1}\) at 300 K from GK simulations with a Behler-Parrinello-type MLIP [21]. The predicted thermal conductivity for amorphous GeTe is constant until \(\sim 450\) K. It then starts to increase, indicating a transition to a crystalline phase, as evidenced by the evolution of the radial distribution function (see Suppl. Mat.) and consistent with the amorphous-crystalline phase transition temperature observed experimentally [54].
To obtain the BTE thermal conductivity, we used the TDEP 2nd and 3rd order IFCs from MD simulations and a \(30\times 30\times 30\) \(q\)-mesh. This allows a direct comparison between GK and BTE as both calculations were on the same footing, with identical interatomic potential and the same temperature; the only difference being the thermal transport formalism. BTE overestimates the thermal conductivity by about 1.8 W\(\cdot\)m\({}^{-1}\cdot\)K\({}^{-1}\), which is about twice the GK result at 300 K, and about three times that at 900 K. Such overestimation is an indication that BTE cannot capture the strong anharmonicity exhibited by GeTe.
In conclusion, we developed an equivariant graph neural network interatomic potential to study thermal transport in amorphous and crystalline GeTe. The potential describes GeTe at a near-_ab initio_ level of accuracy for the rhombohedral, cubic and amorphous phases with a single model. Our potential also correctly captures phase transitions with Curie temperatures in good agreement with experimental data. Combined with the Green-Kubo theory, it can determine the lattice thermal conductivity not only for strongly anharmonic crystals, but also for the amorphous phase.
We thank F. Bottin and J. Bouchet for discussions about TDEP calculation. This work was performed using HPC/AI resources from GENCI-IDRIS (Grant 2022-A0110911995) and was partially funded by European commission through ECSEL-IA 101007321 project StorAIge and the French IPCEI program.
|
2310.16945 | Causal Q-Aggregation for CATE Model Selection | Accurate estimation of conditional average treatment effects (CATE) is at the
core of personalized decision making. While there is a plethora of models for
CATE estimation, model selection is a nontrivial task, due to the fundamental
problem of causal inference. Recent empirical work provides evidence in favor
of proxy loss metrics with double robust properties and in favor of model
ensembling. However, theoretical understanding is lacking. Direct application
of prior theoretical work leads to suboptimal oracle model selection rates due
to the non-convexity of the model selection problem. We provide regret rates
for the major existing CATE ensembling approaches and propose a new CATE model
ensembling approach based on Q-aggregation using the doubly robust loss. Our
main result shows that causal Q-aggregation achieves statistically optimal
oracle model selection regret rates of $\frac{\log(M)}{n}$ (with $M$ models and
$n$ samples), with the addition of higher-order estimation error terms related
to products of errors in the nuisance functions. Crucially, our regret rate
does not require that any of the candidate CATE models be close to the truth.
We validate our new method on many semi-synthetic datasets and also provide
extensions of our work to CATE model selection with instrumental variables and
unobserved confounding. | Hui Lan, Vasilis Syrgkanis | 2023-10-25T19:27:05Z | http://arxiv.org/abs/2310.16945v4 | # Causal Q-Aggregation for CATE Model Selection
###### Abstract
Accurate estimation of conditional average treatment effects (CATE) is at the core of personalized decision making. While there is a plethora of models for CATE estimation, model selection is a nontrivial task, due to the fundamental problem of causal inference. Recent empirical work provides evidence in favor of proxy loss metrics with double robust properties and in favor of model ensembling. However, theoretical understanding is lacking. Direct application of prior theoretical work leads to suboptimal oracle model selection rates due to the non-convexity of the model selection problem. We provide regret rates for the major existing CATE ensembling approaches and propose a new CATE model ensembling approach based on Q-aggregation using the doubly robust loss. Our main result shows that causal Q-aggregation achieves statistically optimal oracle model selection regret rates of \(\frac{\log(M)}{n}\) (with \(M\) models and \(n\) samples), with the addition of higher-order estimation error terms related to products of errors in the nuisance functions. Crucially, our regret rate does not require that any of the candidate CATE models be close to the truth. We validate our new method on many semi-synthetic datasets and also provide extensions of our work to CATE model selection with instrumental variables and unobserved confounding.
## 1 Introduction
Identifying optimal decisions requires understanding the causal effect of an action on an outcome of interest. With the emergence of rich and large datasets in many application domains such as digital experimentation, precision medicine, and digital marketing, identifying optimal personalized decisions has emerged as a mainstream topic in the literature. Identifying optimal personalized decisions requires understanding how the causal effect changes with observable characteristics of the treated unit. For this reason, many recent works have studied the estimation of conditional average treatment effects (CATE):
\[\tau_{0}(X):=\mathbb{E}[Y(1)-Y(0)\mid X]\]
where \(X\) are observable features and \(Y(d)\) is the potential outcome of interest under treatment \(d\in\{0,1\}\).
This has led to a surge of many different methods for CATE estimation using machine learning techniques, such as deep learning (Shalit et al., 2017; Shi et al., 2019), lasso (Imai and Ratkovic, 2013), random forests (Wager and Athey, 2018; Oprescu et al., 2019), Bayesian regression trees (Hahn et al., 2020), as well as model-agnostic frameworks such as meta-learners (Kunzel et al., 2019) and double machine learning (Kennedy, 2020; Foster and Syrgkanis, 2023; Nie and Wager, 2021). Machine learning CATE estimation has been considered both under the assumption of unconfoundedness, i.e., that outcomes \(Y(1),Y(0)\) are independent of the assigned treatment \(D\), conditional on the observed features \(X\), as well as when there is unobserved confounding and access to instrumental variables (Hartford et al., 2017; Syrgkanis et al., 2019).
However, the effectiveness of each approach depends on the learnability of the various causal mechanisms at play and the structure of the CATE function. This has led to the pursuit of automated and data-driven model selection approaches for CATE estimation (Schuler et al., 2018; Mahajan et al., 2022; Curth and van der Schaar, 2023; Alaa and Van Der Schaar, 2019). Due to the fundamental problem of causal inference, i.e., that we do not observe both counterfactual outcomes \(Y(1),Y(0)\) for each unit, there does not exist a perfect analogue of out-of-sample model selection based on mean squared error, as is typically invoked in regression problems. For this reason, many works have proposed proxy loss metrics Nie and Wager (2021); Foster and Syrgkanis (2023); Kennedy (2020); Alaa and Van Der Schaar (2019), analogous to the mean |
2306.15214 | Mirror symmetry for circle compactified 4d $\mathcal{N}=2$ SCFTs | We propose a mirror symmetry for 4d $\mathcal{N}=2$ superconformal field
theories (SCFTs) compactified on a circle with finite size. The mirror symmetry
involves vertex operator algebra (VOA) describing the Schur sector (containing
Higgs branch) of 4d theory, and the Coulomb branch of the effective 3d theory.
The basic feature of the mirror symmetry is that many representational
properties of VOA are matched with geometric properties of the Coulomb branch
moduli space. Our proposal is verified for a large class of Argyres-Douglas
(AD) theories engineered from M5 branes, whose VOAs are W-algebras, and Coulomb
branches are the Hitchin moduli spaces. VOA data such as simple modules, Zhu's
algebra, and modular properties are matched with geometric properties like
$\mathbb{C}^*$-fixed varieties in Hitchin fibers, cohomologies, and some DAHA
representations. We also mention relationships to 3d symplectic duality. | Peng Shan, Dan Xie, Wenbin Yan | 2023-06-27T05:25:44Z | http://arxiv.org/abs/2306.15214v1 | # Mirror symmetry for circle compactified 4d \(\mathcal{N}=2\) SCFTs
###### Abstract
We propose a mirror symmetry for 4d \(\mathcal{N}=2\) superconformal field theories (SCFTs) compactified on a circle with finite size. The mirror symmetry involves vertex operator algebra (VOA) describing the Schur sector (containing Higgs branch) of 4d theory, and the Coulomb branch of the effective 3d theory. The basic feature of the mirror symmetry is that many representational properties of VOA are matched with geometric properties of the Coulomb branch moduli space. Our proposal is verified for a large class of Argyres-Douglas (AD) theories engineered from M5 branes, whose VOAs are W-algebras, and Coulomb branches are the Hitchin moduli spaces. VOA data such as simple modules, Zhu's algebra, and modular properties are matched with geometric properties like \(\mathbb{C}^{*}\)-fixed varieties in Hitchin fibers, cohomologies, and some DAHA representations. We also mention relationships to 3d symplectic duality.
## 1 Introduction
Mirror symmetry plays an important role in modern theoretical physics and mathematics as it connects a large number of disciplines including string theory, geometry, algebra, and representation theory. The two dimensional mirror symmetry [1] involves a pair of Calabi-Yau (CY) manifolds \(X,\check{X}\) which can be used to define a pair of two dimensional
\((2,2)\) superconformal field theories (SCFTs) \({\cal T}(X)\) and \({\cal T}(\check{X})\). The statement is then that \({\cal T}(X)\) and \({\cal T}(\check{X})\) are dual in the infrared (IR)
\[{\cal T}(X)\simeq{\cal T}(\check{X}). \tag{1}\]
The basic feature of the mirror symmetry is that the same physical quantities (such as the prepotential) can be computed from different geometrical data of \(X\) or \(\check{X}\) [2], which leads to many interesting correspondences in mathematics. More importantly, quantities which are difficult to compute on one side might become easier to access by looking at the mirror.
Three dimensional \({\cal N}=4\) SCFTs also exhibit a similar mirror symmetry [3], which often involves two hyper-Kahler manifolds \(X\) and \(Y\) acting as moduli spaces of vacua of the 3d theory. The basic feature of 3d mirror symmetry discussed in [3] is that \(X\) (resp. \(Y\)) can be realized either as the Higgs (resp. Coulomb) branch of one theory \({\cal T}_{1}\) or the Coulomb (resp. Higgs) branch of another theory \({\cal T}_{2}\). Again, the manifold which is difficult to describe on one side may have a simpler description in its mirror. It was further realized in [4; 5; 6; 7] that there are dualities involving geometric properties of \(X\) and \(Y\). For example, one can get an algebra \({\cal A}_{X}\) through the quantization of \(X\) (and its resolution), and the representation theory of \({\cal A}_{X}\) is closely related to the geometric properties of \(Y\)
\[{\cal A}_{X}\longleftrightarrow Y. \tag{2}\]
This kind of duality is called symplectic duality [5; 6].
Now consider a four dimensional \({\cal N}=2\) SCFT compactified on a circle \(S^{1}\) with finite radius. One may wonder whether there is a similar mirror symmetry. The resulting 3d effective theory has a Coulomb branch \({\cal M}_{C}\) which is a hyper-Kahler manifold admitting a torus fibration [8], and a Higgs branch \({\cal M}_{H}\) which is the same hyper-Kahler cone as that of the original 4d theory. In this case, \({\cal M}_{H}\) and \({\cal M}_{C}\) are rather different and one does not expect to find a dual theory which exchanges the roles of \({\cal M}_{C}\) and \({\cal M}_{H}\).
However, motivated by the symplectic duality interpretation of the 3d mirror symmetry, the analog of the mirror symmetry for circle compactified 4d theories might be formulated as an algebra/geometry duality. Indeed, there is strong evidence [9; 10; 11] that the algebra should be the vertex operator algebra (VOA) associated with the 4d theory [12], which indeed contains the Higgs branch operators as a subset [13; 14; 15], and the geometric side should be the Coulomb branch. Given an arbitrary 4d \({\cal N}=2\) SCFT \({\cal T}\), we propose the following mirror symmetry between the corresponding VOA(\({\cal T}\)) and the Coulomb branch \({\cal M}_{C}({\cal T})\) of \({\cal T}\) compactified on the circle
\[{\rm VOA}({\cal T})\longleftrightarrow{\cal M}_{C}({\cal T}), \tag{3}\]
with the dictionary summarized in table 1. Given a 4d \({\cal N}=2\) SCFT, it is in general difficult to determine either its associated VOA or its Coulomb branch \({\cal M}_{C}\). However, in a series of previous works, both the corresponding VOA [13; 16; 17; 18] and the Coulomb branch of a large class of 4d \({\cal N}=2\) SCFTs [17; 19; 20] are known 1, so one can thoroughly study and check the mirror symmetry for this class of theories.
Footnote 1: Corresponding VOAs of many different series in this class of generalized AD theories were already studied in [13; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40].
This class of theories is engineered by compactification of a 6d \((2,0)\) theory of type \(\mathfrak{j}=\) ADE on a sphere with a regular and an irregular singularity. In the cases of interest here, the irregular singularity is labelled by a rational number \(\nu=\frac{u}{m}\)2 (see table 9 for allowed values), and the regular singularity is labelled by a nilpotent orbit of \(\mathfrak{j}\). It was found in [13, 16, 18] that the associated VOA is the W-algebra \(W_{-h^{\vee}+\frac{1}{\nu}}(\mathfrak{j},f)\), and the associated \(\mathcal{M}_{C}\) is the Hitchin moduli space \(\mathcal{M}_{Hit}(\mathfrak{j},\nu,(f^{\vee},c))\) with \((f^{\vee},c)\) being the dual of \(f\) and \(c\) being a conjugacy class of the component group 3[41]. Therefore the mirror symmetry is the correspondence between the following two objects
Footnote 2: \(m\) takes value from a finite set given by the Lie algebra, and \(u\geq 1\).
Footnote 3: We will omit \(c\) when the component group is trivial or \(c=1\).
\[W_{-h^{\vee}+\frac{1}{\nu}}(\mathfrak{j},f)\longleftrightarrow\mathcal{M}_{ Hit}(\mathfrak{j},\nu,(f^{\vee},c)). \tag{4}\]
One can also get non-simply laced W-algebra by doing outer automorphism twist around the singularity [17], and the pair of objects are
\[W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\longleftrightarrow\mathcal{M}_{ Hit}((\mathfrak{j},o),\nu,(f^{\vee},c)). \tag{5}\]
Here \(o\) is the outer automorphism of ADE Lie algebra \(\mathfrak{j}\) whose invariant Lie algebra is \(\mathfrak{g}^{\vee}\) (the Langlands dual of \(\mathfrak{g}\)), \(n\) is the lacety of \(\mathfrak{g}\), summarized in table 3. The simply laced case (4) can also be fit into (5) by noticing that \(\mathfrak{j}=\mathfrak{g}=\mathfrak{g}^{\vee}\) when \(\mathfrak{j}\) is simply laced and choosing \(o=\{1\}\). The appearance of a Lie algebra and its Langlands dual on each side of the duality is a feature similar to many dualities of physical theories found before (For example, in 4d \(\mathcal{N}=4\) SYM theories).
In the following we briefly explain how representation-theoretic aspects of the VOA are related to geometric properties of the Coulomb branch in our particular class of examples. Part of these statements can be formulated rigorously and will be proved in a parallel mathematical paper [42].
1. **Simple modules in the category \(\mathcal{O}\) of VOA and \(\mathbb{C}^{*}\) fixed varieties of \(\mathcal{M}_{C}\)**: There is a bijection between the simple modules of \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\)4 and the \(\mathbb{C}^{*}\)-fixed varieties of \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\). It was first observed in [9] for the cases where the 4d theories are \((A_{N-1},A_{M-1})\) Argyres-Douglas (AD) theories with \(N\) and \(M\) coprime 5, then generalized to the cases where the 4d theories are \((A_{1},A_{N})\) and \((A_{1},D_{N})\) AD
theories for \(N\in\mathbb{Z}_{>0}\) in [10]. To generalize this correspondence to arbitrary \(\mathfrak{g}\), \(\nu\) and \(f\), a crucial observation is that the fixed varieties of Hitchin moduli spaces \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,f^{\vee})\) are reduced to that of the affine Springer fibre of elliptic type, and there is a nice algebraic description of the latter. Using this description, we find a natural bijection between fixed varieties and simple modules of the corresponding affine Lie algebra when the level is boundary admissible 6. This will be explained in [42]. For general W-algebras, it is conjectured in [43] that simple modules can be obtained from simple modules of the affine Lie algebra from BRST reduction. We explain also in loc. cit. that this reduction is the same as a reduction of fixed varieties on the Hitchin side. Moreover, our results also provide predictions for classifications of simple modules of non-admissible W-algebras. Footnote 6: This happens when \(\nu=\frac{u}{h_{\theta}}\), where \(h_{\theta}\) is the Coxeter number for untwisted theory, and twisted Coxeter number for twisted case.
2. **Conformal weight and momentum map**: One can compute the momentum map for a fixed point using the Morse theory on \(\mathcal{M}_{C}\), and match this with the conformal weight of the corresponding VOA [9; 10]. In this work, we propose a general formula relating conformal weights of simple modules of \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\) to the values of moment maps of \(\mathbb{C}^{*}\)-fixed points of \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\).
3. **Modular transformation and DAHA**: The space of characters of simple modules of some VOAs admits a modular property with respect to a certain \(SL(2,\mathbb{Z})\) action. This was shown for admissible AKM [43] and W-algebras [44]. On the other hand, the cohomology of fixed varieties of \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,f^{\vee})\) gives a finite dimensional representation of the double affine Hecke algebra (DAHA) [45; 46], and in some cases, it admits a projective action of \(SL(2,\mathbb{Z})\) which is compatible with corresponding automorphisms of DAHA [47]. For admissible W-algebras, we show in [42] that the \(SL(2,\mathbb{Z})\) representations on both sides coincide 7. Our result also gives interesting insights on the modular property of non-admissible W-algebras. Footnote 7: The relation between modular matrices of minimal W-algebras of \(A\) type and spherical DAHA of \(A\) type was already shown in [48].
4. **Modular property and Coulomb branch index**: The Coulomb branch index of a 4d theory on lens space \(L(k,1)\) times \(S^{1}\) can be computed using the Morse theory data on the fixed varieties of the Coulomb branch. It was found in [10; 49] that the Coulomb branch index is related to the modular properties of the corresponding VOA. We will show that the same relation works for the admissible cases, which gives a strong hint that such a relation should hold in general.
5. **Zhu's \(C_{2}\) algebra and cohomology ring**: There is a so-called Zhu's \(C_{2}\)-algebra associated with the VOA\((\mathcal{T})\)[50]. It provides important information on the representation theory. On the Hitchin side, one naturally has a cohomology ring. In the context of principal admissible W-algebras, we find that Zhu's \(C_{2}\) algebra is the same as the cohomology ring of \(\mathcal{M}_{C}(\mathcal{T})\). In general, we would expect that the cohomology ring should be related to some algebra on the VOA side which characterizes simple modules.
6. **Relation with 3d symplectic duality**: One can take the radius of the compactification circle to zero to get a 3d \(\mathcal{N}=4\) SCFT from the 4d theory \(\mathcal{T}\). The Higgs branch \(\mathcal{M}_{H}^{3d}(\mathcal{T})\) of the 3d theory is the same as \(\mathcal{M}_{H}(\mathcal{T})\), while the Coulomb branch \(\mathcal{M}_{C}^{3d}(\mathcal{T})\) is related to \(\mathcal{M}_{C}(\mathcal{T})\) in a less obvious way [51; 52]. The Higgs and Coulomb branches of the 3d theory can also be described by its 3d mirror [53]. \(\mathcal{M}_{H}^{3d}(\mathcal{T})\) and \(\mathcal{M}_{C}^{3d}(\mathcal{T})\) naturally form a symplectic pair, and many known symplectic pairs can be obtained in this way. Moreover, a finite W-algebra can be found as the twisted Zhu's algebra of VOA(\(\mathcal{T}\)) [54], which is exactly the same algebra studied in the context of 3d symplectic duality. So from the 4d perspective, the appearance of an algebra in the symplectic duality is natural.
We would like to add that there is one more interesting relation for the mirror pair: the characters of VOA modules can be computed using the wall-crossing data on \(\mathcal{M}_{C}(\mathcal{T})\)[23; 26; 55]. We will not discuss this relation in this paper, but hope to study it in the future.
**Physical interpretation of mirror symmetry**: Let us now justify the name of mirror symmetry, namely that the Coulomb branch of the circle compactified 4d theory \(\mathcal{T}_{1}\) is given by the Higgs branch of another theory \(\mathcal{T}_{2}\). The crucial difference with respect to the 3d mirror is that \(\mathcal{T}_{2}\) has to be a **five** dimensional theory. Following the discussion in [52], one first compactifies the 6d theory on a Riemann surface \(\Sigma\) and then on a circle \(S^{1}\) to get a 4d \(\mathcal{N}=2\) theory on a circle. On the other hand, by changing the order of compactification, one first obtains 5d maximal SYM as the low-energy theory of the 6d theory. The Coulomb branch of the original theory is then the Higgs branch of the 5d theory compactified on \(\Sigma\). This leads to the description of the Coulomb branch of the 4d theory on a circle as the Hitchin moduli space by explicitly writing down the Higgs branch equation of motion of the 5d theory (figure 1).
The paper is organized as the following: in section 2, we review the classification of 4d \(\mathcal{N}=2\) SCFTs from 6d \((2,0)\) theory and the structure of their Coulomb and Higgs (Schur) branches. Section 3 reviews the representation theory of admissible W-algebras. Section 4 discusses the zero fiber of Hitchin moduli space, its relation to affine Springer fibre, and the computation of fixed varieties. Using the knowledge of previous sections, we finally check the dictionary of the mirror symmetry in table 1 which is the main focus of section 5. We will mainly provide examples and predictions here. Finally various generalizations are discussed in section 6.
## 2 4d \(\mathcal{N}=2\) SCFTs from 6d SCFTs on a sphere
4d \(\mathcal{N}=2\) theories have two kinds of moduli spaces of vacua: the Coulomb branch and the Higgs branch. The low energy effective theory of the Coulomb branch is solved by the Seiberg-Witten (SW) solution [56; 57]. Roughly speaking, the SW solution is given by a family of algebraic varieties fibered over a base manifold \(B\) which is the Coulomb branch of the 4d theory on flat space. If we further compactify the 4d theory on a circle with finite radius \(R\), the effective theory also has a Coulomb branch \(\mathcal{M}_{C}\) which is a hyper-Kahler manifold [8]. \(\mathcal{M}_{C}\) is given by an abelian variety fibered over the base \(B\) in one of its complex structures.
In general, it is difficult to find the SW solution for an arbitrary 4d \(\mathcal{N}=2\) theory. However, for models constructed using the 6d \((2,0)\) theory, one can find SW solutions using Hitchin moduli spaces [58; 59]. Given a 6d \((2,0)\) theory of type \(\mathfrak{j}\) and a Riemann surface \(\Sigma_{g,n}\) of genus \(g\) with \(n\) punctures, one obtains a 4d \(\mathcal{N}=2\) SCFT by compactification of the 6d theory on \(\Sigma_{g,n}\), and the Coulomb branch of this 4d theory on \(S^{1}\) is the same as the moduli space of the Hitchin system on \(\Sigma_{g,n}\). In the following section, we will review the data required to specify the 4d theory when the Riemann surface is a sphere with one irregular and one regular puncture [17; 19; 20].
### Basic constructions
One can engineer a large class of 4d \(\mathcal{N}=2\) SCFTs by putting a 6d \((2,0)\) theory of type \(\mathsf{j}=ADE\) on a sphere with an irregular singularity and a regular singularity [17; 19; 20; 58; 59] (figure 2). The Coulomb branch of this 4d \(\mathcal{N}=2\) theory is captured by a Hitchin system with the following boundary conditions near the irregular singularities
\[\Phi(z)=\left(\frac{T_{k}}{z^{2+\frac{k}{b}}}+\sum_{-b\leq l<k}\frac{T_{l}}{z^ {2+\frac{l}{b}}}+\cdots\right)dz. \tag{1}\]
Figure 1: Left: One first compactify 6d \((2,0)\) theory on a Riemann surface to get a 4d theory, and then on a circle to get an effective 3d theory; Right: One first compactify 6d \((2,0)\) theory on a circle to get a 5d theory and then on a Riemann surface to get an effective 3d theory. The Coulomb branch of the theory on the left is given by the Higgs branch of the theory on the right.
Here one first chooses a \(\mathbb{Z}/b\mathbb{Z}\) grading (a positive principal grading) of the Lie algebra \(\mathfrak{j}\) [60]
\[\mathfrak{j}=\oplus_{i\in\mathbb{Z}/b\mathbb{Z}}\mathfrak{j}_{i/b}, \tag{2.2}\]
then each \(T_{l}\) is a regular semi-simple element in \(\mathfrak{j}_{l/b}\). Possible choices of the integer \(b\) for each \(\mathfrak{j}\) are listed in table 2, and the integer \(k\) is greater than \(-b\). Subsequent terms of the Higgs field are chosen such that they are compatible with the leading order term (essentially determined by the grading). We call them irregular punctures of \(\mathfrak{j}^{b}[k]\) type. This choice of irregular singularities ensures that the resulting 4d \(\mathcal{N}=2\) theory has a \(U(1)_{R}\) symmetry and is therefore superconformal. One can add another regular singularity which is labelled by an element \(f\) in a nilpotent orbit of \(\mathfrak{j}\)8. All in all, the 4d theory in our consideration is specified by four labels \(\boxed{<\mathfrak{j},b,k,f>}\), with \(\mathfrak{j}\) labelling the type of 6d \((2,0)\) SCFT, \(b,k\) specifying the irregular singularity, and \(f\) fixing the regular singularity.
Footnote 8: We use Nahm labels such that the trivial orbit corresponding to regular puncture with maximal flavor symmetry. A detailed discussion about these defects can be found in [62].
To get non-simply laced flavor groups, we need to specify an outer-automorphism twist of the ADE Lie algebra \(\mathfrak{j}\). A systematic study of these AD theories was performed in [17]. We denote by \(\mathfrak{g}^{\vee}\) the invariant subalgebra of \(\mathfrak{j}\) under the twist and by \(\mathfrak{g}\) its Langlands dual. Outer-automorphisms and invariant algebras are summarized in table 3. The irregular singularity of regular semi-simple type is also classified in table 4, with the following form,
\[\Phi(z)=\left(\frac{T^{t}}{z^{2+\frac{k_{t}}{b_{t}}}}+\cdots\right)dz. \tag{2.3}\]
Here \(T^{t}\) is a semi-simple element of the Lie algebra \(\mathfrak{g}^{\vee}\), and the novel feature is that \(k_{t}\) can take half-integer values, or values in \(\frac{1}{3}\mathbb{Z}\) (\(\mathfrak{g}=G_{2}\)) [17]. We could again add a twisted regular puncture labeled by a nilpotent orbit \(f\) of \(\mathfrak{g}\). A 4d \(\mathcal{N}=2\) theory is then determined by the following data \(\boxed{<\mathfrak{j},o,b_{t},k_{t},f>}\), with \(\mathfrak{j}\) labelling the type of 6d \((2,0)\) SCFT, \(o\) being the
Figure 2: A 4d AD theory is constructed by putting a 6d \((2,0)\) theory of type \(\mathfrak{j}\) on a sphere with one irregular singularity and one regular singularity. The irregular singularity is labeled by \(\Phi\), see (2.1), and the regular singularity is labeled by \(f\).
outer automorphism twist, \(b_{t}\) and \(k_{t}\) together determining the irregular singularity, and finally \(f\) fixing the regular singularity.
**Remark**: The label \(f\) here is actually the so-called Nahm (Higgs) label. The actual boundary condition of the Higgs field \(\Phi\) around the regular singularity looks like
\[\Phi(z)\sim\left(\frac{f^{\vee}}{z}+\cdots\right)dz, \tag{4}\]
where \(f^{\vee}\in\overline{\mathcal{O}}_{f^{\vee}}\). The nilpotent orbit \(\mathcal{O}_{f^{\vee}}\) in \(\mathfrak{g}^{\vee}\) is the Spaltenstein dual of \(\mathcal{O}_{f}\). More carefully, one also needs to specify a conjugacy class \(c\) in the component group for the Higgs field [62], which will be reviewed in section 4.
### Coulomb branch as Hitchin moduli space
As discussed above, the Coulomb branch of the theory \(\mathcal{T}_{\mathfrak{j},b,k,f}\) (resp. \(\mathcal{T}_{\mathfrak{j},o,b_{t},k_{t},f}\)) on a circle is specified by the Hitchin moduli space \(\mathcal{M}_{Hit}(\mathfrak{j},\nu,(f^{\vee},c))\) with \(\nu=\frac{k}{b}+1\) (resp.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline j & \(b\) & Singularity & Spectral curve at SCFT point & \(\Delta[z]\) & \(\mu\) \\ \hline \(A_{N-1}\) & \(N\) & \(x_{1}^{2}+x_{2}^{2}+x_{3}^{3}+z^{k}=0\) & \(x^{N}+z^{k}=0\) & \(\frac{N}{N+k}\) & \((N-1)(k-1)\) \\ \hline & \(N-1\) & \(x_{1}^{2}+x_{2}^{2}+x_{3}^{N}+x_{3}z^{k}=0\) & \(x^{N}+xz^{k}=0\) & \(\frac{N-1}{N+k-1}\) & \(N(k-1)+1\) \\ \hline \(D_{N}\) & \(2N-2\) & \(x_{1}^{2}+x_{2}^{N-1}+x_{2}x_{3}^{2}+z^{k}=0\) & \(x^{2N}+x^{2}z^{k}=0\) & \(\frac{2N-2}{2N+k-2}\) & \(N(k-1)\) \\ \hline & \(N\) & \(x_{1}^{2}+x_{2}^{N-1}+x_{2}x_{3}^{2}+z^{k}x_{3}=0\) & \(x^{2N}+z^{2k}=0\) & \(\frac{12}{N+k}\) & \(2k(N-1)-N\) \\ \hline \(E_{6}\) & \(12\) & \(x_{1}^{2}+x_{3}^{2}+x_{3}^{4}+z^{k}=0\) & \(x^{12}+z^{k}=0\) & \(\frac{12}{12+k}\) & \(6k-6\) \\ \hline & \(9\) & \(x_{1}^{2}+x_{3}^{2}+x_{3}^{4}+z^{k}x_{3}=0\) & \(x^{12}+x^{3}z^{k}=0\) & \(\frac{9}{9+k}\) & \(8k-6\) \\ \hline & \(8\) & \(x_{1}^{2}+x_{3}^{2}+x_{3}^{4}+z^{k}x_{2}=0\) & \(x^{12}+x^{4}z^{k}=0\) & \(\frac{8}{8+k}\) & \(9k-6\) \\ \hline \(E_{7}\) & \(18\) & \(x_{1}^{2}+x_{3}^{2}+x_{2}x_{3}^{3}+z^{k}=0\) & \(x^{18}+z^{k}=0\) & \(\frac{18}{18+k}\) & \(7k-7\) \\ \hline & \(14\) & \(x_{1}^{2}+x_{3}^{2}+x_{2}x_{3}^{3}+z^{k}x_{3}=0\) & \(x^{18}+x^{4}z^{k}=0\) & \(\frac{14}{14+k}\) & \(9k-7\) \\ \hline \(E_{8}\) & \(30\) & \(x_{1}^{2}+x_{3}^{3}+x_{3}^{5}+z^{k}=0\) & \(x^{30}+z^{k}=0\) & \(\frac{30}{30+k}\) & \(8k-8\) \\ \hline & \(24\) & \(x_{1}^{2}+x_{2}^{3}+x_{3}^{5}+z^{k}x_{3}=0\) & \(x^{30}+x^{6}z^{k}=0\) & \(\frac{24}{24+k}\) & \(10k-8\) \\ \hline & \(20\) & \(x_{1}^{2}+x_{2}^{3}+x_{3}^{5}+z^{k}x_{2}=0\) & \(x^{30}+x^{10}z^{k}=0\) & \(\frac{20}{20+k}\) & \(12k-8\) \\ \hline \end{tabular}
\end{table}
Table 2: Three-fold isolated quasi-homogeneous singularities of cDV type corresponding to the \(\mathfrak{j}^{b}[k]\) irregular punctures of the regular-semisimple type in [20]. These 3d singularities are very useful in extracting the Coulomb branch spectrum [61].
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(j\) & \(A_{2N}\) & \(A_{2N-1}\) & \(D_{N+1}\) & \(E_{6}\) & \(D_{4}\) \\ \hline Outer-automorphism \(o\) & \(\mathbb{Z}_{2}\) & \(\mathbb{Z}_{2}\) & \(\mathbb{Z}_{2}\) & \(\mathbb{Z}_{2}\) & \(\mathbb{Z}_{3}\) \\ \hline Invariant subalgebra \(\mathfrak{g}^{\vee}\) & \(B_{N}\) & \(C_{N}\) & \(B_{N}\) & \(F_{4}\) & \(G_{2}\) \\ \hline Flavor symmetry \(\mathfrak{g}\) & \(C_{N}^{(1)}\) & \(B_{N}\) & \(C_{N}^{(2)}\) & \(F_{4}\) & \(G_{2}\) \\ \hline Lacety \(n\) & \(4\) & \(2\) & \(2\) & \(2\) & \(3\) \\ \hline \(h_{\theta}\) & \(4N+2\) & \(4N-2\) & 2N+2 & 18 & 12 \\ \hline \end{tabular}
\end{table}
Table 3: Outer-automorphisms of simple Lie algebras \(\mathfrak{j}\), their invariant subalgebras \(\mathfrak{g}^{\vee}\), and the flavor symmetry \(\mathfrak{g}\) obtained from the Langlands dual of \(\mathfrak{g}^{\vee}\).
\({\cal M}_{Hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\) with \(\nu=\frac{k_{t}}{b_{t}}+1\)). Given a solution \(\Phi(z)\in{\cal M}_{Hit}\), its spectral curve
\[\det(x-\Phi(z))=0 \tag{5}\]
is identified with the SW curve. In certain cases, the spectral curve is equivalent to the mini-versal deformation of the singularity (listed in table 2 and 4). One can see that \({\cal M}_{Hit}\) is fibered over \(B\) through the Hitchin map
\[{\cal M}_{Hit}\to B, \tag{6}\]
where the base \(B\) is the moduli space of the spectral (SW) curve which is just the Coulomb branch of the 4d theory on flat spaces.
Properties of \({\cal M}_{Hit}\) with \(f\) trivial were recently studied in [64]. One interesting piece of information is the complex dimension of the base \(B\), which is equal to the dimension of the fibre due to the hyper-Kahler property. Here we provide a way to compute \(\dim B\) from physics. Since the coordinates of \(B\) are parameterized by vacuum expectation values (vev's) of 4d Coulomb branch operators, we can find \(\dim B\) by counting the number of 4d Coulomb branch operators. This can be done as follows: the spectral curve takes the form \(f_{ADE}(x,y,z,w)+\sum a_{i}\phi_{i}(z)=0\), and the existence of a \(\mathbb{C}^{*}\) action on \({\cal M}_{Hit}\) ensures that one can define a \(\mathbb{C}^{*}\) action on the coordinates \(x,y,z,w\) by requiring that the spectral curve is homogeneous under the \(\mathbb{C}^{*}\) action and \([x]+[z]=1\). From these, one can deduce the \(\mathbb{C}^{*}\) charge of each coordinate \(a_{i}\). Those \(a_{i}\)'s with \(\mathbb{C}^{*}\) charge greater than 1 are identified as Coulomb branch operators, and \(\dim B\) is the number of such \(a_{i}\)'s.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\mathfrak{j}/o\) & \(b_{t}\) & SW geometry at SCFT point & Spectral curve at SCFT point & \(\Delta[z]\) \\ \hline \(A_{2N}/\mathbb{Z}_{2}\) & \(2N+1\) & \(x_{1}^{2}+x_{2}^{2}+x^{2N+1}+z^{k+\frac{1}{2}}=0\) & \(x^{2N+1}+z^{k+\frac{1}{2}}=0\) & \(\frac{4N+2}{4N+2k+3}\) \\ \hline & \(2N\) & \(x_{1}^{2}+x_{2}^{2}+x^{2N+1}+xz^{k}=0\) & \(x^{2N+1}+xz^{k}=0\) & \(\frac{2N}{k+2N}\) \\ \hline \(A_{2N-1}/\mathbb{Z}_{2}\) & \(2N-1\) & \(x_{1}^{2}+x_{2}^{2}+x^{2N}+xz^{k+\frac{1}{2}}=0\) & \(x^{2N}+xz^{k+\frac{1}{2}}=0\) & \(\frac{4N-2}{4N+2k+1}\) \\ \hline & \(2N\) & \(x_{1}^{2}+x_{2}^{2}+x^{2N}+z^{k}=0\) & \(x^{2N}+z^{k}=0\) & \(\frac{2N}{2N+k}\) \\ \hline \(D_{N+1}/\mathbb{Z}_{2}\) & \(N+1\) & \(x_{1}^{2}+x_{2}^{N}+x_{2}x_{3}^{2}+x_{3}z^{k+\frac{1}{2}}=0\) & \(x^{2N+2}+z^{2k+1}=0\) & \(\frac{2N}{2k+2N+3}\) \\ \hline & \(2N\) & \(x_{1}^{2}+x_{2}^{N}+x_{2}x_{3}^{2}+z^{k}=0\) & \(x^{2N+2}+x^{2}z^{k}=0\) & \(\frac{2N}{k+2N}\) \\ \hline \(D_{4}/\mathbb{Z}_{3}\) & 4 & \(x_{1}^{2}+x_{2}^{3}+x_{2}x_{3}^{2}+x_{3}z^{k\pm\frac{1}{2}}=0\) & \(x^{8}+z^{2k\pm\frac{1}{2}}=0\) & \(\frac{12}{12+3k\pm 1}\) \\ \hline & 6 & \(x_{1}^{2}+x_{2}^{3}+x_{2}x_{3}^{2}+z^{k}=0\) & \(x^{8}+x^{2}z^{k}=0\) & \(\frac{6}{6+k}\) \\ \hline \(E_{6}/\mathbb{Z}_{2}\) & 9 & \(x_{1}^{2}+x_{2}^{3}+x_{3}^{4}+x_{3}z^{k+\frac{1}{2}}=0\) & \(x^{12}+x^{3}z^{k+\frac{1}{2}}=0\) & \(\frac{18}{18+2k+1}\) \\ \hline & 12 & \(x_{1}^{2}+x_{2}^{3}+x_{3}^{4}+z^{k}=0\) & \(x^{12}+z^{k}=0\) & \(\frac{12}{12+k}\) \\ \hline & 8 & \(x_{1}^{2}+x_{2}^{3}+x_{3}^{4}+x_{2}z^{k}=0\) & \(x^{12}+x^{4}z^{k}=0\) & \(\frac{8}{8+k}\) \\ \hline \end{tabular}
\end{table}
Table 4: SW geometry of twisted theories at the SCFT point. Here we also list the scaling dimension of the coordinate \(z\). All \(k\)'s in this table are integer valued and the power of the \(z\) coordinate in the singularity is equal to \(k_{t}\) used in equation (2.3). For example, in the case \((D_{4},\mathbb{Z}_{3},b_{t}=4)\), \(k_{t}\) is \(k\pm\frac{1}{3}\). The definitions of \(k_{t}\) and \(b_{t}\) are slightly different from [63] but the ratio \(k_{t}/b_{t}\) remains the same.
**Example 2.1**.: Consider a theory whose spectral curve is given by \(x^{2}+z^{5}+u_{1}z^{3}+u_{2}z^{2}+u_{3}z+u_{4}=0\). The \(\mathbb{C}^{*}\) charges are \([x]=\frac{5}{7},[z]=\frac{2}{7}\), so the scaling dimensions of base coordinates are
\[[u_{1}]=\frac{4}{7},\ \ [u_{2}]=\frac{6}{7}\ \,[u_{3}]=\frac{8}{7}\ \,[u_{4}]= \frac{10}{7}, \tag{7}\]
so there are two coordinates with \(\mathbb{C}^{*}\) charge greater than \(1\), then
\[\dim B=2. \tag{8}\]
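The counting of Example 2.1 is simple enough to script. The following short sketch (plain Python; the names and fraction bookkeeping are ours, purely for illustration) reproduces the \(\mathbb{C}^{*}\) charges and \(\dim B\) for the curve \(x^{2}+z^{5}+u_{1}z^{3}+u_{2}z^{2}+u_{3}z+u_{4}=0\).

```python
from fractions import Fraction

# Leading terms x^2 and z^5 must carry equal C* charge, and [x] + [z] = 1.
p, q = 2, 5
charge_z = Fraction(p, p + q)        # [z] = 2/7
charge_x = 1 - charge_z              # [x] = 5/7
curve_charge = p * charge_x          # common charge of every monomial, 10/7

# u_i multiplies z^(5-i); homogeneity fixes its charge
u_charges = [curve_charge - (q - i) * charge_z for i in range(1, q)]
print(u_charges)                             # [4/7, 6/7, 8/7, 10/7]
print(sum(1 for c in u_charges if c > 1))    # dim B = 2
```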
One can compute \(\dim B\) by the Milnor number of singularity. First, the dimension of the charge lattice of the Coulomb branch is \(2\dim B+\mathfrak{f}\), where \(\mathfrak{f}\) is the rank of flavor symmetries. This dimension is the same as the Milnor number \(\mu\) of the singularity, so we have the formula [65, 66, 67, 61]
\[\dim B=\frac{1}{2}(\mu-\mathfrak{f}). \tag{9}\]
For a quasi-homogeneous singularity, one can assign a weight \(q_{i}\) for the \(i\)-th coordinate such that the weight of the singularity is one, then the Milnor number of the singularity is
\[\mu=\prod_{i}\left(\frac{1}{q_{i}}-1\right), \tag{10}\]
which is always an integer. We then need to find out the number of mass parameters (those coordinates in the mini-versal deformations with scaling dimension one) which gives \(\mathfrak{f}\).
**Example 2.2**.: Consider the singularity which is given as \(x^{2}+y^{5}=0\) with weight assignments \((x,y)=(\frac{1}{2},\frac{1}{5})\), then the Milnor number is \(\mu=4\), and there is also no mass parameter, so
\[\dim B=\mu/2=2. \tag{11}\]
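A quick numerical check of this counting can be scripted as below (plain Python; the helper is ours and simply evaluates the weight product formula for the examples at hand, together with \(\dim B=\frac{1}{2}(\mu-\mathfrak{f})\)).

```python
from fractions import Fraction

def milnor_number(weights):
    """Milnor number of a quasi-homogeneous isolated singularity of total weight one,
    computed from the coordinate weights q_i (cf. Eq. (10))."""
    mu = 1
    for q in weights:
        mu *= 1 / q - 1
    return mu

# Example 2.2: x^2 + y^5 with weights (1/2, 1/5); no mass parameters
mu = milnor_number([Fraction(1, 2), Fraction(1, 5)])
print(mu, (mu - 0) / 2)          # 4 and dim B = 2, matching Eq. (9)

# (A_{N-1}, b=N) entry of table 2: x1^2 + x2^2 + x3^N + z^k, expected mu = (N-1)(k-1)
N, k = 3, 5
print(milnor_number([Fraction(1, 2), Fraction(1, 2), Fraction(1, N), Fraction(1, k)]))  # 8
```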
In general, the dimension of \(B\) of the theory \(\mathcal{T}_{\mathfrak{j},b,k,f}\) and \(\mathcal{T}_{\mathfrak{j},o,b_{t},k_{t},f}\) are specified by the following formula:
* For the untwisted theory \(\mathcal{T}_{\mathfrak{j},b,k,f}\), \[\dim B=\frac{(h\frac{k}{b}-1)\operatorname{rank}(\mathfrak{g})-f_{0}}{2}- \frac{\dim\mathcal{O}_{prin}}{2}+\frac{\dim\mathcal{O}_{f^{\vee}}}{2}.\] (12) Here \(h\) is the Coxeter number for the Lie algebra \(\mathfrak{j}\). \(f_{0}\) is the number of mass parameter in irregular singularity [68, 17], and \(\dim\mathcal{O}_{prin}\) is the complex dimension of principal nilpotent orbit of \(\mathfrak{g}\) which is equal to \(\dim(\mathfrak{j})-\operatorname{rank}(\mathfrak{j})\).
* For the twisted theory \(\mathcal{T}_{\mathfrak{j},o,b_{t},k_{t},f}\), \[\dim B=\frac{(h_{\theta}\frac{k_{t}^{\prime}}{b_{t}}-1)\operatorname{rank}( \mathfrak{g})-f_{0}}{2}-\frac{\dim\mathcal{O}_{prin}}{2}+\frac{\dim\mathcal{O }_{f^{\vee}}}{2}.\] (13) Here \(k_{t}^{\prime}=nk_{t}+nb_{t}\) and \(n\) is the order of outer-outmorphism \(o\). \(h_{\theta}\) is the twisted Coxeter number listed in the last line of \(3\). \(f_{0}\) is the number of mass parameters in irregular singularity [68, 17], and \(\mathcal{O}_{prin}\) is the principal nilpotent orbit of Lie algebra \(\mathfrak{g}^{\vee}\).
The above formula is found by explicitly computing the graded Coulomb branch dimensions, see [63] for the derivation. We also give the explicit expression for \(\dim B\) when \(f\) is trivial or principal orbit.
* If \(f\) is trivial, the dimension of \(B\) is \[\dim B=\frac{(h_{\theta}\frac{k_{\rm f}^{\prime}}{b_{\rm t}}-1)\operatorname{ rank}(\mathfrak{g})-f_{0}}{2}.\] (14) This is the same as the result in [64].
* If \(f\) is principal, the dimension of \(B\) is given by \[\dim B=\frac{\left(h_{\theta}\frac{k_{\rm f}^{\prime}}{b_{\rm t}}-h(\mathfrak{g} ^{\vee})-1\right)\operatorname{rank}(\mathfrak{g})-f_{0}}{2}.\] (15)
In order to derive dimension formulae (12) and (13), we start with a non-twisted theory \(\mathcal{T}(\mathfrak{j},b,k,f)\), and if there is irregular singularity only (i.e. \(f\) is chosen to be principal), the same theory can also be engineered by putting type IIB theory on a three-fold singularity which are listed in the third column of table 2. One can then compute \(\dim B\) using equation (9). The tables of \(\mu\) in each cases can also be found in the last column of table I of [16] and we also reproduce them in the last column of table 2 for reader's convenience. Adding a regular singularity with Nahm label \(f\), will change \(\dim B\) into
\[\dim B=\frac{1}{2}(\mu-f_{0})+\frac{1}{2}\dim\mathcal{O}_{f^{\vee}}. \tag{16}\]
Finally one can check case by case that the Milnor number \(\mu\) for non-twisted cases can also be written uniformly as
\[\mu=(h_{\rm i}\frac{k}{b}+h_{\rm i}-1)\operatorname{rank}(\mathfrak{j})-\dim \mathcal{O}_{prin} \tag{17}\]
The dimension formula for twisted cases is a direct generalization of the untwisted one.
**Example 2.3**.: When \(\mathfrak{j}=A_{N-1}\), the number \(b\) can be either \(N\) or \(N-1\) as table 2. If there is no regular puncture, the corresponding 3-fold singularity is
\[x^{2}+y^{2}+z^{N}+w^{k}=0,\quad b=N\] \[x^{2}+y^{2}+z^{N}+zw^{k}=0,\quad b=N-1 \tag{18}\]
For \(b=N\), the Milnor number \(\mu=(N-1)(k-1)\). On the other hand, since \(h=N\), \(\operatorname{rank}(\mathfrak{j})=N-1\) and \(\dim\mathcal{O}_{prin}=N^{2}-N\), we have
\[(h_{\rm i}\frac{k}{b}+h_{\rm i}-1)\operatorname{rank}(\mathfrak{j})-\dim \mathcal{O}_{prin}=(k-1)(N-1)=\mu. \tag{19}\]
For \(b=N-1\), the Milnor number is \(\mu=N(k-1)+1\), which also agrees with equation (17)
\[(h_{\rm i}\frac{k}{b}+h_{\rm i}-1)\operatorname{rank}(\mathfrak{j })-\dim\mathcal{O}_{prin} \tag{20}\] \[=Nk+(N-1)^{2}-N(N-1)=N(k-1)+1=\mu.\]
There is a different way of counting \(\dim B\) by using the fact that the dimension of the fibre is the same as the dimension of the base \(B\). The dimension formula of the Hitchin fibre can also be found in the mathematical literature [46; 69; 70] for both untwisted and twisted cases, and it is exactly the formula we found using physics arguments. This provides a cross check of (12) and (13).
### Schur sector and W-algebra
The Higgs branch of a 4d \(\mathcal{N}=2\) theory is given by a hyper-Kahler manifold. Unlike the Coulomb branch, there are many \(\mathcal{N}=2\) theories which do not have a Higgs branch. However, all \(\mathcal{N}=2\) theories do have a Schur sector, which includes the Higgs branch when it exists. For general \(\mathcal{N}=2\) theories, especially strongly coupled ones, direct computations of the Higgs (Schur) sector are very difficult. Luckily, one can get a 2d VOA(\(\mathcal{T}\)) from the Schur sector of a 4d \(\mathcal{N}=2\) SCFT \(\mathcal{T}\) with the following properties [12]:
* There is a subalgebra \(V_{k_{2d}}(\mathfrak{g}_{F})\) in VOA(\(\mathcal{T}\)), where \(V_{k_{2d}}(\mathfrak{g}_{F})\) is the simple quotient of the affine vertex algebra of the affine Kac-Moody (AKM) algebra \(\hat{\mathfrak{g}}_{F}\) at level \(k_{2d}\), and \(\mathfrak{g}_{F}\) is the Lie algebra of 4d flavor symmetry \(G_{F}\).
* The 2d central charge \(c_{2d}\) and the level of the AKM algebra \(k_{2d}\) are related to the 4d central charge \(c_{4d}\) and the flavor central charge \(k_{F}\) as9 Footnote 9: Our normalization of \(k_{F}\) is half of that of [12; 71]. \[c_{2d}=-12c_{4d},\ \ k_{2d}=-k_{F}.\] (21)
* The (normalized) vacuum character of VOA(\(\mathcal{T}\)) is the 4d Schur index \(\mathcal{I}(q)\). The growth function \(G\) of the vacuum character is related to 4d central charges by \[-48(a_{4d}-c_{4d})=G\] (22)
* The associated variety \(X_{\text{VOA($\mathcal{T}$)}}\) is the Higgs branch \(\mathcal{M}_{H}\) of \(\mathcal{T}\)[13; 14; 15].
If we can find the VOA for a given 4d \(\mathcal{N}=2\) SCFT, then the Higgs (Schur) sector can be solved.
In general there is no systematic way to get VOA(\(\mathcal{T}\)) from a given \(\mathcal{T}\). However, for our theories \(\mathcal{T}_{\mathfrak{j},k,b,f}\) and \(\mathcal{T}_{\mathfrak{j},o,k,b,f}\), if the irregular singularity carries no flavor symmetry, the corresponding VOAs are respectively the following W-algebras [13; 16; 17; 18]
\[W_{-h^{\vee}+\frac{b}{b+k}}(\mathfrak{j},f), \tag{23}\]
and
\[W_{-h^{\vee}(\mathfrak{g})+\frac{1}{n}\frac{b}{b_{t}+k_{t}}}(\mathfrak{g},f). \tag{24}\]
Here \(h^{\vee}\) is the dual Coxeter number of \(\mathfrak{j}\), \(h^{\vee}(\mathfrak{g})\) is the dual Coxeter number for \(\mathfrak{g}\), and \(n\) is the lacety listed in table 3. The constraints on the irregular singularity \(\mathfrak{j}^{b}[k]\) which has no mass deformation are summarized in tables 5 and 6[16; 72].
From tables 2 and 4, one can see that given the irregular singularity \(\mathfrak{j}^{b}[k]\), the allowed values of \(b\) are always smaller than or equal to the dual Coxeter number \(h^{\vee}\) of \(\mathfrak{j}\). Also recall that a level of the W-algebra \(W_{\kappa}(\mathfrak{g},f)\) is called **admissible** if it has the form
\[\kappa=-h^{\vee}+\frac{p}{q},\quad p\geq h^{\vee},\ q\in\mathbb{Z}_{>0},\ \gcd(p,q)=1. \tag{25}\]
When \(p=h^{\vee}\) the corresponding level is called **boundary admissible**. Then the W-algebra (23) and (24) are always boundary admissible or **non-admissible**.
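For instance, for \(\mathfrak{j}=\mathfrak{sl}_{3}\) one has \(h^{\vee}=3\): the level \(\kappa=-3+\frac{3}{4}=-\frac{9}{4}\) is boundary admissible (\(p=h^{\vee}=3\), \(q=4\)), the level \(\kappa=-3+\frac{5}{4}\) is admissible but not boundary admissible (\(p=5>h^{\vee}\)), while \(\kappa=-3+\frac{3}{6}=-3+\frac{1}{2}\) is not admissible, since in reduced form \(p=1<h^{\vee}\).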
## 3 Representation theory of admissible W-algebras
As mentioned in the introduction, the core correspondence of the mirror symmetry here is the bijection between simple modules and fixed points. In this section, we will review key information on the representation theory of W-algebras \(W_{\kappa}(\mathfrak{g},f)\) at boundary admissible level, which will provide crucial examples for our duality.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \(\mathfrak{j}/o\) & \(b_{t}\) & no mass \\ \hline \(A_{2N}/\mathbb{Z}_{2}\) & \(2N+1\) & \(\frac{4N+2}{\gcd(4N+2,2k+1)}\) even \\ \hline & \(2N\) & \(\frac{2N}{\gcd(2N,k)}\) even \\ \hline \(A_{2N-1}/\mathbb{Z}_{2}\) & \(2N-1\) & \(\frac{4N-2}{\gcd(4N-2,2k+1)}\) even \\ \hline & \(2N\) & \(\frac{2N}{\gcd(2N,k)}\) even \\ \hline \(D_{N}/\mathbb{Z}_{2}\) & \(N+1\) & \(\frac{2N}{\gcd(2k+1,2N)}\) even \\ \hline & \(2N\) & \(\frac{2N-2}{\gcd(k,2N-2)}\), \(\gcd(k,2N-2)\) even \\ \hline \(D_{4}/\mathbb{Z}_{3}\) & \(4\) & No constraint \\ \hline & \(6\) & \(k\neq 6n\) \\ \hline \(E_{6}/\mathbb{Z}_{2}\) & \(9\) & No constraint \\ \hline & \(12\) & \(k\neq 12n\) \\ \hline & \(8\) & \(k\neq 8n\), \(k\) even \\ \hline \end{tabular}
\end{table}
Table 6: Constraints \(k_{t}\) so that the twisted irregular singularity has no mass deformation.
\begin{table}
\begin{tabular}{|c|l|l|l|} \hline \(\mathfrak{j}^{b}[k]\) & no mass & \(\mathfrak{j}^{b}[k]\) & no mass \\ \hline \(A_{N-1}^{N}[k]\) & \((k,N)=1\) & \(A_{N-1}^{N-1}[k]\) & No solution \\ \hline \(D_{N}^{2N-2}[k]\) & \(\frac{2N-2}{\gcd(k,2N-2)}\) even, \(\gcd(k,2N-2)\) odd & \(D_{N}^{N}[k]\) & \(\frac{N}{\gcd(k,N)}\) even \\ \hline \(E_{6}^{12}[k]\) & \(k\neq 3n\) & \(E_{6}^{9}[k]\) & \(k\neq 9n\) \\ \hline \(E_{6}^{8}[k]\) & No solution & \(E_{7}^{18}[k]\) & \(k\neq 2n\) \\ \hline \(E_{7}^{14}[k]\) & \(k\neq 2n,n>1\) & \(E_{8}^{30}[k]\) & \(k\neq 30n\) \\ \hline \(E_{8}^{24}[k]\) & \(k\neq 24n\) & \(E_{8}^{20}[k]\) & \(k\neq 20n\) \\ \hline \end{tabular}
\end{table}
Table 5: Constraints on \(k\) so that irregular singularity denoted by \(\mathfrak{j}^{b}[k]\) has no mass deformation.
### Principal admissible modules of \(V_{\kappa}(\mathfrak{g})\)
Let \(\hat{\mathfrak{g}}\) be the (non-twisted) affine Lie algebra of \(\mathfrak{g}\). Let us start from the representation theory of the simple VOA \(V_{\kappa}(\mathfrak{g})\) given by the unique simple quotient of the universal vertex algebra associated with \(\hat{\mathfrak{g}}\) at level \(\kappa\). The level \(\kappa\) is called admissible if it has the following form [73]
\[\kappa=-h^{\vee}+\frac{p}{u},\quad p\geq h^{\vee},\ u\in\mathbb{Z}_{>0},\ \gcd(p,u)=\gcd(n,u)=1. \tag{3.1}\]
Here \(h^{\vee}\) is the dual Coxeter number of \(\mathfrak{g}\). By [74], simple modules of \(V_{\kappa}(\mathfrak{g})\) at admissible level in the category \(\mathcal{O}\) of \(\hat{\mathfrak{g}}\) are the so-called admissible modules defined in [73]. Admissible modules have many properties similar to modules at integral levels, and are therefore interesting objects in VOA research.
From now on, we fix \(\kappa\) to be the boundary admissible level, i.e.,
\[\kappa=-h^{\vee}+\frac{h^{\vee}}{u},\quad u\in\mathbb{Z}_{>0},\ \gcd(h^{\vee},u)=\gcd(n,u)=1. \tag{3.2}\]
In this case, the highest weights of admissible modules are given as follows. One first defines a set of affine coroots \(S_{u}\) depending on \(u\)10
Footnote 10: We identify \(\mathfrak{h}\) with \(\mathfrak{h}^{\vee}\) using the natural pairing between roots.
\[S_{u}\equiv\{-\theta^{\vee}+u\delta,\alpha_{1}^{\vee},\ldots,\alpha_{r}^{\vee }\}, \tag{3.3}\]
where \(\theta^{\vee}\) is the coroot corresponding to the highest root \(\theta\) of \(\mathfrak{g}\), and \(\delta\) is the imaginary root, \(\{\alpha_{1}^{\vee},\cdots,\alpha_{r}^{\vee}\}\) is the set of simple coroots of \(\mathfrak{g}\). The set of admissible weights at level \(\kappa\) is given by
\[\mathrm{Adm}_{\kappa}=\{w.(\kappa\Lambda_{0})\ |\ w\in W_{ext},\ w(S_{u})\subset \hat{\Delta}_{+}^{\vee}\}, \tag{3.4}\]
where \(W_{ext}\) is the extended affine Weyl group, \(\Lambda_{0}\) is the \(0\)-th affine fundamental weight, and \(\hat{\Delta}_{+}^{\vee}\) is the set of positive real coroots. The dot action \(w.\Lambda\) is defined as
\[w.\Lambda\equiv w(\Lambda+\hat{\rho})-\hat{\rho}, \tag{3.5}\]
with \(\hat{\rho}=\sum_{i=0}^{r}\Lambda_{i}\) being the affine Weyl vector. Here \(\Lambda_{i}\)'s are affine fundamental weights of \(\hat{\mathfrak{g}}\) and \(r=\mathrm{rank}\mathfrak{g}\). Moreover, \(w.(\kappa\Lambda_{0})=w^{\prime}.(\kappa\Lambda_{0})\) if and only if \(w^{-1}w^{\prime}(S_{u})=S_{u}\). Let
\[\begin{split} W_{u}=&\{w\in W_{ext}\ |\ w(S_{u}) \subset\hat{\Delta}_{+}^{\vee}\},\\ \Omega_{u}=&\{w\in W_{ext}\ |\ w(S_{u})=S_{u}\}, \end{split} \tag{3.6}\]
then there is a bijection
\[W_{u}/\Omega_{u}\xrightarrow{\sim}\mathrm{Adm}_{\kappa}. \tag{3.7}\]
The number of admissible weights at level \(\kappa=-h^{\vee}+h^{\vee}/u\) is \(u^{r}\). An admissible module \(L(\Lambda)\) is just the simple highest weight module of \(\hat{\mathfrak{g}}\) with the highest weight \(\Lambda\in\mathrm{Adm}_{\kappa}\). The conformal weight \(h_{\Lambda}\) of the highest weight state of \(L(\Lambda)\) is
\[h_{\Lambda}=\frac{(\Lambda,\Lambda+2\hat{\rho})}{2(\kappa+h^{\vee})}. \tag{3.8}\]
Since \(W_{ext}\) is a semi-direct product of the coweight lattice \(P^{\vee}\) and the Weyl group of \(\mathfrak{g}\), we can also write each \(w\in W_{ext}\) uniquely as a composition of a translation by an element \(\beta\in P^{\vee}\) and a Weyl transformation \(y\in W\)
\[w=t_{\beta}y, \tag{3.9}\]
with
\[t_{\beta}(\lambda)=\lambda+\lambda(K)\beta-\left((\lambda,\beta)+\frac{1}{2} \lambda(K)(\beta,\beta)\right)\delta. \tag{3.10}\]
Here \(K\) is the central element in \(\hat{\mathfrak{g}}\). We will also denote \(w=t_{\beta}y\) by \((\beta,y)\). Each \(\Lambda\in\mathrm{Adm}_{\kappa}\) can also be written as \((t_{\beta}w).(\kappa\Lambda_{0})\) for some \((\beta,w)\).
Given \(\Lambda\in\mathrm{Adm}_{\kappa}\), let \(\mathrm{ch}_{\Lambda}(z;\tau,t)\) be the character of the admissible module \(L(\Lambda)\). The space spanned by characters of admissible modules carries modular transformations generated by
\[\begin{split} T:&(z,\tau,t)\mapsto(z,\tau+1,t),\\ S:&(z,\tau,t)\mapsto\left(\frac{z}{\tau},-\frac{1}{ \tau},t-\frac{(z,z)}{2\tau}\right).\end{split} \tag{3.11}\]
Explicitly, we have
\[\begin{split}\mathrm{ch}_{\Lambda}(z;\tau+1,t)&= \sum_{\Lambda^{\prime}\in\mathrm{Adm}_{\kappa}}\mathbb{T}_{\Lambda,\Lambda^{ \prime}}\mathrm{ch}_{\Lambda^{\prime}}(z;\tau,t),\\ \mathrm{ch}_{\Lambda}\left(\frac{z}{\tau},-\frac{1}{\tau},t- \frac{(z,z)}{2\tau}\right)&=\sum_{\Lambda^{\prime}\in\mathrm{ Adm}_{\kappa}}\mathbb{S}_{\Lambda,\Lambda^{\prime}}\mathrm{ch}_{\Lambda^{ \prime}}(z;\tau,t).\end{split} \tag{3.12}\]
Given \(\Lambda=(t_{\beta}y).(\kappa\Lambda_{0})\) and \(\Lambda^{\prime}=(t_{\beta^{\prime}}y^{\prime}).(\kappa\Lambda_{0})\), entries of matrices \(\mathbb{T}\) and \(\mathbb{S}\) are
\[\begin{split}\mathbb{T}_{\Lambda,\Lambda^{\prime}}&=e^{2\pi i\left(h_{\Lambda}-\frac{c}{24}\right)}\delta_{\Lambda,\Lambda^{\prime}},\\ \mathbb{S}_{\Lambda,\Lambda^{\prime}}&=\left|\frac{P^{\vee}}{uh^{\vee}Q^{\vee}}\right|^{-\frac{1}{2}}\epsilon(yy^{\prime})\prod_{\alpha\in\Delta_{+}}2\sin\frac{\pi u(\rho,\alpha)}{h^{\vee}}\,e^{-2\pi i\left(\left(\rho,\beta+\beta^{\prime}\right)+\frac{h^{\vee}(\beta,\beta^{\prime})}{u}\right)}.\end{split} \tag{3.13}\]
Here \(c=c(V_{\kappa}(\mathfrak{g}))=\frac{\kappa\dim\mathfrak{g}}{\kappa+h^{\vee}}\) is the central charge of \(V_{\kappa}(\mathfrak{g})\), \(\left|\frac{P^{\vee}}{uh^{\vee}Q^{\vee}}\right|\) is the index of the sublattice \(uh^{\vee}Q^{\vee}\) in \(P^{\vee}\), and \(\epsilon(yy^{\prime})\) is the sign of the Weyl group element \(yy^{\prime}\).
**Example 3.1**.: Let \(\mathfrak{g}=\mathfrak{sl}_{2}\) with boundary admissible level \(\kappa=-2+\frac{2}{u}\). Let \(\alpha\) be the unique positive coroot. The set \(S_{u}\) is
\[S_{u}=\{-\theta+u\delta,\alpha\}=\{-\alpha+u\delta,\alpha\}. \tag{3.14}\]
The set \(\hat{\Delta}_{+}^{\vee}\) is
\[\hat{\Delta}_{+}^{\vee}=\{\alpha+n\delta\ |\ n\in\mathbb{Z}_{\geq 0}\}\cup\{- \alpha+n\delta\ |\ n\in\mathbb{Z}_{>0}\}. \tag{3.15}\]
The finite Weyl group is generated by \(s_{\alpha}\), and the co-weight lattice is spanned by \(\frac{\alpha}{2}\), so \(W_{ext}\) is
\[W_{ext}=\{t_{-m\alpha/2},t_{-n\alpha/2}s_{\alpha}\ |\ m,n\in\mathbb{Z}\}. \tag{3.16}\]
The group \(\Omega_{u}\simeq\mathbb{Z}_{2}\) is generated by \(t_{u\alpha/2}s_{\alpha}\). Since \(t_{u\alpha/2}s_{\alpha}\) sends \(t_{-m\alpha/2}\) to \(t_{-n\alpha/2}s_{\alpha}\) for some \(n\) and vice versa, we only need to consider \(w=t_{-\frac{m}{2}\alpha}\) satisfying \(w(S_{u})\subset\hat{\Delta}_{+}^{\vee}\). The action of \(w=t_{-\frac{m}{2}\alpha}\) on elements in \(S_{u}\) is
\[t_{-\frac{m}{2}\alpha}(-\alpha+u\delta)=-\alpha+(-m+u)\delta,\] \[t_{-\frac{m}{2}\alpha}(\alpha)=\alpha+m\delta. \tag{3.17}\]
The condition \(w(S_{u})\subset\hat{\Delta}_{+}^{\vee}\) constrains the allowed values of \(m\) to be \(0\leq m<u\), and the total number of admissible weights is indeed \(u\). Using (3.4), the set \(\mathrm{Adm}_{\kappa}\) is
\[\mathrm{Adm}_{\kappa}=\{\Lambda_{m}\equiv\left(\kappa+\frac{2m}{u}\right) \Lambda_{0}-\frac{2m}{u}\Lambda_{1},\ 0\leq m<u\}. \tag{3.18}\]
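For instance, for \(u=3\) (so \(\kappa=-\frac{4}{3}\)), equation (3.18) gives the three admissible weights \(-\frac{4}{3}\Lambda_{0}\), \(-\frac{2}{3}\Lambda_{0}-\frac{2}{3}\Lambda_{1}\) and \(-\frac{4}{3}\Lambda_{1}\).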
**Example 3.2**.: Let \(\mathfrak{g}=\mathfrak{sl}_{3}\) with boundary admissible level \(\kappa=-3+\frac{3}{u}\) such that \(\gcd(u,3)=1\). The representatives of \(W_{u}/\Omega_{u}\) are
\[\{t_{-(k_{1}\omega_{1}+k_{2}\omega_{2})}\mid k_{1}\geq 0,\ k_{2}\geq 0,\ k_{1}+k_{2 }\leq u-1\}\cup\{t_{(k_{1}\omega_{1}+k_{2}\omega_{2})}s_{\theta}\mid k_{1}\geq 1,\ k_{2}\geq 1,\ k_{1}+k_{2}\leq u\}. \tag{3.19}\]
Here \(\omega_{1}\) and \(\omega_{2}\) are fundamental weights of \(\mathfrak{sl}_{3}\), and \(s_{\theta}\) is the reflection with respect to the highest root \(\theta=\alpha_{1}+\alpha_{2}\). The total number of admissible weights is \(u^{2}\). For \(u=4\), there are a total of 16 admissible weights listed in table 7.
### Representation theory of boundary admissible W-algebras
Let \(f\) be a nilpotent element of \(\mathfrak{g}\), and include \(f\) in an \(\mathfrak{sl}_{2}\)-triple \((e,f,x)\), so that \([x,e]=e\), \([x,f]=-f\) and \([e,f]=2x\). Then \(\mathfrak{g}\) admits an eigenvalue decomposition with respect to the adjoint action of \(x\)
\[\mathfrak{g}=\oplus_{j\in\frac{1}{2}\mathbb{Z}}\mathfrak{g}_{j}. \tag{3.20}\]
By definition \(f\in\mathfrak{g}_{-1}\). One can define an affine W-algebra \(W_{\kappa}(\mathfrak{g},f)\) associated with \(\mathfrak{g}\), \(f\) at level \(\kappa\) by the quantum Drinfeld-Sokolov (qDS) reduction [75; 76]. The central charge of \(W_{\kappa}(\mathfrak{g},f)\) is [76]
\[c(W_{\kappa}(\mathfrak{g},f))=\mathrm{dim}\mathfrak{g}_{0}-\frac{1}{2}\mathrm{dim}\mathfrak{g}_{\frac{1}{2}}-\frac{12}{\kappa+h^{\vee}}|\rho-(\kappa+h^{\vee})x|^{2}, \tag{3.21}\]
\begin{table}
\begin{tabular}{|c|c||c|c||c|c|} \hline \([t_{\beta}y]\) & \(\Lambda\) & \([t_{\beta}y]\) & \(\Lambda\) & \([t_{\beta}y]\) & \(\Lambda\) \\ \hline \(1\) & \(-\frac{9}{4}\Lambda_{0}\) & \(t_{-\omega_{2}}\) & \(-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{2}\) & \(t_{-2\omega_{2}}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{2}\) \\ \hline \(t_{-3\omega_{2}}\) & \(-\frac{9}{4}\Lambda_{2}\) & \(t_{-\omega_{1}}\) & \(-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}\) & \(t_{-\omega_{1}-\omega_{2}}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) \\ \hline \(t_{-\omega_{1}-2\omega_{2}}\) & \(-\frac{3}{4}\Lambda_{1}-\frac{6}{4}\Lambda_{2}\) & \(t_{-2\omega_{1}}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{1}\) & \(t_{-2\omega_{1}-\omega_{2}}\) & \(-\frac{6}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) \\ \hline \(t_{-3\omega_{1}}\) & \(-\frac{9}{4}\Lambda_{1}\) & \(t_{\omega_{1}+\omega_{2}}s_{\theta}\) & \(\frac{1}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \(t_{\omega_{1}+2\omega_{2}}s_{\theta}\) & \(-\frac{2}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) \\ \hline \(t_{\omega_{1}+3\omega_{2}}s_{\theta}\) & \(-\frac{5}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}+\frac{1}{4}\Lambda_{2}\) & \(t_{2\omega_{1}+\omega_{2}}s_{\theta}\) & \(-\frac{2}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \(t_{2\omega_{1}+2\omega_{2}}s_{\theta}\) & \(-\frac{5}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) \\ \hline \(t_{3\omega_{1}+\omega_{2}}s_{\theta}\) & \(-\frac{5}{4}\Lambda_{0}+\frac{1}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & & & & \\ \hline \end{tabular}
\end{table}
Table 7: The list of admissible weights of \(V_{-3+3/4}(\mathfrak{sl}_{3})\). The first column summarizes the representatives of classes in \(W_{u}/\Omega_{u}\). The second column gives the admissible weight corresponding to the elements of the first column.
where \(\rho\) is the Weyl vector of \(\mathfrak{g}\). Although the vertex algebra structure of \(W_{\kappa}(\mathfrak{g},f)\) does not depend on the choices of \((e,f,x)\), the conformal structure does 11. To match the central charge of the corresponding 4d theory, \((e,f,x)\) is chosen to be the \((X,Y,H/2)\) with \((X,Y,H)\) being the standard \(\mathfrak{sl}_{2}\)-triple defined in [77].
Footnote 11: Actually, the data to get a W-algebra can be relaxed to a nilpotent element \(f\) and a good grading on \(\mathfrak{g}\) such that \(f\in\mathfrak{g}_{-1}\). The grading obtained from an \(\mathfrak{sl}_{2}\)-triple is called Dynkin which is always good. We will not discuss the construction of W-algebra from more general good gradings in this work.
Simple modules of \(W_{\kappa}(\mathfrak{g},f)\) can be obtained from admissible modules of \(V_{\kappa}(\mathfrak{g})\) by qDS-reduction. Firstly, conjugate \((e,f,x)\) to a new \(\mathfrak{sl}_{2}\)-triple \((e^{\prime},f^{\prime},h^{\prime})\) such that \(f^{\prime}\) is a regular nilpotent element in a standard Levi subalgebra \(\mathfrak{l}\) of \(\mathfrak{g}\). Here \(\mathfrak{l}\) is the centralizer \(Z_{\mathfrak{g}}(\mathfrak{h}^{f})\) of
\[\mathfrak{h}^{f}=\{h\in\mathfrak{h}\ |\ f(h)=0\}. \tag{3.22}\]
The root system of \(\mathfrak{l}\) is given by
\[\Delta_{\mathfrak{l}}\equiv\{\alpha\in\Delta\ |\ \alpha|_{\mathfrak{h}^{f}}=0\}. \tag{3.23}\]
The simple roots of \(\Delta_{\mathfrak{l}}\) are required to be a subset of the simple roots of \(\mathfrak{g}\) because \(\mathfrak{l}\) is standard. Kac and Wakimoto [43] (generalizing [78]) defined a functor
\[H_{f}(-):V_{\kappa}(\mathfrak{g})-\mathrm{mod}\to W_{\kappa}(\mathfrak{g},f)- \mathrm{mod}, \tag{3.24}\]
and they conjectured that this functor sends an admissible module \(L(\Lambda)\) of \(V_{\kappa}(\mathfrak{g})\) to either \(0\) or a simple module of \(W_{\kappa}(\mathfrak{g},f)\), and that all simple modules of \(W_{\kappa}(\mathfrak{g},f)\) are obtained in this way 12. They further conjectured that \(H_{f}(L(\Lambda))\neq 0\) if and only if
Footnote 12: When \(f\) admits an even grading, \(H_{f}(L(\Lambda))\) is a usual module of \(W_{\kappa}(\mathfrak{g},f)\). When \(f\) does not admit an even grading, \(H_{f}(L(\Lambda))\) is a Ramond twisted module of \(W_{\kappa}(\mathfrak{g},f)\)[43].
\[t_{\beta}y(S_{u})\subset\hat{\Delta}_{+}^{\vee}\backslash\Delta_{\mathfrak{l} }^{\vee},\quad\Lambda=(t_{\beta}y).(\kappa\Lambda_{0}), \tag{3.25}\]
and \(H_{f}(L(\Lambda))\) is isomorphic to \(H_{f}(L(\Lambda^{\prime}))\) if and only if
\[\Lambda^{\prime}\in W_{f}.\Lambda, \tag{3.26}\]
where \(W_{f}\) is the Weyl group generated by roots of \(\Delta_{\mathfrak{l}}\). These conjectures are partially proved in [79; 80; 81; 82]. The conformal weight of \(H_{f}(L(\Lambda))\) is [43; 81]
\[h_{H_{f}(L(\Lambda))}=\frac{u}{2h^{\vee}}(|\lambda+\rho|^{2}-|\rho|^{2})- \frac{h^{\vee}}{2u}|x|^{2}+(x,\rho), \tag{3.27}\]
with \(\lambda\) being the finite part of \(\Lambda\). Note that the first term of (3.27) is invariant under the action of \(W_{f}\), and the choice of \(x\) only changes the conformal dimension by a constant shift.
The characters \(\mathrm{ch}_{H_{f}(L(\Lambda))}\) of \(W_{\kappa}(\mathfrak{g},f)\) also enjoy similar modular properties as characters of \(V_{\kappa}(\mathfrak{g})\) modules. If \(L(\Lambda)\) and \(L(\Lambda^{\prime})\) are two admissible modules which reduce to different W-algebra modules, the elements of modular matrices are
\[\begin{split}\mathbb{T}_{H_{f}(L(\Lambda)),H_{f}(L(\Lambda^{\prime}))}&=e^{2\pi i\left(h_{H_{f}(L(\Lambda))}-\frac{c}{24}\right)}\delta_{H_{f}(L(\Lambda)),H_{f}(L(\Lambda^{\prime}))},\\ \mathbb{S}_{H_{f}(L(\Lambda)),H_{f}(L(\Lambda^{\prime}))}&=(-i)^{\frac{1}{2}(\dim\mathfrak{g}-\dim\mathfrak{g}^{f})}\sum_{y\in W_{f}}\mathbb{S}_{\Lambda,y.\Lambda^{\prime}},\end{split} \tag{3.28}\]
where \(\mathbb{S}_{\Lambda,y.\Lambda^{\prime}}\) is the modular \(S\) matrix of the parent affine vertex algebra, and \(\dim\mathfrak{g}^{f}=\dim\mathfrak{g}_{0}+\dim\mathfrak{g}_{1/2}\).
**Example 3.3**.: Let \(\mathfrak{g}=\mathfrak{sl}_{2}\), \(\kappa=-2+2/u\) and \(f\in\mathcal{O}_{[2]}\) an element in the principal nilpotent orbit. Choose \((e,f,h)=(e_{\alpha},f_{\alpha},x)\), then \(\Delta_{\mathrm{I}}=\{\alpha\}\), and \(W_{f}\) is just the Weyl group of \(\mathfrak{sl}_{2}\). The condition when the admissible weight \(\Lambda=(t_{\beta}y).(\kappa\Lambda_{0})\) does not reduce to zero becomes
\[t_{\beta}y(S_{u})\subset\hat{\Delta}_{+}\backslash\Delta_{\mathrm{I}}=\{\pm \alpha+m\delta\ |\ m\in\mathbb{Z}_{>0}\}. \tag{3.29}\]
Using admissible modules of \(V_{-2+\frac{2}{u}}(\mathfrak{sl}_{2})\) worked out in example 3.1, one can see that the module \(L(\kappa\Lambda_{0})\) reduces to \(0\), while \(L(\Lambda_{m})\) and \(L(\Lambda_{u-m})\) reduce to the same \(W_{-2+2/u}(\mathfrak{sl}_{2},[2])\) module, so the total number of simple modules is \((u-1)/2\). The algebra \(W_{-2+2/u}(\mathfrak{sl}_{2},[2])\) is isomorphic to the \((2,u)\) minimal model (the minimal series representation of the Virasoro algebra with central charge \(c=1-\frac{3(u-2)^{2}}{u}\)). The conformal dimension of \(H_{f}(L(\Lambda_{m}))\) is
\[h_{H_{f}(L(\Lambda_{m}))}=-\frac{1}{2u}(m-1)(u-m-1), \tag{3.30}\]
which is symmetric under the exchange \(m\leftrightarrow u-m\) and matches with the \((m,1)\) module of the \((2,u)\) minimal model.
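As a simple consistency check (our own verification, not needed in what follows), take \(f\) principal in \(\mathfrak{g}=\mathfrak{sl}_{2}\) at \(\kappa=-2+2/u\): then \(\dim\mathfrak{g}_{0}=1\), \(\dim\mathfrak{g}_{\frac{1}{2}}=0\), and with the normalization \((\alpha,\alpha)=2\) one has \(x=\rho=\alpha/2\). Formula (3.21) then gives
\[c=1-\frac{12}{2/u}\left(1-\frac{2}{u}\right)^{2}\left|\frac{\alpha}{2}\right|^{2}=1-3u\left(\frac{u-2}{u}\right)^{2}=1-\frac{3(u-2)^{2}}{u},\]
which is indeed the \((2,u)\) minimal model central charge quoted above. Likewise, the Kac-table dimensions of the \((2,u)\) minimal model, \(\frac{(u-2m)^{2}-(u-2)^{2}}{8u}=-\frac{(m-1)(u-m-1)}{2u}\), reproduce (3.30).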
**Example 3.4**.: Let \(\mathfrak{g}=\mathfrak{sl}_{3}\), \(\kappa=-3+3/u\) and \(f\in\mathcal{O}_{[2,1]}\) an element of the minimal nilpotent orbit. To match the 2d central charge with the 4d central charge, one should choose \(f\) to be \(f_{\theta}\) and \(x=\frac{1}{2}(\omega_{1}+\omega_{2})\), with the price that \(f\) is not regular in a standard Levi. However, we can choose \((f^{\prime},x^{\prime})=(f_{\alpha_{1}},\omega_{1})\) which are conjugate to \((f,x)\), such that \(\Delta_{\mathrm{I}}=\{\pm\alpha_{1}\}\) defines a standard Levi. Now \(W_{f}\) is generated by \(s_{1}\). The condition for the admissible module \(L(\Lambda)\) with \(\Lambda=(t_{\beta}y).(\kappa\Lambda_{0})\) not reducing to \(0\) becomes
\[t_{\beta}y(S_{u})\subset\hat{\Delta}_{+}\backslash\Delta_{\mathrm{I}}=\hat{ \Delta}_{+}\backslash\{\alpha_{1}\}. \tag{3.31}\]
When \(u=4\), one can use the results in table 7 to work out the simple modules of \(W_{-3+3/4}(\mathfrak{sl}_{3},[2,1])\) explicitly. There are three modules with conformal weight \(0\), one module with conformal weight \(-1/4\), and two modules with conformal weight \(-1/2\) (computed using \(x^{\prime}\)). Results are summarized in table 8. One can then map the modules obtained above to modules defined by \(x\) using the method in [43].
## 4 Coulomb branch and its \(\mathbb{C}^{*}\)-fixed points
In the last section, we reviewed the representation theory of W-algebras which gives the information on the **Higgs (Schur)** sector of the 4d theory. When classifying simple modules, the computation reduces to the counting of extended affine Weyl group elements satisfying certain conditions. In this section, we go back to the Hitchin moduli space which describes the Coulomb branch of the 4d theory on a circle. Our Hitchin moduli space has a \(\mathbb{C}^{*}\) action which is the \(U(1)_{r}\) symmetry in the superconformal group. It was found previously in several classes of theories that \(\mathbb{C}^{*}\)-fixed points are in one-to-one correspondence with the simple modules of the corresponding W-algebra [9; 10]. In those works, fixed points do not have an obvious representation-theoretic meaning, hence it is difficult to generalize them to more complicated cases. We will show that affine Springer fibers provide an alternative description of these fixed varieties which makes the classification and matching (with the modules) more straightforward.
### More on Higgs bundles and Higgs fields
As mentioned in section 2, the Coulomb branch is given by the Hitchin system defined on \(\mathbb{P}^{1}\) with one regular and one irregular singularity. We now review some details on the Higgs bundle and the Higgs field in this setting. \(\mathcal{M}_{Hit}\) is the space of solutions to the Hitchin equations defined on a Riemann surface \(\Sigma\)[83]. It has a hyper-Kahler structure with three complex structures \(I,J,K\). In complex structure \(I\), each point of \(\mathcal{M}_{Hit}\) describes a Higgs bundle \((E,\Phi)\), where \(E\) is a holomorphic \(G^{\vee}\)-vector bundle on \(\Sigma\), and \(\Phi\) is a Higgs field which is a holomorphic section of \(\mathrm{End}(E)\otimes K_{\Sigma}\). Here \(G^{\vee}\) is a connected and simply connected Lie group whose Lie algebra is \(\mathfrak{g}^{\vee}\); we have \(\mathfrak{g}=\mathfrak{g}^{\vee}=\mathfrak{j}\) for the untwisted case labelled by an ADE Lie algebra \(\mathfrak{j}\), while \(\mathfrak{g}\) and \(\mathfrak{g}^{\vee}\) are defined in table 3 in the twisted case labelled by \((\mathfrak{j},o)\). At each singularity, \(E\) is equipped with a level structure (which determines the correct gauge transformation at the singularity) and \(\Phi\) satisfies certain boundary conditions.
First consider the irregular singularity at \(z=\infty\). Choose a \(\mathbb{Z}/b\mathbb{Z}\) grading on \(\mathfrak{j}\)[60]
\[\mathfrak{j}=\oplus_{i\in\mathbb{Z}/b\mathbb{Z}}\mathfrak{j}_{i/b}. \tag{4.1}\]
At \(\infty\), \(E\) is equipped with a level structure determined by the grading (4.1) [64]. The Higgs field behaves as
\[\Phi(z)\sim(T_{k}z^{\frac{k}{b}}+\ldots)dz, \tag{4.2}\]
when \(z\to\infty\). The leading coefficient \(T_{k}\) is regular semi-simple in \(\mathfrak{j}_{k/b}\) and invariant under the action of \(o\). Details on the choices of subsequent coefficients can be found in [17; 68]. For the later purpose, we redefine the Higgs field as
\[\Phi(z)=\frac{\Phi^{\prime}(z)}{z} \tag{4.3}\]
\begin{table}
\begin{tabular}{|c|c||c|c|} \hline \([t_{\beta y}],\Lambda\) & \(h\) & \([t_{\beta y}],\Lambda\) & \(h\) \\ \hline \([t_{-\omega_{1}}]\), \(-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}\) & \(0\) & \([t_{-3\omega_{1}}]\), \(-\frac{9}{4}\Lambda_{1}\) & \(0\) \\ \([t_{\omega_{1}+3\omega_{2}}s_{\theta}]\), \(-\frac{5}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}+\frac{1}{4}\Lambda_{2}\) & & \([t_{3\omega_{1}+\omega_{2}}s_{\theta}]\), \(-\frac{5}{4}\Lambda_{0}+\frac{1}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \\ \hline \([t_{-2\omega_{1}}]\), \(-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{1}\) & \(-1/4\) & \([t_{-\omega_{1}-\omega_{2}}]\), \(-\frac{3}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) & \(-1/2\) \\ \([t_{2\omega_{1}+2\omega_{2}}s_{\theta}]\), \(-\frac{5}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) & & \([t_{\omega_{1}+2\omega_{2}}s_{\theta}]\), \(-\frac{2}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) & \(-1/2\) \\ \hline \([t_{-2\omega_{1}-\omega_{2}}]\), \(-\frac{6}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) & & \([t_{-\omega_{1}-2\omega_{2}}]\), \(-\frac{3}{4}\Lambda_{1}-\frac{6}{4}\Lambda_{2}\) & \(-1/2\) \\ \([t_{2\omega_{1}+\omega_{2}}s_{\theta}]\), \(-\frac{2}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & & \([t_{\omega_{1}+\omega_{2}}s_{\theta}]\), \(\frac{1}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \\ \hline \end{tabular}
\end{table}
Table 8: The list of simple modules of \(W_{-3+3/4}(\mathfrak{sl}_{3},[2,1])\). The first column is the weight of the admissible module \(L(\Lambda)\) which does not reduce to \(0\), and the second column is the conformal weight of \(H_{f}(L(\Lambda))\). Two weights which reduce to the same W-algebra module are related by the dot action of \(s_{1}\). The \(L(\Lambda)\)’s which reduce to \(0\) are not listed. The conformal dimensions are computed using \(x^{\prime}\).
and the asymptotic behavior of \(\Phi^{\prime}(z)\) at \(z=\infty\) is then
\[\Phi^{\prime}\sim(T_{k}z^{\nu}+\ldots)dz, \tag{4.4}\]
where \(\nu=\frac{k}{b}+1\) and \(\nu>0\) because \(k>-b\). So the irregular singularity is specified by a rational number \(\nu\).
The regular singularity at \(z=0\) is labeled by a nilpotent element \(f\) of \(\mathfrak{g}\). Recall that we assume that \(f\) is a regular nilpotent element in a Levi \(\mathfrak{l}\) with \(\mathfrak{l}\) defined in section 3.2. Then on the Hitchin side, we should consider the Langlands dual \(\mathfrak{l}^{\vee}\). Let \(\mathfrak{p}^{\vee}=\mathfrak{l}^{\vee}+\mathfrak{n}^{\vee}\subset\mathfrak{g}^{\vee}\) be the parabolic subalgebra with Levi factor \(\mathfrak{l}^{\vee}\), and \(\mathfrak{n}^{\vee}\) its nilradical. Let \(P^{\vee}\subset G^{\vee}\) be the parabolic subgroup whose Lie algebra is \(\mathfrak{p}^{\vee}\). Then at \(z=0\), the Higgs bundle \(E\) is equipped with a \(P^{\vee}\)-level structure, which means the allowed gauge transformations around \(z=0\) are of the form [84]
\[g=g_{0}+g_{1}z+g_{2}z^{2}+\cdots,\quad g_{0}\in P^{\vee},\ g_{i>0}\in G^{\vee}. \tag{4.5}\]
The boundary condition of \(\Phi^{\prime}\) at \(z=0\) is
\[\Phi^{\prime}\sim((m+\beta)+\cdots)\,dz, \tag{4.6}\]
where the mass deformation \(m\) is in the center of \(\mathfrak{l}^{\vee}\) and \(\beta\in\mathfrak{n}^{\vee}\). In the massless limit \(m\to 0\), the boundary condition becomes
\[\lim_{z\to 0}\Phi^{\prime}\in\mathfrak{n}^{\vee}. \tag{4.7}\]
This boundary condition is related to the boundary condition (2.4) because
\[\mathcal{O}_{f^{\vee}}=d(\mathcal{O}_{f})=\mathrm{Ind}_{\mathfrak{l}^{\vee}}^{ \mathfrak{g}^{\vee}}d(\mathcal{O}_{f}^{\mathfrak{l}})=\mathrm{Ind}_{ \mathfrak{l}^{\vee}}^{\mathfrak{g}^{\vee}}\mathcal{O}_{0}^{\mathfrak{l}^{ \vee}}, \tag{4.8}\]
and \(\overline{\mathrm{Ind}_{\mathfrak{l}^{\vee}}^{\mathfrak{g}^{\vee}}\mathcal{O}_{0}^{\mathfrak{l}^{\vee}}}=G^{\vee}\cdot\mathfrak{n}^{\vee}\)[77]. Here \(\mathrm{Ind}\) means the induction of orbits and \(d(\mathcal{O}_{f}^{\mathfrak{l}})\) is the dual orbit of \(\mathcal{O}_{f}^{\mathfrak{l}}\) in \(\mathfrak{g}^{\vee}\). Since \(f\) is regular in \(\mathfrak{l}\), \(d(\mathcal{O}_{f}^{\mathfrak{l}})=\mathcal{O}_{0}^{\mathfrak{l}^{\vee}}\). To further specify the Coulomb branch operators on the Hitchin base \(B\), one also needs an element \(c\) in the so-called component group \(A(f^{\vee})\) of \(f^{\vee}\) introduced in [41]; then Coulomb branch operators are gauge invariant functions of \(\Phi\) which are also invariant under the action of \(c\)[62]. In summary, the Hitchin moduli space is specified by a Lie algebra \(\mathfrak{j}\) (resp. \((\mathfrak{j},o)\)), a rational number \(\nu=k/b+1\) and a pair \((f^{\vee},c)\) together with a suitable level structure on the Higgs bundle, and the corresponding moduli space might be labeled as
\[\mathcal{M}_{Hit}(\mathfrak{j},\nu,(f^{\vee},c))\ (\text{resp.}\ \mathcal{M}_{Hit}( \mathfrak{j},o),\nu,(f^{\vee},c))). \tag{4.9}\]
### Zero fibre of the Hitchin moduli space and the affine Springer fibre
The Hitchin system considered in this paper has a positive \(\mathbb{C}^{*}\) action on \((x,z)\) coordinates
\[x\rightarrow\lambda^{\alpha}x,\ \ z\rightarrow\lambda^{\beta}z, \tag{4.10}\]
which makes the spectral curve \(\det(x-\Phi(z))=0\) invariant. This implies that the \(\mathbb{C}^{*}\) weight of \(x\) should be the same as that of \(\Phi(z)\), inducing an action on the AKM algebra in which \(\Phi(z)\) lives. The invariance of the spectral curve fixes the weight of the leading-order coefficient \(T\) of the Higgs field to be \(0\), and the weights of \(x\) and \(z\) are related by \(\alpha=\beta\frac{k}{b}\). Because of this weight assignment, the \(\mathbb{C}^{*}\)-fixed points of \(\mathcal{M}_{Hit}\) belong to the fibre over the \(\mathbb{C}^{*}\)-fixed point on the Hitchin base, which corresponds to the curve at the SCFT point listed in tables 2 and 4. We call this fibre the **zero fibre**13.
Footnote 13: This is called the central fibre in [9]
Below, we consider a local situation in which we may assume that the holomorphic bundle \(E\) of the Higgs pair \((E,\Phi)\) is trivial. Now the Hitchin moduli space can be described using the language of affine Lie algebras.
**Untwisted cases:** First consider the untwisted theories \(\mathcal{T}_{\mathfrak{j},b,k,f}\) with \(\mathfrak{j}=ADE\). Let \(\hat{\mathfrak{j}}=\mathfrak{j}[z,z^{-1}]\oplus\mathbb{C}K\oplus\mathbb{C}d\) be the AKM algebra associated with \(\mathfrak{j}\). Here \(\mathfrak{j}[z,z^{-1}]\) is the space of Laurent polynomials in \(z\) with coefficients valued in \(\mathfrak{j}\). The modified Higgs field \(\Phi^{\prime}(z)\) is now an element in \(\hat{\mathfrak{j}}\) satisfying the boundary conditions (4.4) and (4.7) of the last subsection.
**Twisted cases:** Now consider the twisted case \(\mathcal{T}_{\mathfrak{j},o,b_{t},k_{t},f}\). The space \(\mathfrak{j}\) has a decomposition
\[\mathfrak{j}=\mathfrak{j}_{0}\oplus\mathfrak{j}_{\omega}\oplus\cdots\oplus \mathfrak{j}_{\omega^{n-1}}, \tag{4.11}\]
under the action of \(o\). Here the subscripts denote the eigenvalues under the action of \(o\) and \(n\) the order of \(o\). By definition \(\mathfrak{j}_{0}=\mathfrak{g}^{\vee}\) listed in table 3. The twisted affine Lie algebra \({}^{n\hat{\mathfrak{j}}}\) corresponding to \((\mathfrak{j},o)\) is then
\[{}^{n\hat{\mathfrak{j}}}=\oplus_{k\in\mathbb{Z}}\left(\mathfrak{j}_{0}z^{k} \oplus\mathfrak{j}_{\omega}z^{k+\frac{1}{n}}\oplus\cdots\oplus\mathfrak{j}_{ \omega^{n-1}}z^{k+\frac{n-1}{n}}\right)\oplus\mathbb{C}K\oplus\mathbb{C}d. \tag{4.12}\]
Below we will formally set \({}^{1}\hat{\mathfrak{j}}\) to be \(\hat{\mathfrak{j}}\) so we can treat the untwisted and twisted cases uniformly.
By construction the modified Higgs field \(\Phi^{\prime}(z)\) is an element in \({}^{n\hat{\mathfrak{j}}}\) satisfying the following boundary conditions
\[\begin{split}\Phi^{\prime}(z)&\sim(Tz^{\nu}+\cdots )dz,\quad z\to\infty,\\ \Phi^{\prime}(z)&\sim(\beta^{\vee}+\cdots)dz,\quad z \to 0.\end{split} \tag{4.13}\]
Here \(\nu=k_{t}/b_{t}+1\), and \(\beta^{\vee}\) is an element in \(\mathfrak{n}^{\vee}\subset\mathfrak{g}^{\vee}\).
For the purpose of counting the fixed varieties, we only need to consider the zero fibre of the Hitchin moduli space, and it is easier to describe it using the affine Springer fibre which we will review in the following. Choose an elliptic element \(\gamma\in{}^{n}\hat{\mathfrak{j}}\) whose spectral curve is the \(\mathbb{C}^{*}\)-fixed point in \(B\). Let \(\mathbf{G}^{\vee}\) be a connected and simply-connected affine Lie group whose Lie algebra is \({}^{n}\hat{\mathfrak{j}}\)14. Let \(\tilde{\mathfrak{n}}^{\vee}\) be the Lie algebra with the root system \(\Delta_{\tilde{\mathfrak{n}}^{\vee}}\equiv\hat{\Delta}^{\vee}_{+}\backslash\Delta^{\vee}_{\mathfrak{l}}\). Here \(\hat{\Delta}^{\vee}_{+}\) is the set of positive real roots of the affine Lie algebra \({}^{n}\hat{\mathfrak{j}}\). Let \(\mathbf{P}^{\vee}\subset\mathbf{G}^{\vee}\) be the parahoric subgroup whose root system is \(\hat{\Delta}^{\vee}_{+}\cup\Delta^{\vee}_{\mathfrak{l}}\). Then the affine Spaltenstein variety is [45, 46]
Footnote 14: We will always put a \(\vee\) symbol on objects on the fibre side as they are always the Langlands dual of the corresponding objects on the VOA side.
\[Sp_{\gamma,\mathbf{P}^{\vee}}=\{g\in\mathbf{P}^{\vee}\backslash\mathbf{G}^{ \vee}\ |\ g\gamma g^{-1}\subset\tilde{\mathfrak{n}}^{\vee}\}. \tag{4.14}\]
In [64] the authors proved that the zero fibre of the Hitchin moduli space \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\) is homeomorphic to \(Sp_{\gamma,\mathbf{P}^{\vee}}\), with the relation
\[\Phi(z)^{\prime}=g\gamma g^{-1}dz. \tag{4.15}\]
The choice of \(\gamma\) ensures that \(\Phi^{\prime}\) satisfies the boundary condition from the irregular singularity at \(\infty\) while \(g\gamma g^{-1}\subset\tilde{\mathfrak{n}}^{\vee}\) ensures that \(\Phi^{\prime}\) satisfies the boundary condition from the regular singularity at \(0\).
**Example 4.1**.: Consider \(\nu=\frac{u}{h^{\vee}}\); the elliptic element of \(\tilde{\mathfrak{n}}\) can be chosen as
\[\gamma=e_{-\theta}z^{u}+\sum_{i=1}^{r}e_{\alpha_{i}}. \tag{4.16}\]
Here \(\theta\) is the longest root, \(\alpha_{i}\)'s are simple roots, and \(e_{\alpha}\) is an element in the Chevalley basis corresponding to the root \(\alpha\). In particular when \(\mathfrak{j}=A_{N-1}\) and \(\nu=\frac{u}{N}\), the spectral curve of \(\gamma\) is
\[x^{N}+z^{u-N}=0, \tag{4.17}\]
so \(\gamma\) lies in the central fibre. It is useful to redefine the coordinate \(x^{\prime}=xz\), and so the spectral curve takes the form
\[x^{\prime N}+z^{u}=0, \tag{4.18}\]
and the SW differential in the new variable is \(x^{\prime}\frac{dz}{z}\).
**Example 4.2**.: Take the Lie algebra \(\mathfrak{g}=A_{N-1}\), and let \(e_{1},\dots,e_{N}\) be the standard basis of \(\mathbb{R}^{N}\). We give the explicit description of \(\mathfrak{n}^{\vee}\). The set of positive roots is \(\Delta_{+}=\{e_{i}-e_{j}|1\leq i<j\leq N\}\), and the set of simple roots is \(\Pi=\{e_{1}-e_{2},\dots e_{N-1}-e_{N}\}\). Given a partition \(d=[d_{1},\cdots,d_{s}]\) of \(N\), one picks the following set of simple roots corresponding to \(d\)
\[\Pi_{d}=\Pi_{1}\cup\Pi_{2}\cup\cdots\cup\Pi_{s}, \tag{4.19}\]
where
\[\Pi_{i}=\{e_{\sum_{l=1}^{i-1}d_{l}+1}-e_{\sum_{l=1}^{i-1}d_{l}+2},\cdots,e_{\sum_{l=1}^{i}d_{l}-1}-e_{\sum_{l=1}^{i}d_{l}}\}. \tag{4.20}\]
Now let \(\Delta_{d}\subset\Delta\) be the sub-root system generated by \(\Pi_{d}\). The standard Levi subalgebra corresponding to \(d\) is
\[\mathfrak{l}^{\vee}_{d}=\mathfrak{h}^{\vee}\oplus_{\alpha\in\Delta_{d}} \mathfrak{g}^{\vee}_{\alpha}, \tag{4.21}\]
while the standard parabolic algebra \(\mathfrak{p}^{\vee}_{d}\) containing \(\mathfrak{l}^{\vee}_{d}\) is
\[\mathfrak{p}^{\vee}_{d}=\mathfrak{l}^{\vee}_{d}\oplus_{\alpha\in\Delta_{+} \backslash\Delta_{d}}\mathfrak{g}^{\vee}_{\alpha}. \tag{4.22}\]
\(\mathfrak{p}^{\vee}_{d}\) has a Levi decomposition \(\mathfrak{p}^{\vee}_{d}=\mathfrak{l}^{\vee}_{d}\oplus\mathfrak{n}^{\vee}_{d}\) with
\[\mathfrak{n}^{\vee}_{d}=\oplus_{\alpha\in\Delta_{+}\backslash\Delta_{d}} \mathfrak{g}^{\vee}_{\alpha}. \tag{4.23}\]
The set of roots of \(\tilde{\mathfrak{n}}^{\vee}\) is then
\[\Delta_{\tilde{\mathfrak{n}}^{\vee}}=\{\alpha+n\delta\ |\ \alpha\in\Delta,n\in\mathbb{Z}_{>0}\}\cup(\Delta^{\vee}_{+}\backslash\Delta_{d}). \tag{4.24}\]
In particular, if \(d=[N]\) (so-called trivial puncture in physics literature), \(\mathfrak{n}^{\vee}\) is zero, then \(\tilde{\mathfrak{n}}^{\vee}\) is the Lie algebra generated by the root system \(\hat{\Delta}^{\vee}_{+}\backslash\Delta^{\vee}_{+}\). If \(d=[1^{N}]\) (so-called full puncture in physics literature), \(\tilde{\mathfrak{n}}^{\vee}\) is generated by the root system \(\hat{\Delta}^{\vee}_{+}\).
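For instance, for \(N=4\) and \(d=[2,2]\) one has \(\Pi_{d}=\{e_{1}-e_{2},\ e_{3}-e_{4}\}\), the Levi \(\mathfrak{l}^{\vee}_{d}\) consists of two \(2\times 2\) blocks, and \(\mathfrak{n}^{\vee}_{d}\) is spanned by the root spaces of \(e_{1}-e_{3}\), \(e_{1}-e_{4}\), \(e_{2}-e_{3}\) and \(e_{2}-e_{4}\); for \(d=[2,1,1]\), only \(\Pi_{1}=\{e_{1}-e_{2}\}\) is non-empty.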
**The requirement of elliptic element:** In this work we focus on cases when there are no mass parameters in the irregular singularity, which puts a constraint on the choices of the rational number \(\nu\), which is called the slope. Recall the constraints on \((b,k)\) (resp. \((b_{t},k_{t})\)) for irregular singularities without mass deformation listed in table 5 (resp. table 6), and \(\nu=\frac{k}{b}+1\) (resp. \(\nu=\frac{k_{t}}{b_{t}}+1\)). The requirement of no mass deformation imposes constraints on the denominator \(m\) of \(\nu=u/m\) which are listed in table 9. Interestingly, such choices of \(m\) coincide with the so-called elliptic numbers [45, 46]. Similarly, the allowed elliptic numbers for the twisted case are also given in table 9. An elliptic number is called regular if it is the same as the dual Coxeter number \(h^{\vee}\). The dimension of the elliptic affine Springer fiber was computed in [46, 69, 70], and it is the same as our result in section 2.2.
### Counting fixed varieties
In the previous section we argued that the affine Springer fibre can be used to replace the Hitchin moduli space when considering the \(\mathbb{C}^{*}\)-fixed points. For the elliptic case, there is a nice combinatorial counting algorithm [45, 46] which we will explain here. Given an elliptic slope \(\nu=u/m\) for a (possibly twisted) affine Lie algebra \(\hat{\mathfrak{j}}\) (\({}^{n}\hat{\mathfrak{j}}\)), define the set \(L_{\nu}\) as
\[L_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \nu\alpha(\rho)+l=0\}, \tag{4.25}\]
and the set \(S_{\nu}\) as
\[S_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \nu\alpha(\rho)+l=\nu\}. \tag{4.26}\]
Here \(\hat{\Delta}^{\vee}\) is the set of real **roots** of \(\hat{\mathfrak{j}}\) (\({}^{n}\hat{\mathfrak{j}}\)), and \(\rho\) is the co-Weyl vector of the finite part of \(\hat{\mathfrak{j}}\) (\({}^{n}\hat{\mathfrak{j}}\)). Denote by \(W_{\nu}\) the Weyl group generated by roots in \(S_{\nu}\).
With \(L_{\nu}\) and \(S_{\nu}\), the set of fixed varieties \(Sp^{T}_{\gamma,\mathbf{P}^{\vee}}\) of the affine Springer fiber \(Sp_{\gamma,\mathbf{P}^{\vee}}\) is labelled by affine Weyl group elements up to the actions of \(W_{\mathbf{P}^{\vee}}\) and \(W_{\nu}\)15,
Footnote 15: Our \(\tilde{w}\) is \(\tilde{w}^{-1}\) in [46].
\[Sp^{T}_{\gamma,\mathbf{P}^{\vee}}=\sqcup H_{\tilde{w}},\quad\{\tilde{w}\in W _{\mathbf{P}^{\vee}}\backslash W_{aff}/W_{\nu}\ |\ \mathrm{Ad}(\tilde{w})\gamma\in\tilde{\mathfrak{n}}^{\vee}\}. \tag{4.27}\]
\begin{table}
\begin{tabular}{|c|c|} \hline j & Elliptic number \(m\) \\ \hline \(A_{n}\) & \(n+1\) \\ \hline \(D_{n}\) & \(m\) even, \(\frac{2n-2}{m}\) odd \\ \hline & \(m\) even, \(\frac{2n}{m}\) even \\ \hline \(E_{6}\) & \(12,9,6,3\) \\ \hline \(E_{7}\) & \(18,14,6,2\) \\ \hline \(E_{8}\) & \(2,3,5,6,10,15,30\) \\ \hline & \(4,8,12,24\) \\ \hline & \(20\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|} \hline j,\(o\) & Elliptic number \(m\) \\ \hline \(A_{2n},\mathbb{Z}_{2}\) & \(m=2r,\ r\) odd, \(\frac{2n+1}{r}\) odd \\ \hline & \(m=2r,\ r\) odd, \(\frac{2n}{m}\) even \\ \hline \(A_{2n-1},\mathbb{Z}_{2}\) & \(m=2r,\ r\) odd, \(\frac{2n-1}{r}\) odd \\ \hline & \(m=2r,\ r\) odd, \(\frac{2n}{r}\) even \\ \hline \(D_{n},\mathbb{Z}_{2}\) & \(m\) even, \(\frac{2n}{m}\) odd \\ \hline & \(m\) even, \(\frac{2n-2}{m}\) even \\ \hline \(D_{4},\mathbb{Z}_{3}\) & \(12,6,3\) \\ \hline \(E_{6},\mathbb{Z}_{2}\) & \(18,12,6,4,2\) \\ \hline \end{tabular}
\end{table}
Table 9: List of elliptic number \(m\).
Here \(W_{\mathbf{P}^{\vee}}\) is the Weyl group for the parahoric subgroup \(\mathbf{P}^{\vee}\). The dimension of each fixed variety is [46]
\[\boxed{\dim H_{\tilde{w}}=|\tilde{w}L_{\nu}\backslash\Delta_{\tilde{\mathfrak{h} }^{\vee}}|-|\tilde{w}S_{\nu}\backslash\Delta_{\tilde{\mathfrak{h}}^{\vee}}|.} \tag{4.28}\]
In the following, we will apply above formula to several interesting cases.
**Regular elliptic case:** Let \(\hat{\mathfrak{j}}\) be a simply laced AKM algebra, so that there is no difference between roots and coroots and \(\hat{\Delta}_{+}=\hat{\Delta}_{+}^{\vee}\). Take \(\nu=\frac{u}{h^{\vee}}\) and \(f^{\vee}\) the principal nilpotent orbit, so \(W_{\mathbf{P}^{\vee}}\) is trivial. The group \(\mathbf{P}^{\vee}\) in this case is an Iwahori subgroup and is denoted as \(\mathbf{I}^{\vee}\), and the root system of \(\tilde{\mathfrak{n}}^{\vee}\) is the same as the set of positive affine roots \(\hat{\Delta}_{+}^{\vee}\). \(L_{\nu}\) is empty because the maximal height of a finite root \(\alpha\) is \(h^{\vee}-1\), so the equation
\[\frac{u}{h^{\vee}}\alpha(\rho)+l=0, \tag{4.29}\]
has no solution. Elements of \(S_{\nu}\) satisfying the following
\[\frac{u}{h^{\vee}}\alpha(\rho)+l=\frac{u}{h^{\vee}}, \tag{4.30}\]
and the set of solutions is
\[S_{\nu}=\{-\theta+u\delta,\alpha_{1},\ldots,\alpha_{r}\}, \tag{4.31}\]
which is the same as \(S_{u}\) defined previously in equation (3.3). The elliptic element \(\gamma\) can be chosen as
\[\gamma=e_{-\theta}z^{u}+\sum_{i}e_{\alpha_{i}}. \tag{4.32}\]
The fixed varieties are labelled by the following elements in the affine Weyl group
\[\{\tilde{w}\in W_{aff}\ |\ \tilde{w}S_{\nu}\subset\hat{\Delta}_{+}\}. \tag{4.33}\]
Because \(L_{\nu}\) is empty, we have
\[|\tilde{w}L_{\nu}\backslash\hat{\Delta}_{+}|=0. \tag{4.34}\]
Also because \(\tilde{w}S_{\nu}\subset\hat{\Delta}_{+}\),
\[\tilde{w}S_{\nu}\backslash\hat{\Delta}_{+}=\emptyset. \tag{4.35}\]
The dimension formula (4.28) then tells us that each fixed variety \(H_{\tilde{w}}\) has dimension 0. The number of fixed points \(|Sp_{\frac{u}{h^{\vee}},\mathbf{I}^{\vee}}|\) is then \(u^{r}\)[45].
**Sub-regular case**: Again consider \(\hat{\mathfrak{j}}\) simply laced, but now take \(\nu=\frac{u}{m}\) with \(m\) being the next-to-maximal value in table 9, and \(f^{\vee}\) still the principal nilpotent orbit. Notice that now there is only one finite root \(\mu\) of \(\hat{\mathfrak{j}}\) with height \(m\), so \(L_{\nu}\) consists of two roots
\[L_{\nu}=\{\pm(\mu-u\delta)\}. \tag{4.36}\]
The set \(S_{\nu}\) is
\[S_{\nu}=\{\alpha+l\delta\in\hat{\Delta}\ |\ \frac{u}{m}\alpha(\rho)+l=\frac{u}{m}\}. \tag{4.37}\]
Now \(S_{\nu}\) contains both positive and negative affine roots. One choice of the elliptic element can be
\[\gamma=\sum_{\alpha+l\delta\in S_{\nu},l\geq 0}e_{\alpha}z^{l}. \tag{4.38}\]
Since \(L_{\nu}=\{\pm\tilde{\alpha}\}\), for any \(\tilde{w}\in W_{aff}\) the cardinality of \(\tilde{w}L_{\nu}\backslash\hat{\Delta}_{+}\) is always \(1\), so the first term in the dimension formula (4.28) is always \(1\). The fixed varieties are separated into two groups by their dimensions.
1. \(\dim H_{\tilde{w}}=1\): so \(|\tilde{w}S_{\nu}\backslash\hat{\Delta}_{+}|=0\), i.e. \(\tilde{w}(S_{\nu})\subset\hat{\Delta}_{+}\).
2. \(\dim H_{\tilde{w}}=0\): so \(|\tilde{w}S_{\nu}\backslash\hat{\Delta}_{+}|=1\), i.e. \(\tilde{w}S_{\nu}\cap\hat{\Delta}_{-}\) has exactly \(1\) element.
In the next section we will provide explicit examples.
**Twisted case**: Consider the twisted affine Lie algebra \({}^{2}\hat{A}_{3}\) with the slope \(\nu=\frac{1}{2}\). The set of real roots is 16
Footnote 16: Our imaginary root \(\delta\) is \(n\) times the imaginary root of [85].
\[\hat{\Delta}^{\vee}=\{\alpha^{\vee}+\frac{n}{2}\delta\ |\ \alpha^{\vee}\in \Phi^{0}_{s},\ n\in\mathbb{Z}\}\cup\{\alpha^{\vee}+n\delta\ |\ \alpha^{\vee}\in\Phi^{0}_{l},\ n\in \mathbb{Z}\}, \tag{4.39}\]
where \(\Phi^{0}_{s}\) and \(\Phi^{0}_{l}\) are respectively the sets of short and long roots of the \(C_{2}\) Lie algebra which is the finite part of \({}^{2}\hat{A}_{3}\). In the orthogonal basis spanned by \(\{\beta_{i}\}\)
\[\Phi^{0}_{l}=\{\pm 2\beta_{i}\},\ \ \ \Phi^{0}_{s}=\{\pm\beta_{i}\pm\beta_{j},\ \ i,j=1,2,\ \ i\neq j\}. \tag{4.40}\]
The set of simple roots is
\[\{\alpha^{\vee}_{1}=\beta_{1}-\beta_{2},\ \alpha^{\vee}_{2}=2\beta_{2}\}. \tag{4.41}\]
The sets \(L_{\nu}\) and \(S_{\nu}\) when \(\nu=1/2\) are
\[L_{\nu}=\{\pm(\alpha_{1}+\alpha_{2}-\delta)\}\cup\{\pm(\alpha_{1}-\frac{1}{2} \delta)\} \tag{4.42}\]
and
\[S_{\nu}=\{\alpha_{1},\ \alpha_{2},\ -\alpha_{1}+\delta,\ -\alpha_{2}+\delta,\ 2 \alpha_{1}+\alpha_{2}-\delta,\ -2\alpha_{1}-\alpha_{2}+2\delta,\ \ \alpha_{1}+\alpha_{2}-\frac{\delta}{2},\ -\alpha_{1}-\alpha_{2}+\frac{3 \delta}{2}\}. \tag{4.43}\]
The fixed varieties can be found by using the definition (4.27) and the dimension formula (4.28). On the other hand, there is also a bijection between fixed varieties and alcoves in the Cartan \(\mathfrak{h}\), with the algorithm listed below [45; 46]:
1. For each element \(\alpha+l\delta\) in \(S_{\nu}\), one draws a wall which is the hyperplane \(H_{\alpha+l\delta}\subset\mathfrak{h}\) defined by \(\{x|(x,\alpha)+l=0\}\) (red lines in figure 3). For each element \(\alpha+l\delta\) in \(L_{\nu}\), one draws a mirror which is the hyperplane \(H_{\alpha+l\delta}\) (blue lines in figure 3). The fundamental alcove \(\Delta_{0}\) is the region defined by \((x,\alpha_{i})>0\), \(i=1,\cdots,r\) and \((x,-\alpha_{\theta})+1>0\) (the shaded area of figure 3).
2. For an element \(\tilde{w}\in W_{aff}\), \(H_{\tilde{w}}\) is a fixed variety if \(\tilde{w}^{-1}\Delta_{0}\) lies in a bounded region enclosed by walls. If the points \(x\in\tilde{w}^{-1}\Delta_{0}\) satisfy \((x,\alpha)>0\) (or \((x,\alpha)<0\)), we say that \(\tilde{w}^{-1}\Delta_{0}\) is on the positive (or negative) side of the wall \(H_{\alpha}\). \(|\tilde{w}S_{\nu}\backslash\hat{\Delta}_{+}|\) is the number of walls of which \(\tilde{w}^{-1}\Delta_{0}\) lies on the negative side. Finally, if \(\tilde{w_{1}}^{-1}\Delta_{0}\) is the same as \(\tilde{w_{2}}^{-1}\Delta_{0}\) reflected by some mirrors, they correspond to the same fixed variety.
Alcoves corresponding to fixed varieties in our example are shown in figure 3 with their dimensions labelled. The alcoves whose dimension numbers are marked in red are not related to each other by the mirrors, so they correspond to distinct fixed varieties. There is a total of \(4\) fixed varieties.
## 5 Mirror symmetry for circle compactified 4d \(\mathcal{N}=2\) theory
With the necessary knowledge reviewed in previous sections, we can finally make precise statements on the mirror symmetry and provide various checks. For a circle compactified 4d \(\mathcal{N}=2\) SCFT, we now have two objects: the first one is the W-algebra \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\)17 which describes the Schur sector. The second one is the Hitchin moduli space \(\mathcal{M}_{hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\) for the Coulomb branch. Notice that the defining data involves Langlands duality of the algebras:
Footnote 17: Here \(n\) is the lacety number which is the same as the rank of the automorphism group when \(n>1\) in table 3.
1. The W-algebra \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\) on the Schur sector is related to the affine Lie algebra \(\hat{\mathfrak{g}}\), while the Hitchin moduli space is based on the (possibly twisted) affine Lie algebra \({}^{n}\hat{\mathfrak{j}}\) which is the **Langlands dual** of \(\hat{\mathfrak{g}}\).
2. The \((f^{\vee},c)\in\mathfrak{g}^{\vee}\) used in the Hitchin moduli space is the dual of the nilpotent element \(f\in\mathfrak{g}\) in \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\), and \(\mathfrak{g}^{\vee}\) is the Langlands dual of \(\mathfrak{g}\).
### Simple modules of W-algebra and fixed points
Our first statement is that there is a natural bijection between simple modules of \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\) and irreducible components of \(\mathbb{C}^{*}\)-fixed varieties of \(\mathcal{M}_{hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\). The case when \(\mathfrak{g}=A_{N-1}\) and \(f\) principal was first noticed in [9]. Cases when \(\mathfrak{g}=A_{1}\) with \(f\) trivial, or cases when the W-algebra is a \(W_{N}\) or \(B_{N}\) algebra, are discussed in [10]. Our results vastly generalize the previous understanding of the bijection between modules and fixed varieties.
Figure 3: Alcoves corresponding to fixed varieties of \({}^{2}\hat{A}_{3}\) with \(\nu=1/2\) are marked with their dimensions, and the fundamental alcove is the shaded region. Alcoves with dimensions marked in red belong to different \(W_{\nu}\) orbits. There is one 2d fixed variety, one 1d fixed variety and two fixed points. Red lines are walls from roots in \(S_{\nu}\); blue lines are mirrors from roots in \(L_{\nu}\). Here \(\alpha_{\theta}=2\alpha_{1}+\alpha_{2}\), \(\alpha_{s}=\alpha_{1}+\alpha_{2}\) is the highest short root, and \(\alpha_{0}=-\alpha_{s}+\delta/2\).
Recall that irreducible components of fixed varieties of \(\mathcal{M}_{Hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\) are the same as fixed varieties of the corresponding affine Springer fibre \(Sp_{\gamma,\mathbf{P}^{\vee}}\), which are parameterized by the affine Weyl elements \(\tilde{w}\) satisfying the condition in (4.27) up to a double coset. The bijection between fixed varieties of the Hitchin moduli space and weights of simple modules of \(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)\) is
\[\begin{split}&\text{Fixed varieties of }\mathcal{M}_{hit}((\mathfrak{j},o),\nu,(f^{\vee},c))\xrightarrow{\simeq}\text{Irrep}(W_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g},f)),\\ & H_{\tilde{w}}\mapsto H_{f}(L(\Lambda_{\tilde{w}})),\quad\text{with }\Lambda_{\tilde{w}}=\tilde{w}\cdot(\kappa\Lambda_{0}),\end{split} \tag{5.1}\]
Here \(H_{\tilde{w}}\) denotes the irreducible component of the fixed varieties, and \(\Lambda_{\tilde{w}}\) denotes the affine weight.
**Remark**: In the above proposal, we assume that the module of the W-algebra is defined by choosing \(f\) regular in a standard Levi \(\mathfrak{l}\), which naturally matches the definition of fixed varieties on the Hitchin side. To get the data for the W-algebra corresponding to the 4d theory (namely, the grading has to be given by the standard \(\mathfrak{sl}_{2}\) triple), one needs to do a further transformation, resulting in a shift of the conformal dimension.
#### 5.1.1 W-algebras at boundary admissible level
Given a Lie algebra \(\mathfrak{g}\), if the level \(\kappa\) is at the boundary admissible level \(\kappa=-h^{\vee}+\frac{h^{\vee}}{u}\) with \(\gcd(u,h^{\vee})=1\), then the slope \(\nu\) of the corresponding fibre is \(\frac{u}{h^{\vee}}\), and the denominator \(h^{\vee}\) is a regular elliptic number. In this situation, \(L_{\nu}\) is always an empty set. By the dimension formula (4.28), all fixed varieties have dimension 0 (fixed points). For the boundary admissible case, the bijection can be proved rigorously; the proof is given in our accompanying paper [42].
**AKM cases:** Let \(f\) be the trivial nilpotent orbit. One gets the associated vertex algebra \(L_{-h^{\vee}+\frac{h^{\vee}}{u}}(\mathfrak{g})\) on the VOA side. Following the notation in section 3.1, the set of admissible weights is given by
\[\{\tilde{w}.(\kappa\Lambda_{0})\ |\ \tilde{w}\in W_{ext}/\Omega_{u},\ \tilde{w}(S_{u})\subset\hat{\Delta}_{+}^{\vee}\}. \tag{5.2}\]
On the Hitchin side, both \(W_{\nu}\) and \(W_{\mathbf{P}^{\vee}}\) are trivial, and the set of fixed varieties is labelled by (in this case, \(\tilde{\mathfrak{n}}^{\vee}\) is equal to the set of positive affine roots \(\hat{\Delta}_{+}^{\vee}\))
\[\{\tilde{w}\in W_{aff}\ |\ \tilde{w}(S_{\nu})\subset\hat{\Delta}_{+}^{\vee}\}. \tag{5.3}\]
The dimension of each fixed variety is 0 (fixed points). Notice that here \(S_{u}=S_{\nu}\). One can show that for each element of the coset \(W_{ext}/\Omega_{u}\), there is one and only one corresponding element in \(W_{aff}\)18, hence the bijection. Both the number of admissible modules and the number of fixed points are \(u^{r}\).
Footnote 18: Note that both \(W_{ext}\) and \(W_{aff}\) are invariant under Langlands dual.
**Example 5.1**.: \(L_{-2+2/u}(\mathfrak{sl}_{2})\leftrightarrow\mathcal{M}_{Hit}(\mathfrak{sl}_{2 },\frac{u}{2},[2])\). Here \(u\) should be an odd integer. We have \(\mathfrak{g}=\mathfrak{g}^{\vee}=\mathfrak{sl}_{2}\), \(S_{u}=S_{\nu}=\{\alpha,-\alpha+u\delta\}\). An element in the affine Weyl group can always be written as an element of the finite Weyl group followed by a translation in the
root lattice \(t_{m\alpha}s\) with \(s\) being \(1\) or \(s_{\alpha}\). The fixed points are labelled by the following subset of \(W_{aff}\)
\[\{t_{-m\alpha}\ |\ 0\leq 2m<u\}\cup\{t_{m\alpha}s_{\alpha}\ |\ 0<2m\leq u\}. \tag{5.4}\]
The first set has \((u+1)/2\) elements and the second one has \((u-1)/2\) elements, so the total number of fixed points is \(u\). Using the formula (5.1), we find that the weights from the first set are
\[\{\left(-2+\frac{2}{u}+\frac{4m}{u}\right)\Lambda_{0}-\frac{4m}{u}\Lambda_{1} \ |\ 0\leq 2m<u\} \tag{5.5}\]
and weights from the second set are
\[\{\left(-2+\frac{2}{u}+\frac{2u-4m}{u}\right)\Lambda_{0}-\frac{2u-4m}{u} \Lambda_{1}\ |\ 0<2m<u\}. \tag{5.6}\]
They give exactly the same weights as (3.18) in example 3.1.
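The counting in example 5.1 is simple enough to automate. The following minimal sketch (our own illustration; the encoding of a real root \(c\alpha+n\delta\) as a pair \((c,n)\) is an assumption of the sketch, not notation used in the text) enumerates the elements \(t_{m\alpha}\) and \(t_{m\alpha}s_{\alpha}\) sending \(S_{\nu}\) into the positive affine roots, and confirms that there are exactly \(u\) fixed points:

```python
# Enumerate fixed points of M_Hit(sl_2, u/2, [2]) as in example 5.1.
# A real root c*alpha + n*delta of affine sl_2 is encoded as the pair (c, n).

def s_alpha(root):
    c, n = root
    return (-c, n)                      # reflection: alpha -> -alpha, delta fixed

def t(root, m):
    # t_{m*alpha}(c*alpha + n*delta) = c*alpha + (n - 2*c*m)*delta, using (alpha, alpha) = 2
    c, n = root
    return (c, n - 2 * c * m)

def positive(root):
    c, n = root
    return n > 0 or (n == 0 and c > 0)  # positive real roots of affine sl_2

def fixed_points(u):
    S = [(1, 0), (-1, u)]               # S_nu = {alpha, -alpha + u*delta}
    found = []
    for m in range(-u, u + 1):          # window large enough to contain all solutions
        for use_s in (False, True):
            image = [t(s_alpha(r), m) if use_s else t(r, m) for r in S]
            if all(positive(r) for r in image):
                found.append((m, use_s))
    return found

for u in (3, 5, 7, 9):
    print(u, len(fixed_points(u)))      # prints exactly u fixed points for each odd u
```

The solutions found are exactly \(t_{-m\alpha}\) with \(0\leq 2m<u\) and \(t_{m\alpha}s_{\alpha}\) with \(0<2m\leq u\), matching (5.4).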
**Example 5.2**.: \(L_{-3+3/u}(\mathfrak{sl}_{3})\leftrightarrow\mathcal{M}_{Hit}(\mathfrak{sl}_ {3},u/3,[3])\). Here \(u\) is coprime with \(3\). \(S_{\nu}\) is the same as \(S_{u}\)
\[S_{u}=S_{\nu}=\{-\theta+u\delta,\alpha_{1},\alpha_{2}\} \tag{5.7}\]
with \(\theta=\alpha_{1}+\alpha_{2}\). The condition for \(\tilde{w}=t_{\beta}y\) to give rise to a fixed point is
\[t_{\beta}y(S_{u})\subset\hat{\Delta}_{+},\quad\beta\in Q, \tag{5.8}\]
with \(Q\) being the root lattice of \(\mathfrak{sl}_{3}\). For \(u=4\), the list of fixed points and the corresponding affine weights (5.1) are listed in table 10, and we get exactly the same weights in table 7 in example 3.2. We also plot all alcoves corresponding to fixed points in figure 4.
**W-algebras case:** Now we consider cases when \(f\) is a regular nilpotent element in a Levi. As discussed in section 3.2, simple modules of \(W_{-h^{\vee}+\frac{h^{\vee}}{u}}(\mathfrak{g},f)\) are reduced from modules of \(L_{-h^{\vee}+\frac{h^{\vee}}{u}}(\mathfrak{g})\) satisfying the condition (3.25). In particular, some modules are projected out, and multiple AKM modules are mapped to the same simple module of the W-algebra. On the Hitchin side, one can easily see a similar pattern: firstly, the sets \(L_{\nu}\) and \(S_{\nu}\) are not changed; secondly, the set of affine roots of \(\tilde{\mathfrak{n}}^{\vee}\) is now smaller than \(\hat{\Delta}_{+}^{\vee}\), so some of the previous fixed points will be projected out; thirdly, one should quotient by the Weyl group \(W_{\mathbf{P}^{\vee}}\) action to get the final results. So the pattern on the Hitchin side matches precisely that on the VOA side.
\begin{table}
\begin{tabular}{|c|c||c|c||c|c|} \hline \(t_{\beta}y\) & \(\Lambda\) & \(t_{\beta}y\) & \(\Lambda\) & \(t_{\beta}y\) & \(\Lambda\) \\ \hline \(1\) & \(-\frac{9}{4}\Lambda_{0}\) & \(t_{-\alpha_{1}-\alpha_{2}}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) & \(t_{-\alpha_{1}-2\alpha_{2}}\) & \(-\frac{9}{4}\Lambda_{2}\) \\ \hline \(t_{-2\alpha_{1}-\alpha_{2}}\) & \(-\frac{9}{4}\Lambda_{1}\) & \(t_{-\alpha_{2}}s_{1}\) & \(-\frac{2}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) & \(t_{\alpha_{1}-\alpha_{2}}s_{1}\) & \(-\frac{5}{4}\Lambda_{0}+\frac{1}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) \\ \hline \(t_{-\alpha_{1}}s_{2}\) & \(-\frac{2}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \(t_{-\alpha_{1}+\alpha_{2}}s_{2}\) & \(-\frac{5}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}+\frac{1}{4}\Lambda_{2}\) & \(t_{\alpha_{2}}s_{2}s_{1}\) & \(-\frac{3}{4}\Lambda_{1}-\frac{6}{4}\Lambda_{2}\) \\ \hline \(t_{2\alpha_{2}}s_{2}s_{1}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{1}\) & \(t_{\alpha_{1}+2\alpha_{2}}s_{2}s_{1}\) & \(-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{2}\) & \(t_{\alpha_{1}}s_{1}s_{2}\) & \(-\frac{6}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) \\ \hline \(t_{2\alpha_{1}}s_{1}s_{2}\) & \(-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{2}\) & \(t_{2\alpha_{1}+\alpha_{2}}s_{1}s_{2}\) & \(-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}\) & \(t_{\alpha_{1}+\alpha_{2}}s_{1}s_{2}s_{1}\) & \(\frac{1}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) \\ \hline \(t_{2\alpha_{1}+2\alpha_{2}}s_{1}s_{2}s_{1}\) & \(-\frac{5}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) & & & & \\ \hline \end{tabular}
\end{table}
Table 10: Fixed points of \(\mathcal{M}_{Hit}(\mathfrak{sl}_{3},3/u,[3])\) and their images under the bijection (5.1).
**Example 5.3**.: \(W_{-3+3/u}(\mathfrak{sl}_{3},[2,1])\leftrightarrow\mathcal{M}_{Hit}(\mathfrak{sl}_ {3},u/3,[2,1])\)_. \(L_{\nu}\) is again empty and \(S_{\nu}\) is the same as equation (5.7) but_
\[\Delta_{\widehat{\mathfrak{n}}^{\vee}}=\hat{\Delta}_{+}\backslash\{\alpha_{1}\}, \tag{5.9}\]
_and so \(W_{\mathbf{P}^{\vee}}\) is generated by \(s_{1}\). The condition for fixed points are_
\[\tilde{w}(S_{u})\subset\Delta_{\widehat{\mathfrak{n}}^{\vee}}=\hat{\Delta}_{+ }^{\vee}\backslash\{\alpha_{1}\},\quad\tilde{w}\in W_{\mathbf{P}^{\vee}} \backslash W_{aff}. \tag{5.10}\]
_The total of 6 fixed points for \(u=4\) is listed in table 11; they are matched to the modules in table 8 through the bijection (5.1). Alcoves of fixed points are drawn in figure 5. In general there are \(u(u-1)/2\) fixed points._
\begin{tabular}{|c||c|} \hline \(t_{\beta}y,\;\Lambda\) & \(t_{\beta}y,\;\Lambda\) \\ \hline \(t_{2\alpha_{1}+\alpha_{2}}s_{1}s_{2},\;-\frac{6}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}\) & \(t_{-2\alpha_{1}-\alpha_{2}},\;-\frac{9}{4}\Lambda_{1}\) \\ \(t_{-\alpha_{1}+\alpha_{2}}s_{2},\;-\frac{5}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}+\frac{1}{4}\Lambda_{2}\) & \(t_{\alpha_{1}-\alpha_{2}}s_{1},\;-\frac{5}{4}\Lambda_{0}+\frac{1}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) \\ \hline \(t_{2\alpha_{2}}s_{2}s_{1},\;-\frac{3}{4}\Lambda_{0}-\frac{6}{4}\Lambda_{1}\) & \(t_{-\alpha_{1}-\alpha_{2}},\;-\frac{3}{4}\Lambda_{0}-\frac{3}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) \\ \(t_{2\alpha_{1}+2\alpha_{2}}s_{1}s_{2}s_{1},\;-\frac{5}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) & \(t_{-\alpha_{2}}s_{1},\;-\frac{2}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{2}{4}\Lambda_{2}\) \\ \hline \(t_{\alpha_{1}}s_{1}s_{2},\;-\frac{6}{4}\Lambda_{1}-\frac{3}{4}\Lambda_{2}\) & \(t_{\alpha_{2}}s_{2}s_{1},\;-\frac{3}{4}\Lambda_{1}-\frac{6}{4}\Lambda_{2}\) \\ \(t_{-\alpha_{1}}s_{2},\;-\frac{2}{4}\Lambda_{0}-\frac{2}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) & \(t_{\alpha_{1}+\alpha_{2}}s_{1}s_{2}s_{1},\;\frac{1}{4}\Lambda_{0}-\frac{5}{4}\Lambda_{1}-\frac{5}{4}\Lambda_{2}\) \\ \hline \end{tabular}
**Table 11**: Fixed points of \(\mathcal{M}_{Hit}(\mathfrak{sl}_{3},3/4,[2,1])\) and their images under the bijection (5.1). The affine Weyl elements in the same box are related by the left action of \(s_{1}\), so they reduce to the same fixed point.
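The reduction pattern of example 5.3 can be checked in the same way. In the following standalone sketch (same conventions and the same ad hoc search box as in the previous illustration), the image of \(S_{u}\) is additionally required to avoid \(\alpha_{1}\), as in (5.10), and the surviving elements are counted modulo the left action of \(s_{1}\); for \(u=4\) this gives the \(6=u(u-1)/2\) fixed points of table 11.

```python
from itertools import product

def ip(a, b):
    return 2*a[0]*b[0] - a[0]*b[1] - a[1]*b[0] + 2*a[1]*b[1]

def s1(a): return (a[1] - a[0], a[1])
def s2(a): return (a[0], a[0] - a[1])
def compose(f, g): return lambda a: f(g(a))

W = [lambda a: a, s1, s2, compose(s1, s2), compose(s2, s1),
     compose(s1, compose(s2, s1))]

def allowed(root, level):
    """Image must lie in the positive affine roots with alpha_1 removed, (5.10)."""
    if level != 0:
        return level > 0
    if root == (1, 0):                 # alpha_1 is excluded
        return False
    return root[0] >= 0 and root[1] >= 0 and root != (0, 0)

def count_W21_modules(u):
    S_u = [((1, 0), 0), ((0, 1), 0), ((-1, -1), u)]
    box = range(-u, u + 1)
    n = 0
    for b1, b2, y in product(box, box, W):
        beta = (b1, b2)
        if all(allowed(y(a), l - ip(y(a), beta)) for a, l in S_u):
            n += 1
    # the admissible set is stable under left multiplication by s_1 and each
    # orbit has exactly two elements, so the number of fixed points is n/2
    return n // 2

print(count_W21_modules(4))            # 6 = u(u-1)/2, matching table 11
```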
Figure 4: Alcoves corresponding to fixed points for \(A_{2}\), \(\nu=4/3\). Each alcove gives rise to an affine Weyl group element whose inverse gives rise to a fixed point: here \(s_{0}=s_{\theta}t_{-\theta}\), and \(s_{i}\) is the Weyl reflection of the simple root \(\alpha_{i}\) of the Lie algebra. For example, the element \(s_{0}\) in the region gives rise to an element \(s_{0}^{-1}=t_{\theta}s_{\theta}=t_{\alpha_{1}+\alpha_{2}}s_{1}s_{2}s_{1}\).
**Example 5.4**.: \(W_{-3+3/u}(\mathfrak{sl}_{3},[3])\leftrightarrow\mathcal{M}_{Hit}(\mathfrak{sl}_{ 3},u/3,[1,1,1])\). Here \(\Delta_{\bar{\mathfrak{n}}^{\vee}}=\hat{\Delta}_{+}^{\vee}\backslash\{\alpha_{ 1},\alpha_{2}\}\), and \(W_{\mathbf{P}^{\vee}}\) is the full Weyl group of \(\mathfrak{sl}_{3}\), so one only has to consider the affine Weyl group elements of the form \(t_{-k_{1}\alpha_{1}-k_{2}\alpha_{2}}\). Constraints on fixed points are
\[t_{-k_{1}\alpha_{1}-k_{2}\alpha_{2}}(-\alpha_{1}-\alpha_{2}+u \delta)=-\alpha_{1}-\alpha_{2}+(-k_{1}-k_{2}+u)\delta, \tag{5.11}\] \[t_{-k_{1}\alpha_{1}-k_{2}\alpha_{2}}(\alpha_{1})=\alpha_{1}+(2k _{1}-k_{2})\delta,\] \[t_{-k_{1}\alpha_{1}-k_{2}\alpha_{2}}(\alpha_{2})=\alpha_{2}+(2k _{2}-k_{1})\delta.\]
The set of allowed \((k_{1},k_{2})\) is
\[\{(k_{1},k_{2})\in\mathbb{Z}^{2}\ |\ u-k_{1}-k_{2}>0,\ 2k_{1}-k_{2}>0,\ 2k_{2}-k_{1}>0\} \tag{5.12}\]
and the number of fixed points is \(\frac{(u-2)(u-1)}{6}\). The corresponding W-algebra \(W_{-3+3/u}(\mathfrak{sl}_{3},[3])\) is isomorphic to the \(W_{3}(3,3+u)\) minimal model, and this is the bijection discussed in [9]. Alcoves corresponding to fixed points when \(u=4\) are shown in figure 6.
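The set (5.12) is elementary to enumerate directly; the short sketch below (an illustration only) checks the count \((u-2)(u-1)/6\) for a few values of \(u\) coprime to \(3\).

```python
def n_fixed_points_principal_sl3(u):
    """Enumerate the set (5.12) of allowed (k1, k2)."""
    pts = [(k1, k2)
           for k1 in range(u) for k2 in range(u)
           if u - k1 - k2 > 0 and 2*k1 - k2 > 0 and 2*k2 - k1 > 0]
    return len(pts)

for u in (4, 5, 7, 8):
    assert n_fixed_points_principal_sl3(u) == (u - 1) * (u - 2) // 6
    print(u, n_fixed_points_principal_sl3(u))
```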
#### 5.1.2 Non-admissible W-algebras
In general it is not easy to study the representation theory of non-admissible W-algebras. On the other hand, computing fixed manifolds of the corresponding affine Springer fibers is straightforward. Although we will not be able to provide a proof of the bijection as in the boundary admissible cases, we can show that the bijection still holds for the few cases when the simple modules of non-admissible W-algebras are known [86, 87], and it is also interesting to use this bijection to predict information on other non-admissible W-algebras.
Figure 5: Alcoves corresponding to fixed points for \(A_{2}\), \(\nu=4/3\), \(f^{\vee}=[2,1]\). Alcoves with a blue edge (corresponding to the reflection \(s_{1}\)) on the walls do not reduce to a fixed point (under \(s_{1}\) they are reflected out of the area bounded by walls). Alcoves separated by a blue edge (two alcoves encircled by red edges and black edges) reduce to the same fixed point. Note that the label in each alcove is \(w^{-1}\) in terms of simple reflections.
For example, consider the affine vertex algebra \(L_{-2}(D_{4})\). Since \(h^{\vee}=6\), the level \(\kappa=-6+4/1\) is non-admissible. On the fibre side we have \(\mathfrak{g}=D_{4},\nu=\frac{1}{4},\mathbf{P}^{\vee}=\mathbf{I}^{\vee}\). To compute fixed varieties, we first find \(L_{\nu}\) and \(S_{\nu}\). The set \(L_{\nu}=\{\alpha+l\delta\ |\ \frac{1}{4}(\alpha,\rho^{\vee})+l=0\}\) in this case is non-empty
\[L_{\nu}=\{\pm(-\mu+\delta)\}, \tag{5.13}\]
where \(\mu=\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}\), so \(W_{\nu}\) is the Weyl group generated by \(s_{-\mu+\delta}\). The set \(S_{\nu}=\{\alpha+l\delta\ |\ \frac{1}{4}(\alpha,\rho^{\vee})+l=\frac{1}{4}\}\) is also larger than the admissible case:
\[S_{\nu}=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},-\mu+\alpha_{1}+\delta,-\mu+\alpha_{3}+\delta,-\mu+\alpha_{4}+\delta,\theta-\delta\}, \tag{5.14}\]
with \(\theta=\mu+\alpha_{2}=\alpha_{1}+2\alpha_{2}+\alpha_{3}+\alpha_{4}\) being the highest root. We adopt the Bourbaki numbering for simple roots [85, 88].
By the discussion in section 4.3, the dimension one fixed variety is given by the affine Weyl element \(\tilde{w}\) such that \(\tilde{w}(S_{\nu})\subset\hat{\Delta}_{+}\) up to the right action of \(W_{\nu}\). There are only two such elements in \(W_{aff}\), and they are indeed in the same \(W_{\nu}\) orbit
\[\begin{split}\tilde{w}_{0}&=s_{0}=t_{\theta}s_{2}s_{3}s_{1}s_{2}s_{4}s_{2}s_{3}s_{1}s_{2},\\ \tilde{w}_{0}^{\prime}&=s_{0}s_{-\mu+\delta}=t_{\mu}s_{3}s_{1}s_{2}s_{4}s_{2}s_{3}s_{1}s_{2},\end{split} \tag{5.15}\]
where \(s_{i}\) is the simple reflection corresponding to the simple root \(\alpha_{i}\). Therefore there is only one fixed variety with dimension 1.
Figure 6: Alcoves corresponding to fixed points for \(A_{2}\), \(\nu=4/3\), \(f^{\vee}=[1,1,1]\). Alcoves encircled by a hexagon of black edges are in the same \(W_{\mathbf{P}^{\vee}}\) orbit, so they reduce to the same fixed point. Since there is only one such hexagon in the area bounded by walls, there is only one fixed point. Note that the label in each alcove is \(w^{-1}\) in terms of simple reflections.
The dimension \(0\) fixed points correspond to affine Weyl group elements \(\tilde{w}\) satisfying \(|\tilde{w}(S_{\nu})\cap\hat{\Delta}_{-}|=1\) up to the right action by \(W_{\nu}\), and there are four fixed points
\[\begin{split}&\tilde{w}_{1}=1\\ &\tilde{w}_{2}=s_{1}s_{0}=t_{\theta}s_{2}s_{3}s_{1}s_{2}s_{4}s_{2} s_{3}s_{1}s_{2}s_{1},\\ &\tilde{w}_{3}=s_{3}s_{0}=t_{\theta}s_{2}s_{3}s_{1}s_{2}s_{4}s_{1 }s_{2}s_{3}s_{1}s_{2},\\ &\tilde{w}_{4}=s_{4}s_{0}=t_{\theta}s_{4}s_{2}s_{3}s_{1}s_{2}s_{4 }s_{2}s_{3}s_{1}s_{2}.\end{split} \tag{5.16}\]
The weights under the bijection (5.1) are given by \(t_{\beta}w.(-2\Lambda_{0})\) and the results are summarized in table 12 (\(-2\Lambda_{0}\) is invariant under the dot action of \(W_{\nu}\), so \(\tilde{w}\) and \(\tilde{w}s_{-\mu+\delta}\) give the same weight). Indeed they agree with results in the VOA literature [86; 87]. If one changes \(f\) to an element in the minimal nilpotent orbit, there will be only one fixed point on the fibre side, and this is also consistent with the fact that \(W_{-2}(D_{4},\min)\) is isomorphic to \(\mathbb{C}\) [87]. More examples of the bijection between fixed varieties and simple modules of non-admissible W-algebras are discussed in appendix A.
#### 5.1.3 Formula for the number of fixed varieties
Here we give a formula for the number of fixed varieties of a fibre, which also gives the number of simple modules of the corresponding W-algebra under the bijection (5.1).
1. For the Hitchin system defined by \({}^{n}\hat{\mathfrak{j}}\), \(\nu=u/m\) and \(\mathbf{I}^{\vee}\), the corresponding VOA is \(L_{-h^{\vee}+\frac{1}{n\nu}}(\mathfrak{g})\), where \(\hat{\mathfrak{g}}\) is the Langlands dual of \({}^{n}\hat{\mathfrak{j}}\), whose finite part is \(\mathfrak{g}^{\vee}\). Let \(a\) be the dimension of the cohomology of the fixed varieties when \(u=1\); then that of general \(u\) is [45; 46] \[au^{r}\] (5.17) with \(r\) being the rank of \(\mathfrak{g}\). In particular, when \(m\) is a regular elliptic number, \(a=1\), there are only fixed points, and their number is \(u^{r}\). The value of \(a\) for the other cases can be found in [46].
2. For the Hitchin system defined by a non-twisted affine Lie algebra \(\hat{\mathfrak{j}}\), \(\nu=u/h^{\vee}\), and a general \(\mathbf{P}^{\vee}\) given by a standard parabolic subalgebra, the corresponding VOA is a W-algebra at a boundary admissible level; we show in [42] that the number of
fixed points is \[\frac{(u-m_{1})(u-m_{2})\cdots(u-m_{i})}{(m_{1}+1)(m_{2}+1)\cdots(m_{i}+1)},\] (5.18) where the set \(\{m_{1},m_{2},\cdots,m_{i}\}\) is the set of exponents of the Weyl group of the Levi \(\mathfrak{l}\) of \(\mathfrak{p}\); a small evaluation of this formula is sketched below.
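For orientation, the formula can be evaluated for the principal case, where \(\mathfrak{l}=\mathfrak{g}\) and the \(m_{i}\) are the exponents of \(\mathfrak{g}\); the sketch below is only an illustration, with the exponents of \(\mathfrak{sl}_{N}\) taken to be \(1,\dots,N-1\), and for \(N=3\) it reproduces the count \((u-1)(u-2)/6\) of example 5.4.

```python
from math import prod
from fractions import Fraction

def n_fixed_points(u, exponents):
    """Evaluate the counting formula (5.18) for a given list of exponents."""
    num = prod(u - m for m in exponents)
    den = prod(m + 1 for m in exponents)
    return Fraction(num, den)

# principal W-algebra of sl_N: the exponents are 1, ..., N-1
for N, u in [(2, 5), (3, 4), (3, 5), (4, 5)]:
    print(N, u, n_fixed_points(u, range(1, N)))
# e.g. (N, u) = (2, 5) gives 2, and (3, 4) gives 1
```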
### Conformal weights and momentum map
The bijection (5.1) also maps geometric data on the Hitchin side to algebra data on the VOA side. On the Hitchin side, one can define a moment map of the \(\mathbb{C}^{*}\) action, and it was shown in several cases that the value of the moment map at each fixed point is equal to the conformal weight of the corresponding module up to a constant shift [9; 10]. We discuss a generalization of this correspondence in this section.
For simplicity, let us focus on the regular elliptic slope \(\nu\), so the W-algebra is \(W_{-h^{\vee}+\frac{h^{\vee}}{nu}}(\mathfrak{g},f)\). Given a fixed point \(\tilde{w}=t_{b}w\), the corresponding Higgs field is (up to gauge transformation)
\[\Phi_{\tilde{w}}(z)dz=\frac{dz}{z}\sum_{\alpha+l\delta\in S_{\nu},l>0}z^{l-(w^{-1}b,\alpha)}e_{w\alpha}. \tag{5.19}\]
Following [9], one can define the moment map on the Hitchin moduli space 19
Footnote 19: We add a factor of \(\frac{1}{2}\) in the definition to better match with the conformal dimension of the simple module. We also set all parabolic weights \(\alpha_{i}\) to zero for simplicity.
\[\mu\equiv\frac{i}{2\pi}\int\mathrm{Tr}\left(\Phi\wedge\Phi^{\dagger_{h}}-\mathrm{Id}|z|^{2(u-h^{\vee})/h^{\vee}}dzd\bar{z}\right), \tag{5.20}\]
where \(\Phi^{\dagger_{h}}=h^{-1}\Phi^{\dagger}h\) is the Hermitian adjoint of \(\Phi\), and \(h\) is the Hermitian metric of the Higgs bundle. We propose the following relation between the moment map of \(\Phi_{\tilde{w}}\) and the conformal dimension of \(H_{f}(L(\Lambda_{\tilde{w}}))\)
\[\boxed{h_{H_{f}(L(\Lambda_{\tilde{w}}))}=\mu(\Phi_{\tilde{w}}(z))-\left[\frac{u}{h^{\vee}}|\rho|^{2}-\frac{h^{\vee}}{u}|x|^{2}-2(x,\rho)\right].} \tag{5.21}\]
Here \(x\) should be chosen to be \(H/2\) of the standard triple \((X,Y,H)\) to match with the VOA corresponding to the 4d theory. It is straightforward to check that when \(\mathfrak{g}=A_{N-1}\) and \(f=[N]\), equation (5.21) reproduces the result in [9], and when \(\mathfrak{g}=A_{1}\) and \(f=[1,1]\), equation (5.21) also gives the result in [10].
We provide a derivation of (5.21) for \(\mathfrak{g}=A_{N-1}\), which essentially follows from [9]. The fixed point then has the following matrix form
\[\Phi_{\tilde{w}}(z)=M\left(\begin{array}{cccc}0&z^{b_{1}}&&\\ &&\ddots&\\ &&&z^{b_{N-1}}\\ z^{b_{N}}&&&\end{array}\right)M^{-1}dz, \tag{5.22}\]
where \(M\) is a permutation matrix and
\[b_{i}=-(w^{-1}b,\alpha_{i})-1,\ 1\leq i\leq N-1,\quad b_{N}=u-1+(w^{-1}b,\theta). \tag{5.23}\]
The moment map at this fixed point can be computed explicitly using the definition (5.20)
\[\mu(\Phi_{(w,b)})=\frac{h^{\vee}}{2u}|\mathbf{a}|^{2}, \tag{5.24}\]
where the coordinates of the \(N\)-dimensional vector \(\mathbf{a}\) are related to the coordinates of the vector \(\mathbf{b}\) by
\[b_{i}-\frac{u-h^{\vee}}{h^{\vee}}=a_{i}-a_{i+1},\quad\sum_{i=1}^{N}a_{i}=0. \tag{5.25}\]
Here \(a_{N+1}\) is identified with \(a_{1}\). Using the definition (5.23) of \(b_{i}\) and \(\alpha_{i}=e_{i}-e_{i+1}\) in the orthogonal basis, one gets
\[a_{i}=(-w^{-1}b-\frac{u}{h^{\vee}}\rho,e_{i}). \tag{5.26}\]
Here \(\rho\) is the Weyl vector, and \((\rho,\alpha_{i})=1\). Therefore the value of moment map at the fixed point is
\[\mu(\Phi_{\widehat{w}}(z))=\frac{h^{\vee}}{2u}|\mathbf{a}|^{2}= \frac{h^{\vee}}{2u}\sum_{i}(-w^{-1}b-\frac{u}{h^{\vee}}\rho,e_{i})^{2}\] \[=\frac{h^{\vee}}{2u}|-w^{-1}b-\frac{u}{h^{\vee}}\rho|^{2}=\frac{ h^{\vee}}{2u}|b+\frac{u}{h^{\vee}}w\rho|^{2}, \tag{5.27}\]
Using the formula of the admissible weight \(\Lambda_{\widehat{w}}\)
\[\Lambda_{\widehat{w}}=t_{b}w.\left(-h^{\vee}+\frac{h^{\vee}}{u}\right)\Lambda _{0}, \tag{5.28}\]
the finite part \(\lambda_{\widehat{w}}\) of \(\Lambda_{\widehat{w}}\) is
\[\lambda_{\widehat{w}}=w\rho+\frac{h^{\vee}}{u}b-\rho. \tag{5.29}\]
Clearly we have
\[|b+\frac{u}{h^{\vee}}w\rho|^{2}=\frac{u^{2}}{(h^{\vee})^{2}}|\lambda_{\widehat {w}}+\rho|^{2}, \tag{5.30}\]
and the moment map can then be expressed in terms of \(\lambda_{\widehat{w}}\)
\[\mu(\Phi_{\widehat{w}}(z))=\frac{u}{2h^{\vee}}|\lambda_{\widehat{w}}+\rho|^{2}. \tag{5.31}\]
Comparing (5.31) with the formula (3.27) of the conformal dimension of \(H_{f}(L(\Lambda_{\widehat{w}}))\)
\[h_{H_{f}(L(\Lambda_{\widehat{w}}))}=\frac{u}{2h^{\vee}}(|\lambda_{\widehat{w} }+\rho|^{2}-|\rho|^{2})-\frac{h^{\vee}}{2u}|x|^{2}+(x,\rho), \tag{5.32}\]
we get the relation (5.21) between moment maps and conformal dimension.
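The chain of identities (5.23)-(5.31) can also be verified numerically. The sketch below is an illustration for \(\mathfrak{sl}_{3}\); the particular permutation \(w\) and root-lattice vector \(b\) are arbitrary choices made only for this check and are not taken from the text.

```python
import numpy as np

N, u = 3, 4                                           # sl_3 at nu = u/N
rho = np.array([(N - 1) / 2 - i for i in range(N)])   # Weyl vector in the e_i basis
theta = np.zeros(N)
theta[0], theta[-1] = 1, -1                           # highest root e_1 - e_N
alphas = [np.eye(N)[i] - np.eye(N)[i + 1] for i in range(N - 1)]

perm = np.array([1, 2, 0])        # w: e_i -> e_{perm[i]}   (arbitrary choice)
b = np.array([1.0, -2.0, 1.0])    # a root-lattice element  (arbitrary choice)

def act(p, v):                    # w v, with (w v)_{p[i]} = v_i
    out = np.empty_like(v)
    out[p] = v
    return out

w_inv_b = b[perm]                 # w^{-1} b in coordinates

# exponents of the fixed Higgs field, eq. (5.23)
b_vec = [-np.dot(w_inv_b, a) - 1 for a in alphas] + [u - 1 + np.dot(w_inv_b, theta)]

# solve (5.25) for a (with sum(a) = 0) and compare with the closed form (5.26)
a = np.zeros(N)
for i in range(1, N):
    a[i] = a[i - 1] - (b_vec[i - 1] - (u - N) / N)
a -= a.mean()
assert np.allclose(a, -w_inv_b - (u / N) * rho)       # eq. (5.26)

# moment map (5.24) versus the weight formula (5.31)
mu = N / (2 * u) * np.dot(a, a)
lam_plus_rho = act(perm, rho) + (N / u) * b           # lambda + rho, from (5.29)
assert np.allclose(mu, u / (2 * N) * np.dot(lam_plus_rho, lam_plus_rho))
print(mu)                                             # 0.5833... = 7/12 here
```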
### Modular properties
**Modular transformation and DAHA**: One important aspect of a VOA is the modular property of the characters of its modules [43]. It is natural to ask whether a similar modular transformation exists on the Hitchin side, and it indeed does. The cohomology of the Hitchin moduli space (which is related to the data of the fixed varieties by using Morse theory) is realized as a finite dimensional representation of the double affine Hecke algebra [45; 89], and the \(PSL^{c}(2,\mathbb{Z})\) action on DAHA [47] induces a \(PSL^{c}(2,\mathbb{Z})\) action on the cohomology of the fixed varieties. It is then natural to compare the above two sets of modular transformations. This relation will be proved in [42]. The relation between modular matrices of minimal W-algebras of \(A\) type and spherical DAHA of \(A\) type was studied in [48].
**Modular property for non-admissible W-algebras**: The cohomology group \(H^{*}(\mathcal{M}_{Hit})\) considered in this paper carries a DAHA action. In good cases there is also a natural \(PSL^{c}(2,\mathbb{Z})\) action on \(H^{*}(\mathcal{M}_{Hit})\). Given the correspondence between the fixed varieties of the Hitchin system and the modules of the VOA, one would find interesting implications for the modular property of non-admissible W-algebras. A crucial fact is that in general the fixed varieties of \(\mathcal{M}_{Hit}\) corresponding to a non-admissible W-algebra have **higher** dimensional components. This is in contrast with the admissible case, where the fixed varieties are all of dimension zero.
Now in our correspondence, each irreducible component of the fixed varieties gives a simple module (in the category \(\mathcal{O}\)) of the corresponding VOA. However, in the Morse theory each higher dimensional fixed variety contributes more than one basis vector to the cohomology. So the above mismatch suggests that if one wants to have the modular property for the VOA, one has to enlarge the set of VOA modules. For instance, one might need to add logarithmic modules to have the modular property, which is also observed in some non-admissible VOAs [90; 91]. In fact, our correspondence suggests that the number of added modules should be the same as the dimension of the cohomology coming from the fixed varieties.
**Modular data and Coulomb branch index**: One can define a Coulomb branch index \(\mathcal{I}_{\mathcal{T}}^{m}(t)\) (Hitchin character) of the 4d theory \(\mathcal{T}\) on \(L(m,1)\times S^{1}\), where \(L(m,1)\) is the Lens space [10; 49]. The Coulomb branch index has an expansion in terms of the fixed varieties, and geometric data such as the momentum map play a crucial role in computing it. On the other hand, the Lens space Coulomb index \(\mathcal{I}^{m}(t)\) is deeply connected to the modular matrices (3.13) and (3.28) of the corresponding VOA [10; 11; 49], namely
\[\lim_{t\to e^{2\pi i}}\mathcal{I}_{\mathcal{T}}^{m}(t)=a(\mathbb{S} \mathbb{T}^{m}\mathbb{S})_{vac,vac}, \tag{5.33}\]
where \(a\) is a constant determined by \(\mathfrak{g}\), \(\mathbb{S}\) and \(\mathbb{T}\) are the modular matrices of characters of the VOA corresponding to theory \(\mathcal{T}\), and \((\mathbb{S}\mathbb{T}^{m}\mathbb{S})_{vac,vac}\) means the vacuum-vacuum component of the matrix \(\mathbb{S}\mathbb{T}^{m}\mathbb{S}\). In general, \(\mathcal{I}_{\mathcal{T}}^{m}\) is difficult to compute for the theories considered in this paper, as most of them lack a Lagrangian description. However, when \(m=1\), the Lens space is just the 3-sphere \(S^{3}\), and the Coulomb branch index on \(S^{3}\times S^{1}\) is completely determined by the Coulomb branch spectrum of the 4d theory, which can be obtained using the method in [17; 19; 20], allowing us to check the relation (5.33) for \(m=1\).
**Example:** Consider 4d theory \(\mathcal{T}_{A_{N-1},\frac{u}{N},f=\text{trivial}}\) with \(\gcd(N,u)=1\) (section 2.1), the corresponding VOA is \(L_{-N+\frac{N}{u}}(A_{N-1})\). The Coulomb branch spectrum \(\text{CB}_{\mathcal{T}}\) can be found using the method in [19] and is the following set of rational numbers
\[\text{CB}_{\mathcal{T}}=\{i-j\frac{N}{u}\ |\ i,j\in\mathbb{Z},\ 2\leq i\leq N,1\leq j\leq\lfloor(i-1)\frac{u}{N}\rfloor\}. \tag{5.34}\]
Here \(\lfloor x\rfloor\) is the maximal integer less than or equal to \(x\). The Coulomb branch index \({\cal I}_{\cal T}(t)\) on \(S^{3}\times S^{1}\) is then
\[{\cal I}_{\cal T}(t)=\prod_{d\in{\rm CB}_{\cal T}}\frac{1}{1-t^{d}}. \tag{5.35}\]
Because no element of \({\rm CB}_{\cal T}\) is an integer, the limit \(t\to e^{2\pi i}\) of \({\cal I}_{\cal T}\) is not singular; comparing with the modular matrices (3.13) of \(L_{-N+\frac{N}{u}}(A_{N-1})\), we find that for the theory \({\cal T}_{A_{N-1},\frac{u}{N},{\rm trivial}}\),
\[\lim_{t\to e^{2\pi i}}{\cal I}_{\cal T}=e^{2i\pi(h_{\rm min}-\frac{c}{24})}(\mathbb{S}\mathbb{T}\mathbb{S})_{vac,vac}, \tag{5.36}\]
where \(h_{\rm min}=-\frac{\dim\mathfrak{g}}{24}\left(u-\frac{1}{u}\right)\) is the smallest conformal weight among all admissible modules of \(L_{-N+\frac{N}{u}}(A_{N-1})\). It would be nice to generalize this relation to the Lens space index \(L(m,1)\) in the future.
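The spectrum (5.34) and the limit of (5.35) are straightforward to evaluate; the sketch below is a small numerical illustration for a few values of \((N,u)\) and does not attempt the comparison with the modular matrices.

```python
import cmath
from fractions import Fraction

def cb_spectrum(N, u):
    """The Coulomb branch spectrum (5.34) of T_{A_{N-1}, u/N, trivial}."""
    return [Fraction(i) - Fraction(j * N, u)
            for i in range(2, N + 1)
            for j in range(1, (i - 1) * u // N + 1)]

def index_limit(N, u):
    """Limit t -> exp(2 pi i) of the S^3 x S^1 index (5.35)."""
    val = 1.0 + 0.0j
    for d in cb_spectrum(N, u):
        val /= 1 - cmath.exp(2j * cmath.pi * float(d))
    return val

for N, u in [(2, 3), (2, 5), (3, 4)]:
    print(N, u, cb_spectrum(N, u), index_limit(N, u))
```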
### Zhu's \(C_{2}\) algebra and the cohomology ring
For each VOA \(V\) there is a commutative algebra \(C_{2}(V)\) associated to \(V\) called Zhu's \(C_{2}\) algebra. In the following, we will present examples when \(C_{2}(V)\) is isomorphic to the cohomology ring of the corresponding Hitchin system.
Consider \(\mathfrak{g}=A_{N-1}\), \(\nu=\frac{u}{N}\) and \(f\) principal. The VOA is then the principal W-algebra \(W_{-N+N/u}(A_{N-1},{\rm prin})\) (i.e. the \(W_{N}(N,u)\) minimal model). Motivated by the character of its vacuum module, \(C_{2}(W_{-N+N/u}(A_{N-1},{\rm prin}))\) is conjectured to be the same as the Jacobi algebra of an isolated hypersurface singularity [92]
\[\mathbb{C}[T_{2},T_{3},\cdots,T_{N}]/\langle\frac{\partial f}{\partial T_{2}},\cdots,\frac{\partial f}{\partial T_{N}}\rangle. \tag{5.37}\]
Here \(T_{2},\cdots,T_{N}\) are generators with degrees \(2,\cdots,N\), and \(f[T_{2},\ldots,T_{N}]\) is an isolated singularity of degree \(u+1\). The generators \(\{\frac{\partial f}{\partial T_{2}},\cdots,\frac{\partial f}{\partial T_{N}}\}\) of the ideal then have degrees \(u+1-N,\cdots,u-1\). This construction ensures that the above algebra has dimension \(\frac{(u-1)!}{N!(u-N)!}\), which is just the dimension of the Milnor algebra.
On the other hand, the cohomology ring for the corresponding Hitchin system is given by the following ring [89, 93]
\[\mathbb{C}[e_{2},e_{3},\cdots,e_{N}]/\langle g_{u+1-N},\cdots,g_{u-1}\rangle. \tag{5.38}\]
Here the generators \(e_{2},\cdots,e_{N}\) also have degrees \(2,\cdots,N\), and the generator \(g_{u-N+i}\) of the ideal is the coefficient of \(w^{u-N+i}\) in the Taylor expansion of
\[(1+e_{2}w^{2}+\ldots+e_{N}w^{N})^{u/N} \tag{5.39}\]
at \(w=0\). From the descriptions above, one can deduce that the ring (5.37) and the ring (5.38) are isomorphic. This relation has a similar flavor to the Hikita conjecture, which relates the **coordinate ring** of some scheme coming from a conical symplectic singularity to the **cohomology ring** of a symplectic resolution of the dual conical symplectic singularity [94]. In our context, the coordinate ring comes from Zhu's \(C_{2}\) algebra, which would indeed give the coordinate ring of the Higgs branch [95]. It would be interesting to further study this correspondence in more general setups.
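The ideal generators in (5.38) are just series coefficients of (5.39), so they can be extracted symbolically. The following sketch is an illustration using sympy for \(N=3\): for \(u=4\) the ideal is \(\langle e_{2},e_{3}\rangle\) and the quotient is one dimensional, consistent with \((u-1)(u-2)/6=1\).

```python
import sympy as sp

def ideal_generators(N, u):
    """Coefficients g_{u-N+i} of w^(u-N+i) in (1 + e_2 w^2 + ... + e_N w^N)^(u/N),
    cf. (5.38)-(5.39)."""
    w = sp.symbols('w')
    es = sp.symbols(f'e2:{N + 1}')                     # e2, ..., eN
    F = (1 + sum(e * w**k for e, k in zip(es, range(2, N + 1)))) ** sp.Rational(u, N)
    ser = sp.expand(sp.series(F, w, 0, u).removeO())
    return [sp.expand(ser.coeff(w, u - N + i)) for i in range(1, N)]

print(ideal_generators(3, 4))   # [4*e2/3, 4*e3/3]    -> ideal <e2, e3>
print(ideal_generators(3, 5))   # [5*e3/3, 5*e2**2/9] -> ideal <e3, e2**2>
```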
### Generalization to arbitrary \(f\)
So far we have assumed the nilpotent element \(f\) which labels the regular singularity to be a regular (principal) nilpotent element in a Levi subalgebra of \(\mathfrak{g}\); however, there are many nilpotent elements which are distinguished but not regular in any minimal Levi subalgebra containing them (distinguished but not regular for short). For example, when \(\mathfrak{g}\) is of type \(CDEFG\), any element \(f\) in the subregular nilpotent orbit is distinguished but not regular, as the minimal Levi subalgebra containing \(f\) is \(\mathfrak{g}\) itself.
Given a 4d theory whose regular singularity is labelled by a nilpotent element \(f\) that is distinguished but not regular, we should modify the definition of its corresponding \(\mathcal{M}_{Hit}\) in the following way. Adopting the same notation as in section 4.1, with the modification that \(\mathfrak{l}\) is the minimal standard Levi subalgebra containing \(f\), we still consider the Higgs bundle \((E,\Phi)\) with a \(P^{\vee}\)-level structure at the regular singularity. However, \(\Phi\) should have a new boundary condition around the regular singularity (recall \(\Phi^{\prime}=z\Phi\))
\[\lim_{z\to 0}\Phi^{\prime}\in\overline{d(\mathcal{O}_{f}^{\mathfrak{l}})}.\]
**Example 5.5**.: \(W_{-(2n-2)+\frac{2n-2}{u}}(D_{n},[2n-3,3])\leftrightarrow\mathcal{M}_{Hit}(D_{n},\frac{u}{2n-2},f_{min})\). Here \(\gcd(u,2n-2)=1\), and \(f=[2n-3,3]\) is the subregular nilpotent element of \(D_{n}\), whose minimal Levi subalgebra is \(D_{n}\) itself. The number of extra fixed points compared to the principal case is
\[\frac{(u-h_{1})(u-h_{2})\cdots(u-h_{n})}{2^{n-2}(n-2)!}, \tag{5.40}\]
where \(\{h_{1},h_{2},\cdots,h_{n}\}\) is the following set of integers
\[\{1,3,\cdots,2n-5,2n-4,n-1\}. \tag{5.41}\]
This is just the set of exponents of \(D_{n}\) with the maximal exponent \(2n-3\) reduced by \(1\). The denominator \(2^{n-2}(n-2)!\) is also the order of the Weyl group of the centralizer of an element in \(\mathcal{O}_{min}\). This formula, together with the counting for the principal case, predicts the number of simple modules of the subregular W-algebra \(W_{-(2n-2)+\frac{2n-2}{u}}(D_{n},[2n-3,3])\). When \(u=2n-3\), the number of fixed points is \(n-2\), which is the same as the number of simple modules of \(W_{-(2n-2)+\frac{2n-2}{2n-3}}(D_{n},[2n-3,3])\) given in [81].
**Example 5.6**.: \(W_{-h^{\vee}+\frac{h^{\vee}}{u}}(E_{n},f_{subreg})\leftrightarrow\mathcal{M}_{Hit}(E_{n},\frac{u}{h^{\vee}},f_{min})\). Here \(n=6,\ 7,\ 8\) and \(\gcd(u,h^{\vee})=1\). The minimal Levi subalgebra containing \(f_{subreg}\) is again \(E_{n}\) itself. The numbers of extra fixed points compared to the principal case are
\[E_{6}: \frac{1}{6!}(u-1)(u-4)(u-5)(u-7)(u-8)(u-10), \tag{5.42}\] \[E_{7}: \frac{1}{2^{5}6!}(u-1)(u-5)(u-7)(u-9)(u-11)(u-13)(u-16),\] \[E_{8}: \frac{1}{2903040}(u-1)(u-7)(u-11)(u-13)(u-17)(u-19)(u-23)(u-28).\]
Again the \(h_{i}\)'s appearing in these formulae are the exponents of \(E_{n}\) with the maximal one reduced by \(1\), and the denominator is the order of the Weyl group of the centralizer of an element in \(\mathcal{O}_{min}\). For example, \(2903040\) is the order of the Weyl group of \(E_{7}\), which is the centralizer of an element of the \(A_{1}\) orbit of \(E_{8}\). These formulae, together with the counting for the principal case, predict the number of simple modules of the subregular W-algebra \(W_{-h^{\vee}+\frac{h^{\vee}}{u}}(E_{n},f_{subreg})\). When \(u=h^{\vee}-1\), the number of fixed points matches the number of simple modules of \(W_{-h^{\vee}+\frac{h^{\vee}}{h^{\vee}-1}}(E_{n},f_{subreg})\) computed in [81].
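Since the expressions in (5.42) are plain polynomials in \(u\), they are simple to tabulate. The sketch below is only an illustration; it evaluates them at \(u=h^{\vee}-1\), the case compared with [81] above.

```python
from math import prod
from fractions import Fraction

# Exponents of E_n with the largest one reduced by 1, the denominators of (5.42),
# and the dual Coxeter numbers h_vee.
data = {
    'E6': ([1, 4, 5, 7, 8, 10], 720, 12),
    'E7': ([1, 5, 7, 9, 11, 13, 16], 23040, 18),
    'E8': ([1, 7, 11, 13, 17, 19, 23, 28], 2903040, 30),
}

def extra_fixed_points(name, u):
    hs, den, _ = data[name]
    return Fraction(prod(u - h for h in hs), den)

for name, (_, _, h_vee) in data.items():
    print(name, 'u = h_vee - 1 :', extra_fixed_points(name, h_vee - 1))
```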
It was also proved in [81] that the W-algebras \(W_{-h^{\vee}+\frac{h^{\vee}}{h^{\vee}-1}}(\mathfrak{g},f_{subreg})\) with \(\mathfrak{g}\) of type \(D\) or \(E\) are rational, with the modular matrices of simple modules worked out explicitly. It would also be nice to match these data from the VOA side with geometric data from the Hitchin side. In general, W-algebras with distinguished \(f\) (distinguished W-algebras) play a fundamental role among W-algebras. However, the representation theory of distinguished W-algebras that are not of regular type is largely unexplored. Our correspondence provides motivation to study the space \(Sp_{\gamma,\mathbf{P}^{\vee},f}\) and to use the geometry to predict representation theories of distinguished W-algebras.
### Relation with 3d symplectic duality
When taking the limit in which the radius of the circle goes to zero, one gets a 3d \(\mathcal{N}=4\) SCFT \(\mathcal{T}^{3d}\). As mentioned in the introduction, the Higgs branch \(X\) of \(\mathcal{T}^{3d}\) is the same as the Higgs branch of the 4d theory, which is identified as the associated variety of the corresponding VOA \(V(\mathcal{T})\). The Coulomb branch \(Y\) of \(\mathcal{T}^{3d}\) is also related to the Coulomb branch of the 4d theory on \(S^{1}\). In the massless limit both \(X\) and \(Y\) are hyper-Kähler cones.
In many cases, \(Y\) (resp. \(X\)) can also be realized as the Higgs (resp. Coulomb) branch of another 3d \(\mathcal{N}=4\) quiver gauge theory \(\mathcal{T}^{3d,mirror}\), which is called the mirror of \(\mathcal{T}^{3d}\) in the physics literature [53]. Properties of \(Y\) can be quite different from its 4d counterpart:
1. Usually there are no flavor symmetries acting on the 4d Coulomb branch. However, there are sometimes emergent global symmetries on \(Y\).
2. \(Y\) is not irreducible, i.e. it typically has a component described by free hypermultiplets in the mirror theory.
Since \(X\) and \(Y\) are the Higgs and Coulomb branches of the same 3d theory, they form a symplectic pair. Actually, many familiar symplectic pairs arise this way:
**Example 5.7**.: Consider the 4d theory \(\mathcal{T}_{A_{N-1},\frac{u}{N},f=[1^{N}]}\) with \(\gcd(u,N)=1\) and \(u>N\). After reducing to 3d, the Higgs branch \(X\) is the associated variety of \(L_{-N+N/u}(A_{N-1})\), which is the nilpotent cone \(\mathcal{N}\) of \(A_{N-1}\)[97]20, while the Coulomb branch \(Y\) is given by the Higgs branch of the so-called \(T[SU(N)]\) theory [98] plus \(h_{u,N}=\frac{(N-1)(u-N-1)}{2}\) free hypermultiplets, which is \(\mathcal{N}\) plus the flat space \(\mathbb{C}^{h_{u,N}}\). The interacting part of the 3d theory is self-mirror, meaning both its Higgs and Coulomb branches are the same (\(\mathcal{N}\)). Notice that when \(u>N\), the 4d theories \(\mathcal{T}_{A_{N-1},\frac{u}{N},f=[1^{N}]}\) with different \(u\) give the same symplectic pairs.
Footnote 20: The associated varieties for \(u>N\) remain the same.
**Example 5.8**.: Next change \(f\) in the above example to an element in an arbitrary nilpotent orbit. Then \(X\) becomes \(S_{f}\cap\mathcal{N}\), and \(Y\) is the Higgs branch of the \(T_{f}[SU(N)]\) theory plus \(h_{u,N}\) free hypermultiplets. So \(Y\) is \(\overline{\mathcal{O}}_{f^{\vee}}\) plus \(\mathbb{C}^{h_{u,N}}\). It is known that \(S_{f}\cap\mathcal{N}\) and \(\overline{\mathcal{O}}_{f^{\vee}}\) form a symplectic pair.
**Example 5.9**.: Now take \(N=2l+1\) to be an odd integer, \(u=2\) and \(f=[1^{2l+1}]\). The 3d mirror for this theory is given in [53], and \(X\) is now \(\overline{\mathcal{O}}_{[2^{l},1]}\) and \(Y\) is \(S_{[l+1,l]}\cap\mathcal{N}\).
In the above examples, we see that different 4d theories (VOAs) can have the same Higgs branch (associated variety). Their 4d Coulomb branches are different; however, after reducing to 3d, their 3d Coulomb branches differ only by a \(\mathbb{C}^{h}\) factor. It seems that the 4d perspective is a more "refined" version of the 3d symplectic pair. It would also be interesting to see if it can provide new insight into symplectic dualities.
Moreover, one can get a finite W-algebra from the twisted Zhu's algebra of the associated VOA [54]. The finite W-algebra is precisely the one found by quantizing the Higgs branch of the 3d theory, so from the reduction of the 4d theory one gets not only a pair of symplectic singularities, but also an algebra/geometry pair.
## 6 Conclusion and outlook
In this paper, we study the mirror symmetry for circle compactified 4d \(\mathcal{N}=2\) SCFTs. This symmetry involves an algebra object, which is a VOA capturing the data of the Schur sector, and a geometric object, which is the Coulomb branch of the effective 3d theory. We show that the representation theory of the VOA, such as simple modules, modular
transformations, and Zhu's algebra, can be translated into geometric properties of the Coulomb branch. Various checks have been made in this paper when one can compute things on both sides, and one gets many interesting predictions on each side by using the mirror symmetry map.
Our mirror pair involves W-algebras and Hitchin moduli spaces, which both play important roles in various branches of physics and mathematics, and we hope that the mirror proposal in this paper will help understand them further. While there are many interesting matches in our mirror proposal, a further physical understanding of this mirror symmetry is definitely desirable. Hopefully such a physical understanding would help us construct VOA modules and their characters. We mainly focus on regular elliptic slopes and special nilpotent orbits in this paper (with a few studies of sub-regular elliptic slopes), and detailed studies of other cases will be presented elsewhere.
The mirror symmetry involves an algebra defined using the Higgs branch or its generalization and the effective Coulomb branch. Physically, it suggests that one can study the following generalizations:
1. **Twisted W-algebras**: In this paper we encounter non-twisted affine Lie algebras on the VOA side. It is actually possible to find mirror pairs involving twisted affine Lie algebras and non-twisted Hitchin systems. Recall that one finds the Coulomb branch as a Hitchin moduli space as follows: one gets the 3d theory by compactifying the 6d theory on \(\Sigma\times S^{1}\); if we first compactify the 6d theory on \(\Sigma\), we get a 4d theory on \(S^{1}\), and if we first compactify it on \(S^{1}\), we get a 5d theory on \(\Sigma\) whose Higgs branch is just the Hitchin moduli space defined on \(\Sigma\). The twisted theory is defined by turning on an outer automorphism twist on \(\Sigma\). Now to get a 5d theory with non-simply laced gauge group, one needs to turn on the outer automorphism twist around the circle \(S^{1}\), and then one gets a non-twisted Hitchin system on \(\Sigma\). On the left hand side of figure 1, one first compactifies the theory on \(\Sigma\) and then on \(S^{1}\) with the outer automorphism twist, and it is natural to expect that one should get a twisted W-algebra by doing the outer automorphism twist. [42] establishes the correspondence for twisted AKM at boundary admissible level.
2. **Non-elliptic affine Springer fiber**: We mainly focus on the so-called elliptic affine Springer fibers in this paper. The correspondence can certainly be generalized to the non-elliptic case. The Hitchin system is well defined and the corresponding VOA can be found using the coset construction [18]. On the other hand, isomorphisms between W-algebras may predict isomorphisms between Hitchin systems. In certain cases, a non-elliptic Hitchin system is predicted to be isomorphic to an elliptic Hitchin system using an isomorphism between W-algebras. **Example 6.1**.: Consider the 4d theory \(\mathcal{T}_{A_{1},1,u-1,f=[1^{2}]}\), which is also called the \((A_{1},D_{2u})\) AD theory. The Hitchin system describing its Coulomb branch is given by the data \((A_{1},\nu=u,f^{\vee}=[2])\), and the corresponding affine Springer fibre is not elliptic because the denominator of \(\nu\) is 1. However, the same 4d theory can also be realized as \(\mathcal{T}_{A_{u},u+1,-1,[u-1,1^{2}]}\) by using an irregular singularity which is indeed elliptic and a
special nilpotent orbit [13]. The Hitchin data corresponding to \(\mathcal{T}_{A_{u},u+1,-1,[u-1,1^{2}]}\) is \((A_{u},\nu=\frac{u}{u+1},f^{\vee}=[3,1^{u-2}])\). The duality of the 4d theory implies the isomorphism between Hitchin moduli space \[\mathcal{M}_{Hit}(A_{1},u,[2])\simeq\mathcal{M}_{Hit}(A_{u},\frac{u}{u+1},[3,1^ {u-2}]).\] (6.1)
**Example 6.2**.: Consider a 4d theory whose spectral curve at SCFT point is 21
Footnote 21: One should be careful that if the spectral curve at the SCFT point gives a non-isolated singularity, one needs to specify the mass and relevant deformations of the theory; see [18] for a discussion of this point.
\[x^{n+n_{1}}+x^{n_{1}}y^{k}=0, \tag{6.2}\]
where \(n_{1}\), \(n\) and \(k\) are positive integers such that \(\gcd(n,k)=1\). It was proposed that dual descriptions of this theory lead to the following isomorphisms of W-algebras [18]
\[\begin{split}& W_{-(n_{1}(n+k)+n)+\frac{n_{1}(n+k)+n}{n+k}}( \mathfrak{sl}_{n_{1}(n+k)+n},[(n+k-1)^{n_{1}},n+n_{1}])\\ &\simeq W_{-k+\frac{k}{n+k}}(\mathfrak{sl}_{k},[k-n_{1},1^{n_{1} }])\end{split} \tag{6.3}\]
and isomorphisms of Hitchin moduli spaces
\[\mathcal{M}_{Hit}(\mathfrak{sl}_{n_{1}(n+k)+n},\frac{n+k}{n_{1}(n+k)+n},[(n+k- 1)^{n_{1}},n+n_{1}])\simeq\mathcal{M}_{Hit}(\mathfrak{sl}_{k},\frac{n+k}{k},[k -n_{1},1^{n_{1}}]). \tag{6.4}\]
More isomorphisms involving other Lie algebras are also proposed in [18, 72]; it would be nice to prove these isomorphisms rigorously.
3. **Class S theory**: VOAs for general class \(\mathcal{S}\) theories have been found in [15, 71, 99], but little is known about their representation theory. On the other hand, the Coulomb branch of the circle compactified theory is given by a Hitchin system with regular singularities only. There are a lot of studies on the cohomology of the moduli space [100], and we might learn the representation theory of the VOA by using results on the Hitchin side.
4. **General \(\mathcal{N}=2\) theories**: We hope to push our mirror symmetry to more general \(\mathcal{N}=2\) SCFTs and use it to study physical properties of those theories. It was found recently that one can attach interesting configurations of curves to the central fiber [101], and their fixed points and cohomology could be interesting objects to study. One can also consider more general \(\mathcal{N}=2\) theories, such as pure \(SU(N)\) gauge theory compactified on the circle, and study their related mirror symmetry.
5. **3d \(\mathcal{N}=4\) gauge theory with finite gauge coupling**: The typical feature of a 3d \(\mathcal{N}=4\) SCFT is that its Coulomb branch can be given by the Higgs branch of the mirror quiver gauge theory. However, if one studies the gauge theory at finite gauge coupling, locally its Coulomb branch has the structure of \(\mathbb{R}^{3}\times T\) (ALF space) [8], which can no longer be given by the Higgs branch of a quiver gauge theory.
Instead, it can be given by the so-called bow diagram [102], which can be viewed as a four dimensional theory on a one dimensional space. To retain the mirror symmetry, the algebra side might also need to be changed.
6. **Generalization to five and six dimensional SCFTs**: One might also consider the \(T^{2}\) compactification of a 5d \(\mathcal{N}=1\) SCFT or the \(T^{3}\) compactification of a 6d \((1,0)\) little string theory, and one would expect a similar mirror symmetry between an algebra and the Coulomb branch \(\mathcal{M}_{C}\) of the effective 3d theory. For the \(T^{2}\) compactification of a rank one 5d theory, locally the effective 3d Coulomb branch would take the form of \(T^{3}\times\mathbb{R}\) (ALH space) [103]. For the \(T^{3}\) compactification of 6d little string theory, the total space of the effective 3d Coulomb branch would be compact [104]. See table 13 for a summary. We do not know yet what kind of algebra would be involved on the Higgs branch side, and we believe that there are lots of interesting mathematics and physics involved in these mirror symmetries.
###### Acknowledgements.
PS is supported by NSFC Grant No. 12225108. WY is supported by Yau Mathematical Science Center at Tsinghua University. DX and WY are supported by the National Key Research and Development Program of China (No. 2020YFA0713000), and NNSF of China with Grants No. 11847301 and 12047502.
## Appendix A Rank one SCFT
In this section we discuss fixed varieties of the Coulomb branch of rank one SCFTs. The corresponding VOAs are all non-admissible.
\(E_{6}\) **theory**: \(L_{-3}(E_{6})\leftrightarrow\mathcal{M}_{Hit}(E_{6},\frac{1}{9},f^{\vee}=principal)\). The two sets are
\[L_{\nu}=\{\pm(\alpha_{1}+\alpha_{2}+2\alpha_{3}+2\alpha_{4}+2\alpha_{5}+\alpha _{6}-\delta)\},\] (A.1)
and
\[S_{\nu}=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},\alpha_{5},\alpha_{6}\} \cup\{\beta_{1},\beta_{2},\gamma\},\] (A.2)
where \(\beta_{1}=-\alpha_{1}-\alpha_{2}-2\alpha_{3}-2\alpha_{4}-\alpha_{5}-\alpha_{6}+\delta\), \(\beta_{2}=-\alpha_{1}-\alpha_{2}-\alpha_{3}-2\alpha_{4}-2\alpha_{5}-\alpha_{6}+\delta\) and \(\gamma=\alpha_{1}+\alpha_{2}+2\alpha_{3}+3\alpha_{4}+2\alpha_{5}+\alpha_{6}-\delta\). The set of fixed varieties is listed in table 14, which
\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Physical theory & Coulomb branch & Mirror & Algebra \\ \hline
3d \(\mathcal{N}=4\) SCFT & \(ALE\) & Certain quiver variety & Finite W-algebra \\ \hline
3d gauge theory & \(ALF\) & Bow diagram &? \\ \hline
4d \(\mathcal{N}=2\) on \(S^{1}\) & \(ALG\) & Hitchin system & \(VOA\) \\ \hline
5d \(\mathcal{N}=1\) on \(T^{2}\) & \(ALH\) & Periodic monopole on \(T^{3}\) &? \\ \hline
6d \((1,0)\) on \(T^{3}\) & compact & \(K_{3}\) &? \\ \hline \end{tabular}
\end{table}
Table 13: Mirror symmetry for theories with eight supercharges.
matches simple modules of \(L_{-3}(E_{6})\) classified in [87]. The fixed variety with dimension 1 corresponds to the only module with dominant weight \(-\Lambda_{4}\).
\(E_{7}\) **theory**: \(L_{-4}(E_{7})\leftrightarrow\mathcal{M}_{Hit}(E_{7},\frac{1}{14},f^{\vee}=principal)\). Two sets are:
\[L_{\nu}=\{\pm(\alpha_{1}+2\alpha_{2}+2\alpha_{3}+3\alpha_{4}+3\alpha_{5}+2 \alpha_{6}+\alpha_{7}-\delta)\},\] (A.3)
and
\[S_{\nu}=\{\alpha_{i},\ i=1,2,\cdots,7\}\cup\{\beta_{1},\beta_{2},\gamma\},\] (A.4)
where \(\beta_{1}=-\alpha_{1}-\alpha_{2}-2\alpha_{3}-3\alpha_{4}-3\alpha_{5}-2\alpha_ {6}-\alpha_{7}+\delta\), \(\beta_{2}=-\alpha_{1}-2\alpha_{2}-2\alpha_{3}-3\alpha_{4}-2\alpha_{5}-2\alpha _{6}-\alpha_{7}+\delta\) and \(\gamma=\alpha_{1}+2\alpha_{2}+2\alpha_{3}+4\alpha_{4}+3\alpha_{5}+2\alpha_{6} +\alpha_{7}-\delta\).
There is one fixed variety of dimension 1 and seven fixed points of dimension 0, as summarized in table 15. Again the fixed variety with dimension 1 corresponds to the simple module with dominant weight of \(V_{-4}(E_{7})\), while the fixed points correspond to other simple modules [87].
\(E_{8}\) **theory**: \(L_{-6}(E_{8})\leftrightarrow\mathcal{M}_{Hit}(E_{8},\frac{1}{24},f^{\vee}=principal)\). Two sets of affine roots are
\[L_{\nu}=\{\pm(2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+5\alpha_{4}+4\alpha_{5}+3 \alpha_{6}+2\alpha_{7}+\alpha_{8}-\delta)\},\] (A.5)
\begin{table}
\begin{tabular}{|c|c|c|} \hline Dim & \(\tilde{w}\) & \(t_{\beta}w.(k\Lambda_{0})\) \\ \hline
0 & 1 & \(-4\Lambda_{0}\) \\
0 & \(s_{0}\) & \(-3\Lambda_{1}+2\Lambda_{0}\) \\
0 & \(s_{1}s_{0}\) & \(\Lambda_{1}-2\Lambda_{3}\) \\
0 & \(s_{2}s_{3}s_{1}s_{0}\) & \(-2\Lambda_{2}\) \\
0 & \(s_{5}s_{3}s_{1}s_{0}\) & \(-2\Lambda_{5}+\Lambda_{6}\) \\
0 & \(s_{6}s_{5}s_{3}s_{1}s_{0}\) & \(-3\Lambda_{6}+2\Lambda_{7}\) \\
0 & \(s_{7}s_{6}s_{5}s_{3}s_{1}s_{0}\) & \(-4\Lambda_{7}\) \\ \hline
1 & \(s_{3}s_{1}s_{0}\) & \(-\Lambda_{4}\) \\ \hline \end{tabular}
\begin{tabular}{|c|c|c|} \hline Dim & \(\tilde{w}\) & \(t_{\beta}w.(k\Lambda_{0})\) \\ \hline
0 & 1 & \(-6\Lambda_{0}\) \\
0 & \(s_{0}\) & \(-5\Lambda_{8}+4\Lambda_{0}\) \\
0 & \(s_{8}s_{0}\) & \(-4\Lambda_{7}+3\Lambda_{8}\) \\
0 & \(s_{7}s_{8}s_{0}\) & \(-3\Lambda_{6}+2\Lambda_{7}\) \\
0 & \(s_{6}s_{7}s_{8}s_{0}\) & \(-2\Lambda_{5}+\Lambda_{6}\) \\
0 & \(s_{2}s_{5}s_{6}s_{7}s_{8}s_{0}\) & \(-2\Lambda_{2}\) \\
0 & \(s_{3}s_{5}s_{6}s_{7}s_{8}s_{0}\) & \(\Lambda_{1}-2\Lambda_{3}\) \\
0 & \(s_{1}s_{3}s_{5}s_{6}s_{7}s_{8}s_{0}\) & \(-3\Lambda_{1}\) \\ \hline
1 & \(s_{5}s_{6}s_{7}s_{8}s_{0}\) & \(-\Lambda_{4}\) \\ \hline \end{tabular}
\end{table}
Table 15: Left: Fixed varieties of \(E_{7},\frac{1}{14}\). Right: Fixed varieties of \(E_{8},\frac{1}{24}\).
\[S_{\nu}=\{\alpha_{i},\ i=1,2,\cdots,8\}\cup\{\beta_{1},\beta_{2},\gamma\},\] (A.6)
where \(\beta_{1}=-2\alpha_{1}-3\alpha_{2}-3\alpha_{3}-5\alpha_{4}-4\alpha_{5}-3\alpha_ {6}-2\alpha_{7}-\alpha_{8}+\delta\), \(\beta_{2}=-2\alpha_{1}-2\alpha_{2}-4\alpha_{3}-5\alpha_{4}-4\alpha_{5}-3\alpha _{6}-2\alpha_{7}-\alpha_{8}+\delta\) and \(\gamma=2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+6\alpha_{4}+4\alpha_{5}+3\alpha_{6 }+2\alpha_{7}+\alpha_{8}-\delta\).
There is one fixed manifold of dimension 1 and eight fixed points of dimension 0, as shown in table 15. Again the fixed manifold corresponds to the simple module with dominant weight of \(V_{-6}(E_{8})\), while the fixed points correspond to other simple modules [87].
\(G_{2}\)**theory**: \(L_{-2}(G_{2})\leftrightarrow{\cal M}_{Hit}((\hat{D}_{4},\mathbb{Z}_{3}),\frac{ 1}{6},f^{\vee}=principal)\). The set of real affine roots of the twisted affine Lie algebra \({}^{3}D_{4}\) is \(\hat{\Delta}^{\vee}=\Phi_{s}^{re}\cup\Phi_{l}^{re}\) with
\[\Phi_{s}^{re} =\{\alpha+\frac{r}{3}\delta\ |\ r\in\mathbb{Z},\ \alpha\in\Phi_{s}^{0}\},\] \[\Phi_{l}^{re} =\{\alpha+r\delta\ |\ r\in\mathbb{Z},\ \alpha\in\Phi_{l}^{0}\}.\] (A.7)
Here \(\Phi^{0}\) denotes the root system of \(G_{2}\), which is also the finite part of \({}^{3}D_{4}\),
\[\Phi_{s}^{0} =\{\pm(\beta_{1}-\beta_{2}),\ \pm(\beta_{2}-\beta_{3}),\ \pm(\beta_{1}-\beta_{3})\}=\{\pm\alpha_{2},\ \pm(\alpha_{1}+2\alpha_{2}),\ \pm(\alpha_{1}+\alpha_{2})\},\] \[\Phi_{l}^{0} =\{\pm(-2\beta_{1}+\beta_{2}+\beta_{3}),\ \pm(\beta_{1}-2\beta_{2}+\beta_{3}),\ \pm(\beta_{1}+\beta_{2}-2\beta_{3})\}=\] \[=\{\pm\alpha_{1},\ \pm(\alpha_{1}+3\alpha_{2}),\ \pm(2\alpha_{1}+3 \alpha_{2})\}.\] (A.8)
Here \(\beta_{1},\beta_{2},\beta_{3}\) are orthogonal basis, and the simple roots are \(\alpha_{1}=-2\beta_{1}+\beta_{2}+\beta_{3}\) and \(\alpha_{2}=\beta_{1}-\beta_{2}\). The highest root is \(\theta_{l}=2\alpha_{1}+3\alpha_{2}\), and the highest short root is \(\theta_{s}=\alpha_{1}+2\alpha_{2}\). The set
\[L_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \frac{1}{6}\alpha(\rho^{ \vee})+l=0\}\to L_{\nu}=\{\pm(\alpha_{1}+\alpha_{2}-\frac{1}{3}\delta)\}\] (A.9)
and the set \(S_{\nu}\) are
\[S_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \frac{1}{6}\alpha(\rho^{ \vee})+l=\frac{1}{6}\}\to S_{\nu}=\{\alpha_{1},\alpha_{2},-\theta_{s}+\frac{ 2}{3}\delta,\ \theta_{s}-\frac{1}{3}\delta,-\theta_{l}+\delta,-\alpha_{2}+\frac{1}{3}\delta\}\] (A.10)
One can then use the graphical method to find the affine Weyl group elements corresponding to fixed varieties. This fibre has one fixed variety of dimension 1 labelled by \(s_{0}\), and two fixed points of dimension 0 labelled by 1 and \(s_{1}s_{0}\) [46]; here \(s_{0}\) is the simple reflection of the affine root \(-\theta_{s}+\frac{1}{3}\delta\). Assuming the bijection is still true in the non-admissible case, we conjecture that there are three simple modules of \(L_{-2}(G_{2})\) with weights
\[-2\Lambda_{0},\quad s_{0}.(-2\Lambda_{0})=-\Lambda_{2},\quad(s_{1}s_{0}).(-2 \Lambda_{0})=-2\Lambda_{1}.\] (A.11)
It would be nice to check if this statement is true from VOA side.
\(SO(7)\)**theory**: \(L_{-2}(SO(7))\leftrightarrow{\cal M}_{Hit}((A_{5},\mathbb{Z}_{2}),\frac{1}{6},f ^{\vee}=principal)\). The set of real affine roots of the twisted Lie algebra \({}^{2}\hat{A}_{5}\) is \(\hat{\Delta}^{\vee}=\Phi_{s}^{re}\cup\Phi_{l}^{re}\) with
\[\Phi_{s}^{re} =\{\alpha+\frac{n}{2}\delta\ |\ n\in\mathbb{Z},\ \alpha\in\Phi_{s}^{0}\},\] \[\Phi_{l}^{re} =\{\alpha+n\delta\ |\ n\in\mathbb{Z},\ \alpha\in\Phi_{l}^{0}\}.\] (A.12)
Here the set of finite roots \(\Phi_{s}^{0}\cup\Phi_{l}^{0}\) is that of \(C_{3}\) Lie algebra:
\[\Phi_{s}^{0} =\{\pm\beta_{i}\pm\beta_{j},\ \ i,j=1,2,3\ \ i\neq j\}\] \[=\{\pm\alpha_{1}^{\vee},\pm\alpha_{2}^{\vee},\pm(\alpha_{2}^{\vee}+\alpha_{3}^{\vee}),\pm(\alpha_{1}^{\vee}+\alpha_{2}^{\vee}+\alpha_{3}^{\vee}),\pm(\alpha_{1}^{\vee}+\alpha_{2}^{\vee}),\pm(\alpha_{1}^{\vee}+2\alpha_{2}^{\vee}+\alpha_{3}^{\vee})\},\] \[\Phi_{l}^{0} =\{\pm 2\beta_{i},\ \ i=1,2,3\}\] \[=\{\pm\alpha_{3}^{\vee},\pm(2\alpha_{2}^{\vee}+\alpha_{3}^{\vee}),\ \pm(2\alpha_{1}^{\vee}+2\alpha_{2}^{\vee}+\alpha_{3}^{\vee})\}. \tag{A.13}\]
Here \(\beta_{i}\) are the orthogonal basis. The set of simple roots are
\[\alpha_{1}^{\vee}=\beta_{1}-\beta_{2},\ \ \ \alpha_{2}^{\vee}=\beta_{2}-\beta_{3},\ \ \ \ \ \alpha_{3}^{\vee}=2\beta_{3}, \tag{A.14}\]
which are simple coroots of \(B_{3}\). The highest root is \(\theta_{l}^{\vee}=2\alpha_{1}^{\vee}+2\alpha_{2}^{\vee}+\alpha_{3}^{\vee}\), and the highest short root is \(\theta_{s}^{\vee}=\alpha_{1}^{\vee}+2\alpha_{2}^{\vee}+\alpha_{3}^{\vee}\).
The set \(L_{\nu}\) is
\[L_{\nu}=\{\alpha+l\delta\in\hat{\Delta}\ |\ \frac{1}{6}\alpha(\rho)+l=0\}\to L_{\nu}=\{\pm(\alpha_{1}^{\vee}+\alpha_{2}^{\vee}+\alpha_{3}^{\vee}-\frac{1}{2}\delta)\} \tag{A.15}\]
and the set \(S_{\nu}\) is
\[S_{\nu} =\{\alpha+l\delta\in\hat{\Delta}\ |\ \frac{1}{6}\alpha(\rho)+l=\frac{1}{6}\}\rightarrow\] \[S_{\nu} =\{\alpha_{1}^{\vee},\alpha_{2}^{\vee},\alpha_{3}^{\vee},\theta_{s}^{\vee}-\frac{1}{2}\delta,-\theta_{l}^{\vee}+\delta,-(\alpha_{1}^{\vee}+\alpha_{2}^{\vee})+\frac{1}{2}\delta,-(\alpha_{2}^{\vee}+\alpha_{3}^{\vee})+\frac{1}{2}\delta\} \tag{A.16}\]
There is one fixed variety of dimension 1 labelled by \(s_{0}\), and three fixed points of dimension 0 labelled by 1, \(s_{1}s_{0}\) and \(s_{3}s_{0}\). We predict that the corresponding simple modules have weights
\[-2\Lambda_{0},\ s_{0}.(-2\Lambda_{0})=-\Lambda_{2},\ (s_{1}s_{0}).(-2\Lambda_{0})=-2\Lambda_{1},\ (s_{3}s_{0}).(-2\Lambda_{0})=-2\Lambda_{3}. \tag{A.17}\]
\(F_{4}\) **theory**: \(L_{-3}(F_{4})\leftrightarrow\mathcal{M}_{Hit}((E_{6},\mathbb{Z}_{2}),\frac{1}{12},f^{\vee}=principal)\). The set of real affine roots of the twisted Lie algebra \({}^{2}\hat{E}_{6}\) is \(\hat{\Delta}^{\vee}=\Phi_{s}^{re}\cup\Phi_{l}^{re}\) with
\[\Phi_{s}^{re} =\{\alpha+\frac{n}{2}\delta\ |\ n\in\mathbb{Z},\ \alpha\in\Phi_{s}^{0}\},\] \[\Phi_{l}^{re} =\{\alpha+n\delta\ |\ n\in\mathbb{Z},\ \alpha\in\Phi_{l}^{0}\}. \tag{A.18}\]
Here the set of finite roots \(\Phi_{s}^{0}\cup\Phi_{l}^{0}\) is that of \(F_{4}\) Lie algebra. The simple roots are:
\[\alpha_{1}=\beta_{1}-\beta_{2},\ \alpha_{2}=\beta_{2}-\beta_{3},\ \alpha_{3}=\beta_{3},\ \alpha_{4}=\frac{1}{2}(-\beta_{1}-\beta_{2}-\beta_{3}+\beta_{4}). \tag{A.19}\]
Here \(\beta_{i}\) are orthogonal basis. The set of roots are
\[\Phi_{s}= \{\pm\alpha_{3},\pm(\alpha_{2}+\alpha_{3}),\ \pm(\alpha_{1}+\alpha_{2}+\alpha_{3}),\ \ \pm(\alpha_{1}+2\alpha_{2}+3\alpha_{3}+2\alpha_{4})\}\] \[\cup\{\pm(\alpha_{2}+2\alpha_{3}+\alpha_{4}),\pm(\alpha_{1}+\alpha_{2}+\alpha_{3}+\alpha_{4}),\pm(\alpha_{1}+\alpha_{2}+2\alpha_{3}+\alpha_{4}),\pm(\alpha_{2}+\alpha_{3}+\alpha_{4})\}\] \[\cup\{\pm(\alpha_{3}+\alpha_{4}),\pm(\alpha_{1}+2\alpha_{2}+2\alpha_{3}+\alpha_{4}),\pm(\alpha_{4}),\pm(\alpha_{1}+2\alpha_{2}+3\alpha_{3}+\alpha_{4})\},\] \[\Phi_{l}= \{\pm\alpha_{1},\pm(\alpha_{1}+2\alpha_{2}+2\alpha_{3}),\pm(\alpha_{1}+\alpha_{2}),\pm(\alpha_{1}+\alpha_{2}+2\alpha_{3}),\ \pm(\alpha_{2}+2\alpha_{3}+2\alpha_{4})\}\] \[\cup\{\pm(2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+2\alpha_{4}),\pm\alpha_{2},\pm(\alpha_{2}+2\alpha_{3}),\pm(\alpha_{1}+\alpha_{2}+2\alpha_{3}+2\alpha_{4})\}\] \[\cup\{\pm(\alpha_{1}+3\alpha_{2}+4\alpha_{3}+2\alpha_{4}),\pm(\alpha_{1}+2\alpha_{2}+2\alpha_{3}+2\alpha_{4}),\pm(\alpha_{1}+2\alpha_{2}+4\alpha_{3}+2\alpha_{4})\}. \tag{A.20}\]
The highest root is \(\theta_{l}=2\alpha_{1}+3\alpha_{2}+4\alpha_{3}+2\alpha_{4}\), and the highest short root is \(\theta_{s}=\alpha_{1}+2\alpha_{2}+3\alpha_{3}+2\alpha_{4}\). The set \(L_{\nu}\) is
\[L_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \frac{1}{12}\alpha(\rho^{\vee})+l=0 \}\to L_{\nu}=\{\pm(\alpha_{1}+2\alpha_{2}+2\alpha_{3}+\alpha_{4}-\frac{1}{2} \delta)\},\] (A.21)
and the set \(S_{\nu}\) is
\[S_{\nu}=\{\alpha+l\delta\in\hat{\Delta}^{\vee}\ |\ \frac{1}{12} \alpha(\rho^{\vee})+l=\frac{1}{12}\}\rightarrow\] \[S_{\nu}=\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4},(\alpha_{ 1}+2\alpha_{2}+3\alpha_{3}+\alpha_{4})-\frac{1}{2}\delta,-\theta_{l}+\delta,\] \[-(\alpha_{1}+\alpha_{2}+2\alpha_{3}+\alpha_{4})+\frac{1}{2} \delta\}.\] (A.22)
The fixed variety of dimension \(1\) is labelled by \(s_{4^{\vee}}s_{0^{\vee}}\), while fixed points are labelled by \(1\), \(s_{0^{\vee}}\), \(s_{2^{\vee}}s_{4^{\vee}}s_{0^{\vee}}\), \(s_{1^{\vee}}s_{2^{\vee}}s_{4^{\vee}}s_{0^{\vee}}\) with \(s_{i^{\vee}}\) being the reflection of the simple root \(\alpha_{i}^{\vee}\) of \({}^{2}E_{6}\). Assuming the correspondence between fixed varieties and simple modules still holds, we predict that the simple modules are
\[\begin{split}& 1.(-3\Lambda_{0})=-3\Lambda_{0},\ s_{0}.(-3\Lambda_{0}) =-2\Lambda_{1}-\Lambda_{0},\ (s_{1}s_{0}).(-3\Lambda_{0})=-3\Lambda_{2},\\ &(s_{3}s_{1}s_{0}).(-3\Lambda_{0})=-2\Lambda_{3}+\Lambda_{4},\ (s_{4}s_{3}s_{1}s_{0}).(-3\Lambda_{0})=-3\Lambda_{4}.\end{split}\] (A.23)
There is a flip from \(1,2,3,4\) to \(4,3,2,1\) because our labelling of roots in \({}^{2}\hat{E}_{6}\) is such that short roots are still \(\alpha_{3}\) and \(\alpha_{4}\) so that the node of \(\alpha_{0}\) is connected to the node of \(\alpha_{4}\) in the Dynkin diagram of \({}^{2}\hat{E}_{6}\).
## Appendix B Twisted theory
In this section we give an example in which the VOA side is the affine vertex algebra of a non-simply laced AKM. On the fibre side we need to consider the twisted affine Lie algebra.
**Example B.1**.: \(L_{-(2l-1)+\frac{2l-1}{u}}(B_{l})\leftrightarrow\mathcal{M}_{Hit}((A_{2l-1}, \mathbb{Z}_{2}),\frac{u}{2(2l-1)},f^{\vee}=principal)\). On the VOA side, the simple roots of \(B_{l}\) in orthogonal basis are
\[\Delta_{+}=\{\alpha_{1}=\beta_{1}-\beta_{2},\cdots,\alpha_{l-1}= \beta_{l-1}-\beta_{l},\alpha_{l}=\beta_{l}\},\] (B.1)
and the highest long root \(\theta=\beta_{1}+\beta_{2}\). Therefore, following the definition of \(S_{u}\), we have
\[\begin{split} S_{u}=&\{\alpha_{1}^{\vee},\cdots, \alpha_{l}^{\vee},-\theta_{l}^{\vee}+u\delta\}\\ =&\{\beta_{1}-\beta_{2},\cdots,\beta_{l-1}-\beta_{l },2\beta_{l},-\beta_{1}-\beta_{2}+u\delta\}.\end{split}\] (B.2)
The set of real roots of \(\hat{B}_{l}\) is
\[\hat{\Delta}=\{\alpha+n\delta\ |\ \alpha\in\Delta,\ n\in\mathbb{Z}\}.\] (B.3)
Here \(\Delta\) is the set of roots of \(B_{l}\).
On the fibre side, we need to consider the twisted affine Lie algebra \({}^{2}\hat{A}_{2l-1}\), which is the Langlands dual of \(\hat{B}_{l}\). The set of real roots of \({}^{2}\hat{A}_{2l-1}\) is \(\hat{\Delta}^{C}=\Phi^{re}_{s}\cup\Phi^{re}_{l}\) with
\[\Phi^{re}_{s} =\{\alpha+\frac{n}{2}\delta\ |\ \alpha\in\Phi^{0}_{s},\ n\in \mathbb{Z}\},\] \[\Phi^{re}_{l} =\{\alpha+n\delta\ |\ \alpha\in\Phi^{0}_{l},\ n\in\mathbb{Z}\}. \tag{114}\]
Here \(\Phi^{0}\) is the set of roots of \(C_{l}\) which is the finite part of \({}^{2}\hat{A}_{2l-1}\). In orthogonal basis
\[\Phi^{0}_{l}=\{\pm 2\beta_{i}\},\quad\Phi^{0}_{s}=\{\pm\beta_{i}\pm\beta_{j}, \ \ i,j=1,\cdots,l,\ \ i\neq j\}. \tag{115}\]
The set of simple roots of \(C_{l}\) are
\[\{\alpha^{\vee}_{1}=\beta_{1}-\beta_{2},\ \alpha^{\vee}_{2}=\beta_{2}-\beta_ {3},\cdots,\alpha^{\vee}_{l-1}=\beta_{l-1}-\beta_{l},\ \alpha^{\vee}_{l}=2\beta_{l}\}. \tag{116}\]
By definition there is a natural bijection between \(\hat{\Delta}^{\vee}\) and \(\hat{\Delta}^{C}\) which simply sends \(\alpha+n\delta\in\hat{\Delta}^{\vee}\) into \(\alpha+\frac{n}{2}\delta\in\hat{\Delta}^{C}\).
To find the fixed points, we compute the following two sets
\[L_{\nu}=\{\alpha+m\delta\in\hat{\Delta}^{C}\ |\ \frac{u}{2(2l-1)}\alpha(\rho^{\vee})+m=0\}\to L_{\nu}=\emptyset. \tag{117}\]
and
\[S_{\nu}=\{\alpha+m\delta\in\hat{\Delta}^{C}\ |\ \frac{u}{2(2l-1)}\alpha(\rho^{\vee})+m=\frac{u}{2(2l-1)}\}\to S_{\nu}=\{\alpha^{\vee}_{1},\alpha^{\vee}_{2},\cdots,\alpha^{\vee}_{l},-\theta^{\vee}+\frac{u}{2}\delta\}. \tag{118}\]
Here \(\theta^{\vee}=\beta_{1}+\beta_{2}\) is the highest short root of \(C_{l}\) which is Langlands dual to the highest long root of \(B_{l}\) and has height \(2l-2\).
We see that the bijection between \(\hat{\Delta}^{\vee}\) and \(\hat{\Delta}^{C}\) also sends \(S^{\vee}_{u}\) into \(S_{\nu}\). Using also the fact that the affine Weyl group and the extended affine Weyl group of \(\hat{B}_{l}\) and of \({}^{2}\hat{A}_{2l-1}\) are the same, one can see the natural isomorphism between admissible modules and fixed points.
|
2307.02441 | Monic Inversion Principle and Complete intersection | Let $A$ be a regular ring of dimension $d$ essentially of finite type over an
infinite field $k$ of characteristic $\neq 2$. Let $P$ be a projective
$A$-module of rank $n$ with $2n\geq d+3$. Let $I$ be an ideal of $A[T]$ of
height $n$ and $\phi:P[T]\twoheadrightarrow I/I^2$ be a surjection. If
$\phi\otimes A(T)$ has a surjective lift $\theta :P[T]\otimes
A(T)\twoheadrightarrow IA(T)$, then $\phi$ has a surjective lift
$\Phi:P[T]\twoheadrightarrow I$. | M. K. Keshari, Soumi Tikader | 2023-07-05T17:11:36Z | http://arxiv.org/abs/2307.02441v1 | # monic inversion principle and complete intersection
###### Abstract.
Let \(A\) be a regular ring of dimension \(d\) essentially of finite type over an infinite field \(k\) of characteristic \(\neq 2\). Let \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(n\) and \(\phi:P[T]\twoheadrightarrow I/I^{2}\) be a surjection. If \(\phi\otimes A(T)\) has a surjective lift \(\theta:P[T]\otimes A(T)\twoheadrightarrow IA(T)\), then \(\phi\) has a surjective lift \(\Phi:P[T]\twoheadrightarrow I\). The case \(P=A^{n}\) is due to Das-Tikader-Zinna [7].
2020 Mathematics Subject Classification:13C10, 13B25
## 1. Introduction
**Assumptions :** Rings are commutative noetherian with \(1\) and projective modules are finitely generated of constant rank.
The monic inversion principle is a recurring theme in the area of projective modules and complete intersection ideals over polynomial rings. Let \(A\) be a ring and \(\widetilde{P}\) be a projective \(A[T]\)-module. Let \(A(T)\) be the ring obtained by inverting all monic polynomials in \(A[T]\). Horrocks [8] proved that if \(A\) is local, then \(\widetilde{P}\otimes A(T)\) is free if and only if \(\widetilde{P}\) is free. Quillen [11] proved the local global principle that \(\widetilde{P}\) is extended from \(A\) if and only if \(\widetilde{P}\otimes A_{\mathfrak{m}}[T]\) is extended from \(A_{\mathfrak{m}}\) for all \(\mathfrak{m}\in Max(A)\). Using local global principle and Horrocks result, Quillen [11] proved that \(\widetilde{P}\otimes A(T)\) is free if and only if \(\widetilde{P}\) is free. Thus freeness property of projective \(A[T]\)-modules satisfies monic inversion principle.
We say \(\widetilde{P}\) has a unimodular element if there exists a surjection \(\widetilde{P}\twoheadrightarrow A[T]\), i.e. \(\widetilde{P}\simeq Q\oplus A[T]\) for some projective \(A[T]\)-module \(Q\). Roitman [12] proved the analogue of Horrocks' result: if \(A\) is local, then \(\widetilde{P}\otimes A(T)\) has a unimodular element if and only if \(\widetilde{P}\) has a unimodular element. We do not yet have a complete analogue of Quillen's result, though partial results are known. When \(A\) contains \(\mathbb{Q}\), Bhatwadekar-Sridharan [3] proved that if \(rank(\widetilde{P})=dim(A)\), then \(\widetilde{P}\otimes A(T)\) has a unimodular element if and only if \(\widetilde{P}\) has a unimodular element. Further, if \(A\) is a regular domain of dimension \(d\) containing a field and \(rank(\widetilde{P})\geq\frac{1}{2}(d+3)\), then Bhatwadekar-Keshari [2] proved that \(\widetilde{P}\otimes A(T)\) has a unimodular element if and only if \(\widetilde{P}\) has a unimodular element. See [9] for recent results on this problem.
Now we will discuss monic inversion principle for complete intersection ideals. Let \(A\) be a regular domain of dimension \(d\) containing a field \(k\) and \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(n\) and \(\phi:P[T]\twoheadrightarrow I/I^{2}T\) be a surjection. Bhatwadekar-Keshari [2, Proposition 4.9] proved that \(\phi\otimes A(T)\) has a surjective lift \(\theta:P\otimes A(T)\twoheadrightarrow IA(T)\) if and only if \(\phi\) has a surjective lift \(\Phi:P[T]\twoheadrightarrow I\). If we further assume that \(k\) is infinite perfect and \(A\) is essentially of finite type over \(k\) i.e. \(A\) is a localization of an affine \(k\)-algebra, then any surjection \(\phi:P[T]\twoheadrightarrow I/I^{2}T\) always lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I\)[2, Theorem 4.13]. This result answered a question of Nori and the proof
Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module of rank \(n\). Let \(\mathbb{M}(P)=P\oplus P^{*}\oplus A\) be the quadratic space with quadratic form \(q(p,f,z)=f(p)+z^{2}\). We recall the definition of elementary orthogonal group \(EO(\mathbb{M}(P))\) and prove that \(EO(\mathbb{M}(A^{n}))=EO_{2n+1}(A)\). Using local global principle for \(EO(\mathbb{M}(P[T]))\) due to Ambily-Rao [1] and a result of Stavrova [14], we prove that when \(A\) is regular containing a field, then \(O(\mathbb{M}(P[T]),T)=EO(\mathbb{M}(P[T]),T)\) and derive the splitting principle that if \(b_{1},b_{2}\) are comaximal, then any \(\sigma\in EO(\mathbb{M}(P_{b_{1}b_{2}}))\) splits as \((\alpha_{1})_{b_{2}}\circ(\alpha_{2})_{b_{1}}\) where \(\alpha_{i}\in EO(\mathbb{M}(P_{b_{i}}))\).
The main theorem is proved as follows. Given a surjection \(\phi:P[T]\twoheadrightarrow I/I^{2}\), we get an element \((I,\phi)\in\mathcal{LO}(P[T])\) and hence an element \(H(T)\in\mathbb{Q}^{\prime}(P[T])\). Using a result of Mandal-Mishra [10], we show that \(\phi\) lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I\) if and only if the elements \(H(T)\) and \((0,0,1)\) of \(\mathbb{Q}^{\prime}(P[T])\) are connected by an element of \(EO(\mathbb{M}(P[T]))\). Given that \(\phi\otimes A(T)\) has a surjective lift \(\theta:P[T]\otimes A(T)\twoheadrightarrow IA(T)\), the elements \(H(T)\otimes A(T)\) and \((0,0,1)\) of \(\mathbb{Q}^{\prime}(P[T]\otimes A(T))\) are connected by an element of \(EO(\mathbb{M}(P[T]\otimes A(T)))\). Using this we descend to the required result.
The techniques and results of Mandal-Mishra [10] are crucially used.
## 2. Elementary orthogonal group
**Definition 2.1**.: Let \(A\) be a ring with \(1/2\in A\). The elementary orthogonal group \(EO_{2n+1}(A)\) is the subgroup of \(O_{2n+1}(A)\) generated by the following five types of elementary orthogonal transvections defined
as follows [7, Definition 2.2]. For \(1\leq i\neq j\leq n\) and \(\lambda\in A\), \((x_{1},\cdots,x_{n},y_{1},\cdots,y_{n},z)\in A^{2n+1}\) maps to
1. \((x_{1},\cdots,x_{i-1},x_{i}-\lambda^{2}y_{i}+2\lambda z,x_{i+1},\cdots,y_{n},z- \lambda y_{i})\).
2. \((x_{1},\cdots,y_{i-1},y_{i}-\lambda^{2}x_{i}+2\lambda z,y_{i+1},\cdots,y_{n},z -\lambda x_{i})\).
3. \((x_{1},\cdots,x_{i-1},x_{i}+\lambda x_{j},x_{i+1},\cdots,y_{j-1},y_{j}-\lambda y _{i},y_{j+1},\cdots,y_{n},z)\).
4. \((x_{1},\cdots,x_{i-1},x_{i}+\lambda y_{j},\cdots,x_{j}-\lambda y_{i},x_{j+1}, \cdots,y_{n},z)\).
5. \((x_{1},\cdots,y_{i-1},y_{i}+\lambda x_{j},\cdots,y_{j}-\lambda x_{i},y_{j+1}, \cdots,y_{n},z)\).
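As a quick sanity check (a small sketch added here, not part of the original argument), the following Python snippet verifies numerically over the rationals that each of the five maps above preserves the quadratic form \(\sum_{i}x_{i}y_{i}+z^{2}\) on \(A^{2n+1}\), which is the defining property of the orthogonal group used below; the sample values, the choice of indices \(i,j\) and the number of trials are arbitrary.

```python
# Check that the five elementary orthogonal transvections preserve
# q(x, y, z) = sum_i x_i y_i + z^2, using exact rational arithmetic.
import random
from fractions import Fraction as F

random.seed(0)
n = 4
i, j = 0, 1  # any pair of distinct indices

def q(x, y, z):
    return sum(a * b for a, b in zip(x, y)) + z * z

def rand_vec():
    return [F(random.randint(-5, 5)) for _ in range(n)]

for _ in range(200):
    x, y = rand_vec(), rand_vec()
    z, lam = F(random.randint(-5, 5)), F(random.randint(-5, 5))

    # Type (I): x_i -> x_i - lam^2 y_i + 2 lam z,  z -> z - lam y_i.
    x1, z1 = x[:], z - lam * y[i]
    x1[i] = x[i] - lam**2 * y[i] + 2 * lam * z
    assert q(x1, y, z1) == q(x, y, z)

    # Type (II): y_i -> y_i - lam^2 x_i + 2 lam z,  z -> z - lam x_i.
    y2, z2 = y[:], z - lam * x[i]
    y2[i] = y[i] - lam**2 * x[i] + 2 * lam * z
    assert q(x, y2, z2) == q(x, y, z)

    # Type (III): x_i -> x_i + lam x_j,  y_j -> y_j - lam y_i.
    x3, y3 = x[:], y[:]
    x3[i] = x[i] + lam * x[j]
    y3[j] = y[j] - lam * y[i]
    assert q(x3, y3, z) == q(x, y, z)

    # Type (IV): x_i -> x_i + lam y_j,  x_j -> x_j - lam y_i.
    x4 = x[:]
    x4[i] = x[i] + lam * y[j]
    x4[j] = x[j] - lam * y[i]
    assert q(x4, y, z) == q(x, y, z)

    # Type (V): y_i -> y_i + lam x_j,  y_j -> y_j - lam x_i.
    y5 = y[:]
    y5[i] = y[i] + lam * x[j]
    y5[j] = y[j] - lam * x[i]
    assert q(x, y5, z) == q(x, y, z)

print("all five transvection types preserve the quadratic form")
```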
**Definition 2.2**.: Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module. Let \(\mathbb{M}(P)=\mathbb{H}(P)\perp A=P\oplus P^{*}\oplus A\) denote the projective module and also the quadratic space with quadratic form \(q:\mathbb{M}(P)\to A\) given by \(q(p,f,z)=f(p)+z^{2}\). Let \(O(\mathbb{M}(P))\) be the group of orthogonal transformations of \(\mathbb{M}(P)\).
For linear maps \(\alpha:A\to P\) and \(\beta:A\to P^{*}\) with dual maps \(\alpha^{*}:P^{*}\to A\) and \(\beta^{*}:P\to A\), define \(E_{\alpha},E_{\beta}\in O(\mathbb{M}(P))\) by
1. \(E_{\alpha}(p,f,z)=(p-\alpha\alpha^{*}(f)+2\alpha(z),f,z-\alpha^{*}(f))\).
2. \(E_{\beta}(p,f,z)=(p,f-\beta\beta^{*}(p)+2\beta(z),z-\beta^{*}(p))\).
The elementary orthogonal group \(EO(\mathbb{M}(P))\) is the subgroup of \(O(\mathbb{M}(P))\) generated by \(E_{\alpha}\) and \(E_{\beta}\) for all \(\alpha,\beta\). Note \(E_{-\alpha}=E_{\alpha}^{-1}\) and \(E_{-\beta}=E_{\beta}^{-1}\). We will define some commutator relations where \([g,h]=g^{-1}h^{-1}gh\).
1. If \(\beta^{*}(\alpha(1))=0\), then \(E_{\alpha,\beta}=[E_{\beta},E_{\alpha}]\in EO(\mathbb{M}(P))\) is defined as \[E_{\alpha,\beta}(p,f,z)=(p+2\alpha\beta^{*}(p),f-2\beta\alpha^{*}(f),z)\]
2. If \(\alpha_{1},\alpha_{2}:A\to P\), then \(E_{\alpha_{1},\alpha_{2}}=[E_{\alpha_{2}},E_{\alpha_{1}}]\in EO(\mathbb{M}(P))\) is defined as \[E_{\alpha_{1},\alpha_{2}}(p,f,z)=(p-2\alpha_{2}\alpha_{1}^{*}(f)+2\alpha_{1} \alpha_{2}^{*}(f),f,z)\]
3. If \(\beta_{1},\beta_{2}:A\to P^{*}\), then \(E_{\beta_{1},\beta_{2}}=[E_{\beta_{2}},E_{\beta_{1}}]\in EO(\mathbb{M}(P))\) is defined as \[E_{\beta_{1},\beta_{2}}(p,f,z)=(p,f-2\beta_{2}\beta_{1}^{*}(p)+2\beta_{1} \beta_{2}^{*}(p),z)\]
We note an identity which will be used in 2.5. For \(\alpha,\alpha^{\prime}:A\to P\) and \(\beta,\beta^{\prime}:A\to P^{*}\),
\[E_{\alpha+\alpha^{\prime}}=E_{-\alpha/2,\alpha^{\prime}}\circ E_{\alpha^{ \prime}}\circ E_{\alpha}\text{ and }E_{\beta+\beta^{\prime}}=E_{-\beta/2,\beta^{\prime}}\circ E_{\beta^{ \prime}}\circ E_{\beta}\]
**Remark 2.3**.: Roy [13] defined elementary orthogonal group as the group generated by \(E_{\alpha}\) and \(E_{\beta}\) where \(E_{\alpha}(p,f,z)=\big{(}p-\frac{1}{2}\alpha\alpha^{*}(f)+\alpha(z),f,z-\alpha^ {*}(f)\big{)}\) and \(E_{\beta}(p,f,z)=\big{(}p,f-\frac{1}{2}\beta\beta^{*}(p)+\beta(z),z-\beta^{*} (p)\big{)}\). We note that the two groups, the one defined by Roy and the one defined above, are same. They both preserve the bilinear form \(B\) defined by \(B((p,f),(p^{\prime},f^{\prime}))=q(p+p^{\prime},f+f^{\prime})-q(p,f)-q(p^{ \prime},f^{\prime})\), where \(q\) is the quadratic form of \(\mathbb{H}(P)\) defined by \(q(p,f)=f(p)\). The difference in the definition of generators is due to the fact that Roy uses bilinear form \(B\) to define the quadratic form whereas we use \(\frac{1}{2}B\).
**Lemma 2.4**.: _Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module. If \(\phi\in EO(\mathbb{M}(P))\), then there exist \(\Phi(T)\in EO(\mathbb{M}(P[T]))\) such that \(\Phi(0)=Id_{\mathbb{M}(P)}\) and \(\Phi(1)=\phi\) i.e. every element of \(EO(\mathbb{M}(P))\) is homotopic to identity._
**Proof**. It is enough to assume that \(\phi\) is one of the two types of generators of \(EO(\mathbb{M}(P))\). If \(\phi=E_{\alpha}\) for \(\alpha:A\to P\), then \(\Phi(T)=E_{\alpha(T)}\), where \(\alpha(T):A[T]\to P[T]\) is defined by \(\alpha(T)(1)=T\alpha(1)\). Similarly, if \(\phi=E_{\beta}\) for \(\beta:A\to P^{*}\), then \(\Phi(T)=E_{\beta(T)}\), where \(\beta(T):A[T]\to P^{*}[T]\) is defined by \(\beta(T)(1)=T\beta(1)\). \(\Box\)
**Lemma 2.5**.: _Let \(A\) be a ring with \(1/2\in A\). Then \(EO_{2n+1}(A)=EO(\mathbb{M}(A^{n}))\)._
**Proof**. For the forward inclusion, we show that each of the five types of generators of \(EO_{2n+1}(A)\) is a generator of \(EO(\mathbb{M}(A^{n}))\). Let \(\delta_{ij}\) be the Kronecker delta function.
1. For (I), take \(E_{\alpha}\), where \(\alpha:A\to A^{n}\) with \(\alpha(1)=\lambda e_{i}\) and \(\alpha^{*}:A^{n}\to A\) with \(\alpha^{*}(e_{k})=\lambda\delta_{ik}\).
2. For (II), take \(E_{\beta}\), where \(\beta:A\to A^{n}\) with \(\beta(1)=\lambda e_{i}\) and \(\beta^{*}:A^{n}\to A\) with \(\beta^{*}(e_{k})=\lambda\delta_{ik}\).
3. For (III), take \(E_{\alpha,\beta}\), where \(\alpha:A\to A^{n}\) with \(\alpha(1)=e_{i}\) and \(\beta^{*}:A^{n}\to A\) with \(\beta^{*}(e_{k})=\frac{1}{2}\lambda\delta_{jk}\).
4. For (IV), take \(E_{\alpha_{1},\alpha_{2}}\), where \(\alpha_{1},\alpha_{2}:A\to A^{n}\) with \(\alpha_{1}(1)=e_{i}\) and \(\alpha_{2}(1)=\frac{1}{2}\lambda e_{j}\).
5. For type (V), take \(E_{\beta_{1},\beta_{2}}\), where \(\beta_{1},\beta_{2}:A\to A^{n}\) with \(\beta_{1}(1)=e_{i}\) and \(\beta_{2}(1)=\frac{1}{2}\lambda e_{j}\).
For the reverse inclusion, we need to show that given \(\alpha:A\to A^{n}\) and \(\beta:A\to(A^{n})^{*}\), we have \(E_{\alpha},E_{\beta}\in EO_{2n+1}(A)\). We prove this for \(E_{\alpha}\), as the case of \(E_{\beta}\) is similar.
If \(\alpha(1)=a_{i}e_{i}\), then \(E_{\alpha}\) is of type (I). Let \(\alpha(1)=\sum a_{i}e_{i}\). Define \(\alpha_{i}:A\to A^{n}\) by \(\alpha_{i}(1)=a_{i}e_{i}\). Then \(\alpha=\sum_{1}^{n}\alpha_{i}=\alpha_{1}+\alpha^{\prime\prime}\), where \(\alpha^{\prime\prime}=\sum_{2}^{n}\alpha_{i}\). By induction on \(n\), \(E_{\alpha^{\prime\prime}}\in EO_{2n+1}(A)\). We have observed earlier that \(E_{\alpha_{1}+\alpha^{\prime\prime}}=E_{-\alpha_{1}/2,\alpha^{\prime\prime}}\circ E_{\alpha^{\prime\prime}}\circ E_{\alpha_{1}}\), where \(E_{-\alpha_{1}/2,\alpha^{\prime\prime}}=[E_{\alpha^{\prime\prime}},E_{-\alpha_{1}/2}]\) is the commutator. Thus \(E_{\alpha}\in EO_{2n+1}(A)\). \(\Box\)
The following local global principle follows from Ambily-Rao [1, Theorem 3.10].
**Theorem 2.6**.: _Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module of rank \(\geq 2\). Let \(\sigma(T)\in O(\mathbb{M}(P[T]),T)\) such that \(\sigma_{\mathfrak{m}}(T)\in EO(\mathbb{M}(P_{\mathfrak{m}}[T]))\) for all \(\mathfrak{m}\in Max(A)\). Then \(\sigma(T)\in EO(\mathbb{M}(P[T]),T)\)._
As a consequence of (2.6), we have the following.
**Corollary 2.7**.: _Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module of rank \(\geq 2\). Let \(\sigma(T)\in O(\mathbb{M}(P[T]))\) such that \(\sigma(0)\in EO(\mathbb{M}(P))\)._
1. _If_ \(\sigma_{\mathfrak{m}}(T)\in EO(\mathbb{M}(P_{\mathfrak{m}}[T]))\) _for all_ \(\mathfrak{m}\in Max(A)\)_, then_ \(\sigma(T)\in EO(\mathbb{M}(P[T]))\)_._
2. _Assume_ \(\sigma(0)=Id\) _and_ \(a,b\in A\) _be comaximal. If_ \(\sigma_{a}(T)\in EO(\mathbb{M}(P_{a}[T]),T)\) _and_ \(\sigma_{b}(T)\in EO(\mathbb{M}(P_{b}[T]),T)\)_, then_ \(\sigma(T)\in EO(\mathbb{M}(P[T]),T)\)_._
**Theorem 2.8**.: _Let \(A\) be a regular ring containing a field of characteristic \(\neq 2\) and \(P\) be a projective \(A\)-module of rank \(n\geq 2\). Then_
1. \(O(\mathbb{M}(P[T]),T)=EO(\mathbb{M}(P[T]),T)\)_._
2. _Let_ \(\sigma(T)\in O(\mathbb{M}(P[T]))\)_. Then_ \(\sigma(T)\in EO(\mathbb{M}(P[T]))\) _if and only if_ \(\sigma(0)\in EO(\mathbb{M}(P))\)_._
**Proof**. (1) Let \(\sigma(T)\in O(\mathbb{M}(P[T]),T)\). To show \(\sigma(T)\in EO(\mathbb{M}(P[T]),T)\), using local-global principle (2.6), we may assume \(A\) is a local ring and hence \(P=A^{n}\) is free. By Stavrova [14, Theorem 1.3], \(O(\mathbb{M}(A[T]^{n}),T)=O_{2n+1}(A[T],T)=EO_{2n+1}(A[T],T)\) and by (2.5), \(EO_{2n+1}(A[T])=EO(\mathbb{M}(A[T]^{n}))\). Thus \(\sigma(T)\in EO(\mathbb{M}(A[T]^{n}),T)\) and we are done.
(2) If \(\sigma(T)\in O(\mathbb{M}(P[T]))\) with \(\sigma(0)\in EO(\mathbb{M}(P))\), then \(\sigma(T)\circ\sigma(0)^{-1}\in O(\mathbb{M}(P[T]),T)=EO(\mathbb{M}(P[T]),T)\) by (1). Thus \(\sigma(T)\in EO(\mathbb{M}(P[T]))\).
**Lemma 2.9**.: _Let \(A\) be a regular ring containing a field of characteristic \(\neq 2\). Let \(a\in A\) be a non-zerodivisor and \(\sigma(T)\in EO(\mathbb{M}(P_{a}[T]),T)\). Then for all \(n\gg 0\), given \(c-d\in a^{n}A\), there exist \(\tau(T)\in EO(\mathbb{M}(P[T]),T)\) such that \(\tau(T)_{a}=\sigma(cT)\circ\sigma(dT)^{-1}\)._
**Proof**. Let \(R=End(P\oplus P^{*}\oplus A)\) be the (non-commutative) endomorphism ring. Since \(\sigma(T)\in(Id+TR_{a}[T])^{*}\), by Quillen [11, Lemma 1], there exists \(\tau(T)\in(Id+TR[T])^{*}\) such that \(\tau(T)_{a}=\sigma(cT)\circ\sigma(dT)^{-1}\). Since \(\tau(T)_{a}\in EO(\mathbb{M}(P_{a}[T]))\), it preserves the quadratic form, therefore \(\tau(T)\in O(\mathbb{M}(P[T]),T)=EO(\mathbb{M}(P[T]),T)\) by (2.8).
**Corollary 2.10**.: _Let \(A\) be a regular ring containing a field of characteristic \(\neq 2\) and \(P\) be a projective \(A\)-module of rank \(\geq 2\). Let \(a,b\in A\) be comaximal and \(\sigma(T)\in EO(\mathbb{M}(P_{ab}[T]),T)\). Then there exist \(\alpha(T)\in EO(\mathbb{M}(P_{a}[T]),T)\) and \(\beta(T)\in EO(\mathbb{M}(P_{b}[T]),T)\) such that \(\sigma(T)=\alpha(T)_{b}\circ\beta(T)_{a}\)._
**Proof**. By (2.9), for all \(n\gg 0\), given \(c-d\in a^{n}A\), there exist \(\tau(T)\in EO(\mathbb{M}(P_{b}[T]),T)\) such that \(\tau(T)_{a}=\sigma(cT)\circ\sigma(dT)^{-1}\). Similarly given \(c-d\in b^{n}A\), then there exist \(\theta(T)\in EO(\mathbb{M}(P_{a}[T]),T)\) such that \(\theta(T)_{b}=\sigma(cT)\circ\sigma(dT)^{-1}\).
If \(a^{n}s+b^{n}r=1\) for \(s,r\in A,\) then \(\sigma(T)=(\sigma(T)\circ\sigma(a^{n}sT)^{-1})\circ(\sigma(a^{n}sT)\sigma(0)^ {-1})=\alpha(T)_{b}\circ\beta(T)_{a}\) with \(\alpha(T),\beta(T)\) as required.
**Corollary 2.11**.: _Let \(A\) be a regular ring containing a field of characteristic \(\neq 2\) and \(P\) be projective \(A\)-module of rank \(\geq 2\). Let \(a,b\in A\) be comaximal and \(\sigma\in EO(\mathbb{M}(P_{ab}))\). Then there exist \(\alpha\in EO(\mathbb{M}(P_{a}))\) and \(\beta\in EO(\mathbb{M}(P_{b}))\) such that \(\sigma=\alpha_{b}\circ\beta_{a}\)._
**Proof**. By (2.4), there exist \(\tau(T)\in EO(\mathbb{M}(P_{ab}[T]),T)\) with \(\tau(1)=\sigma.\) By (2.10), \(\tau(T)=\alpha(T)_{b}\circ\beta(T)_{a}\) with \(\alpha(T)\in EO(\mathbb{M}(P_{a}[T]),T)\) and \(\beta(T)\in EO(\mathbb{M}(P_{b}[T]),T)\). Thus \(\sigma=\tau(1)=\alpha(1)_{b}\circ\beta(1)_{a}\) with \(\alpha(1)\in EO(\mathbb{M}(P_{a}))\) and \(\beta(1)\in EO(\mathbb{M}(P_{b}))\).
## 3. Removing perfect hypothesis on field from [2, Theorem 4.13]
**Theorem 3.1**.: _Let \(A\) be a regular domain of dimension \(d\) essentially of finite type over an infinite field \(k\). Let \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(n\) and \(\phi:P[T]\twoheadrightarrow I/I^{2}T\) be a surjection. Then \(\phi\) has a surjective lift \(\Phi:P[T]\twoheadrightarrow I\)._
**Proof**. When \(A\) is a \(k\)-spot, then \(P=A^{n}\) and the result is proved in [6, Theorem 4.2]. In the general case, let \(\Sigma\) be the set of all \(s\in A\) such that \(\phi_{s}\) lifts to a surjection \(\Psi:P_{s}[T]\twoheadrightarrow I_{s}\). Then it is implicitly proved in [2, Theorem 4.13] that \(\Sigma\) is an ideal. Therefore \(\Sigma=A\), as the result is true for \(k\)-spots, and the result follows.
## 4. Monic inversion principle for quadratic form
We recall some definitions from Mandal-Mishra [10].
**Definition 4.1**.: Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module.
* \(\mathbb{Q}(P)=\{(p,f,s)\in\mathbb{M}(P)|s(1-s)+f(p)=0\}\).
* \(\mathbb{Q}^{\prime}(P)=\{(p,f,z)\in\mathbb{M}(P)|z^{2}+f(p)=1\}\). We write \(\mathbb{Q}^{\prime}(A^{n})\) as \(\mathbb{Q}^{\prime}{}_{2n+1}(A)=\{(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n},z) \in\mathbb{M}(A^{n})|\sum x_{i}y_{i}+z^{2}=1\}\).
* \(O(\mathbb{M}(P))=\) the group of isometries of the quadratic form \(\mathbb{M}(P)\). We write \(O(\mathbb{M}(A^{n}))\) as \(O_{2n+1}(A)\).
* \(O(\mathbb{M}(P[T]),T)=\{\sigma(T)\in O(\mathbb{M}(P[T])):\sigma(0)=id\}\).
* A local \(P\)-orientation is a pair \((I,\omega)\), where \(I\) is an ideal of \(A\) and \(\omega:P\twoheadrightarrow I/I^{2}\) is a surjection. The surjection \(\omega\) is identified with the induced surjection \(\omega:P/IP\twoheadrightarrow I/I^{2}\).
* \(\mathcal{L}\mathcal{O}(P)=\) the set of all local \(P\)-orientations.
* Given a ring homomorphism \(\gamma:A\to B\), the functor \(-\otimes_{A}B\) gives natural maps \(\mathcal{L}\mathcal{O}(P)\to\mathcal{L}\mathcal{O}(P\otimes B)\), \(\mathbb{Q}(P)\to\mathbb{Q}(P\otimes B)\) and \(\mathbb{Q}^{\prime}(P)\to\mathbb{Q}^{\prime}(P\otimes B)\).
* Let \(F(P)\) be one of \(\mathcal{L}\mathcal{O}(P)\), \(\mathbb{Q}(P)\) or \(\mathbb{Q}^{\prime}(P)\). The homotopy orbit \(\pi_{0}(F(P))\) is defined by the pushout diagram in sets.
* Let \(\Phi_{0}=(p,f,s)\) and \(\Phi_{1}=(p^{\prime},f^{\prime},s^{\prime})\) be elements of \(\mathbb{Q}(P)\). Then \([\Phi_{0}]=[\Phi_{1}]\in\pi_{0}(\mathbb{Q}(P))\) if and only if there exist \(\Phi(T)\in\mathbb{Q}(P[T])\) such that \(\Phi(0)=\Phi_{0}\) and \(\Phi(1)=\Phi_{1}\). The element \([(0,0,0)]\in\mathbb{Q}(P)\) is the base point.
* Let \(\Phi_{0}=(p,f,s)\) and \(\Phi_{1}=(p^{\prime},f^{\prime},s^{\prime})\) be elements of \(\mathbb{Q}^{\prime}(P)\). Then \([\Phi_{0}]=[\Phi_{1}]\in\pi_{0}(\mathbb{Q}^{\prime}(P))\) if and only if there exist \(\Phi(T)\in\mathbb{Q}^{\prime}(P[T])\) such that \(\Phi(0)=\Phi_{0}\) and \(\Phi(1)=\Phi_{1}\). The element \([(0,0,1)]\in\mathbb{Q}^{\prime}(P)\) is the base point.
* For local \(P\)-orientations \((I,\omega),(I^{\prime},\omega^{\prime})\in\mathcal{L}\mathcal{O}(P)\), \([(I,\omega)]=[(I^{\prime},\omega^{\prime})]\in\pi_{0}(\mathcal{L}\mathcal{O}(P ))\) if and only if there exist an ideal \(K\subset A[T]\) and a local \(P[T]\)-orientation \((K,\sigma(T))\in\mathcal{L}\mathcal{O}(P[T])\) such that \((K(0),\sigma(0))=(I,\omega)\) and \((K(1),\sigma(1))=(I^{\prime},\omega^{\prime})\).
We state a result from Mandal-Mishra [10, Lemma 2.4].
**Lemma 4.2**.: _Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module. Then there is a base point preserving bijection \(\pi_{0}(\mathbb{Q}(P))\simeq\pi_{0}(\mathbb{Q}^{\prime}(P))\) defined by \((p,f,s)\mapsto(2p,2f,1-2s)\)._
**Lemma 4.3**.: _Let \(A\) be a ring with \(1/2\in A\) and \(P\) be a projective \(A\)-module of rank \(\geq 2\). Then the natural map \(\mu:\pi_{0}(\mathbb{Q}^{\prime}(P))\simeq\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\) defined by \([v]\mapsto[v]\) is a bijection._
**Proof**. To see that \(\mu\) is well defined, let \(v,v^{\prime}\in\mathbb{Q}^{\prime}(P)\) with \([v]=[v^{\prime}]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P))\). Then there exist \(H(W)\in\mathbb{Q}^{\prime}(P[W])\) such that \(H(0)=v\) and \(H(1)=v^{\prime}\). Then \(H(W)\in\mathbb{Q}^{\prime}(P[T,W])\) gives that \([v]=[v^{\prime}]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\).
Let \(v,v^{\prime}\in\mathbb{Q}^{\prime}(P)\) be such that \([v]=[v^{\prime}]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\). Then there exist \(H(T,W)\in\mathbb{Q}^{\prime}(P[T,W])\) such that \(H(T,0)=v\) and \(H(T,1)=v^{\prime}\). Then \(G(W)=H(0,W)\in\mathbb{Q}^{\prime}(P[W])\) and also \(G(0)=v\) and \(G(1)=v^{\prime}\). Thus \(\mu\) is injective.
Let \(H(T)\in\mathbb{Q}^{\prime}(P[T])\). Then \(G(T,W)=H(TW)\in\mathbb{Q}^{\prime}(P[T,W])\) with \(G(T,0)=H(0)\) and \(G(T,1)=H(T)\). Thus \([H(T)]=[H(0)]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\). Thus \(\mu\) is surjective.
Using (2.8), the next result is a restatement of Mandal-Mishra [10, Theorem 3.1, 3.4].
**Proposition 4.4**.: _Let \(A\) be a regular ring containing a field of characteristic \(\neq 2\) and \(P\) be a projective \(A\)-module of rank \(\geq 2\)._
1. _If_ \(H(T)\in\mathbb{Q}^{\prime}(P[T])\)_, then there exist_ \(\sigma(T)\in EO(\mathbb{M}(P[T]),T)\) _such that_ \(H(T)=\sigma(T)(H(0))\)_._
2. _Consider the action of_ \(EO(\mathbb{M}(P[T]),T)\) _on_ \(\mathbb{Q}^{\prime}(P)\) _given by_ \(\sigma(T)\cdot v=\sigma(1)(v)\)_. Then the natural map_ \(\mathbb{Q}^{\prime}(P)/EO(\mathbb{M}(P[T]),T)\to\pi_{0}(\mathbb{Q}^{\prime}(P ))\) _is a bijection._
In case \(P=A^{n}\) is free, the following monic inversion is proved in Das-Tikader-Zinna [7, Theorem 5.8]. We will closely follow their proof. For \(v,v^{\prime}\in\mathbb{Q}^{\prime}(P)\),\(v\equiv v^{\prime}\) mod \(EO(\mathbb{M}(P))\) means \(v^{\prime}=v\sigma\) for some \(\sigma\in EO(\mathbb{M}(P))\).
**Theorem 4.5**.: _Let \(A\) be a regular domain containing a field \(k\) of characteristic \(\neq 2\) and \(P\) be a projective \(A\) module of rank \(\geq 2\). Let \(H(T)\in\mathbb{Q}^{\prime}(P[T])\) with \(H(T)_{g}\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T]_{g}))\) for some monic polynomial \(g\in A[T]\). Then \(H(T)\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T]))\)._
**Proof**. By 4.4(1), \(H(T)\equiv H(0)\) mod \(EO(\mathbb{M}(P[T]),T)\). Since \(H(0)=H(1)\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P))\), by 4.4(2), \(H(0)\equiv H(1)\) mod \(EO(\mathbb{M}(P))\). Thus \(H(T)\equiv H(1)\) mod \(EO(\mathbb{M}(P[T]))\). It is given that \((0,0,1)\equiv H(T)_{g}\equiv H(0)\equiv H(1)\) mod \(EO(\mathbb{M}(P[T]_{g}))\).
In case \(g=T\), putting \(T=1\) gives \([(0,0,1)]\equiv H(1)\) mod \(EO(\mathbb{M}(P))\). Thus \(H(T)\equiv H(1)\equiv[(0,0,1)]\) mod \(EO(\mathbb{M}(P[T]))\).
For general \(g\), since \(H(T)_{g}\equiv H(1)\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T]_{g}))\), there exist \(\sigma\in EO(\mathbb{M}(P[T]_{g}))\) with \(H(1)\sigma=(0,0,1)\).
Let \(g^{*}=T^{-\deg g}g\in A[T^{-1}]\). Then \(g^{*}(T^{-1}=0)=1\) and \(A[T^{-1},T]_{g^{*}}=A[T,T^{-1}]_{g}\). Applying (2.11) for comaximal elements \(g^{*}\) and \(T^{-1}\) in \(A[T^{-1}]\), we get
\[\sigma_{T}=(\sigma_{1})_{T^{-1}}\circ(\sigma_{2})_{g^{*}}\]
where \(\sigma_{1}\in EO(\mathbb{M}(P[T^{-1}]_{g^{*}}))\) and \(\sigma_{2}\in EO(\mathbb{M}(P[T,T^{-1}]))\). Now \((H(1)\sigma_{1})_{T^{-1}}=((0,0,1)\sigma_{2}^{-1})_{g^{*}}\). Patch \(H(1)\sigma_{1}\) and \((0,0,1)\sigma_{2}^{-1}\) to obtain \(w\in\mathbb{Q}^{\prime}(P[T^{-1}])\) such that \(w_{T^{-1}}=(0,0,1)\sigma_{2}^{-1}\) in \(A[T,T^{-1}]\) and \(w_{g^{*}}=H(1)\sigma_{1}\) in \(A[T^{-1}]_{g^{*}}\). In particular \(w_{T^{-1}}\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T,T^{-1}]))\). By the first case, we get \(w\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T^{-1}]))\).
As \(g^{*}(T^{-1}=0)=1\) and \(w_{g^{*}}=H(1)\sigma_{1}\), we get \(w(T^{-1}=0)=H(1)\sigma_{1}(T^{-1}=0)\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P))\) i.e. \(H(1)\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P))\). Thus \(H(T)\equiv H(1)\equiv(0,0,1)\) mod \(EO(\mathbb{M}(P[T]))\). This completes the proof.
## 5. Monic inversion and lifting of surjections
Let \((I,\omega_{I})\in\mathcal{LO}(P)\), i.e. \(\omega_{I}:P\to I/I^{2}\) is a surjection. If \(f:P\to I\) is a lift of \(\omega_{I}\), then there exists \(s\in I\) such that \(I=(f(P),s)\) with \(s(1-s)\in f(P)\). If \(s(1-s)=f(p)\) for \(p\in P\), then \((p,f,s)\in\mathbb{Q}(P)\). The map \(\chi:\mathcal{LO}(P)\to\pi_{0}(\mathbb{Q}(P))\) defined by \(\chi(I,\omega_{I})=[(p,f,s)]\in\pi_{0}(\mathbb{Q}(P))\) is well defined [10, Theorem 2.7]. Composing \(\chi\) with the isomorphism in (4.2), we get a map \(\zeta:\mathcal{LO}(P)\to\pi_{0}(\mathbb{Q}^{\prime}(P))\).
The next result is proved in Mandal-Mishra [10, Theorem 4.3] when \(k\) is infinite perfect. This condition on \(k\) was stated to use [2, Theorem 4.13]. Now we can use (3.1).
**Theorem 5.1**.: _Let \(A\) be a regular domain of dimension \(d\) essentially of finite type over an infinite field \(k\) of characteristic \(\neq 2\). Let \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A\) of height \(\geq n\) and \((I,\omega_{I})\in\mathcal{LO}(P)\). Then \(\omega_{I}\) lifts to a surjection \(P\twoheadrightarrow I\) if and only if \(\zeta(I,\omega_{I})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P))\)._
**Theorem 5.2**.: _Let \(A\) be a regular domain of dimension \(d\) essentially of finite type over an infinite field \(k\) of characteristic \(\neq 2\). Let \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(\geq n\) and \((I,\omega_{I})\in\mathcal{LO}(P[T])\). Then \(\omega_{I}\) lifts to a surjection \(P[T]\twoheadrightarrow I\) if and only if \(\zeta(I,\omega_{I})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\)._
**Proof**. If \(\zeta(I,\omega_{I})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\), then \(\zeta(I(0),\omega_{I(0)})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P))\). By (5.1), the surjection \(\omega_{I(0)}:P\twoheadrightarrow I(0)/I(0)^{2}\) lifts to a surjection \(\psi:P\twoheadrightarrow I(0)\) such that \(\psi\otimes A/I(0)=\omega_{I(0)}\). Thus \(\omega_{I}\) lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I/I^{2}T\)[4, Remark 3.9]. By [2, Theorem 4.13], \(\Phi\) has a surjective lift \(P[T]\twoheadrightarrow I\) which is a lift of \(\omega_{I}\).
For converse, consider the following commutative diagram
If \(\omega_{I}\) lifts to a surjection \(P[T]\twoheadrightarrow I\), then \(\omega_{I(0)}\) also lifts to a surjection \(P\twoheadrightarrow I(0)\). By (5.1), \(\zeta(I(0),\omega_{I(0)})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P))\). Since the right vertical map is bijective, we conclude that \(\zeta(I,\omega_{I})=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\).
Now we will prove our main result.
**Theorem 5.3**.: _Let \(A\) be a regular domain of dimension \(d\) essentially of finite type over an infinite field \(k\) of characteristic \(\neq 2\). Let \(P\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(n\) and \(\phi:P[T]\twoheadrightarrow I/I^{2}\) be a surjection. Assume \(\phi\otimes A(T)\) lifts to a surjection \(\theta:P[T]\otimes A(T)\twoheadrightarrow IA(T)\). Then \(\phi\) lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I\)._
**Proof**. We are given \((I,\phi)\in\mathcal{LO}(P[T])\). Let \(\zeta(I,\phi)=[H(T)]\in\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\). Since \(\phi\otimes A(T)\) has a surjective lift \(\theta:P[T]\otimes A(T)\twoheadrightarrow IA(T)\) and dim \(A(T)=d\), by (5.1), \(\zeta(IA(T),\phi\otimes A(T))=[(0,0,1)]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P[T]\otimes A(T)))\). Thus there exists a monic polynomial \(g\in A[T]\) such that \(\zeta(I_{g},\phi_{g})=[H(T)_{g}]=[(0,0,1)]\in\pi_{0}(\mathbb{Q}^{\prime}(P[T]_ {g}))\). By (4.5), \([H(T)]=[(0,0,1)]\) in \(\pi_{0}(\mathbb{Q}^{\prime}(P[T]))\). By (5.2), \(\phi\) lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I\).
**Corollary 5.4**.: _Let \(A\) be a regular domain of dimension \(d\) essentially of finite type over an infinite field \(k\) of characteristic \(\neq 2\). Let \(P=Q\oplus A\) be a projective \(A\)-module of rank \(n\) with \(2n\geq d+3\). Let \(I\) be an ideal of \(A[T]\) of height \(n\) containing a monic polynomial \(f\). Then any surjection \(\phi:P[T]\twoheadrightarrow I/I^{2}\) lifts to a surjection \(\Phi:P[T]\twoheadrightarrow I\)._
**Proof**. Since \(I_{f}=A[T]_{f}\), \(I_{f}/I_{f}^{2}=0\), thus the surjection \(pr_{2}:P[T]_{f}\twoheadrightarrow I_{f}\) is a lift of \(\phi_{f}\). By (5.3), \(\phi\) lifts to the required surjection. \(\square\)
**Acknowledgement**
We would like to thank Arvind Asok, A. Stavrova and A.A. Ambily for replying to our queries.
|
2304.14773 | Synergy of Machine and Deep Learning Models for Multi-Painter
Recognition | The growing availability of digitized art collections has created the need to
manage, analyze and categorize large amounts of data related to abstract
concepts, highlighting a demanding problem of computer science and leading to
new research perspectives. Advances in artificial intelligence and neural
networks provide the right tools for this challenge. The analysis of artworks
to extract features useful in certain works is at the heart of the era. In the
present work, we approach the problem of painter recognition in a set of
digitized paintings, derived from the WikiArt repository, using transfer
learning to extract the appropriate features and classical machine learning
methods to evaluate the result. Through the testing of various models and their
fine tuning we came to the conclusion that RegNet performs better in exporting
features, while SVM makes the best classification of images based on the
painter with a performance of up to 85%. Also, we introduced a new large
dataset for painting recognition task including 62 artists achieving good
results. | Vassilis Lyberatos, Paraskevi-Antonia Theofilou, Jason Liartis, Georgios Siolas | 2023-04-28T11:34:53Z | http://arxiv.org/abs/2304.14773v1 | # Synergy of Machine and Deep Learning Models for Multi-Painter Recognition
###### Abstract
The growing availability of digitized art collections has created the need to manage, analyze and categorize large amounts of data related to abstract concepts, highlighting a demanding problem of computer science and leading to new research perspectives. Advances in artificial intelligence and neural networks provide the right tools for this challenge. The analysis of artworks to extract features useful in certain works is at the heart of the era. In the present work, we approach the problem of painter recognition in a set of digitized paintings, derived from the WikiArt repository, using transfer learning to extract the appropriate features and classical machine learning methods to evaluate the result. Through the testing of various models and their fine tuning we came to the conclusion that RegNet performs better in exporting features, while SVM makes the best classification of images based on the painter with a performance of up to 85%. Also, we introduced a new large dataset for painting recognition task including 62 artists achieving good results.
Vassilis Lyberatos, Paraskevi-Antonia Theofilou, Jason Liartis and Georgios Siolas National Technical University of Athens, Athens, Greece Painter recognition, Deep learning, Machine learning, Feature extraction
## 1 Introduction
Art as an integral part of human culture has entered the digital age, following the trend of recent years to digitize our world. Large collections of artworks, mainly from museums and galleries, but also private collections, are digitized, published and made available on the internet with the aim not only of preserving and presenting them as creations of human expression, but also as data for processing and analysis using machine learning methods.
The large volume of this type of data, as well as the advanced tools of artificial intelligence, imposes the need to automate the work of analysis, feature extraction and categorization of certain components of artworks. Art historians, curators and art experts, in general, are able to identify the particular characteristics of a painting, its creator, the genre, the art movement and the artistic style to which it belongs. Their ability is based both on their perception and memory, as well as on their artistic experience that allows them to connect the various artistic features with each other and to draw relevant conclusions. Therefore, the aim is to imitate the human ability, experience and knowledge in order to automate the above tasks.
We are interested in the problem of automatic painter recognition. Approaching this problem, we consider that the artworks of a painter present common features related to the style and preferences of each artist. However, it has been observed that an artist does not follow the same art movement, genre, theme, style or color palette from the beginning to the end of their career, and may also have been influenced by other creators, for example their teachers, in which case their paintings share common features. Therefore, the present work deals with the complexity of recognizing a painter from a painting by finding appropriate machine learning methods.
Recognizing the creator, the genre, the art movement and the style of a painting are problems of increasing complexity, whose solution is a research challenge in the field of machine learning. Deep learning techniques are mainly employed, but classical machine learning techniques are also used to deal with these complex problems. On the one hand, classical methods of image processing are used for feature extraction and combined with typical machine learning classifiers. On the other hand, feature extraction and classification are done by end-to-end training of convolutional neural networks and the use of transfer learning. We combined these two approaches in order to yield better results on the painter recognition task. Having image data from the WikiArt repository, together with the metadata identifying the painter of each painting, we fine-tuned pre-trained deep neural networks to extract the appropriate features from the images and then used machine learning classifiers on those features to obtain our predictions.
The advantage of our method lies in its application to a new large-scale dataset of 62 artists. To the best of our knowledge, no other studies have approached this specific problem at such a scale, achieving such good results without the use of end-to-end deep neural networks. Thus, we managed to approach the task of painter recognition using a methodology different from the state of the art, which gives satisfactory results for the usual datasets of about 20 artists and excellent results for the more complex problem of 62 artists.
## 2 Background
Automation in the processing, analysis and categorization of artworks has been a major challenge in recent decades and machine learning has made a significant contribution to progress in this field. There are many studies regarding the distinction of the type and style of a painting, as well as the identification of its artist.
Early studies addressed these problems with traditional machine learning methods based on low-level features and a relatively small number of artworks [1, 2, 3, 4, 5, 6, 7]. These studies use features that capture shape, texture, edge and color properties and are extracted with the use of classical computer vision methods, like SIFT, GIST, HOG, GLCM, and HSV color histograms. Then, they train classical classifiers of machine learning, like SVM, K-NN, Random Forest, MLP, to make the predictions.
Subsequently, CNNs were introduced as feature extractors. The first large-scale such study [8] showed that features derived from the layers of a CNN pre-trained on non-artistic images achieve high performance in art classification tasks. Then, many studies [9, 10, 11, 12, 13, 14, 15, 16] confirmed the effectiveness of methods based on feature extraction using CNNs, as well as their combination with handcrafted image features.
Regarding the problem of painter recognition from a painting, in which we are interested in this work, in 2013, Cetinic et al. [17] studied the style of individual artists by extracting specific features, like color, light, texture, and then several classifiers, such as MLP, SVM, Naive Bayes, Random Forest and Adaboost, were applied. In 2015, Saleh et al. [12] investigated the applicability of metric learning approaches and the performance of different visual features (feature fusion) coupled with SVM for learning similarity between artistic items. In 2016, Tan et al. [18] used a CNN as feature extractor and SVM for classification, or a CNN as an end-to-end fine-tuned architecture. In 2017, Viswanathan [19] trained a ResNet18 from scratch to resolve the problem of painter recognition. In 2018, Cetinic et al. [20] also focused on fine-tuned networks based on VGG, ResNet, GoogLeNet and CaffeNet models. In 2019, Kelek et al. [21] used pre-trained architectures such as GoogLeNet, Inception v3, ResNet50, ResNet101 and DenseNet. Zhong et al. [22] proposed a dual-path classification scheme, including RGB and brush stroke information channels, based on the architecture of ResNet131. In 2020, Choundhury [23] compared the results of feature extraction methods used to train Random Forest and SVM classifiers with deep convolutional networks with transfer learning, such as a basic CNN, ResNet18 and ResNet50. In 2021, Comert et al. [24] focused on the fine-tuning of Mobile v2, ResNet, Inception v2 and NasNet to identify the painter of an artwork. The same year, Zhao et al. [25] compared the architectures ResNet, RegNet and EfficientNet for painter identification. Finally, Nevo et al. [26] proposed a novel dual-stream architecture, based on a pretrained EfficientNet model, for capturing in parallel both global elements and local structures in paintings' images. It is important to mention that most of the related work has used datasets derived from WikiArt.
## 3 Dataset and Features
Our data were retrieved from Kaggle1. Most of these digitized images were taken from the WikiArt repository. In total, 103,250 paintings by 2,319 artists are available in this dataset. It is noted that, for our convenience in managing our data and in applying the various techniques and algorithms that we have chosen, we built two different datasets. We did that in order to test our algorithms at different scales and levels of difficulty of the problem. The first one corresponds to 20 distinct painters with 9,986 paintings in total (_Medium Dataset_) and the second one to 62 with 26,263 (_Large Dataset_). The selection of data was based on the minimum number of paintings for each artist, in order to have an adequate number of artworks per artist. For this reason, we selected for the study those painters who have at least 270 artworks in the original dataset. In addition, in order to make a fair comparison with state-of-the-art works, we applied our experiments to the dataset used by the state-of-the-art (SOTA) methods, which contains 19,050 paintings for 23 painters 2 and is derived from the WikiArt repository. From this point we will refer to this dataset as _SOTA Dataset_.
Footnote 1: [https://www.kaggle.com/c/painter-by-numbers/overview](https://www.kaggle.com/c/painter-by-numbers/overview)
Footnote 2: [https://github.com/cs-chan/ArtGAN](https://github.com/cs-chan/ArtGAN)
Furthermore, for the best management of this two-dimensional data, the images were resized (to 3x256x256), so that the dimensions of the input to our algorithms are fixed. For better generalization of our models we performed data augmentation, using random crops and reflections of the images. We also applied normalization to the data based on the mean value and the covariance of the features, which helped to accelerate the training of our neural networks.
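To make the preprocessing concrete, a minimal torchvision pipeline consistent with the description above is sketched below (an illustration, not the exact code of this work). The ImageNet channel statistics and the exact augmentation parameters are assumptions, since they are not reported here.

```python
# Illustrative preprocessing; only the 256x256 size and the augmentation types
# (random crops, reflections, normalization) come from the text, the rest are
# assumptions.
from torchvision import transforms

IMAGENET_MEAN = [0.485, 0.456, 0.406]  # assumed normalization statistics
IMAGENET_STD = [0.229, 0.224, 0.225]

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(256),   # random crop, output is 3x256x256
    transforms.RandomHorizontalFlip(),   # random reflection
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])

eval_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),
])
```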
## 4 Methodology
Regarding the feature extraction from images, none of the classic techniques were applied, but it was preferred to use SOTA pre-trained convolutional neural networks that are distinguished for their ability to extract high-level features from images. This is also a challenge of our study, as it is interesting to investigate to what extent convolutional networks can offer to classification networks those appropriate features that will help to solve the problem of painter recognition as effectively as possible.
To construct a deep-neural-network-based feature extractor we conducted many experiments with different SOTA architectures and datasets. We tried different sizes of ResNet
[27] and RegNet [28] architectures in different datasets. We started by experimenting on the _Medium dataset_ in order to select an architecture. The RegNet architecture outperformed the ResNet, so we continued experimenting with it on the _Large dataset_.
Further experiments focused on determining the best hyper-parameters for the RegNet architecture. We experimented with different ways of training. We tried different values of _model depth_ and _learning rate_ in order to obtain a model of the right capacity. Additionally, we tried adding a _dropout_ layer. Also, we used techniques such as _label smoothing_ [29], _warm-up layers_ and _frozen layers_ in order to avoid over-fitting and catastrophic forgetting [30].
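A minimal PyTorch sketch of how these pieces fit together is shown below (an illustration, not the exact training code of this work). Interpreting the frozen layers as the stem plus the first stages of the torchvision RegNet, as well as the number of classes, the optimizer and the learning rate, are assumptions; warm-up layers and dropout are omitted for brevity.

```python
# Sketch of fine-tuning a torchvision RegNet with frozen stages and label
# smoothing; all concrete values below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 20
model = models.regnet_y_800mf(weights=models.RegNet_Y_800MF_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classification head

# "Frozen layers": keep the stem and the first two stages fixed.
for module in (model.stem, model.trunk_output.block1, model.trunk_output.block2):
    for p in module.parameters():
        p.requires_grad = False

criterion = nn.CrossEntropyLoss(label_smoothing=0.015)
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-5
)
```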
After choosing the best architecture for each sub-task, we applied various machine learning algorithms. We applied heuristic algorithms for finding the best combination of hyper-parameters for each model and ended up selecting the best one. We followed the same procedure for the _Large Dataset_ and tried one extra SOTA classifier, _XGBoost_. Also, in order to compare with SOTA models, we applied our best model to the _SOTA Dataset_. Our aim was to use classifiers that do not follow the standard MLP model with fully connected layers.
A crucial factor for achieving good results was hyper-parameter tuning. Specifically, for our best classifier, _SVM_, we tuned the hyper-parameters \(C\), \(\gamma\) and _kernel-type_. We performed a gradual grid-search in order to find the best combination for our model, using the sklearn python package.
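For illustration, the grid search over these hyper-parameters can be written with scikit-learn as follows; the grids themselves are assumptions, since only the tuned hyper-parameters (\(C\), \(\gamma\), kernel type) and the use of sklearn are reported.

```python
# Illustrative grid search over the SVM hyper-parameters C, gamma and kernel.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {
    "kernel": ["rbf", "poly", "linear"],
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", 1e-3, 1e-2, 1e-1],
}

search = GridSearchCV(SVC(), param_grid, cv=5, n_jobs=-1)
# search.fit(train_features, train_labels)   # features produced by the RegNet extractor
# print(search.best_params_, search.best_score_)
```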
## 5 Experiments
Our models managed to be comparable to SOTA models in the painter recognition task, especially in comparison with the works up to 2021, when the problem started to be approached in a different manner using dual-stream architectures for feature extraction. In Table 2 the performance of our best models is compared to previous works. For the implementation of our experiments we provide a GitHub repository 3. Below, our experiments are described in detail.
Footnote 3: [https://github.com/jlairtis/art-recognition](https://github.com/jlairtis/art-recognition)
### Recognition of 20 artists
The sub-task of recognizing paintings from 20 artists was based on the _Medium Dataset_. We conducted a number of experiments in order to choose the best architecture for feature extraction. We split the dataset into three parts: 20% validation set, 20% test set and 60% train set. We experimented with different sizes of the layers of ResNet (34, 101, 152) and of the number of parameters of RegNet (400mf, 800mf). We ran approximately 20 experiments and ended up choosing RegNet_Y_800MF and ResNet-152. After choosing the two best architectures, we fine-tuned them in order to select one. RegNet_Y_800MF with _learning rate_ 1e-5, trained for 20 _epochs_ with 3 _frozen layers_, had the best results, achieving 0.84 accuracy on the validation set, so we chose it for our feature extractor.
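The fine-tuned network can then be used as a fixed feature extractor by dropping its classification head, as sketched below (an illustration; loader construction and variable names are assumptions).

```python
# Turn the fine-tuned RegNet into a feature extractor for the classical
# classifiers: replace the head with Identity and collect the pooled features.
import numpy as np
import torch
import torch.nn as nn

@torch.no_grad()
def extract_features(model, loader, device="cuda"):
    model = model.to(device).eval()
    head, model.fc = model.fc, nn.Identity()  # output pooled features instead of logits
    feats, labels = [], []
    for images, targets in loader:
        feats.append(model(images.to(device)).cpu().numpy())
        labels.append(targets.numpy())
    model.fc = head                           # restore the classification head
    return np.concatenate(feats), np.concatenate(labels)

# train_features, train_labels = extract_features(model, train_loader)
# test_features, test_labels = extract_features(model, test_loader)
```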
After using the extractor to retrieve the desired features, we applied 6 different ML classifiers: _Logistic Regression_, _Naive Bayes_, _K-Nearest Neighbors_ (K-NN), _Decision Trees_, _AdaBoost_ and _Support Vector Machines_ (SVM). We applied cross-validation with 5 _folds_. In order to compare them fairly, we applied hyper-parameter optimization to all of them, using the gridsearch algorithm. After conducting all the experiments, _SVM_ prevailed over all the others, reaching 85% accuracy, while _Decision Trees_ had the worst results. An overview of the results is shown in Table 1.
From Fig. 1, we can conclude that in general the performance of our method was good for all classes. We had the worst results for the painter Ilya Repin and the best for the painter Battista Piranesi. It is also noted that there was a confusion between the painters Boris Kustodiev and Ilya Repin. This confusion may be due to the fact that both artists belong to the same artistic movement, Realism. In addition, both are from Russia and studied at the same time at the Imperial Academy of Arts of Russia.
### Recognition of 62 artists
The second sub-task, with the 62 artists, is based on the _Large Dataset_. We split the dataset into three parts: 20% validation set, 20% test set and 60% train set. Since the best model in our previous experiments (see Section 5.1) was the RegNet architecture, we chose it a priori. In order to choose the best hyper-parameters for RegNet we ran 30 different experiments. We tried various sizes of the model's parameters (400mf, 800mf, 1.6gf) and ended up choosing RegNet_Y_1_6GF. The hyper-parameters of the best model, which achieved 0.8 accuracy on the validation set, are: 1000 _epochs_, 128 _batch size_, 4 _warm up layers_, 2 _freeze layers_, 0.015 _label smoothing_ and 1.6 _learning rate_.
Having chosen the best feature extractor, we tried four machine learning classifiers for this task: _Random Forest_, _K-Nearest Neighbors_, _Support Vector Machines_ and _XGBoost_.
\begin{table}
\begin{tabular}{l c c} Classifier & _Medium Dataset_ & _Large Dataset_ \\ \hline Plain RegNet & - & 79\% \\ Decision Trees & 63\% & - \\ Random Forest & - & 69\% \\ AdaBoost & 79\% & - \\ Naive Bayes & 82\% & - \\ Logistic Regression & 84\% & - \\ K-NN & 84\% & 72\% \\ XGBoost & - & 75\% \\
**SVM** & **85\%** & **84\%** \\ \end{tabular}
\end{table}
Table 1: Comparing our ML classifiers
We used cross-validation for the training and tuning of each classifier with 5 _folds_. For the hyper-parameter tuning of the classifiers we used Optuna [31], a python library. The best classifier was proven to be _SVM_, achieving 84% accuracy, and the worst was _Random Forest_, achieving 69%. The results are displayed in Table 1. It is important to point out that plain RegNet achieved 79%, which means that SVM boosted our performance by 5 percentage points.
### Comparison with state-of-the-art models
In order to compare our methodology with the SOTA [12, 20, 25, 26], we applied our experiments to the same dataset, performance on which can serve as a benchmark for the efficiency of our approach. We used our RegNet model that was fine-tuned on the _Large Dataset_ for extracting representative features of the dataset and then performed gridsearch for SVM models.
We achieved results close to the SOTA models' performance, as shown in Table 2. Our approach deviates from the end-to-end deep convolutional network architectures proposed so far to solve the task of painter recognition and shows that traditional classifiers still have the potential to achieve results comparable to pure deep-learning models.
We consider that the complexity of the problem lies in the increase in the number of artists to be recognized. Regarding Fig. 2, we observe that our methodology achieves appreciable performance for the ordinary datasets and much better performance for the extended dataset of 62 artists. The references of Fig. 2 are briefly described in Section 2.
## 6 Conclusions and Future Work
In this work, we conducted a large number of experiments trying a variety of SOTA deep learning and machine learning architectures, achieving very good results in the painter recognition task. A new methodology was applied, as a synergy of old and new techniques, in order to reach high accuracy in a large-scale and complex multi-class classification problem, like that of 62 artists. It is shown that traditional machine learning classifiers can achieve close-to-SOTA results while assembling more versatile model structures. The use of a pre-trained deep neural network for feature extraction and an SVM for classification, instead of an end-to-end deep neural network, has advantages in performance and can lead to a different approach comparable with SOTA ones.
Future work will be the extension of our experiments to other art recognition tasks in order to achieve the generalization of our model. Another interesting future direction, in order to tackle the problem of generalization, is to try multi-task learning. Finally, it will be useful to include explainability to make the artist identification process more understandable and to detect possible bias in the model's parameters.
\begin{table}
\begin{tabular}{l l l c} \hline \hline Work & Year & Methodology & Accuracy \\ \hline Saleh et al. [12] & 2015 & Feature fusion \& SVM & 63\% \\ Cetinic et al. [20] & 2018 & CaffeNet & 82\% \\ Zhao et al. [25] & 2021 & EfficientNet & 92\% \\ Nevo et al. [26] & 2022 & Dual-stream (EfficientNet) & 94\% \\
**Our approach** & **2023** & **RegNetY-1.6GF \& SVM** & **85\%** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparing our results with previous state-of-the-art models on _SOTA Dataset_
Figure 1: Confusion matrix for the best SVM trained in a subset of the _Medium Dataset_
Figure 2: Scatter plot for comparing the change in accuracy with the number of artists (blue points represent previous works and red ones ours - all points are annotated with the corresponding references) |
2305.03126 | Optimizing SMS Reminder Campaigns for Pre- and Post-Diagnosis Cancer
Check-Ups using Socio-Demographics: An In-Silco Investigation Into Bladder
Cancer | Timely pre- and post-diagnosis check-ups are critical for cancer patients,
across all cancer types, as these often lead to better outcomes. Several
socio-demographic properties have been identified as strongly connected with
both cancer's clinical dynamics and (indirectly) with different individual
check-up behaviors. Unfortunately, existing check-up policies typically
consider only the former association explicitly. In this work, we propose a
novel framework, accompanied by a high-resolution computer simulation, to
investigate and optimize socio-demographic-based SMS reminder campaigns for
cancer check-ups. We instantiate our framework and simulation for the case of
bladder cancer, the 10th most prevalent cancer today, using extensive
real-world data. Our results indicate that optimizing an SMS reminder campaign
based solely on simple socio-demographic features can bring about a
statistically significant reduction in mortality rate compared to alternative
campaigns by up to 5.8%. | Elizaveta Savchenko, Ariel Rosenfeld, Svetlana Bunimovich-Mendrazitsky | 2023-05-04T19:55:50Z | http://arxiv.org/abs/2305.03126v1 | Optimizing SMS Reminder Campaigns for Pre- and Post-Diagnosis Cancer Check-Ups using Socio-Demographics: An In-Silco Investigation Into Bladder Cancer
###### Abstract
Timely pre- and post-diagnosis check-ups are critical for cancer patients, across all cancer types, as these often lead to better outcomes. Several socio-demographic properties have been identified as strongly connected with both cancer's clinical dynamics and (indirectly) with different individual check-up behaviors. Unfortunately, existing check-up policies typically consider only the former association explicitly. In this work, we propose a novel framework, accompanied by a high-resolution computer simulation, to investigate and optimize socio-demographic-based SMS reminder campaigns for cancer check-ups. We instantiate our framework and simulation for the case of bladder cancer, the 10th most prevalent cancer today, using extensive real-world data. Our results indicate that optimizing an SMS reminder campaign based solely on simple socio-demographic features can bring about a statistically significant reduction in mortality rate compared to alternative campaigns by up to 5.8%.
**Keywords:** Cancer; Check-Up Reminders; Socio-clinical dynamics; Healthcare policy management; Bladder cancer.
## 1 Introduction
Cancer is a generic name for a wide range of diseases in which cells in the human body grow and reproduce uncontrollably, resulting in a broad spectrum of clinical conditions and complications and commonly leading to low quality of life and early death [1]. In addition to its potentially deadly clinical consequences, cancer is also associated with poor quality of life and a substantial economic burden on patients, their families, and healthcare systems as a whole [2, 3]. The exact causes of cancer are yet to be fully understood, but a combination of genetic and environmental factors, including socio-demographic ones, is known to be strongly linked with the onset and progression of the disease [4].
There are many different types of cancer and each one may have a different set of risk factors and causes. Nonetheless, it is generally acknowledged that the early detection of the disease (via pre-diagnosis check-ups) and its appropriate monitoring for recurrence (via post-diagnosis check-ups) are pivotal in determining treatment options, reducing treatment costs, improving quality of life, and, arguably most important, lowering mortality rates across all patient groups [5, 6, 7, 8]. Thus, developing and implementing proper cancer check-up policies, both Pre- and Post-Diagnosis (PPD), is crucial [9]. Unfortunately, determining an optimal PPD check-up policy for a given individual is still an open, yet active, area of research [10, 11, 12, 13]. For example, [14] reviewed multiple policies for breast cancer PPD check-ups and found that the current policies lack clinical or economic
evidence as to their effectiveness from a healthcare service provider (HSP) perspective. In a similar manner, [15] performed an evaluation of 20 check-up policies for breast cancer by considering the various costs associated with implementing these policies and the potential subsequent medical costs. The authors found that policies that are specifically tailored to different age groups result in significantly better outcomes. Accordingly, in order to derive socio-demographic-based PPD check-up policies, researchers commonly use mathematical models, a practice which has proved to be very powerful [16, 17, 18, 19, 20]. That is, researchers rely on data-driven models, usually trained with machine learning algorithms, and optimization techniques to derive approximated or optimal PPD check-up policies in a fast, secure, and affordable manner [21, 22, 23, 24].
Unfortunately, an optimized PPD check-up policy need not necessarily be followed by all individuals alike [25, 26]. That is, the real-world effectiveness of a PPD check-up policy strongly depends on individual compliance [27] which, in turn, is known to be strongly linked to one's socio-demographic characteristics [28]. For example, [29] showed that compliance with colorectal cancer screening is significantly higher in women than in men and changes non-linearly with age. In order to increase individual compliance, particularly in high-risk patient groups, various stakeholders such as HSPs and governmental agencies have been implementing diverse compliance-increasing strategies such as health education programs, taxation, discount offers, and SMS reminder campaigns [30, 31]. These strategies differ in their effectiveness, costs, and operational overhead. However, SMS reminder campaigns are often considered to be very effective, cheap, flexible, and operationally simple to implement compared to the mentioned alternatives. For example, [32] reviewed seven research projects concerning SMS reminder campaigns in Africa, concluding that vaccination reminders led to improvements in vaccination uptake under various metrics, whether through an increase in vaccination coverage, a decrease in dropout rates, an increase in completion rates, or a decrease in delays in vaccination. In particular, [33] showed that SMS campaigns gain clinical benefits similar to other approaches such as home visits while being significantly cheaper and much more scalable. [34] showed that SMS reminders can be used to reduce health and social inequity while providing better clinical outcomes for patients suffering from Human Immunodeficiency Virus. Unfortunately, to the best of our knowledge, existing cancer PPD check-up policies are currently accompanied by naive "one-size-fits-all" SMS reminder campaigns, where all patients are treated the same.
In this work, we propose a novel framework, accompanied by a high-resolution computer simulation, to investigate and optimize a socio-demographic-based SMS reminder campaign for cancer PPD check-ups. Our framework can be instantiated to any type of cancer and PPD policy, for which an optimal socio-demographic-based SMS reminder campaign is approximated through a Monte Carlo optimization technique. Fig. 1 shows a schematic view of the proposed model's structure, input, and objective.
We instantiate our framework and provide an in-depth _in silico_ investigation into Bladder Cancer (BC). BC is the 10th most common cancer worldwide, with more than half a million new cases yearly and 200 thousand associated deaths in 2018 [35]. A more recent report reveals 34 thousand BC-related deaths during 2021 and 90 thousand new cases in the United States alone [36], indicating a growing trend in both metrics. Similar trends are found in many other types of cancer as well [36]. BC is also associated with a high recurrence rate, invasive surveillance strategies, and high treatment costs, which combine to make it the single most expensive cancer to manage in both England and the United States [37]. As such, BC is a prime candidate for the implementation of compliance-increasing strategies such as SMS reminder campaigns.
The rest of the paper is organized as follows: Section 2 formally presents the framework, followed by Section 3, which outlines its implementation for the case of BC. Finally, in Section 4, we analyze and discuss the results as well as propose possible future work directions.
## 2 Framework
Our proposed framework consists of several interconnected components which are detailed and discussed below. First, we define the clinical dynamics of cancer's onset and progression in the context of PPD check-ups and treatment. Then, we formalize the challenge of determining a PPD check-up policy as a resource-bounded optimization task. Based on these two components, we formulate the SMS reminder campaign optimization task and propose a Monte Carlo optimization technique that is shown to converge to a near-optimal solution given enough computational resources. Then, we propose a fitting procedure that sets the parameters of an instance of the framework using historical data and facilitates the fitting of values for unavailable parameters that agree with realistic scenarios. Last, we detail how the different components are assembled into a single framework.
### Clinical dynamics
Individuals are categorized into one of 10 clinical-oncological statuses (denoted by their \(\alpha\) parameter): healthy \((H)\), sick at phase \(j\) (\(S_{j}\) such that \(j\in\{1,2,3,4\}\)), recovered from phase \(j\) (\(R_{j}\) such that \(j\in\{1,2,3,4\}\)), and dead \((D)\), such that \(N=H+S_{1}+S_{2}+S_{3}+S_{4}+R_{1}+R_{2}+R_{3}+R_{4}+D\) where \(N\) is the population's size at a given point in time. Individuals in the first (healthy) status were never diagnosed with cancer (\(H\)). If the individual never gets sick with cancer, s/he eventually dies naturally after \(\gamma\) steps in time and transforms to the dead (\(D\)) status. Healthy individuals can perform a pre-diagnosis check-up either by following the existing pre-diagnosis policy or due to symptoms. We assume that if the individual suffers from symptoms, s/he will choose to perform a pre-diagnosis check-up regardless of the PPD policy. A policy-based pre-diagnosis check-up will result in one of three outcomes: either indicating that the individual is healthy (\(H\)) or that s/he has cancer of phase 1 or 2 (\(S_{1},S_{2}\)). Note that cancer of phase 3 must include significant symptoms that are assumed to be noticeable by a medically untrained patient, such as extreme pain. In a similar manner, a check-up due to symptoms will either result in a non-cancer diagnosis (i.e., healthy (\(H\)) from a cancer perspective - potentially indicating a non-oncological disease), or a cancer diagnosis with either phase 2, 3, or 4 (\(S_{2},S_{3},S_{4}\)). Once an individual is diagnosed, treatment takes place immediately. Each individual has a personal recovery duration and probability of recovery, according to their socio-demographic properties and cancer phase. If the individual dies during the treatment, s/he transforms into the dead (\(D\)) status, and mortality due to the disease is recorded. Otherwise, the individual recovers and transforms to the corresponding recovery phase \((S_{j}\to R_{j})\). Similar to healthy individuals, any other individual eventually dies naturally after \(\gamma\) steps in time and transforms into the dead (\(D\)) status. Here, we assume that getting sick with cancer does not affect the individual's life expectancy if s/he recovers from it. Similar to healthy individuals, recovered individuals can perform post-diagnosis check-ups following the post-diagnosis policy and/or due to symptoms. Here, both check-up types may result in a healthy outcome (i.e., the individual remains in the same clinical status) or in a cancer outcome (i.e., recurrent cancer) of phase \(k\in\{1,2,3,4\}\) [38, 39]. During the illness, the transition from one phase to the consecutive one, until death, follows socio-demographic-based dynamics indicated by \(T_{1\to 2}\), \(T_{2\to 3}\), \(T_{3\to 4}\), and \(T_{4\to D}\). Fig. 2 provides a schematic view of the clinical statuses and the flow between them as a result of PPD check-ups.
Figure 1: A schematic view of the framework’s input and objective and how it interacts with the current components in the dynamic.
### PPD check-up policy
Each individual in the population \(p\in P\) is represented by a timed finite state machine [40] as follows: \(p:=(\alpha,\tau,\mu,\rho,e,g,s,\gamma)\) where \(\alpha\in\{H,S_{1},S_{2},S_{3},S_{4},R_{1},R_{2},R_{3},R_{4},D\}\) is the current clinical status of the individual, \(\tau\in\mathbb{N}\) is the time elapsed since the last change of the clinical status \((\alpha)\), \(\mu\in[0,1]\) is the probability that the individual will naturally comply with the pre-diagnosis check-up (without any compliance-increasing strategy), \(\rho\in\mathbb{R}^{+}\) is the individual's degree of openness or susceptibility to compliance-increasing strategies, \(e\in\mathbb{N}\) is the individual's age, \(g\in\{male,\ female\}\) is the individual's gender, \(s\in\{1,2,\ldots,10\}\) is the relative socio-economic tenth percentile of the individual, and \(\gamma\in\mathbb{N}\) is the number of time steps until the individual dies naturally.
From a socio-demographic perspective, the model uses \(12\) parameters indexed by the tuple \((e,g,s)\): \(T_{1\to 2},T_{2\to 3},T_{3\to 4},T_{4\to D},b,\gamma,\psi_{i},\psi_{r}^{j},\delta_{i},\delta_{r}\), where \(\psi_{i}\in[0,1]\) is the probability that an individual would get sick with cancer for the first time, \(\psi_{r}^{j}\in[0,1]\) is the probability that the individual would get sick with recurrent cancer after recovering from phase \(j\in\{1,2,3,4\}\), \(\delta_{i}\) is the duration delta between two consecutive pre-diagnosis check-ups recommended to that individual, and \(\delta_{r}\) is the duration delta between two consecutive post-diagnosis check-ups recommended to that individual. We assume that the life expectancy of an individual, \(\gamma\), is dependent on the socio-economic status (\(s\)) and gender (\(g\)) alone.
It is important to note that we consider PPD check-up policies to be mere _recommendations_ that cannot be enforced on any individual.
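To make this representation concrete, the following minimal Python sketch encodes an individual as a timed finite state machine with the tuple \((\alpha,\tau,\mu,\rho,e,g,s,\gamma)\); the class, field, and method names are our own illustrative choices and do not come from a released implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    """Clinical-oncological statuses (the alpha parameter)."""
    H = auto()                                            # healthy, never diagnosed
    S1 = auto(); S2 = auto(); S3 = auto(); S4 = auto()    # sick, phases 1-4
    R1 = auto(); R2 = auto(); R3 = auto(); R4 = auto()    # recovered from phases 1-4
    D = auto()                                            # dead

@dataclass
class Individual:
    """Timed finite state machine p = (alpha, tau, mu, rho, e, g, s, gamma)."""
    alpha: Status      # current clinical status
    tau: int           # time steps since the last status change
    mu: float          # baseline probability of complying with a check-up
    rho: float         # susceptibility to compliance-increasing strategies
    e: int             # age
    g: str             # gender: "male" or "female"
    s: int             # socio-economic tenth percentile, 1..10
    gamma: int         # remaining time steps until natural death

    def advance(self, new_status: Optional[Status] = None) -> None:
        """Apply one simulation round of status bookkeeping."""
        if new_status is not None and new_status != self.alpha:
            self.alpha, self.tau = new_status, 0
        else:
            self.tau += 1
        self.gamma -= 1
        if self.gamma <= 0 and self.alpha is not Status.D:
            self.alpha = Status.D   # natural death after gamma time steps
```

The `advance` helper only captures the bookkeeping of status changes and natural death; the disease-specific transitions \(T_{1\to 2},\ldots,T_{4\to D}\) would be applied on top of it by the clinical dynamics.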
### SMS reminder campaign
SMS (Short Message Service) reminder campaigns are a popular compliance-increasing strategy in healthcare [32, 34]. The implementing agency (e.g., HSP) can send any number of reminders to any subset of patients in order to encourage them to comply with the PPD policy [41]. Formally, an SMS reminder campaign is a function, \(\Phi\), that accepts the population, represented by a set of finite state machines, over time and returns when and how many SMSs each socio-demographic group (defined by the \(e,g\), and \(s\) parameters of each individual) should get. Each SMS increases an individual's likelihood to follow the PPD policy as proposed by [32]:
\[\mu\leftarrow\mu+\rho\Big{(}c_{1}+c_{2}\big{(}log_{10}(n)-log_{10}(n-1)\big{)} \Big{)},\]
where \(c_{1}\) and \(c_{2}\) are the SMS effectiveness coefficients and \(n\) is the number of SMSs the individual received thus far. In addition, each SMS has a fixed cost \(b\in\mathbb{R}^{+}\). Since the implementing agency is limited by some budget
Figure 2: A schematic view of the clinical statuses and the transitions between them due to PPD check-ups and clinical deterioration. The optimization components of the framework are highlighted in blue.
\(B\in\mathbb{R}^{+}\) for a fixed duration \([t_{0},t_{f}]\), deriving an SMS reminder campaign can be formulated as the following resource-bounded optimization task:
\[\min_{\Phi}MR_{[t_{0},t_{f}]}(\Phi)\text{ s.t. }cost(\Phi)\leq B, \tag{1}\]
where \(MR_{[t_{0},t_{f}]}\) is a function that returns the average mortality rate during \([t_{0},t_{f}]\) and \(cost(\Phi)\) is a function that returns the total cost of an SMS reminder campaign \(\Phi\). Note that \(\Phi\) makes decisions in a discrete manner. Since \(\Phi\) cannot target a specific individual within a socio-demographic group, as the \(\mu\) and \(\rho\) parameters of each individual are not available to \(\Phi\) in realistic cases, SMSs are sent at random within each socio-demographic group in an equally distributed manner.
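For illustration, the compliance update triggered by one additional SMS can be written as the short sketch below. Treating the logarithmic gain of the first SMS (\(n=1\)) as zero and clamping \(\mu\) to \([0,1]\) are our own assumptions, added because \(\log_{10}(n-1)\) is undefined at \(n=1\) and \(\mu\) must remain a probability.

```python
import math

def updated_compliance(mu: float, rho: float, n: int, c1: float, c2: float) -> float:
    """Return the compliance probability after the n-th SMS is received.

    Implements mu <- mu + rho * (c1 + c2 * (log10(n) - log10(n - 1))),
    with the logarithmic term taken as 0 for n == 1 (assumption) and the
    result clamped to [0, 1] so it remains a probability (assumption).
    """
    log_gain = 0.0 if n <= 1 else math.log10(n) - math.log10(n - 1)
    return min(1.0, max(0.0, mu + rho * (c1 + c2 * log_gain)))
```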
In order to solve the SMS reminder campaign optimization task (Eq. (1)), we used a Monte Carlo approach [42]. Namely, we sample the SMS reminder campaign parameter space. This space takes the form of a four-dimensional tensor with one temporal dimension and three dimensions representing the age, gender, and socio-economic status of each socio-demographic group. Each value in the resulting tensor represents the relative part of the entire budget (\(B\)) allocated to SMS distribution among each socio-demographic group at each step in time. After the parameter values are set, we run the model for \(t_{f}-t_{0}\) rounds and calculate the average mortality rate. If a configuration results in an average mortality rate smaller than any previous parameter configuration, we declare this configuration to be the best one so far. Since the parameter configuration space is finite, this computational procedure is guaranteed to converge to the optimal solution as the number of samples goes to infinity [43]. Overall, for the case of stochastic processes with random functions that are piecewise convex and a discrete state space, such as the case here, it has been proven elsewhere that this optimization process reaches the optimal solution with a probability that approaches one exponentially fast with an increase in the number of simulations [44].
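A schematic sketch of this Monte Carlo search is given below. The `simulate_mortality` callable stands in for one full run of the simulator over \([t_{0},t_{f}]\) and is a placeholder, and rescaling each sampled allocation tensor so that it exactly exhausts the budget \(B\) is our own simplifying assumption.

```python
import numpy as np

def monte_carlo_campaign(simulate_mortality, budget: float, n_rounds: int,
                         n_groups: tuple, n_samples: int = 10_000, seed: int = 0):
    """Sample budget-allocation tensors and keep the one whose simulated
    average mortality rate is the lowest.

    Each candidate tensor has one temporal axis and three socio-demographic
    axes (age, gender, socio-economic status); every entry is the share of
    the budget spent on SMSs for that group at that time step.
    """
    rng = np.random.default_rng(seed)
    best_alloc, best_mr = None, float("inf")
    for _ in range(n_samples):
        raw = rng.random((n_rounds, *n_groups))
        alloc = raw / raw.sum() * budget          # enforce cost(Phi) <= B
        mr = simulate_mortality(alloc)            # average mortality over [t0, tf]
        if mr < best_mr:
            best_alloc, best_mr = alloc, mr
    return best_alloc, best_mr
```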
### Fitting procedure
In order to obtain the parameters that best fit historical records, we use the gradient descent (GD) method for the parameters' space following [45]. Formally, given the model's initial condition, the parameter space, historical data, and a loss function \(d\) we use the GD method [46] to find the parameters that minimize \(d\) on a fixed and finite duration in time \([t_{0},t_{f}]\) such that \(t_{0}<t_{f}\). Formally, let us denote the parameter space by \(\mathbb{P}\in\mathbb{R}^{\epsilon}\) where \(\epsilon\in\mathbb{N}\) is the number of parameters in the implemented framework. In addition, a specific parameter configuration is denoted by \(P\in\mathbb{P}\). We also denote the parameter configuration of the \(i_{th}\) iteration of the GD by \(P_{i}\). Since the GD is computed on the parameter space with respect to a loss function \(d\), the gradient is numerically obtained by following the five-point stencil numerical scheme [47]:
\[\nabla P_{i}:=\forall j\in[1,\ldots\epsilon]:\frac{d(P_{i}(j,-2))-8d(P_{i}(j,-1))+8d(P_{i}(j,1))-d(P_{i}(j,2))}{12h}\]
where \(P_{i}(j,k):=P_{i}(x_{1},\ldots,x_{j}+kh,\ldots,x_{\epsilon})\) and \(h\in\mathbb{R}^{+}\) is the step's size.
\[P_{i+1}\gets P_{i}-\nabla P_{i}.\]
For our case, let us assume an instance of the framework, \(M_{P}\), with the parameter configuration \(P\). In order to fit on historical data, we define a metric \(d\) between the prediction of the mortality rate and the historically recorded mortality rate:
\[d(M_{P},H):=\Sigma_{t=t_{0}}^{t_{f}}|MR(M_{P},t)-H(t)|, \tag{2}\]
where \(H\) is the historically recorded mortality rate and \(MR\) is a function that takes the simulation's state \(M_{P}\), defined by the distribution of the individuals' clinical statuses, together with a point in time \(t\in\mathbb{N}\), and returns the framework's average predicted mortality rate at that point in time. We define \(d\) to be the mean absolute error of the predicted mortality rate since this is the metric that one, presumably, wishes to minimize using a designated SMS reminder campaign.
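A compact sketch of this fitting step is shown below; the loss `d` is expected to wrap a full simulation run compared against the historical mortality series, and both the stencil step `h` and the learning rate `eta` are illustrative values, since the formulation above leaves the descent step size implicit.

```python
import numpy as np

def five_point_gradient(d, params: np.ndarray, h: float = 1e-3) -> np.ndarray:
    """Numerical gradient of the loss d at `params` using the five-point stencil."""
    params = np.asarray(params, dtype=float)
    grad = np.zeros_like(params)
    for j in range(params.size):
        e_j = np.eye(1, params.size, j).ravel()   # unit vector along parameter j
        f = lambda k: d(params + k * h * e_j)
        grad[j] = (-f(2) + 8 * f(1) - 8 * f(-1) + f(-2)) / (12 * h)
    return grad

def fit_parameters(d, params0: np.ndarray, eta: float = 0.05, n_steps: int = 300) -> np.ndarray:
    """Gradient descent on the parameter space: P_{i+1} <- P_i - eta * grad d(P_i)."""
    params = np.array(params0, dtype=float)
    for _ in range(n_steps):
        params -= eta * five_point_gradient(d, params)
    return params
```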
### Assembling the components into a single framework
The framework has a global and discrete clock that all the individuals in the population follow. Namely, let us define each step in time as a round \(t\in\{1,\ldots,T\}\), where \(T<\infty\). In the first round (\(t=1\)), the population is created to satisfy a pre-defined co-distribution of socio-demographic and clinical properties. The PPD check-up policy is defined and fixed at this stage as well. Then, at each round \(t\geq 1\), each individual follows the PPD policy with their personal probability \(\mu\), according to their clinical status \(\alpha\). Afterward, the clinical dynamics are executed for each individual in the population, in a random order. Right after, the SMS reminder campaign is activated based on the population's state, sending, if any, SMSs to each socio-demographic group according to \(\Phi\). In addition, the population naturally grows at a rate \(b\in\mathbb{R}\) proportional to its current size. All newborn individuals are assumed to preserve the population's socio-demographic co-distribution as closely as possible and are set to be healthy (\(\alpha\gets H\)). Fig. 3 shows a schematic view of the framework's components and the interactions between them.
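The per-round ordering described above can be summarised with the following schematic loop; each `*_step` argument is a caller-supplied callable standing in for the corresponding component (PPD compliance, clinical dynamics, SMS campaign, population growth), so the function names here are placeholders rather than a released API.

```python
import random

def run_simulation(population: list, policy_step, clinical_step,
                   campaign_step, birth_step, t_final: int) -> list:
    """One possible ordering of the per-round steps of the framework."""
    for t in range(1, t_final + 1):
        for person in population:
            policy_step(person, t)                     # follow the PPD policy with prob. mu
        random.shuffle(population)                     # clinical dynamics in random order
        for person in population:
            clinical_step(person, t)                   # phase transitions, recovery, death
        campaign_step(population, t)                   # Phi: per-group SMS volumes for round t
        population.extend(birth_step(population, t))   # growth, newborns set to healthy
    return population
```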
## 3 Investigation Into Bladder Cancer
The following analysis consists of three parts: First, we outline the implementation of the above generic framework for the case of BC in the United States. Then, we propose five candidate SMS reminder campaigns targeted at minimizing the expected mortality rate. Last, we explore the characteristics of the best-performing SMS reminder campaign and its statistical relationship to its underlying socio-demographic characteristics.
### BC Implementation
For realising the proposed framework, several parameters have to be set. Since most of the relevant data is available for the United States, it is the focus of our analysis. We rely on the following sources [48, 49, 50, 51, 52, 53, 54, 55], which were accrued, integrated, and pre-processed for our needs and made available as a data file in the supplementary material. Specifically, in order to obtain the population's growth rate (\(b\)), we use data on the United States population's growth between 1950 and 2020, fitting a standard exponential smoothing time-series forecasting model [56]. The window size for this forecasting model is obtained using the grid search method, ranging between
Figure 3: A schematic view of the framework’s components and the interactions between them. Recall that the socio-demographic groups are defined by the individual’s age (\(e\)), gender (\(g\)), and socio-economic status (\(s\)). These properties, along with clinical risk, determine the PPD check-up policy and the accompanying SMS reminder campaign. All personal features together are reflected in the clinical dynamics.
2 and 25 years [57] and aiming to minimize the mean absolute error. For the life expectancy, \(\gamma\), we use the United States average life expectancy as reported by the United Nations2, ranging between 1950 and 2020. In order to get the life expectancy divided into gender and socioeconomic status, we used the life expectancy gender differences reported by [58] and the socioeconomic status differences reported by [59]. When the socioeconomic status is divided into tenths, this division results in 20 time-series with 12 constraints - 10 for the socioeconomic status, one for the gender differences, and one to agree with the average reported life expectancy. Since there are more unknowns than constraints, there are infinitely many possible solutions to this computational task. We find a feasible solution using the least mean square method [60], obtaining a time-series function for the life expectancy for each gender and socioeconomic status separately in the same way the population's growth rate was obtained.
Footnote 2: [https://www.macrotrends.net/countries/USA/united-states/life-expectancy](https://www.macrotrends.net/countries/USA/united-states/life-expectancy)
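One possible reading of this fitting step is sketched below: simple exponential smoothing applied over a trailing window whose length is grid-searched between 2 and 25 years to minimise the mean absolute one-step-ahead error. The smoothing constant and the exact windowing scheme are our own assumptions rather than details taken from the paper.

```python
import numpy as np

def window_forecasts(series: np.ndarray, window: int, alpha: float = 0.5) -> np.ndarray:
    """One-step-ahead exponential-smoothing forecasts over a trailing window."""
    preds = np.empty(len(series) - window)
    for i in range(window, len(series)):
        level = series[i - window]
        for x in series[i - window + 1 : i]:
            level = alpha * x + (1 - alpha) * level
        preds[i - window] = level
    return preds

def best_window(series: np.ndarray, windows=range(2, 26), alpha: float = 0.5) -> int:
    """Grid-search the window size that minimises the mean absolute error."""
    mae = lambda w: np.mean(np.abs(window_forecasts(series, w, alpha) - series[w:]))
    return min(windows, key=mae)
```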
As the underlying PPD check-up policy, we consider the recommendation of the American Cancer Society3. Specifically, individuals are encouraged to perform a pre-diagnosis check-up once a year following the age of 45 (none before that) and post-diagnosis check-ups once a year for those recovered from phase 1 or 2, or twice a year for those recovered from phase 3 or 4 _regardless of age_. That is, people of all ages may be encouraged to perform check-ups and thus get SMS reminders. In order to find the SMS effectiveness coefficients (\(c_{1},c_{2}\)), we used the data reported by [32] and fitted it using the least mean square method [60]. In addition, we averaged the SMS sending cost of five leading SMS providers in the US, as manually sampled in 2023, obtaining an average SMS cost of 0.049 US dollars.
Footnote 3: [https://www.cancer.org/](https://www.cancer.org/)
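The least-squares fit of the effectiveness coefficients can be sketched as follows; here `gains` would hold per-SMS compliance increases (already divided by \(\rho\)) digitised from the data reported in [32], and all variable names are illustrative.

```python
import numpy as np

def fit_sms_coefficients(n_sms: np.ndarray, gains: np.ndarray):
    """Least-squares estimate of (c1, c2) from observed per-SMS compliance gains.

    gains[k] is the compliance increase attributed to the n_sms[k]-th reminder,
    modelled as gains ~ c1 + c2 * (log10(n) - log10(n - 1)).
    """
    dlog = np.where(n_sms > 1,
                    np.log10(n_sms) - np.log10(np.maximum(n_sms - 1, 1)), 0.0)
    design = np.column_stack([np.ones_like(dlog), dlog])
    (c1, c2), *_ = np.linalg.lstsq(design, gains, rcond=None)
    return c1, c2
```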
For initialization, we used the age, gender, and socioeconomic data from the US in 2022 as reported by the US Census Bureau4. Since the data is not provided as the cross of all three properties (i.e., the number of individuals for each combination of age, gender, and socioeconomic status), we assume the age and gender distributions for each socioeconomic group are identical. This assumption is known to be false but it is commonly adopted due to a lack of finer-grained data at a publicly available level [61]. Overall, we include 333 million individuals in the initial condition, divided into 140 socio-demographic groups (i.e., two gender groups, seven age groups, and ten socioeconomic groups). In addition, we assume a budget of ten million dollars. We set \(t_{0}=0\) and \(t_{f}=365\) and each round, \(t_{i}\), to be a single day.
Footnote 4: [https://www.census.gov/topics/population/age-and-sex/data/tables.html](https://www.census.gov/topics/population/age-and-sex/data/tables.html)
### Parameter fitting
Given the framework's initial condition and available parameter values (see Section 3.1), we used the proposed fitting procedure (see Section 2.4) in order to obtain the remaining parameter values that best align with historical data. Notably, as part of this process, we find the values of \(\mu\) and \(\rho\) across the population. This is important as these values are not readily available and can only be approximated by fitting a model that describes the dynamics of the observed historical data. Fig. 4 shows the model's MAE from the historical data (red line), as defined by Eq. (2), and the coefficient of determination (\(R^{2}\)) (green line). Due to the stochastic nature of the framework, the results are shown as the mean of \(n=50\) simulations. We fitted both signals using the SciMed symbolic regression model [62], obtaining: \(MAE(i)=329i^{-0.384}\) and \(R^{2}(i)=0.517+0.104\ln(i)\) with \(R^{2}=0.969\) and \(R^{2}=0.967\), respectively, where \(i\in\{10,20,\ldots,290,300\}\) is the optimization step's index. One can notice that the optimization process converges to an MAE of around \(9.5\cdot 10^{7}\) with an \(R^{2}\) of \(0.85\).
### SMS reminder campaigns
We compare five SMS reminder campaigns with increasing levels of sophistication: "None", Naive, Greedy, Naive Monte Carlo, and Socio-demographic Monte Carlo. The first, the "None" campaign, indicates that there are no SMS reminders sent at all. The Naive campaign treats all individuals as if they belong to the same (single) socio-demographic group. As such, the Naive campaign treats individuals differently only based on the PPD check-up policy and their clinical state (\(\alpha\)). The Greedy campaign takes into consideration the mortality rate associated with each socio-demographic group such that the allocated budget to each group is proportional to its relative contribution to the overall mortality rate in the entire population. Namely, the Greedy campaign optimizes the SMS reminder campaign in every single step in time, ignoring the need to optimize for the entire duration \([t_{0},t_{f}]\).
Finally, we evaluate two variants of the Monte Carlo optimized SMS reminder campaigns - a Naive one, which does not consider the socio-demographic characteristics of each individual and thus it only optimizes for the timing and volume of reminders sent to each individual based on their clinical statuses (like the Naive campaign); and a socio-demographic one which applies the socio-demographic-based Monte Carlo optimization technique (as formally described in Section 2.3). Both variants were trained for 10,000 instances before being applied in the following analysis.
Fig. 5 presents the comparison of these five SMS reminder campaigns. The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulations. One can notice that the Monte Carlo socio-demographic campaign (the right-most column) results in the lowest average mortality rate. In order to validate this outcome statistically, we computed an ANOVA test [63] with a one-sided T-test post-hoc correction [64], finding that this campaign is indeed statistically significantly better than the alternative campaigns (\(p<0.01\)).
### Campaign analysis
We further explore the best-performing SMS reminder campaign and analyze the relative amount of resources invested in each socio-demographic group. Fig. 6 shows the average yearly number of SMSs an individual in each socio-demographic group would get based on the Monte Carlo socio-demographic SMS reminder campaign, divided into male and female heatmaps. The figure shows that, generally speaking, older individuals with higher socio-economic status require more resources. In particular, a sharp shift in resource allocation is observed around
Figure 4: The fitting process. The MAE (red line) decreases as a function of the simulation steps while the \(R^{2}\) (green line) is increasing. The results are shown as the mean of \(n=50\) simulations.
age 45 and the 40th percentile of socio-economic status. In addition, for most of the age and socio-economic statuses, females get more reminders. Similar results were obtained when testing budgets of two, five, fifteen, and eighteen million dollars, with less than a five percent difference in the relative number of SMSs an individual in each socio-demographic group receives between the two- and eighteen-million-dollar budgets.
## 4 Discussion
In this study, we proposed a novel framework and simulation that allows for the optimization and investigation of SMS reminder campaigns for cancer check-ups. Considering BC in the US as a representative example, we implemented the framework based on real-world historical data, and derived a Monte Carlo optimized SMS reminder campaign based on individuals' socio-demographic characteristics. The resulting campaign is shown to favorably compare with sensible alternatives.
Our results first demonstrate the presumed adequacy of our proposed framework, at least in the case of BC. As can be observed in Fig. 4, after fitting on historical data, the implemented framework can explain up to \(85\%\) of the BC-related mortality variance. Using the currently practiced PPD check-up policy, a near-optimal socio-demographic SMS reminder campaign is found to be superior to several alternatives as presented in Fig. 5. This result generally agrees with the previous research on this subject [25, 26]. Namely, taking socio-demographic data into consideration leads to better performance of an SMS reminder campaign. Moreover, campaigns that operate
Figure 5: Average mortality rate across the five examined SMS reminder campaigns. The results are shown as the mean \(\pm\) standard deviation of \(n=100\) simulations.
for a long duration and consider the population's clinical distribution seem to perform more favorably than greedy approaches, which spend the SMS reminder campaign's budget quickly, raising much awareness but only over a short period of time.
The derived campaign was also analyzed to identify the more targeted sub-populations. Fig. 6 suggests that the resource distribution is biased towards females and older individuals with a higher socioeconomic status. First, a sharp increase in the average yearly number of SMSs an individual gets occurs when crossing the age of 45. This shift can be easily explained by the practiced PPD check-up policy, which indicates that pre-diagnosis check-ups are recommended only for individuals older than 45 years old. As such, up to this age group, the SMS reminder campaign is only required for post-diagnosis testing reminders. In addition, one can notice that the number of SMSs increases with age, as the number of cancer-recovered individuals also increases with age, owing both to more new cases and to more individuals who have recovered and remained healthy since. The slight increase towards higher socio-economic statuses can be associated with the \(\mu\) and \(\rho\) parameters of these subpopulations. Namely, individuals of higher socio-economic status may be associated with higher opportunity costs of attending a check-up and therefore require more SMSs on average, as also suggested by [65]. Taken jointly, our results suggest that socio-demographic-aware SMS reminder campaigns for cancer PPD check-ups could prove extremely valuable even under strict budget constraints.
This study has important limitations which offer fruitful avenues for future research. First, it is assumed that the PPD check-ups are perfect and produce 100% accurate and reliable results as to an individual's clinical status. Unfortunately, this is not true for most clinical tests in general, and oncological tests in particular [66]. Thus, our presented results should be treated as slightly over-optimistic relative to the expected realistic outcomes. In future work, we intend to tackle this shortcoming by integrating PPD check-up accuracy data. Second, alternative optimization techniques could be implemented to derive optimal SMS campaigns or potentially reduce the computational burden of obtaining a near-optimal one. Similarly, the lack of self-explainability of the resulting campaign should be tackled in order to promote its acceptance by stakeholders [67, 68, 69, 70, 71, 72]. Finally, when considering BC specifically, an individual's occupation, smoking habits, and other contextual characteristics are known to be strong indicators for the risk of developing the disease [73, 74]. Thus, integrating these and similar features, alongside the age, gender, and socioeconomic status already incorporated into the framework, might help to obtain even better outcomes.
Figure 6: The average yearly number of SMSs an individual in each socio-demographic group would obtain based on the Monte Carlo socio-demographic SMS reminder campaign.
## Declarations
### Funding
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
### Conflicts of interest/Competing interests
None.
### Data availability
The data that has been used in this study is available by a formal request from the authors.
|
2310.00699 | Pianist Identification Using Convolutional Neural Networks | This paper presents a comprehensive study of automatic performer
identification in expressive piano performances using convolutional neural
networks (CNNs) and expressive features. Our work addresses the challenging
multi-class classification task of identifying virtuoso pianists, which has
substantial implications for building dynamic musical instruments with
intelligence and smart musical systems. Incorporating recent advancements, we
leveraged large-scale expressive piano performance datasets and deep learning
techniques. We refined the scores by expanding repetitions and ornaments for
more accurate feature extraction. We demonstrated the capability of
one-dimensional CNNs for identifying pianists based on expressive features and
analyzed the impact of the input sequence lengths and different features. The
proposed model outperforms the baseline, achieving 85.3% accuracy in a 6-way
identification task. Our refined dataset proved more apt for training a robust
pianist identifier, making a substantial contribution to the field of automatic
performer identification. Our codes have been released at
https://github.com/BetsyTang/PID-CNN. | Jingjing Tang, Geraint Wiggins, Gyorgy Fazekas | 2023-10-01T15:15:33Z | http://arxiv.org/abs/2310.00699v1 | # Pianist Identification Using Convolutional Neural Networks
###### Abstract
This paper presents a comprehensive study of automatic performer identification in expressive piano performances using convolutional neural networks (CNNs) and expressive features. Our work addresses the challenging multi-class classification task of identifying virtuoso pianists, which has substantial implications for building dynamic musical instruments with intelligence and smart musical systems. Incorporating recent advancements, we leveraged large-scale expressive piano performance datasets and deep learning techniques. We refined the scores by expanding repetitions and ornaments for more accurate feature extraction. We demonstrated the capability of one-dimensional CNNs for identifying pianists based on expressive features and analyzed the impact of the input sequence lengths and different features. The proposed model outperforms the baseline, achieving 85.3% accuracy in a 6-way identification task. Our refined dataset proved more apt for training a robust pianist identifier, making a substantial contribution to the field of automatic performer identification. Our codes have been released at [https://github.com/BetsyTang/PID-CNN](https://github.com/BetsyTang/PID-CNN).
performer identification, expressive piano performance, deep neural networks
## I Introduction
Performers, with their individual phrasing, dynamics, and interpretive choices, bring their personal artistry to each piece they play, resulting in distinguishable styles. Researchers who focus on studying expressive musical performances have been investigating computational models for performer identification [1, 2, 3, 4, 5]. A reliable pianist identifier holds great potential not only for studying the styles of different performers, but also for various applications in music education, music information retrieval and smart musical instruments [6]. As an illustration, a pianist identification model could aid piano students wishing to emulate the performances of virtuoso pianists. With an upsurge in embedded devices, the vision of smart musical systems--ones that can discern different performers or styles and provide real-time feedback or adjustments--becomes closer to reality. Imagine a smart piano capable of tailoring its settings to mirror the nuances of iconic pianists, or a wearable accessory that offers pianists instant feedback, juxtaposing their performance against the masterpieces of legendary artists. Networked musical instruments could use style information or the features extracted by the proposed system in educational, retrieval or networked performance contexts, similar to those proposed by Turchet et al. in [7]. These groundbreaking applications will not only resonate with the principles of the Internet of Musical Things (IoMusT) [8] and the Internet of Audio Things (IoAuT) [9] but also elevate their potential, transforming basic devices into dynamic musical instruments with intelligence in the context of the Internet of Sounds (IoS) [10].
Automatic performer identification is usually regarded as a multi-class classification task where the system is designed to infer the performer of the given music performance. Early studies [1, 2] mainly applied traditional machine learning algorithms such as K-means clustering, decision trees, and discriminant analysis to this task. More recent research [3, 4] calculated the KL-divergence between performers' feature distributions and identified performer by performing similarity estimation based on the KL-divergence. Zhao et al. [11] utilised transfer learning for classifying violinists, adopting pre-trained models for music tagging and singer identification. With the emergence of large-scale expressive piano performance datasets [12, 13], two projects [5, 11, 12] recently applied deep learning techniques to pianist identification task. Rafee et al. [5] proposed a RNN-based hierarchical neural network for pianist identification. Zhang et al. [12] has applied convolutional neural networks (CNNs) to a 16-way pianist identification task, achieving less than 50% accuracy. However, this work paid insufficient attention to extracting expressive features which have been proven effective for deep neural networks that model expressiveness and performance styles of pianists [5, 14].
This paper details our exploration of the potential of CNNs in identifying virtuoso pianists using various expressive features. We obtained a subset consisting of both performance and score midis from the ATEPP dataset, refining the scores by extending the repetitions and ornaments in the corresponding midis, thus generating the most comprehensive and accurate dataset currently available for pianist identification. We conducted experiments to investigate the effectiveness of different expressive features and the impact of input sequence
lengths. The proposed one-dimensional CNN surpassed the baseline model [5], attaining an 85.3% accuracy for a 6-way identification task. In addition, our dataset was shown to be more suitable for training a robust pianist identifier compared to the one proposed previously [5].
The rest of this paper is organised as follows: Section II elaborates on the methodology, providing details of the dataset, the feature extraction process, and the model architecture. Section III outlines the experiment set-ups employed for model training. Section IV discusses the experiment results and the ensuing discussions. Lastly, Section V concludes the paper.
## II Methodology
### _Dataset_
As discussed by Rafee et al. [5], the lack of large datasets containing multiple performances of the same compositions by different pianists has limited the investigation of deep neural networks for pianist identification. However, the recently proposed expressive piano performance midi dataset, ATEPP [12], enabled us to create subsets which are balanced in the number of performances for six virtuoso pianists: Alfred Brendel, Claudio Arrau, Daniel Barenboim, Friedrich Gulda, Sviatoslav Richter, and Wilhelm Kempf. In our research, we consider two subsets, as shown in Table I:
1. _ID-400_: we created an updated version of the proposed subset by Rafee et al. [5] by removing corrupted transcription results as well as repeated performances following the latest version of the ATEPP dataset1. Footnote 1: [https://github.com/BetsyTang/ATEPP](https://github.com/BetsyTang/ATEPP)
2. _ID-1000_: we chose a larger subset containing more compositions and performances by the same pianists to increase robustness and verify the capability of our model.
All movements in both subsets are by Beethoven or Mozart. Each movement corresponds to at least one performance by each pianist, making it possible to compare the differences in performance style of each individual performer. In order to maintain similar data distributions in the training, validation, and testing sets, we divided the datasets according to the number of performances of a composition by each pianist. To achieve an 8:1:1 train-valid-test split, we followed Algorithm 1 to assign performances to the _Train_, _Valid_ and _Test_ subsets. Algorithm 1 is designed to guarantee that each split contains at least one performance of the composition by a performer, especially when there are fewer than 10 performances by that performer.
```
# Let \(C\) be the set of compositions, \(P\) be the set of pianists.
# Info returns the composition and pianist of a performance \(i\).
# Count gives the number of performances in a set \(S\).
# RandomSplit randomly splits a set \(S\) of size \(n\) into subset \(a\) of size \(rn\)
#   and subset \(b\) of size \((1-r)n\), where \(r\in[0,1)\).
# Random generates a number in \([0,1)\) following the uniform distribution.
# \(\leftarrow\) means "assigned to".
for (\(c\), \(p\)) in (\(C\), \(P\)) do
    \(n=\) Count\((I)\), where \(i\in I\) if and only if Info\((i)=(c,p)\)
    if \(n\leq 1\) then
        \(Train\gets I\)
    else if \(n=2\) then
        \(a,b=\) RandomSplit\((I,r=1/n)\)
        \(m=\) Random\(()\), \(Train\gets b\)
        if \(m\leq 0.5\) then \(Valid\gets a\)
        else if \(m>0.5\) then \(Test\gets a\)
        end if
    else if \(3\leq n\leq 9\) then
        \(a,b=\) RandomSplit\((I,r=\frac{1}{n})\)
        \(b,c=\) RandomSplit\((b,r=\frac{1}{n-1})\)
        \(Valid\gets a\), \(Test\gets b\), \(Train\gets c\)
    else if \(10\leq n\) then
        \(a,b=\) RandomSplit\((I,r=\frac{4}{5})\)
        \(b,c=\) RandomSplit\((b,r=\frac{1}{2})\)
        \(Train\gets a\), \(Valid\gets b\), \(Test\gets c\)
    end if
end for
```
**Algorithm 1** Data Splitting
### _Score and Performance Alignment_
Inspired by previous research [1, 3, 5] focusing on pianist identification, we used an alignment algorithm proposed by Nakamura et al. [15] to establish correspondences between performance midi data and score midi data, which allowed us to extract performance-related features. While the algorithm exhibited promising results in most cases, it demonstrated limited capability in handling annotated repetitions and ornaments found in the scores. To address this limitation, we manually expanded the repetitions and added ornament notes to the score midi files, thereby enhancing the accuracy of the alignment results. The improved alignment results more accurately captured the nuances of performances, aiding in distinguishing among performers.
After performing the alignments, we proceeded to filter out two types of discrepancies: _missing notes_ (representing notes present in the scores but not successfully aligned to performances) and _extra notes_ (representing notes present in performances but not successfully aligned to scores). Then we quantified the extent of information loss caused by the
alignment algorithm for each performance, as captured by Equation 1:
\[\textit{Loss of Information}=\frac{N_{e}}{N_{p}}\times 100\%, \tag{1}\]
where \(N_{e}\) denotes the number of extra notes and \(N_{p}\) refers to the total number of notes in the performance.
The distributions of information loss in the datasets _ID-400_ and _ID-1000_ are presented in Fig. 1. Our analysis reveals that more than 95% of performances in both datasets exhibit less than 15% information loss.
### _Feature Extraction_
After aligning the performances and scores, we extracted input features following the process outlined in the study by Rafee et al. [5]. We derived deviations between the scores and performances for note-wise features, encompassing aspects such as timing and velocity. Beyond considering feature deviations, we also incorporated the original note-wise features as part of our input data. A full list of the features used in our experiments is summarised in Table II. Two note-level features are defined as follows: _Inter-onset Interval_ (IOI), representing the temporal duration between the onset times of two consecutive notes, and _Offset Time Duration_ (OTD), signifying the time interval between the offset time of a note and the onset time of its subsequent note. To process the features into suitable input for our model, we organized them into sequences, preserving the order of the notes. These sequences were then stacked together to create the final input. The resulting shape of the input would be (_batch size_, _sequence length_, _number of features_), as shown on the left side of Fig. 2.
To examine the performance of our model under circumstances of limited information, we divided the sequences into segments of varying lengths respectively. This allowed us to gauge the model's capacity to manage scenarios with limited data availability, detailed further in Section IV.
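A minimal sketch of this input-preparation step is given below: note-ordered feature rows are split into fixed-length segments and stacked into the (batch size, sequence length, number of features) layout. Zero-padding the final incomplete segment is our own assumption.

```python
import numpy as np

def build_segments(note_features: np.ndarray, seg_len: int) -> np.ndarray:
    """Split one performance into fixed-length segments of notes.

    note_features has shape (num_notes, num_features), with rows kept in
    note order; the result has shape (num_segments, seg_len, num_features),
    with the last segment zero-padded when needed (assumption).
    """
    n_notes, n_feats = note_features.shape
    n_segs = int(np.ceil(n_notes / seg_len))
    padded = np.zeros((n_segs * seg_len, n_feats), dtype=note_features.dtype)
    padded[:n_notes] = note_features
    return padded.reshape(n_segs, seg_len, n_feats)
```

Concatenating the segments of all performances in a split then yields the batched input tensor described above.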
### _Model Architecture_
In light of the promising performance demonstrated by Convolutional Neural Networks (CNNs) in various classification tasks across different domains, we proposed a novel one-dimensional CNN model to address the pianist identification task. The architecture was determined through an empirical grid search, focusing on structural hyperparameters such as the number of layers and kernel size. The model architecture, depicted in Fig. 2, encompasses five convolutional layers followed by one dense layer, strategically designed to efficiently process the input data. All convolution layers are followed by a ReLU activation and a batch normalization layer. Dropout layers are added in order to avoid overfitting.
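A PyTorch sketch consistent with this description is given below; the channel widths, kernel size, dropout rate, and the use of global average pooling before the dense layer are illustrative placeholders, since only the number and type of layers are stated here.

```python
import torch
import torch.nn as nn

class PianistCNN(nn.Module):
    """Five 1-D convolution blocks (Conv1d -> ReLU -> BatchNorm -> Dropout)
    followed by a single dense layer producing one logit per pianist."""

    def __init__(self, n_features: int = 13, n_classes: int = 6,
                 channels=(64, 128, 128, 256, 256), kernel_size: int = 5,
                 dropout: float = 0.3):
        super().__init__()
        blocks, in_ch = [], n_features
        for out_ch in channels:
            blocks += [nn.Conv1d(in_ch, out_ch, kernel_size, padding=kernel_size // 2),
                       nn.ReLU(),
                       nn.BatchNorm1d(out_ch),
                       nn.Dropout(dropout)]
            in_ch = out_ch
        self.encoder = nn.Sequential(*blocks)
        self.head = nn.Linear(in_ch, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence length, features) -> (batch, channels, sequence length)
        h = self.encoder(x.transpose(1, 2))
        return self.head(h.mean(dim=-1))   # average over the time axis, then classify
```

Averaging over the time axis lets the same network accept both fixed-length segments and full performances of varying length.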
## III Experiments
We implemented our model using PyTorch [16], and monitored and recorded the experimental progress through the use of Wandb [17]. To achieve optimal model performance, we conducted an extensive hyperparameter tuning process using grid search. We specifically focused on parameters such as learning rate, weight decay, batch size, and the number of training epochs. This process was enhanced by leveraging the powerful capabilities of Wandb Sweeps. Consequently, our model underwent training with a batch size of 16 for a total of 1500 epochs, employing the Adam optimizer with an initial learning rate set to 8e-5 and a weight decay rate of 1e-7.
Our proposed model has only 6.1 million trainable parameters, showcasing remarkable efficiency. On average, a single experiment on a GeForce RTX 2080 Ti GPU takes
Fig. 1: Boxplots of information loss caused by the alignment process in _ID-400_ and _ID-1000_ datasets
Fig. 2: Model architecture of the proposed one-dimensional CNN
approximately 1.2 hours. This duration stands in stark contrast to the significantly lengthier training times encountered in the context of RNN-based hierarchical models, as proposed by Rafee et al. [5].
## IV Results
To thoroughly evaluate our proposed CNN model in addressing the pianist identification task, we conducted three studies. These studies examined the impacts of variable input sequence lengths, the diverse expressive features, and the datasets on the model's performance. To ensure a reliable assessment of the model, each experiment was repeated three to five times under consistent experimental settings. For a more straightforward comparison with the state-of-the-art [5], both Study I and II were conducted using the _ID-400_ dataset.
### _Study I: Effect of Varying Input Music Sequence Lengths_
The reliable identification of a pianist necessitates stable performance regardless of variations in the length of the musical input. We embarked on a series of experiments using all the features delineated in Section II-C to train our model. Experiments were conducted on complete musical pieces and segments of varying lengths, utilizing the _ID-400_ dataset. Mean values along with standard deviations pertaining to accuracy and F1-score for each experiment are tabulated in Table III. As inferred from the outcomes, our model demonstrated uniformly high performance when dealing with sequences comprising 1000 notes or fewer. However, incorporating the full scope of performances substantially bolstered the model's performance as opposed to relying solely on performance segments. Furthermore, our model surpassed the benchmark set by the state-of-the-art RNN-based hierarchical model [5] when we integrated more features into the training at both piece-wise and segment-wise levels. Our model attained a commensurate level of accuracy when trained with the same number of features as their study.
### _Study II: Effect of Different Input Features_
In order to investigate the impact of various input features, we selected five feature combinations and executed corresponding experiments on each group. These combinations are displayed in Table IV, where **D** symbolizes the usage of the deviation feature as a replacement for the original note-wise feature.
**C1** embodies 7 original note-wise features; **C2** omits the singular frequency-based feature, pitch, from **C1**; **C3** comprises only deviation features; **C4** replicates the same combination used in the study [5]; while **C5** incorporates all available features. Experiments were conducted on the _ID-400_ dataset utilizing music segments of 1000 notes. The mean accuracy from five iterations along with the standard deviation for each feature combination experiment is detailed in Table V.
The results highlight negligible differences when employing either note-wise features or deviation features in isolation for training the model. Incorporation of all the features collectively yields the optimal performance, suggesting that the model is more adept at identifying a performer's style when given the full set of related features. The comparison between **C1** and **C2** groups suggests that the frequency-based feature does not make a significant contribution to the identification process. Concurrently, the outcomes provide further evidence that the combination of velocity, duration, and IOI deviations proves to be a more reliable choice when solely utilizing deviation features, as discussed in [5].
### _Study III: Comparison between ID-400 with ID-1000_
Despite the implementation of a carefully designed data splitting algorithm, the relatively small size of the subset _ID-400_ impedes the creation of training, testing, and validation sets that maintain similar data distributions. Employing the
same algorithm, we generated five varied data splits for both the _ID-400_ and _ID-1000_ datasets, each of which underwent model testing. Table VI presents the average test accuracy across all data splits, alongside the highest accuracy achieved by the best models on both datasets. Experiments were conducted using sequences of 1000 notes and 13 features. The outcomes, as outlined in Table VI and Fig. 3, reveal that training on the larger _ID-1000_ dataset yields a model that is less sensitive to alterations in data splits, thereby improving the robustness in identifying the six pianists.
## V Conclusion
We presented our investigation of the application of convolutional neural networks to the pianist identification task. Our proposed convolutional neural network model shows promising results in identifying virtuoso pianists. Three studies were conducted, analysing the effects of varying input sequence lengths, the utilization of diverse expressive features, and the impacts of different datasets on the model's performance. Our findings suggest that our model performs best when handling complete musical performances rather than fragments, outperforming the state-of-the-art with 85.3% accuracy when integrating a larger set of features into the training phase. Our model uses fewer computational resources, leading to significant time savings during the training process compared with the state-of-the-art. In addition, training on our proposed larger _ID-1000_ dataset resulted in a model less sensitive to alterations in data splits, thereby improving the robustness in identifying the six pianists.
Our model serves as an exemplar for embedded systems that aspire to decode and respond to nuanced musical cues. Just as voice-operated devices discern users' vocal nuances, our proposed model distinguishes pianists based on their expressive nuances. There are numerous further applications of the technology in the IoS and IoMusT contexts, including the population of music-related ontologies [18, 19, 20, 21] with performer identity or style-related information.
Future work could extend these findings, utilising the proposed model to develop identifiers for more pianists. Such extensions will offer a more comprehensive understanding of pianist-specific performance characteristics, and enrich the applications of the current system. It would also be beneficial to evaluate the model's generalization abilities on unseen compositions.
|
2307.03987 | A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of
LLMs by Validating Low-Confidence Generation | Recently developed large language models have achieved remarkable success in
generating fluent and coherent text. However, these models often tend to
'hallucinate' which critically hampers their reliability. In this work, we
address this crucial problem and propose an approach that actively detects and
mitigates hallucinations during the generation process. Specifically, we first
identify the candidates of potential hallucination leveraging the model's logit
output values, check their correctness through a validation procedure, mitigate
the detected hallucinations, and then continue with the generation process.
Through extensive experiments with GPT-3.5 (text-davinci-003) on the 'article
generation task', we first demonstrate the individual efficacy of our detection
and mitigation techniques. Specifically, the detection technique achieves a
recall of ~88% and the mitigation technique successfully mitigates 57.6% of the
correctly detected hallucinations. Importantly, our mitigation technique does
not introduce new hallucinations even in the case of incorrectly detected
hallucinations, i.e., false positives. Then, we show that the proposed active
detection and mitigation approach successfully reduces the hallucinations of
the GPT-3.5 model from 47.5% to 14.5% on average. We further demonstrate the
effectiveness and wide applicability of our approach through additional studies
including performance on different types of questions (multi-hop and false
premise questions) and with another LLM from a different model family (Vicuna).
In summary, our work contributes to improving the reliability and
trustworthiness of large language models, a crucial step en route to enabling
their widespread adoption in real-world applications. | Neeraj Varshney, Wenlin Yao, Hongming Zhang, Jianshu Chen, Dong Yu | 2023-07-08T14:25:57Z | http://arxiv.org/abs/2307.03987v2 | _A Stich in Time Saves Nine_: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
###### Abstract
Recently developed large language models have achieved remarkable success in generating fluent and coherent text. However, these models often tend to 'hallucinate' which critically hampers their reliability. In this work, we address this crucial problem and propose an approach that actively detects and mitigates hallucinations during the generation process. Specifically, we first identify the candidates of potential hallucination leveraging the model's logit output values, check their correctness through a validation procedure, mitigate the detected hallucinations, and then continue with the generation process. Through extensive experiments with GPT-3.5 (text-davinci-003) on the 'article generation task', we first demonstrate the individual efficacy of our detection and mitigation techniques. Specifically, the detection technique achieves a recall of \(\sim 88\%\) and the mitigation technique successfully mitigates \(57.6\%\) of the correctly detected hallucinations. Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives. Then, we show that the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3.5 model from \(47.5\%\) to \(14.5\%\) on average. We further demonstrate the effectiveness and wide applicability of our approach through additional studies including performance on different types of questions (multi-hop and false premise questions) and with another LLM from a different model family (Vicuna). In summary, our work contributes to improving the reliability and trustworthiness of large language models, a crucial step en route to enabling their widespread adoption in real-world applications.
## 1 Introduction
Recently developed large language models such as GPT-3 Brown et al. (2020), InstructGPT Ouyang et al. (2022), PaLM Chowdhery et al. (2022), LLaMA Touvron et al. (2023), and several others Taori et al. (2023); Scao et al. (2022); Wei et al. (2022); Wang et al. (2022) have achieved remarkable performance on a wide range of language understanding tasks. Furthermore, they have been shown to possess an impressive ability to generate fluent and coherent text. Despite all these abilities, **their tendency to 'hallucinate' critically hampers their reliability and limits their widespread adoption in real-world applications**.
Hallucination in the context of language models refers to the generation of text or responses that seem syntactically sound, fluent, and natural but are factually incorrect, nonsensical, or unfaithful to the provided source input Maynez et al. (2020); Holtzman et al. (2020); Ji et al. (2023); Koehn and Knowles (2017). These hallucinations can lead to serious consequences such as spreading of misinformation and violation of privacy. Thus, **in this work, we focus on the crucial problem of '_addressing_' large language models' hallucinations.**
We propose to actively 'detect' and 'mitigate' hallucinations during the generation process. This is crucial as we show that when a generated sentence is hallucinated, the chances of hallucination in the subsequently generated sentences increase. Thus, actively detecting and mitigating hallucinations is also important to prevent the propagation of hallucinations in the subsequently generated sentences. We divide our approach into two stages, Detection and Mitigation.

Figure 1: Comparing percentage of hallucinations (on the ‘article generation task’) in the output of GPT-3.5 (text-davinci-003) and our proposed active detection and mitigation approach.
In the **hallucination detection** stage, we first identify the candidates of potential hallucination, i.e., the key 'concepts' of the generated sentence. Next, leveraging the logit output values of the model, we calculate model's 'uncertainty' on the identified concepts. We demonstrate that this uncertainty provides a signal for hallucination. However, we note that this is an additional signal and not a necessary requirement for our approach. Then, we check the correctness of the 'uncertain' concepts through a validation procedure where we: (a) create a query that tests the correctness of the information pertaining to the concept, (b) retrieve knowledge relevant to the validation question, (c) answer the validation question leveraging the retrieved knowledge, and verify the corresponding information in the generated sentence to detect hallucinations.
This is followed by the **hallucination mitigation** stage in which we 'repair' the potentially hallucinated sentence using the retrieved knowledge as evidence. Figure 2 illustrates the key steps of our approach. Furthermore, we conduct a systematic and wide study exploring multiple techniques to achieve the objective of each of the steps. Importantly, we show that simply instructing the model achieves the corresponding objectives of these steps.
We design an experimental setup where we prompt the model to write about topics from diverse domains such as sports, politics, music, literature, etc. Then, we annotate the correctness of the first five generated sentences for each topic. We first demonstrate the individual efficacy of our detection and mitigation techniques. Specifically, the **detection technique achieves a recall of \(\sim\) 88\(\%\)** and the **mitigation technique successfully mitigates \(57.6\%\) of the correctly detected hallucinations**. Importantly, our mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives. Then, we show that **the proposed active detection and mitigation approach successfully reduces the hallucinations of the GPT-3.5 (text-davinci-003) model from \(47.5\%\) to \(14.5\%\) on average** (Figure 1). We further demonstrate the effectiveness and wide applicability of our approach in addressing hallucinations through **three additional studies**: (1) Using another LLM from a different model family (Vicuna-13B), (2) Adapting the approach to answer multi-hop questions, and (3) Assessing it on the 'false premise questions'.

Figure 2: Illustration of our proposed approach for addressing LLMs’ hallucination problem. Given an input, we iteratively generate sentences from the model and actively detect and mitigate hallucinations. In the detection stage, we first **identify the important concepts, calculate model’s uncertainty** on them, and then **validate the correctness** of the uncertain concepts **by retrieving relevant knowledge**. In the mitigation stage, we **repair the hallucinated sentence** using the retrieved knowledge as evidence. Finally, we append the repaired sentence to the input (and previously generated sentences) and continue generating the next sentence. We show that this procedure not only mitigates current hallucination but also prevents its propagation in the subsequently generated sentences.
## 2 Approach
### Overview
We propose to actively detect hallucinations and mitigate them during the generation process. This is crucial as we show that **a generated sentence is hallucinated more often when the model has already hallucinated in its previously generated sentences for the input** (Section 3.1.1). Similarly, a generated sentence is relatively less often hallucinated when the model has not hallucinated in its previously generated sentences. Thus, actively detecting hallucinations and mitigating them is also important to prevent the propagation of further hallucinations in subsequently generated sentences. To this end, we iteratively generate sentences through the model and actively detect and mitigate hallucinations. Figure 2 illustrates the key steps of our approach.
In section 2.2, we detail the steps of our hallucination detection approach, i.e., identifying the important 'concepts' of the generated sentence, i.e., the candidates of potential hallucination (2.2.1), calculating model's uncertainty on the concepts using the logit output values (2.2.2), and checking the correctness by creating validation query (2.2.3), finding relevant knowledge (2.2.4), and verifying information leveraging the retrieved knowledge (2.2.5). We describe various techniques to achieve the objective of each of these steps and also elaborate on several important points such as using a 'self-inquiry' method to answer validation questions without using an external knowledge source and the trade-off between executing the validation procedure in parallel for all the concepts and in sequential order based on their 'uncertainty'. For each step, we also **indicate the most preferred technique with (*)** and provide our justification.
In section 2.3, we detail our hallucination mitigation approach. Specifically, we 'repair' the hallucinated sentence by removing or substituting the hallucinated information leveraging the retrieved knowledge as evidence, and can also utilize the retrieved knowledge as context (prepended to the input) to generate the next sentence.
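The following is a minimal sketch of this sentence-level generate, validate, and repair loop. It is illustrative rather than the authors' implementation: `generate_next_sentence`, `detect_hallucination`, and `repair_sentence` are hypothetical helpers standing in for the detection and mitigation steps detailed in Sections 2.2 and 2.3.

```python
from typing import Callable, List, Tuple

def active_generate(
    prompt: str,
    generate_next_sentence: Callable[[str], str],
    detect_hallucination: Callable[[str], Tuple[bool, str]],
    repair_sentence: Callable[[str, str], str],
    max_sentences: int = 5,
) -> str:
    """Generate a response one sentence at a time, validating each sentence
    before it is appended to the context used to generate the next one."""
    context = prompt
    generated: List[str] = []
    for _ in range(max_sentences):
        sentence = generate_next_sentence(context)
        is_hallucinated, evidence = detect_hallucination(sentence)
        if is_hallucinated:
            # Repair with the retrieved knowledge so that the corrected sentence,
            # not the hallucinated one, conditions the subsequent generations.
            sentence = repair_sentence(sentence, evidence)
        generated.append(sentence)
        context = context + " " + sentence
    return " ".join(generated)
```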
### Hallucination Detection
#### 2.2.1 Identify Key Concepts
In the first step, we identify the important concepts from the generated sentence. We identify these concepts because validating the correctness of the entire sentence at once is infeasible; this is because a sentence may contain a number of different facets all of which can not be validated at once. On the other hand, individually validating the correctness corresponding to the concepts provides opportunities for accurately detecting hallucinations. Thus, the objective of this step is to identify the candidates of potential hallucination. We note that a concept or keyphrase is essentially a span of text consisting of one or more words. We study the following techniques to identify the concepts:
**Entity Extraction:** Entities are usually an important part of a sentence, thus, we use an off-the-shelf entity extraction model to identify the concepts. A limitation of this method is that a concept need not necessarily be an entity and can be a non-entity span also. We address this limitation with a keyword extraction model.
**Keyword Extraction:** To also identify the non-entity concepts, we explore an off-the-shelf keyword extraction model1. This model uses Keyphrase Boundary Infilling with Replacement (KBIR) as its base model and fine-tunes it on the KPCrowd dataset (Kulkarni et al., 2021).
Footnote 1: [https://huggingface.com/ml6team/keyphrase-extraction-kbir-kpcrowd](https://huggingface.com/ml6team/keyphrase-extraction-kbir-kpcrowd)
***Instructing the Model*:** Since state-of-the-art language models perform remarkably well on a wide range of tasks, in this technique, we directly instruct the model to identify the important concepts from the generated sentence. An important characteristic of this technique is that it doesn't require calling a task-specific tool (entity or keyword extraction model) for this task.
Table 7 (in Appendix A.1) illustrates examples of concepts identified using the three techniques. It shows that the entity extraction model misses many important concepts while the keyword extraction model identifies a lot of insignificant concepts also. In contrast, the instruction technique successfully identifies all the important concepts. Moreover, it doesn't require calling a task-specific tool. Thus, we represent this technique with (*), our preferred technique for this step.
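As a concrete illustration of the instruction technique, a concept-identification step might look like the sketch below. The prompt wording and the `llm` helper (a thin wrapper that maps a prompt string to the model's text completion) are our own assumptions and not the exact prompt used by the authors.

```python
from typing import Callable, List

def identify_concepts(sentence: str, llm: Callable[[str], str]) -> List[str]:
    """Instruct the model to list the key concepts (keyphrases) of a sentence."""
    prompt = (
        "Identify all the important keyphrases in the following sentence and "
        "return them as a comma-separated list.\n\n"
        f"Sentence: {sentence}\n"
        "Keyphrases:"
    )
    response = llm(prompt)
    # Each returned keyphrase is a candidate of potential hallucination.
    return [c.strip() for c in response.split(",") if c.strip()]
```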
#### 2.2.2 Calculate Model's Uncertainty
GPT-3 Brown et al. (2020) and several other publicly available models also provide logit output values in their prediction response. Thus, we study if these logit output values can be utilized to detect hallucinations. However, we note that this is an additional source of information and not a necessary requirement for our hallucination detection method as some models that are available only via API calls do not provide these logit output values.
Recall that a concept can consist of more than one token also (note that the model provides logit output values at the level of tokens); thus, we study three different techniques for calculating a **probability score** for a concept. Consider a concept consisting of \(n\) tokens and having the maximum softmax probabilities as \(p_{1},p_{2},p_{3},...,p_{n}\) for the \(n\) token positions respectively. We obtain these probabilities by applying the softmax function over the logit values for each token position. We study the following techniques:
Average of Token Probabilities: In this technique, we simply take the average of the probabilities of the tokens corresponding to the concept:
\[\text{score}=\text{AVG}(p_{1},p_{2},...,p_{n})\]
Normalized Product of Token Probabilities: Here, we take a normalized product of the probabilities of the tokens:
\[\text{score}=(p_{1}\times p_{2}\times...\times p_{n})^{1/n}\]
*Minimum of Token Probabilities*: Here, we take the minimum of probabilities as the score.
\[\text{score}=\text{MIN}(p_{1},p_{2},...,p_{n})\]
This is our preferred technique for this step as the other techniques average out the effect of model's uncertainty on the tokens while low probability in even one token of the concept provides a strong evidence of the model being uncertain. For example, if the model is uncertain on the name of the USA president then its uncertainty on the first token ('Joe') would be high but on the next token ('Biden') would be very low as the token 'Joe' is frequently followed by the token 'Biden'. Thus, averaging or normalizing the probabilities will have a limited capability to capture this signal.
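The three aggregation techniques can be written down directly, as in the small self-contained sketch below. Token probabilities are assumed to come from exponentiating the per-token log-probabilities that APIs such as text-davinci-003 optionally return.

```python
import math
from typing import List

def concept_score(token_probs: List[float], method: str = "min") -> float:
    """Aggregate the max-softmax probabilities of a concept's tokens into one score."""
    if method == "avg":
        return sum(token_probs) / len(token_probs)
    if method == "norm_product":
        # Normalized (geometric-mean) product of the token probabilities.
        return math.prod(token_probs) ** (1.0 / len(token_probs))
    if method == "min":
        # Preferred: one low-probability token is already strong evidence of uncertainty.
        return min(token_probs)
    raise ValueError(f"unknown method: {method}")

# A two-token concept such as 'Joe Biden' where the model is uncertain about
# the first token but (conditionally) confident about the second.
probs = [0.35, 0.98]
print(concept_score(probs, "avg"))           # 0.665 -- averages the uncertainty away
print(concept_score(probs, "norm_product"))  # ~0.586
print(concept_score(probs, "min"))           # 0.35  -- preserves the uncertainty signal
```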
Through our experiments (Section 3.1.2), we show that this score (especially 'MIN') indeed provides a signal for hallucination, i.e., the more uncertain a model is on a concept (low probability score), the more likely it is to be hallucinating about that concept. However, we note that this score is just a signal for hallucination and in no way provides a guarantee for presence of hallucinations. We utilize this signal and check for hallucinations with respect to the uncertain concepts using our validation procedure (2.2.3-2.2.5).
In the absence of logit output values: For models that do not provide the logit output values, all or some heuristically selected concepts (depending on the computational and latency budget of the system) can be passed to the validation stage for detecting hallucinations.
#### 2.2.3 Create Validation Question
We start the validation procedure for a concept by creating a question that tests the correctness of the information (in the generated sentence) pertaining to the concept. We create **Yes/No Questions**, i.e., questions for which the answer is either a 'Yes' or a 'No'. Table 8 shows examples of validation questions. For creating these questions, we explore the following two techniques:
Question Generation Tool: Here, we use an off-the-shelf answer-aware question generation model.
*Instructing the Model*: Here, we directly instruct the model to create a validation question checking the correctness of the information about the selected concept. For the same reason as in the concept identification step, this is our preferred technique as it does not require calling a task-specific tool.
We note that instead of Yes/No questions, **Wh-questions** can also be used for validation. We prefer Yes/No questions as it is relatively easier to check the answer for these questions. We explore Wh-questions in a case study on answering multi-hop questions (Section 4.2).
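A hedged sketch of the instruction-based variant is given below; the prompt is illustrative and `llm` is the same hypothetical completion wrapper as before.

```python
from typing import Callable

def create_validation_question(sentence: str, concept: str,
                               llm: Callable[[str], str]) -> str:
    """Ask the model for a Yes/No question that checks the information
    stated about `concept` in `sentence`."""
    prompt = (
        f"Sentence: {sentence}\n"
        f"Concept: {concept}\n"
        "Write a Yes/No question that verifies whether the information about "
        "the concept in the sentence is factually correct.\n"
        "Question:"
    )
    return llm(prompt).strip()
```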
#### 2.2.4 Find Relevant Knowledge
*Web Search*: In order to answer the validation question, we retrieve knowledge relevant to it, which serves as additional context. For generality and wide coverage, we use web search (via the Bing search API) for retrieving this knowledge. However, we note that any other search API or knowledge corpus can also be utilized for this purpose.
Self-Inquiry: We also explore a self-inquiry technique where we directly prompt the model to answer the validation question. In this technique, the model relies on its parametric knowledge to answer the validation question. This technique has several drawbacks as compared to web search, such as the lack of a reliable strategy to extract the parametric knowledge from the model and the staleness of the parametric knowledge.
Note that the proposed knowledge retrieval step in our approach has several benefits, such as (a) it does not retrieve knowledge when it is not required, i.e., when the model is already sufficiently confident (since we show it is less likely to hallucinate in such scenarios), (b) it individually retrieves knowledge pertinent to the concept(s) on which the calculated probability score is low thus providing it sufficient and relevant context for accurate validation / mitigation.
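A minimal retrieval helper using the Bing Web Search v7 REST endpoint could look as follows. The endpoint, header name, and response fields below reflect that API's public documentation rather than the authors' code, the `api_key` is a user-supplied subscription key, and any other search API or local knowledge corpus could be substituted.

```python
import requests

def retrieve_evidence(query: str, api_key: str, top_k: int = 3) -> str:
    """Retrieve web snippets relevant to a validation question."""
    resp = requests.get(
        "https://api.bing.microsoft.com/v7.0/search",
        headers={"Ocp-Apim-Subscription-Key": api_key},
        params={"q": query, "count": top_k},
        timeout=10,
    )
    resp.raise_for_status()
    pages = resp.json().get("webPages", {}).get("value", [])
    # The concatenated snippets serve as context for answering the validation question.
    return "\n".join(page.get("snippet", "") for page in pages[:top_k])
```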
#### 2.2.5 Answer Validation Question
In this step, we prompt the model to answer the validation question (leveraging the retrieved knowledge as context) and verify its response. If the validation procedure succeeds for all the uncertain concepts then we continue generating the next sentence; otherwise, we interrupt the generation process, mitigate the potential hallucination in the sentence, and then continue generation.
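Putting the retrieved knowledge to use, the answering-and-verification step can be sketched as below. The prompt is illustrative; a 'No' answer flags the concept as potentially hallucinated, and `llm` is the hypothetical completion wrapper used in the earlier sketches.

```python
from typing import Callable

def validate_concept(validation_question: str, evidence: str,
                     llm: Callable[[str], str]) -> bool:
    """Answer the Yes/No validation question from the retrieved evidence.
    Returns True if the concept passes validation, False if it is flagged."""
    prompt = (
        f"Context: {evidence}\n"
        f"Question: {validation_question}\n"
        "Answer the question with Yes or No based only on the context.\n"
        "Answer:"
    )
    answer = llm(prompt).strip().lower()
    return answer.startswith("yes")
```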
Order of Validation of Concepts: Validation of different concepts can be done in a sequence (in ascending order of their calculated probability score) or in parallel. However, running this in parallel would require starting multiple threads which may not be supported by all machines. Thus, in this work, we study only the sequential validation strategy but note that it can be made more efficient by running it in parallel. We regard this sequential validation as a greedy exiting strategy as we proceed to the mitigation stage on detection of the first potential hallucination.
### Hallucination Mitigation
For mitigating the hallucination in the generated sentence, we instruct the model to repair the generated sentence by either removing or substituting the hallucinated information using the retrieved knowledge as evidence. Table 6 shows the instructional prompts for different steps of our approach.
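A sketch of such a repair instruction is shown below; the wording is ours and only stands in for the actual prompt in Table 6.

```python
from typing import Callable

def repair_sentence(sentence: str, evidence: str,
                    llm: Callable[[str], str]) -> str:
    """Rewrite the sentence so that information unsupported by (or contradicting)
    the evidence is removed or substituted."""
    prompt = (
        f"Evidence: {evidence}\n"
        f"Sentence: {sentence}\n"
        "Rewrite the sentence so that it is consistent with the evidence, "
        "removing or substituting any hallucinated information.\n"
        "Rewritten sentence:"
    )
    return llm(prompt).strip()
```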
**Note:** We note that the result of the validation procedure is contingent on the retrieved knowledge and the model's ability to leverage that knowledge in answering the validation question. Thus, a case is plausible in which the validation procedure reports hallucination even though the sentence is actually not hallucinated. However, in Section 3.2, we show that our approach performs fairly well on this task. Moreover, it achieves a very high recall, demonstrating its efficacy at detecting hallucinations. Furthermore, in Section 3.3, we show that our mitigation approach does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives.
### Design Decisions
Why is the task of addressing hallucinations broken down into several steps? We note that dealing with the hallucination problem is a complex task and prior work has shown that breaking down a complex task into simpler sub-tasks helps the model in solving the task (Wei et al., 2022b; Zhou et al., 2023; Khot et al., 2023). Thus, we break down this task into individual sub-tasks which are considerably easier for the model. For the same reason, we also break down the validation procedure into several steps.
Why is validation done using web search? Our preferred technique for retrieving knowledge is web search because the web is more likely to contain the updated knowledge in comparison to a knowledge corpus whose information can become stale, outdated, and obsolete.
Why "active" detection & mitigation and not "post-hoc" after complete response generation?We note that our detection and mitigation techniques can also be applied in a "posthoc" manner after complete response generation. However, it has several limitations which are addressed by our "active" approach. The "active" approach prevents the propagation of hallucinations in the subsequently generated sentences, i.e., if hallucination is detected in the initially generated sentences then
it would be mitigated and course correction would be done for the subsequently generated sentences. However, the "post-hoc" approach does not provide such an opportunity of course correction. In other words, in the "active" approach, the model sees the mitigated / corrected sentences while generating the subsequent sentences; thus, its output will be more correct, coherent, and fluent. In contrast, in the "posthoc" approach, the generated sentences are based on the initially generated previous sentences and thus the mitigated sentence will not be able to influence the generation of subsequent sentences; thus, the output would not be as coherent and fluent as the active approach.
Impact on Inference Cost: Our approach results in improvements in the form of reduced hallucinations and thus makes the model more reliable; however, it comes at the expense of increased inference cost. Nevertheless, we believe that at the current time, to enable the widespread adoption of LLMs, it is more important to address their reliability and trustworthiness concerns because computational advancements are ongoing at a rapid pace. Moreover, even larger models with multi-fold times more parameters such as PaLM (540B) Chowdhery et al. (2022), Gopher (280B) Rae et al. (2021), and MT-NLG (530B) Smith et al. (2022) are also being developed which have even higher inference cost, showcasing a larger focus of the community on developing better performing systems. However, we note that our approach can be made more efficient by various techniques discussed before, such as validating concepts in parallel and executing these intermediate steps using a smaller low-cost model.
## 3 Experiments and Results
In this section, we first demonstrate the two findings that motivate our approach (3.1.1 and 3.1.2). Then, we show the individual efficacy of our hallucination detection and mitigation techniques in 3.2 and 3.3, respectively. Finally, in 3.4, we show the effectiveness of the proposed active detection and mitigation approach in addressing hallucinations.
Data and Annotation: In our experimental setup, we prompt the large language model (GPT-3.5: text-davinci-003) to write about various topics. Specifically, we use a total of \(150\) topics from diverse domains. Figure 3 shows the distribution of different domains in our topic set. In each domain, we include different kinds of topics; for instance, Sports domain consists of sports persons, administrators, teams, and games, Music consists of musicians, songs, music labels, and bands, Politics includes politicians, political parties, and elections, Film & TV includes actors, TV personalities, shows, and movies, History includes historians and events, etc. For selecting the names of people, we use randomly sampled names from the top 20% of longest articles in WikiBio dataset Lebret et al. (2016) as done in Manakul et al. (2023). Similarly, for the other topics, we randomly sample from the longest Wikipedia articles. This is done to ensure that no obscure or ambiguous concept is selected.
Equipped with the list of topics, we give the following input prompt to the model: "Write an article about <topic>" for each topic. Following this, we (the authors) manually annotate the correctness of the first five sentences generated by the model for each topic. For annotating the correctness, we look at search results from the web to find the relevant knowledge that either supports or contradicts the information present in the generated sentence. In some cases, multiple web searches were required to check the correctness of different facets of a sentence. Furthermore, in a small number of cases where we could not find information supporting or contradicting the information in the generated sentence, we mark it as a case of extrinsic hallucination. We opt for this expert annotation strategy because, despite our annotation task being a simple binary classification task, it requires considerable effort in checking the correctness of a given sentence, which cannot reliably be collected via crowdsourcing. In addition to this sentence-level annotation, we also annotate correctness at the concept level, which we detail in 3.1.2. We release both sentence-level and concept-level hallucination annotations, which will also facilitate systematic future research in this direction.

Figure 3: Distribution of instances across different domains in our topic set.
### Motivating Findings
#### 3.1.1 Hallucination Causes Further Hallucination
Recall that we consider the first five sentences generated by the model for each topic and annotate their correctness. Since the sentences are sequentially generated, we investigate the relationship between 'hallucination in a generated sentence' and 'hallucination in the previously generated sentences' for an input. Since there are two binary variables, there exist four possibilities in this relationship, i.e., a sentence is hallucinated and there was hallucination in the previously generated sentences **(A)**, the sentence is not hallucinated and there was hallucination in the previously generated sentences **(B)**, the sentence is hallucinated and there was no hallucination in the previously generated sentences **(C)**, the sentence is not hallucinated and there was no hallucination in the previously generated sentences **(D)**. For illustration, consider a sample case for sentence 3, the two binary variables are whether sentence 3 is hallucinated and whether there was hallucination in the previously generated sentences (i.e. in sentence 1 OR sentence 2). Figure 4 demonstrates this relationship for sentences 2, 3, 4 and 5 aggregated over all the topics in our data. We do not show this for sentence 1 as there is no previously generated sentence for it.
From this figure, we draw the following inferences:
(a) **A > B**: Cases A and B correspond to the scenario when there is hallucination in the previously generated sentences. It can be observed that A is considerably greater than B which implies that _when there is hallucination in the previously generated sentences, a sentence is hallucinated more often_. Moreover, the gap keeps increasing as the sentence number increases.
(b) **A > C**: Cases A and C correspond to the scenario when a generated sentence is hallucinated. It can be observed that A is greater than C which implies that _a generated sentence is hallucinated more when there is hallucination in the previously generated sentences as compared to when there is no previous hallucination_.
(c) **D > C**: Cases C and D correspond to the scenario when there is no hallucination in the previously generated sentences. Here, D is greater than C which implies that _when there is no hallucination in the previously generated sentences, a generated sentence is more often not hallucinated_.
Figure 4: Demonstrating relationship between ‘hallucination in a generated sentence’ and ‘hallucination in the previously generated sentences’. Bars A, B, C, and D correspond to the four possibilities of the relationship between the two binary variables. On the right, we mention our four inferences from the figure.
(d) **D > B**: Cases B and D correspond to the scenario when a generated sentence is not hallucinated. D is greater than B which implies that _a generated sentence is not hallucinated more when there is no previous hallucination as compared to when there is previous hallucination_.
This shows that hallucination in a sentence often results in further hallucinations in the subsequently generated sentences and thus **actively detecting and mitigating hallucinations can not only fix the current hallucination but can also prevent its propagation in the subsequently generated sentences**.
Next, we demonstrate the utility of logit output values in detecting hallucinations.
#### 3.1.2 Logit Output Values Provide a Signal for Hallucination
In this subsection, we first show the trend of hallucination with the probability score. Note that this score is calculated using the logit output values. Then, we demonstrate the benefit of identifying concepts from the generated sentence in detecting hallucinations. Finally, we compare the efficacy of different probability calculation techniques in detecting hallucinations.
Hallucination vs Probability Score: In order to study the relationship between logit output values and hallucination, we annotate correctness at concept-level also (in addition to sentence-level annotations described earlier). Specifically, for each identified concept, we mark whether the information about it in the generated sentence is hallucinated or not. This can be different from sentence-level annotation as it focuses only on the correctness of the information about the concept in the sentence. Table 9 shows examples of both sentence-level and concept-level annotations.
Figure 5 shows the trend of hallucination with our calculated probability scores at both sentence and concept levels. For a sentence, we use the minimum across tokens of all its identified concepts as the probability score and for a concept, we use the minimum across all its tokens as the probability score. It can be observed that **as the probability score increases (or uncertainty decreases), tendency to hallucinate decreases.** This shows that these probability values can be utilized as a signal for hallucination, i.e., the low probability concepts in a generated sentence can be considered as candidates of potential hallucination and their correctness in the generated sentence can be validated for detecting hallucinations. On average, we observe an absolute difference of \(\sim 0.15\) between the probabilities of concepts when the model is hallucinating vs when it is not hallucinating.
**Benefit of Identifying Concepts from a Sentence:** Now, we demonstrate the benefit of identifying concepts from a sentence and leveraging the logit output values corresponding to their tokens for detecting hallucinations. To this end, we plot precision-recall curves for the hallucination detection task corresponding to two methods that use the probabilities calculated from the logit output values. The blue curve corresponds to the technique in which we use the minimum probability across **all tokens** of the sentence and the orange curve is for the technique in which we use the minimum over **only the tokens of the identified concepts**. Figure 6 shows the two curves. The orange curve achieves a higher area under the precision-recall curve, implying that utilizing **the probabilities of the concept tokens provides a stronger signal for hallucination** as compared to the probabilities corresponding to all the tokens.

Figure 5: Trend of hallucination with the calculated probability score (Minimum technique) at both sentence and concept level. **As the probability increases, the model’s tendency to hallucinate decreases**.
**Comparing Probability Calculation Techniques:** Figure 7 shows the Precision-Recall curves for the hallucination detection task (at concept-level) using the three probability calculation techniques, i.e., Minimum, Average, and Normalized (described in 2.2.2). **The 'Minimum' technique achieves the highest area under the curve and hence is better at the hallucination detection task.**
### Hallucination Detection Performance
In this subsection, we demonstrate the hallucination detection performance of various techniques at both sentence and concept-levels.
Self-Inquiry vs Web Search: Tables 1(a) and 1(b) show the hallucination detection performance of the self-inquiry and web search techniques at sentence-level and concept-level, respectively. For sentence-level results, we predict the sentence to be hallucinated if the validation procedure fails on any identified concept. Note that in these results, we do not leverage the uncertainty score to select concepts for validation; instead, we validate all the identified concepts. We study the relationship of recall with probability thresholds in Figure 12 (in Appendix). From the tables, it can be observed that the web-search technique achieves considerably high recall in detecting hallucinations.
Here, we emphasize the high 'recall' of the web-search technique as we show that our mitigation approach does not introduce any new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives (3.3). Figure 12 plots the recall of hallucination detection against the probability threshold for the self-inquiry and web-search techniques at both sentence-level and concept-level. **Web-search is consistently and considerably better than self-inquiry.**
### Hallucination Mitigation Performance
On sentences where our validation procedure (using web search) reports hallucinations, we apply our mitigation technique. We note that a sentence which is reported as hallucinated can either be actually hallucinated or not hallucinated, i.e., it could also be a false positive. Table 2 shows the result of our method. It successfully mitigates the hallucination on \(57.6\%\) of the correctly detected hallucinations (True Positives); we refer to this metric as 'success'. Furthermore, it achieves this at minimal 'deterioration' (\(3.06\%\)), i.e., it incorrectly converts a minimal \(3.06\%\) of the non-hallucinated instances to hallucinated. Table 10 (in Appendix) shows examples where our mitigation technique successfully mitigates the hallucinations.

Table 1: Hallucination detection performance of self-inquiry and web-search techniques at both sentence and concept levels. It also shows separate precision and recall on both hallucinated and non-hallucinated instances.

Figure 6: Demonstrating the benefit of identifying concepts from a sentence for detecting hallucinations. The figure shows precision-recall curves for the sentence-level hallucination detection task corresponding to two methods that use the probabilities calculated from the logit output values. The blue curve corresponds to the technique in which we use the minimum probability across **all tokens** of the sentence and the orange curve is for the technique in which we use the minimum over **only the tokens of the identified concepts**.

Figure 7: Precision-Recall curves for the hallucination detection task (at concept-level) using the three probability calculation techniques. **The ‘Minimum’ technique achieves the highest AUC.**
**Analyzing Failures in Mitigating Hallucinations:** Table 11 (in Appendix) shows examples where our mitigation technique fails to mitigate the hallucinations. We observe that in many of the failure cases, our technique fixes some hallucinated content of the sentences but fails to fix ALL the hallucinated content from them. For instance, examples 1 and 2 in Table 11 correspond to this type of failure.
Furthermore, in some of the failure cases, our technique results in a sentence which is no longer hallucinated but is not completely related to the topic. For instance, the fourth example in Table 11 is about the topic 'Harry S. Kennedy'; the model generates "_Harry S. Kennedy was an American politician who served as the 35th President of the United States from 1961 to 1963._" which is wrong, and our mitigation technique modifies it to "_John F. Kennedy was an American politician who served as the 35th President of the United States from 1961 to 1963._" which is factually correct but not related to the topic 'Harry S. Kennedy'. This happens because the output of the mitigation step is contingent on the information in the retrieved knowledge.
### Active Detection and Mitigation
The two findings in Section 3.1 motivate our approach of addressing hallucinations in which we actively detect hallucinations leveraging the logit output values and mitigate them during the generation process to prevent their propagation. Specifically, using the calculated probability scores, we identify the uncertain concepts and check their correctness using our validation procedure. We generate one sentence at a time and when our detection method reports hallucination, we fix it using our mitigation approach and continue generating the next sentence. We demonstrated separate detection and mitigation efficacy in 3.2 and 3.3, respectively. Figure 1 compares the percentage of hallucination in the output of GPT-3.5 model and our active detection and mitigation approach. Our approach reduces the percentage of hallucinations from \(47.4\%\) to \(14.53\%\). In Figure 8, we demonstrate this comparison for different categories of hallucination. It shows that our approach reduces hallucinations for all categories.
To further demonstrate the effectiveness and wide applicability of our approach, we present three interesting additional studies. In the first study (Section 4.1), we experiment with **another large language model, Vicuna-13B**[11] and show that our approach performs well with this model also and considerably reduces the hallucinations in its output (Figure 9). We select this model since it is widely popular and publicly available to use. In the second and third studies, we adapt our approach to two different types of questions and show its effectiveness. Specifically, in Section 4.2, we adapt it to answer **multi-hop questions** and in Section 4.3, we experiment with the **false premise questions**.
| **Before** | **After** | **Percentage** |
| :-- | :-- | --: |
| Hallucinated | Not Hallucinated | 40.81% |
| Hallucinated | Hallucinated | 30.04% |
| Not Hallucinated | Not Hallucinated | 28.26% |
| Not Hallucinated | Hallucinated | 14.56% |

Table 2: Result on modifying the reported hallucinations. Our approach successfully mitigates hallucinations on \(57.6\%\) of the correctly detected hallucinations while deteriorating a minimal \(3.06\%\) of the incorrectly detected hallucinations, i.e., false positives.
Figure 8: Comparing number of hallucinations across different categories for sentences generated using GPT-3.5 and our active detection and mitigation approach.
## 4 Additional Experiments
### Efficacy with LLM from Another Family
Figure 9 compares the percentage of hallucinations (on the 'article generation task') in the output of Vicuna-13B and our proposed active detection and mitigation approach. It shows that our approach considerably reduces the hallucinations, similar to the case with the GPT-3.5 model. This study is conducted on \(10\) randomly sampled topics (i.e., \(50\) generated sentences) from the topic set described in Section 3. We note that, similar to the setup with the GPT-3.5 model where we used instructional prompts with GPT-3.5 for all the steps of the approach (such as identifying key concepts and creating validation questions), here, in this setup, we use the Vicuna-13B model itself for all those steps. This result demonstrates the generality and applicability of our approach for other models also.
### Multi-hop Questions
In this study, we show that our approach can be adapted to improve the performance on multi-hop questions. Table 12 shows examples of these questions. Recall that our approach works by mitigating hallucination / incorrectness in the sentences generated by the model. Thus, if we can enable the model to answer these multi-hop questions step by step, then our active detection and mitigation approach can be applied to these steps, leading to correct predictions. To this end, we prompt the model and provide in-context examples demonstrating how to answer a given multi-hop question step by step. Table 13 (in Appendix) shows the prompt with in-context examples used for this purpose. Specifically, for a new question, the model generates the answer in multiple steps (one step at a time) and for each step, we apply our technique in which we first identify the low probability concepts from the sentence, validate their correctness using web search results, mitigate the hallucination (if detected), and then proceed to generate the next step. In our case study, we sample \(50\) multi-hop bridge questions from the validation set of HotpotQA Yang et al. (2018) and evaluate the performance.
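The adaptation can be sketched as a step-wise loop that reuses the detection and mitigation helpers from Section 2. The prompt below is abbreviated (the actual few-shot examples of Table 13 are omitted), and the stopping condition and step granularity are simplifying assumptions.

```python
from typing import Callable, Tuple

def answer_multihop(question: str,
                    llm: Callable[[str], str],
                    detect: Callable[[str], Tuple[bool, str]],
                    repair: Callable[[str, str], str],
                    max_steps: int = 4) -> str:
    """Answer a multi-hop question one reasoning step at a time, validating and,
    if needed, repairing each step before generating the next one."""
    # In practice, a few in-context examples demonstrating step-by-step answers
    # would be prepended here, and generation would stop at sentence boundaries.
    context = f"Answer the question step by step.\nQuestion: {question}\nAnswer:"
    steps = []
    for _ in range(max_steps):
        step = llm(context).strip()
        hallucinated, evidence = detect(step)
        if hallucinated:
            step = repair(step, evidence)
        steps.append(step)
        context += " " + step
        if "the answer is" in step.lower():
            break
    return " ".join(steps)
```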
Table 3 shows examples of responses generated using our approach. Figure 10 shows the performance achieved by different methods on this task. First, it shows the performance of the GPT-3.5 model; the model answers \(54\%\) of the questions incorrectly. Then, it shows the performance of the GPT-3.5 model with in-context examples; it results in a slight improvement over the zero-shot performance. Then, it shows the performance of the model on leveraging the knowledge retrieved from the web (using the question as the search query) as context to answer the question. As expected, the model's performance improves, i.e., it results in fewer incorrect predictions. Finally, we show the performance of our active detection and mitigation approach which results in considerably fewer hallucinations (26%), i.e., higher percentage of correct answers. This demonstrates the effectiveness of our approach in improving the performance on multi-hop questions.
### False Premise Questions
Motivation and Experimental Setup: We perform this experiment because LLMs have already been shown to perform remarkably well on a wide range of 'correct' questions, i.e., questions that are factually correct and make the right assumptions (Khashabi et al., 2020; Brown et al., 2020; Zhang et al., 2022; Lourie et al., 2021; Chowdhery et al., 2022; Rae et al., 2021). However, users in real-world applications often ask questions that are based on false premises / pre-suppositions such as "Why energy is absorbed in exothermic reactions?" and "Why do floppy disks have higher storage capacity than USB drives?".

Figure 9: Comparing percentage of hallucinations (on the ‘article generation task’) in the output of **Vicuna-13B** model and our proposed approach.

Figure 10: Comparing % of hallucination on the Multi-hop Questions for GPT-3.5 model, GPT-3.5 with in-context examples, GPT-3.5 leveraging knowledge (retrieved via web search) and our approach.
We observe that state-of-the-art models often struggle to appropriately respond to such questions; thus, such questions serve as another challenging evaluation setting. To this end, we conduct a case study and compile a set of \(50\) such adversarial questions, i.e., we compile questions on which GPT-3.5 model generates an incorrect response. This is done to create a challenging experimental setup for evaluation as the model generates incorrect output for such questions. Furthermore, we also create a true premise question corresponding to each false premise question. Table 14 (Appendix) shows examples of these question pairs. For this task, we evaluate correctness at the complete answer level as this is a question answering task and the entire answer needs to be correct for the answer to be considered as correct.
Approach: We note that an ideal response from a system for such questions depends on the application. For instance, some applications may require identifying such questions and then abstaining on them, like the selective prediction systems (Varshney and Baral, 2023; Kamath et al., 2020; Xin et al., 2021; Varshney et al., 2022). Some applications, such as search engines, may additionally require suggesting a 'rectified' question and providing a response to that rectified question. Our approach supports these requirements by using the validation and mitigation step on the given input question.
Specifically, we first retrieve the relevant knowledge (via Bing Search using the question as query). Then, conditioned on the retrieved knowledge, we prompt the model to respond 'Yes' if the question makes factually correct assumptions, and otherwise respond 'No'.
Figure 11: Comparing % of hallucination on the ‘False Premise Questions’ for GPT-3.5 model, GPT-3.5 leveraging knowledge (retrieved via web search) and our approach with question rectification (lower is better).
* **Question**: _Kent Dairy Round Barn is near the city in what Montana county?_
  **Response**: The Kent Dairy Round Barn is located near Red Lodge, Montana. Red Lodge is located in Carbon County, Montana. So, the answer is ...
* **Question**: _What political party was the 6th governor of Hawaii that passed bills to help with global warming in Hawaii?_
  **Response**: The 6th governor of Hawaii was Linda Lingle. Linda Lingle was a member of the Republican Party and passed bills to help with global warming in Hawaii. So, the answer is ...
* **Question**: _In what city is the mortgage lending company, founded by Dan Gilbert, headquartered?_
  **Response**: The mortgage lending company founded by Dan Gilbert is Quicken Loans. Quicken Loans is headquartered in Detroit, Michigan. So, the answer is ...
* **Question**: _Actor David Lee Stenstrom played the character Waldo the inventor in a TV show that ran on Nickelodeon during what years?_
  **Response**: The TV show was Out of Control (1984). David Lee Stenstrom played the character Waldo the inventor in the show. The show ran on Nickelodeon from 1984 to 1985. So, the answer is ...
* **Question**: _What music school did the singer of The Search for Everything: Wave One attend?_
  **Response**: The singer of The Search for Everything: Wave One is John Mayer. John Mayer attended the Berklee College of Music in Boston, Massachusetts. So, the answer is ...

Table 3: Examples of responses generated using our approach for multi-hop bridge questions.
If the response to this prompt is 'No', then we proceed to modify the question using the mitigation step. Table 15 shows both the instructional prompts used for identifying and rectifying a potentially false premise question. This step enables identifying false premise questions and also rectifying them to facilitate the system in providing an appropriate response. Importantly, we also show that our approach does not incorrectly modify a true premise question. This is crucial because if the user's question is correct then the system's response must be pertinent to that and not to its modified variant.
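A compact sketch of this premise check and rectification is given below; both prompts are illustrative stand-ins for the instructions in Table 15, and `llm` and `retrieve` are the hypothetical completion and web-search helpers from the earlier sketches.

```python
from typing import Callable

def handle_question(question: str,
                    llm: Callable[[str], str],
                    retrieve: Callable[[str], str]) -> str:
    """Detect a false premise, rectify the question if needed, then answer it."""
    evidence = retrieve(question)  # e.g., web search with the question as query
    premise_check = llm(
        f"Context: {evidence}\nQuestion: {question}\n"
        "Does the question make factually correct assumptions? Answer Yes or No.\n"
        "Answer:"
    ).strip().lower()
    if premise_check.startswith("no"):
        question = llm(
            f"Context: {evidence}\nQuestion: {question}\n"
            "Rewrite the question so that its assumptions are factually correct.\n"
            "Rewritten question:"
        ).strip()
    return llm(f"Context: {evidence}\nQuestion: {question}\nAnswer:")
```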
Performance Analysis: Recall that the false premise questions in our evaluation set are adversarially collected, i.e., GPT-3.5 gives an incorrect response to all of these questions. First, we evaluate the performance of the GPT-3.5 model when relevant knowledge (retrieved via Bing search using the question as the search query) is given as context to answer the question. We find that even with the retrieved knowledge, GPT-3.5 manages to answer only \(24\%\) of the false premise questions correctly, i.e., it hallucinates on the remaining \(76\%\) of the questions. In contrast, **our approach answers 76\(\%\) questions correctly and hallucinates only on 24\(\%\).** Figure 11 shows this comparison. Furthermore, we note that even in some of these \(24\%\) hallucinated responses, some of the individual sentences in the responses are correct. However, since we focus on complete answer correctness, we mark them as incorrect. Table 16 shows responses to a few false premise questions generated by the GPT-3.5 model, the GPT-3.5 model leveraging the retrieved knowledge as context, and our approach.
Efficacy of Question Rectification: We analyze the performance of our approach in rectifying the questions; **it successfully repairs 76\(\%\) false premise questions while not incorrectly modifying any true premise question.** Though this step makes modifications in a small number of true premise questions (\(6\) instances), it does not change the semantics of those questions, as shown in Table 4. We note that not incorrectly modifying a true premise question is an important characteristic of our approach.
### Other Applications
Our approach has utility in a variety of other applications also, such as Abstractive Summarization and Claim Verification. Abstractive summarization, where the generated summary has been shown to be often hallucinated (Cao et al., 2022; Zhao et al., 2020; Chen et al., 2021), can be improved using our approach. Note that in the validation procedure of our approach, the relevant knowledge for this task will be retrieved from the original document instead of the web. However, for open-summarization, knowledge can additionally be retrieved from the web as well. Our approach can also be adapted for the claim verification task, as we can first identify the key sub-claims and then verify each sub-claim using the validation procedure. Here, the mitigation step will be useful for providing an explanation behind the model's decision. We leave exploring these other use cases of our approach for future work.

Table 4: Examples of false premise and true premise questions before and after modification by our approach.

| **Original Question** | **After Modification** |
| :-- | :-- |
| **False Premise Questions** | |
| Why does Mars have three moons? | Why does Mars have two moons? (\(\blacktriangledown\)) |
| Why are golf balls bigger than basketballs? | Why are golf balls smaller than basketballs? (\(\blacktriangledown\)) |
| What are some papers on the relationship between homeschooling and neuroplasticity? | What are some papers on the relationship between homeschooling and learning outcomes? (\(\blacktriangledown\)) |
| Why USA has the lowest happiness index? | What factors have contributed to the decline in happiness among Americans? (\(\blacktriangledown\)) |
| How many metres does a typical apple weigh? | How many grams or ounces does a typical apple weigh? (\(\blacktriangledown\)) |
| Why do gases have a particular shape? | Why do gases not have a definite volume or shape? (\(\blacktriangledown\)) |
| Why do migrant workers never leave their home? | Why do migrant workers leave their home? (\(\blacktriangledown\)) |
| When a diver swims deeper, why does the water pressure increase? | When a diver swims deeper, why does the water pressure increase? (\(\blacktriangledown\)) |
| Why does Mars have higher gravity than Earth? | Why does Mars have weaker gravity than Earth? (\(\blacktriangledown\)) |
| Why do all rabbits have red eyes? | Why do some rabbits have red eyes? (\(\blacktriangledown\)) |
| Why does Helium have atomic number of 1? | Why does Helium have atomic number of 2? (\(\blacktriangledown\)) |
| Why does Bangladesh have the highest population in the world? | Why does Bangladesh have the highest population growth rate in the world? (\(\blacktimes\)) |
| Why are tigers' eggs bigger than chicken's eggs? | Why do some breeds of chickens lay larger eggs than others? (\(\blacktimes\)) |
| **True Premise Questions** | |
| Why gases are shapeless? | Why are gases shapeless? (\(\blacktriangledown\)) |
| How did USA become a developed country? | How did the United States become a developed country |
## 5 Related Work
Advancements in the field of natural language processing led to the development of models that possess an impressive ability to generate fluent and coherent text. However, these models are vulnerable to a phenomenon called text hallucination. Prior work Maynez et al. (2020); Huang et al. (2021); Ji et al. (2023) has categorized text hallucinations into two classes: Intrinsic (when the generated output contradicts the source content) and Extrinsic (when the generated output cannot be verified from the source content, i.e., it can neither be supported nor contradicted by the source).
One thread of research pertaining to hallucinations has focused on studying different causes of this phenomenon such as training data quality Wang (2019); Lee et al. (2022), source-target divergence Dhingra et al. (2019), ill-suited modeling Aralikatte et al. (2021); Feng et al. (2020); Li et al. (2018), and randomness during inference Dziri et al. (2021); Tian et al. (2019); Lee et al. (2022).
The other thread focuses on addressing the hallucination problem Manakul et al. (2023); Azaria and Mitchell (2023); Lee et al. (2022); Du et al. (2023); Zhang et al. (2023). Manakul et al. (2023) propose a sampling-based hallucination detection approach in which they first sample multiple responses from the model and then measure the information consistency between the different responses. They posit that when a language model knows a given concept well, the sampled responses are likely to be similar and contain consistent facts; on the other hand, for hallucinated facts, stochastically sampled responses are likely to diverge and may completely contradict one another.
Another recent work Azaria and Mitchell (2023) leverages LLM's internal state to identify the truthfulness of a statement. Using an annotated dataset, they train a separate classifier that takes the LLM's activation values as input and predicts its truthfulness. Lee et al. (2022) hypothesize that the randomness of sampling is more harmful to factuality when it is used to generate the latter part of a sentence than the beginning of a sentence and propose a new sampling algorithm named factual-nucleus sampling that dynamically adapts the 'nucleus' p along the generation of each sentence. Du et al. (2023) propose an approach motivated by _The Society of Mind_ and _multi-agent settings_ in which multiple models individually propose and jointly debate their responses and reasoning processes to arrive at a common answer. In our approach, we leverage the logit output values, web search, and actively detect and mitigate hallucinations. We demonstrate the effectiveness of our approach on a variety of tasks, including article generation, multi-hop question answering, and false premise question answering.
## 6 Conclusion
In this work, we proposed an approach that actively 'detects' and 'mitigates' hallucinations of the large language models. Through systematic and extensive experiments with the article generation task, we showed that our approach successfully reduces the hallucinations of the GPT-3.5 (text-davinci-003) from \(47.5\%\) to \(14.5\%\) on average. We also demonstrated the individual efficacy of our detection and mitigation techniques. Specifically, our detection technique achieves a high recall and the mitigation technique successfully mitigates a large fraction of the correctly detected hallucinations. Notably, the mitigation technique does not introduce new hallucinations even in the case of incorrectly detected hallucinations, i.e., false positives. We further demonstrated the effectiveness and wide applicability of our approach and presented several interesting studies including evaluation with another LLM (Vicuna) and answering multi-hop and false premise questions. Overall, our work addresses the LLMs' hallucination problem and thus contributes to improving their reliability and trustworthiness, a crucial step en route to enabling their widespread adoption in real-world applications.
2306.05007 | Parallel and Asynchronous Smart Contract Execution | Today's blockchains suffer from low throughput and high latency, which
impedes their widespread adoption of more complex applications like smart
contracts. In this paper, we propose a novel paradigm for smart contract
execution. It distinguishes between consensus nodes and execution nodes:
different groups of execution nodes can execute transactions in parallel;
meanwhile, consensus nodes can asynchronously order transactions and process
execution results. Moreover, it requires no coordination among execution nodes
and can effectively prevent livelocks. We show two ways of applying this
paradigm to blockchains. First, we show how we can make Ethereum support
parallel and asynchronous contract execution \emph{without hard-forks}. Then,
we propose a new public, permissionless blockchain. Our benchmark shows that,
with a fast consensus layer, it can provide a high throughput even for complex
transactions like Cryptokitties gene mixing. It can also protect simple
transactions from being starved by complex transactions. | Jian Liu, Peilun Li, Raymond Cheng, N. Asokan, Dawn Song | 2023-06-08T07:56:45Z | http://arxiv.org/abs/2306.05007v1 | # Parallel and Asynchronous Smart Contract Execution
###### Abstract
Today's blockchains suffer from low throughput and high latency, which impedes their widespread adoption of more complex applications like smart contracts. In this paper, we propose a novel paradigm for smart contract execution. It distinguishes between consensus nodes and execution nodes: different groups of execution nodes can execute transactions in parallel; meanwhile, consensus nodes can asynchronously order transactions and process execution results. Moreover, it requires no coordination among execution nodes and can effectively prevent livelocks. We show two ways of applying this paradigm to blockchains. First, we show how we can make Ethereum support parallel and asynchronous contract execution _without hard-forks_. Then, we propose a new public, permissionless blockchain. Our benchmark shows that, with a fast consensus layer, it can provide a high throughput even for complex transactions like Cryptokitties gene mixing. It can also protect simple transactions from being starved by complex transactions.
Blockchain, smart contract, parallel execution, asynchronous execution.
## 1 Introduction
Blockchains make digital transactions possible without relying on a central authority. One issue that hinders the wider deployment of blockchain-based applications such as cryptocurrencies [31] and smart contracts [11] is their low throughput and high latency. This is partially due to the fact that _all_ blockchain nodes are required to reach consensus on the order of transactions _and_ execute them.
With the progress of the blockchain consensus algorithms [10, 21, 22, 26, 27, 34], transaction execution will soon become a bottleneck. For example, _CryptoKitties_[38], a popular game built on the Ethereum blockchain, has clogged the network due to its complex genetic algorithm. This problem will be amplified by the computational demands of future smart contract applications. A straightforward way to get rid of this bottleneck is dividing the blockchain nodes into groups (or shards) to process transactions in parallel [5, 7, 16, 28, 30, 36, 41]. However, existing approaches usually require extensive coordination but still suffer from congestion within the same group. They also require a large group size, i.e., \(3f^{\prime}+1\) (or \(2f^{\prime}+1\)[7]) nodes to tolerate \(f^{\prime}\) faults in each group. Additionally, they incur _livelocks_ for smart contracts (cf. Section 3).
**Scalable execution.** In this paper, we propose Saber, a novel paradigm for scalable smart contract execution, by improving a traditional Byzantine fault-tolerance (BFT) architecture, called "separating execution from consensus" [40]:
1. **Parallel execution.** It distinguishes between _consensus nodes_ and _execution nodes_. For simple transactions like cryptocurrency payments, consensus nodes confirm them directly; for complex transactions that involve expensive execution, consensus nodes order them and assign them to different subsets of execution nodes (_execution groups_) for parallel execution. This can be seen as multiple instances of [40] running in parallel.
2. **Asynchronous execution.** When complex transactions are executed by execution nodes, consensus nodes can keep processing simple transactions in a _non-blocking_ way. This can effectively protect simple transactions from being starved by complex transactions.
Compared with existing blockchain parallelization paradigms, Saber has the following advantages. First of all, it supports asynchronous execution for smart contracts. Secondly, unlike prior sharding paradigms, which require extensive coordination among execution nodes, Saber allows each individual execution node to execute the ordered transactions directly and independently. Thirdly, it only requires \(2f^{\prime}+1\) nodes in each execution group, which can significantly reduce the group size1. Lastly, to the best of our knowledge, it is the _first_ parallelization paradigm that is livelock-free for smart contracts.
Footnote 1: It only requires 70 nodes to reach a group failure probability of less than \(10^{-6}\), whereas sharding schemes require 600 nodes.
We propose two ways to put Saber into practice: on the existing Ethereum blockchain, and as a standalone blockchain.
**Saber for Ethereum.** We apply Saber to Ethereum [11] and show that we can make Ethereum support parallel and asynchronous contract execution _without introducing any hard-fork_. For a transaction that invokes a smart contract with expensive execution, the consensus nodes (all Ethereum miners collectively serve as consensus nodes) simply put it into the ledger without executing it, but they lock the states associated with this transaction and
designate an execution group for the execution. Once this transaction is confirmed, execution nodes in the designated group execute it off-chain and put the result into the ledger by making another transaction. All these rules are enforced by the smart contracts themselves without changing the underlying consensus.
**Saber for a standalone blockchain.** Following the Saber paradigm, we propose a new public and permissionless blockchain called SaberLedger. It leverages the state-of-the-art distributed randomness generation protocol [35] to select a (rotating) committee of consensus nodes which run a Byzantine consensus. The same randomness is used to construct groups of execution nodes. Furthermore, SaberLedger stores the whole blockchain into a distributed storage maintained by all nodes in the system, to support "state sharding".
Our contributions are summarized as follows:
* We propose Saber, a **paradigm for parallel and asynchronous** smart contract execution. It supports a **small group size** of \(2f^{\prime}+1\) execution nodes, requires **no coordination among execution nodes**, and **prevents (adversarial) livelocks**. (Section 4)
* We show how Saber makes Ethereum support parallel and asynchronous execution **without introducing any hard-fork**. (Section 5)
* We propose SaberLedger, a new public, permissionless blockchain based on our proposed paradigm. It **supports "state sharding"** by further separating storage from consensus. (Section 6)
* We **implement a prototype of SaberLedger** and deploy it on a network of 3,467 nodes across 15 regions and 5 continents. The results show that it can achieve a high throughput even for complex transactions like CryptoKitties gene mixing, and it can effectively protect simple transactions from being starved by complex transactions. (Section 7)
## 2 Background and Preliminaries
### _Blockchains and smart contracts_
Blockchain technology has fueled a number of innovations such as cryptocurrencies [31] and smart contracts [11]. In particular, smart contracts permit execution of arbitrary code on top of blockchains. However, blockchains introduce large overheads compared with traditional architectures. For example, Bitcoin [31] can only handle \(\sim\)7 transactions per second and each transaction requires one hour to be confirmed. One reason is the cost of reaching consensus on the order of transactions; another is that every node in the system is required to execute all transactions.
Blockchains are usually permissionless, i.e., any node can join and leave at any time. Therefore, they need to be able to prevent sybil attacks. Proof-of-work (PoW) naturally provides _sybil-resistant identities_, since the number of sybils that an adversary can spawn is limited by its computing resources. Another solution for sybil-resistant identities is proof-of-stake (PoS), which limits an adversary's power by its wealth.
Since blockchains are maintained in a distributed way, an upgrade to the blockchain-based software may lead to _hard-forks_: nodes running the old version of the software may see the transactions adhering to the new version as invalid. During a recent hard-fork in Bitcoin, the network was divided into two separate parts: Bitcoin and Bitcoin Cash [14]. Therefore, hard-forks carry the risk of partitioning the network.
### _Separating execution from consensus_
Byzantine fault tolerant (BFT) state machine replication is a service where its state is replicated across \(n\) servers and it can handle clients' requests as a single server. One approach to build such services is _practical byzantine fault tolerance_ (PBFT) [13], which requires \(n=3f+1\) servers to tolerate \(f\) faults. In PBFT, one server, the _primary_, decides the order for clients' requests and forwards them to the other servers. Then, all servers agree on the order via a two-phase agreement to generate a _commit certificate_ (CC), execute the requests and reply to the clients. Clients wait for \(f+1\) consistent replies to complete its request.
Yin et al. propose to split all servers in a BFT protocol into two clusters: an _agreement cluster_ and an _execution cluster_[40]. The agreement cluster's job is to order clients' requests via a standalone BFT protocol (e.g., PBFT), send the ordered requests to the execution cluster, and relay replies to the clients. In the execution cluster, \(2f^{\prime}+1\) servers are required to tolerate \(f^{\prime}\) faults, which is independent of the \(f\) faults in the agreement cluster.
### _Multisignatures and message aggregation_
A _multisignature_ scheme allows multiple signers to produce a compact, joint signature on a common input via an aggregation operation. Any verifier that holds the aggregated public key can verify the signature in constant time. In practice, the aggregation operation also outputs a bitmap indicating which signers have (not) participated in the signing process, so that verification can compute the aggregated public key accordingly. For the sake of brevity, we do not explicitly mention the bitmap in the rest of the paper.
Multisignatures provide a useful property for _message aggregation_, which was used in ByzCoin [27] to improve the scalability of PBFT. Alternatively, hardware-assisted secret sharing [29] can achieve the same goal with smaller overhead but requires TEEs.
### _Randomness beacon_
Many recent blockchain consensus algorithms [10, 21, 26, 39] rely on a random beacon to generate randomness that is _unbiasable, unpredictable and third-party verifiable_. Such a random beacon is typically simulated by a distributed randomness generation protocol. Suppose there are \(n\) nodes in the system and at most \(f\) of them are malicious. A commit-then-reveal [26, 28, 35] approach can be used to simulate a random beacon. An alternative approach is based on threshold signatures [39], but it requires distributed key generation whenever the membership changes.
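To make the commit-then-reveal idea concrete, the sketch below simulates one beacon round with hash commitments and an XOR combination of the revealed shares. This is an illustration only, and it omits the recovery mechanisms (e.g., verifiable secret sharing) that protocols such as [26, 28, 35] use against participants who withhold their reveals.

```python
import hashlib
import secrets

def commit(share: bytes, nonce: bytes) -> bytes:
    # Hash commitment to a locally generated random share.
    return hashlib.sha256(share + nonce).digest()

def combine(shares):
    # XOR the revealed shares; the output stays unpredictable as long as
    # at least one honest participant's share is uniformly random.
    out = bytes(32)
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# Commit phase: every node publishes commit(share, nonce).
nodes = [(secrets.token_bytes(32), secrets.token_bytes(16)) for _ in range(4)]
commitments = [commit(s, n) for s, n in nodes]
# Reveal phase: reveals that do not match their commitment are discarded.
assert all(commit(s, n) == c for (s, n), c in zip(nodes, commitments))
r = combine([s for s, _ in nodes])
print(r.hex())
```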
## 3 Problem Statement
### _System setting and assumptions_
We target the setting of permissionless blockchains. There are two types of entities in the system: _clients_ and _nodes_. Clients issue _transactions_ to transfer funds or run smart contracts. Nodes process clients' transactions via a blockchain
consensus protocol. Notice that clients can play the role of nodes and vice versa. Each entity has a public/private key pair \((pk,sk)\) for digital signatures, and its identity is represented by its \(pk\). Following prior work [20, 28, 30], to defend against selfish mining attacks, we assume that at most 25% of the nodes can fail _at any time point_. We also assume that messages can be delivered within a certain bound \(\Delta\). All notations in this paper are listed in Table I.
### _Parallel and asynchronous execution_
Blockchain protocols usually run in a sequential and blocking manner: a complex transaction (e.g., the genetic algorithm in CryptoKitties) can congest the network so that simple transactions (e.g., cryptocurrency payments) cannot be confirmed on time. Asynchronous execution has been extensively used in web applications to improve performance and enhance responsiveness. It enables some tasks to be executed separately from the main task and to notify the main thread when the execution is completed [3]. In blockchain settings, _parallel and asynchronous execution_ should satisfy the following requirements:
1. complex transactions should be executed in parallel;
2. avoid blocking simple transactions by complex ones.
Blockchain researchers have already begun to investigate the possibility of integrating parallel execution with blockchains [5, 16, 28, 30, 41]. They divide the blockchain nodes into different groups to process transactions in parallel. However, these solutions require extensive coordination among blockchain nodes: they require BFT within each group, and two-phase lock/commit among different groups. Specifically, transactions that involve data objects in different groups must be committed in two phases: lock the data first and access them afterwards. If a transaction fails to acquire any of the locks, it releases all previously acquired locks and aborts. For each step of this two-phase protocol, every involved group needs to reach a Byzantine consensus.
**Adversarial livelocks.** Even though the above approach can prevent deadlocks, it raises the rate of aborted transactions due to lock contention (called livelocks), because transactions will abort when they compete for the same lock. Even worse, this problem also opens a channel for denial-of-service attacks: an adversary can easily abort other transactions by competing for locks. For example, two clients Alice and Bob share data objects \(o_{1}\) and \(o_{2}\) which are in two different groups \(G_{1}\) (near Alice) and \(G_{2}\) (near Bob) respectively. Suppose Bob wants to make a transaction \(\text{TX}_{B}\) to update both \(o_{1}\) and \(o_{2}\) (\(\text{TX}_{B}\) will first lock \(o_{2}\) and then lock \(o_{1}\)). If Alice wants to make \(\text{TX}_{B}\) fail, she just needs to make a transaction \(\text{TX}_{A}\) to first lock \(o_{1}\) and then lock \(o_{2}\). In this case, both \(\text{TX}_{A}\) and \(\text{TX}_{B}\) will fail and Alice wins. We name this attack _adversarial livelocks_.
**Group size.** As each group runs a BFT protocol, each group requires \(3f^{\prime}+1\) nodes to tolerate \(f^{\prime}\) faults. Based on the analysis in [28], each group requires at least 600 nodes to tolerate 25% adversarial power: suppose all execution groups are randomly chosen from an infinite pool of potential ENs. We use a binomial distribution to calculate the probability that an execution group is _not_ controlled by the adversary:
\[P[f^{\prime}<\lfloor\frac{n}{3}\rfloor]=\sum_{f^{\prime}=0}^{\lfloor\frac{n}{3}\rfloor-1}\binom{n}{f^{\prime}}\alpha^{f^{\prime}}(1-\alpha)^{n-f^{\prime}} \tag{1}\]
where \(\alpha\)=25% is the adversarial power in the whole blockchain. In order to get a system failure probability that is less than \(10^{-6}\), it requires at least 600 ENs in each group.
The group failure probability is independent of the number of blocks being added to the chain. After a group is constructed, it is a "faulty group" (i.e., the number of faulty nodes is larger than \(f^{\prime}\)) with a probability of \(10^{-6}\). This probability stays the same even as more blocks are added to the chain.
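The group sizes quoted above can be reproduced directly from Equation (1). The sketch below evaluates the binomial tail for the \(3f^{\prime}+1\) threshold used by sharding protocols and for the \(2f^{\prime}+1\) threshold used later by Saber's execution groups (\(\alpha=25\%\)); it is an illustrative calculation, not part of the protocol.

```python
from math import comb

def group_ok_prob(n: int, max_faulty: int, alpha: float = 0.25) -> float:
    # Probability that a random group of n nodes has at most `max_faulty`
    # adversarial members, when a fraction alpha of all nodes is adversarial.
    return sum(comb(n, f) * alpha**f * (1 - alpha)**(n - f)
               for f in range(max_faulty + 1))

# Sharding-style group (BFT inside the group): tolerates fewer than n/3 faults.
print(1 - group_ok_prob(600, 600 // 3 - 1))   # on the order of 1e-6
# Saber execution group (majority voting): tolerates fewer than n/2 faults.
print(1 - group_ok_prob(70, 70 // 2 - 1))     # on the order of 1e-6
```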
**Design goals.** To this end, we want to design a paradigm for **parallel and asynchronous smart contract execution** with the following properties:
1. **minimized size for each execution group**;
2. **no coordination among execution nodes**;
3. **no (adversarial) livelocks**.
Figure 1: Overview and workflow of Saber.
## 4 Saber: Parallel and Asynchronous Smart Contract Execution
In existing blockchains, transaction execution is tightly coupled with consensus. We suggest that execution should be separated from consensus, which leads to Saber, a robust (e.g., livelock-free) and efficient paradigm for parallel and asynchronous smart contract execution. Fig. 1 shows the basic architecture and workflow of Saber. We distinguish between _consensus nodes_ (denoted as CNs) and _execution nodes_ (denoted as ENs); and we also distinguish between simple transactions (e.g., cryptocurrency payments) and complex transactions (e.g., smart contract execution):
* For a simple transaction, CNs (1.1) check its validity, (1.2) agree on its order, (1.3) execute it (if needed) and update the blockchain;
* For a complex transaction, CNs (2.1) check its validity, (2.2) agree on its order, (2.3) lock its associated states, (2.4) assign it to an _execution group_ and wait for the results, (they can keep processing simple transactions while waiting) (2.5) collect the execution results, (2.6) unlock the states and update the blockchain.
Notice that validity checking, transaction ordering and state locking can be done by CNs within one round of the underlying consensus protocol.
We leave it to the contract developers to decide whether a certain transaction should be simple or complex. A basic rule could be based on its execution time. Let \(t_{1}\) be the latency of one consensus round, \(t_{2}\) be the execution time of this transaction, \(k\) be the number of transactions being batched in one consensus round (cf. Section 6), and \(m\) be the number of execution groups. If
\[t_{1}>\frac{k}{m}\cdot t_{2},\]
ENs will keep waiting for CNs. In this case, it is better to treat this transaction as a "simple" one.
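The rule above can be expressed as a one-line check; the numbers in the example are hypothetical and only illustrate how a developer might apply it.

```python
def better_as_complex(t1: float, t2: float, k: int, m: int) -> bool:
    """Decide whether a transaction is worth offloading to an execution group.

    t1: latency of one consensus round (s)
    t2: execution time of the transaction (s)
    k:  number of transactions batched in one consensus round
    m:  number of execution groups
    """
    # If a consensus round outlasts a group's share of the execution work
    # (t1 > (k/m) * t2), ENs would idle waiting for CNs, so the transaction
    # should rather be treated as "simple" and executed by the CNs directly.
    return t1 <= (k / m) * t2

# Hypothetical numbers: 1 s rounds, 10 ms execution, 2,000 tx/round, 32 groups.
print(better_as_complex(t1=1.0, t2=0.01, k=2000, m=32))  # False -> keep it simple
```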
Next, we explain how Saber works. Recall that Saber is a paradigm instead of a comprehensive protocol. We simplify some details (e.g., we use transactions instead of blocks) for the ease of understanding. A comprehensive protocol for permissionless blockchains is discussed in Section 6.
### _Consensus nodes_
The main job for CNs is to order transactions. We assume that there are \(m\) groups of ENs (selected using unbiased randomness, cf. Section 6). CNs maintain a separate and independent _sequence number_ for each of them: \(\langle sn_{1},\ldots,sn_{m}\rangle\). After gathering \(m\) _complex_ transactions, CNs randomly assign each transaction to an execution group and increase the corresponding sequence number \(sn_{i}\). Then, all CNs run the blockchain consensus to agree on \(\langle\langle\text{TX}_{1},sn_{1}\rangle,\ldots,\langle\text{TX}_{m},sn_{m}\rangle\rangle\), and send each \(\langle\text{TX}_{i},sn_{i}\rangle\) to the \(i\)th execution group. After execution, each execution group returns the execution result \(\textit{res}_{i}\). In the end, CNs put \(\langle\langle\text{TX}_{1},\textit{res}_{1},sn_{1}\rangle,\ldots,\langle\text{TX}_{m},\textit{res}_{m},sn_{m}\rangle\rangle\) into the ledger. Notice that for _simple_ transactions, CNs put them into the ledger directly after ordering.
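The sketch below illustrates this ordering step from the CNs' point of view; the class and method names are illustrative, and the agreement on the assignment itself is abstracted away (it is reached by the underlying blockchain consensus).

```python
import random

class ConsensusNodes:
    """Illustrative model of the CNs' ordering role (not the real implementation)."""

    def __init__(self, m: int):
        self.m = m                 # number of execution groups
        self.sn = [0] * m          # one independent sequence number per group
        self.ledger = []           # ordered log of confirmed entries

    def order_complex(self, txs):
        # Randomly assign each complex transaction to a group together with
        # that group's next sequence number; the assignment is what the CNs
        # agree on in one round of the underlying consensus.
        assignments = []
        for tx in txs:
            g = random.randrange(self.m)
            self.sn[g] += 1
            assignments.append((tx, g, self.sn[g]))
        return assignments         # each tuple is sent to its execution group

    def finalize(self, tx, g, sn, result):
        # Called once the group's (majority-voted) result comes back.
        self.ledger.append((tx, g, sn, result))

cns = ConsensusNodes(m=4)
print(cns.order_complex(["giveBirth(matron #7)", "auction.bid(kitty #3)"]))
```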
Other than ordering, CNs have three additional jobs:
**State maintenance.** The global state of the data ledger is maintained by CNs. Namely, CNs run a Byzantine consensus protocol to ensure the consistency and availability of the data ledger. Meanwhile, CNs leave the execution to ENs and update the state based on the execution results. ENs can execute any transaction assigned to them, so that any transaction can be confirmed by one execution group within one round, instead of being divided into multiple transactions [28, 41]. This allows us to easily handle more complex transactions such as a smart contract calling other smart contracts. On the other hand, this requires every EN to have access to the data ledger as well. We solve this issue in Section 6. Notice that the transactions and results are only written to the data ledger _once_. The structure of the ledger is the same as in other blockchain protocols like Bitcoin or Ethereum.
**Lock handling.** There may be multiple transactions aiming to access the same state. If these transactions are assigned to different execution groups in one round, the state will diverge. We solve this problem by locking the state. Specifically, during the consensus round, CNs lock the states that a transaction wants to read/write. The locks are released only when the execution of this transaction is done. Other transactions that want to access these states need to wait for the next round. Since all the locks are handled by CNs, there are no livelocks in our paradigm. A transaction locking all required objects gets executed and the locks will be released afterwards, i.e., no transaction can lock the acquired objects forever. A fundamental difference between livelocks and our locking scheme is that livelocks cause all related transactions to fail and the attacker pays no transaction fee; in our case, an attacker that locks all required objects can cause all other competing transactions to fail, but she has to pay the transaction fee. A monetary penalty is a common way to prevent denial-of-service attacks. Even with a monetary penalty, a malicious client with resources to waste can still cause damage to the system. We remark that denial-of-service attacks can happen in any smart contract system if the attacker is willing to waste resources. For example, in Ethereum, a malicious client with a fast network connection that is willing to pay the gas fee can always successfully call a smart contract and make other competing transactions fail. However, the system's consistency property will always be maintained. In this respect, our scheme is equivalent to Ethereum.
The locks are specified by the contract developers (cf. Section 5). In particular, Fig. 4 shows how consensus nodes find the objects touched by a transaction without executing it. Developers are incentivized to reduce the gas consumption of their contracts. However, it would clearly be useful to provide assistance for lock handling at the compiler level, so that it is easier for developers to write their smart contracts. We leave this as future work.
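A minimal sketch of the lock table maintained by the CNs is shown below; it is not the contract code of Fig. 4, only an illustration of why deferring (rather than aborting) conflicting transactions avoids livelocks.

```python
class LockTable:
    """Illustrative lock table kept by the CNs (state id -> owning transaction)."""

    def __init__(self):
        self.locked = {}

    def try_lock(self, tx, group, state_ids) -> bool:
        # Lock all states touched by `tx` atomically, or none of them.
        # A transaction whose states are already locked is deferred to a
        # later round instead of being aborted, so it cannot be livelocked.
        if any(s in self.locked for s in state_ids):
            return False
        for s in state_ids:
            self.locked[s] = (group, tx)
        return True

    def unlock(self, state_ids):
        # Called when the execution result of the owning transaction is committed.
        for s in state_ids:
            self.locked.pop(s, None)

locks = LockTable()
print(locks.try_lock("TX_B", group=1, state_ids=["o1", "o2"]))  # True
print(locks.try_lock("TX_A", group=2, state_ids=["o1"]))        # False: deferred
locks.unlock(["o1", "o2"])
```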
**Validity checking.** CNs are also responsible for checking the validity of the gathered transactions, e.g., whether a client has enough balance to make a payment, whether the required state of a transaction is locked, etc. A transaction will be ignored if it cannot pass the validity check. An alternative way of separation is to leave the validity checking to ENs. However, this leads to a denial-of-service attack which is similar to livelocks. Suppose Bob wants to make a transaction TX\({}_{B}\) to update an object \(o_{B}\) and he is the only one who has write permission. Alice has a faster network connection and wants to delay the execution of TX\({}_{B}\) for \(k\) rounds. Then, Alice just needs to issue \(k\) transactions to update \(o_{B}\) in front of TX\({}_{B}\). In each round, only one of these \(k\) transactions will be forwarded to ENs and the rest will be cached. In this case, TX\({}_{B}\) has to wait for \(k\) rounds until all of Alice's transactions are rejected. In contrast, if we have CNs check the validity, they will immediately find that all \(k\) transactions are invalid and reject them in one round.
### _Execution nodes_
It is enough to have ENs execute TX\({}_{i}\) directly if \(sn_{i}\) is sequential to the sequence numbers they have seen. This is based on the fact that, for each execution group, CNs can never assign the same sequence number to different transactions due to the underlying consensus. Therefore, we only need to make sure that the execution results returned by ENs are correct. However, in each execution group, some ENs may be faulty and they may return results that are different from the ones returned by correct ENs. In this case, CNs need to resolve this dispute and decide which result to follow. In the rest of this section, we will introduce several existing solutions as well as our solution, and we will also provide a comparison.
**Verifiable computation.** _Verifiable computation_ allows a _delegator_ to outsource the execution of a complex function to some _workers_, and the delegator verifies the correctness of the returned result while performing less work than executing the function itself. The state-of-the-art solution for verifiable computation in cryptography is based on _succinct non-interactive argument of knowledge_ (SNARK) [8, 32]. It allows the worker to provide a constant-size proof for the correct evaluation of a _circuit_. In these cases, each execution group only requires \((f^{\prime}+1)\) ENs (workers), because each EN can only crash but cannot return wrong results. However, such solutions usually require a trusted setup, and the overhead for generating and verifying the proof is still too large to use in practice.
**Trusted execution environments (TEEs).** Another solution for verifiable computation is via TEEs [2] (such as ARM TrustZone [1] and Intel's SGX [24]), which provide protected memory and isolated execution so that adversaries can neither control nor observe the data being stored or processed inside them. SGX also allows remote verifiers to ascertain the current configuration and behavior of a device via _remote attestation_. Therefore, we can assume that each EN has a TEE and CNs only trust the results that are executed and signed by TEEs. Same as the zk-SNARK based solution, each execution group requires \((f^{\prime}+1)\) ENs.
**Interactive verification.** This solution was initially proposed by Canetti et al. [12] and later adopted by TrueBit [37] and Arbitrum [25]. If two ENs return different results, all CNs will collectively run as a judge and launch an interactive verification game where they have one EN act as a solver and the other as a challenger. The game proceeds in a series of rounds and each round narrows down the range of the execution in this dispute. In each round, the challenger challenges a subset of the solver's execution, and it challenges a subset of that set in the next round, until the judge can make a final decision on whether the challenge was justified. In the end, either the cheating solver will be discovered and punished or the challenger will compensate for the resources consumed by the false alarm. This solution introduces a logarithmic number of rounds in terms of the complexity of the function, and each round requires CNs to reach a consensus. It requires at least one correct EN (and \(f^{\prime}+1\) in total) in each execution group to be the challenger.
**Majority voting.** We adopt the simplest way for dispute resolution. Assuming an honest majority (\(2f^{\prime}+1\)) in each execution group, if more than half of the ENs in a group return the same result, this result must be correct. Notice that CNs plus a single execution group exactly match the architecture proposed by Yin et al. [40]: CNs correspond to the agreement cluster, and the group corresponds to the execution cluster. Even though we have multiple "execution clusters", the "agreement cluster" maintains a separate sequence number for each of them and there is no coordination among them. Therefore, this system can be considered as \(m\) instances of the system proposed in [40] running in parallel. Following the binomial analysis in Section 3.2 (Equation (1)), adapted to the honest-majority threshold, it requires 70 ENs in each execution group to reach a failure probability of \(10^{-6}\).
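A sketch of this dispute resolution is shown below, assuming each EN reply is reduced to a hash of its result block; with \(2f^{\prime}+1\) replies and at most \(f^{\prime}\) faulty ENs, any value with at least \(f^{\prime}+1\) votes includes at least one correct EN, and all correct ENs return the same value.

```python
from collections import Counter

def resolve(replies, f_prime: int):
    """Accept a result once at least f'+1 of the 2f'+1 ENs agree on it.

    `replies` holds one entry per EN reply (e.g., the hash of its result block).
    Returns the accepted value, or None if no majority has formed yet.
    """
    value, votes = Counter(replies).most_common(1)[0]
    return value if votes >= f_prime + 1 else None

# 2f'+1 = 5 ENs with f' = 2: up to two faulty replies cannot flip the outcome.
print(resolve(["h(R)", "h(R)", "h(R)", "h(R)", "h(bad)"], f_prime=2))  # h(R)
```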
## 5 Asynchronous Execution for Ethereum
In this section, we show how we can add parallel and asynchronous execution to Ethereum _without any hard-forks_. We follow the architecture of Saber (Fig. 1): there are consensus nodes CNs and execution nodes ENs. We design a standard Ethereum smart contract for the Saber execution management. Any Ethereum developer who wants to make their contract support our paradigm needs to include this contract and use the functionality exposed via its interface for their contract development. CNs are the original Ethereum miners collectively, and they are also allowed to register with the execution management contract and run as ENs. That means the separation is only _in logic_. CNs run the standard Ethereum protocol as they are and the contract code will handle the execution. Since we only require 70 ENs to execute a transaction, the gas usage is much lower. Notice that the inputs and outputs of transactions are recorded in the blockchain state, and the code of smart contract is also publicly available. Therefore, new nodes that want to download and validate the entire chain can simply take the inputs, feed them into the contract code, and check if the execution results are consistent with those executed by ENs.
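The re-validation step for a syncing node can be sketched as a simple replay loop; `execute_contract` is a placeholder for the EVM, and the block layout is a simplification rather than the real data structures.

```python
def validate_chain(blocks, execute_contract) -> bool:
    """Replay every recorded complex transaction and compare the recomputed
    result with the one the ENs wrote to the ledger."""
    for block in blocks:
        for tx in block["complex_txs"]:
            recomputed = execute_contract(tx["contract"], tx["inputs"])
            if recomputed != tx["recorded_result"]:
                return False      # the designated execution group misbehaved
    return True

# Toy usage with a trivial "contract" that doubles its input.
blocks = [{"complex_txs": [{"contract": "double", "inputs": 21, "recorded_result": 42}]}]
print(validate_chain(blocks, lambda name, x: 2 * x))  # True
```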
### _Execution management_
Fig. 2 shows the pseudocode of the execution management contract. We use a \(pk\) list to record the identities of the nodes that have been registered with this contract as ENs (line 2). Recall that each node is identified by its \(pk\), and we assume that the \(pk\)s can be used to verify multisignatures (line 19). The gas consumption for signature verification is constant, independent of the complexity of the smart contract. So using signature verification instead of execution is worthwhile for complex smart contracts.
Any Ethereum node can register as an EN by calling the _Register_ function (line 8). For sybil-resistance, we require each EN
to deposit some Ether to this contract account: misbehaving ENs will get punished in the same way as in proof-of-stake; otherwise, they earn transaction fees just as miners do. ENs are stored in the \(pk\) list in order of the deposits they made, i.e., the one who deposits the most is at the head of the list.
ENs are uniformly and periodically assigned to different execution groups (line 3) via the _Shuffle_ function (line 12). After every epoch (e.g., 1,000 confirmations), all ENs (or a subset of them) jointly run a distributed randomness generation protocol off-chain to generate an unbiased random number \(r\). Then, they input \(r\) to the _Shuffle_ function, which first verifies \(r\) (recall that this randomness is third-party verifiable) and then re-assigns each EN to an execution group based on \(pk\) and \(r\) (line 15). Note that ENs who deposit more will also be assigned to the head of each execution group, i.e., _groups_[\(i\)][0] is the leader of _groups_[\(i\)]. Recall that each execution group requires \((2f^{\prime}+1)\) ENs due to the requirement for majority voting. In Bitcoin or Ethereum, CNs not only make money from mining, but also from transaction fees. In our paradigm, we can distribute the transaction fees to ENs.
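The sketch below shows one way the _Shuffle_ logic could be made deterministic in the epoch randomness \(r\), so that every node derives the same groups; the grouping rule and the data layout are assumptions for illustration, not the contract code of Fig. 2.

```python
import hashlib

def shuffle_into_groups(ens, r: bytes, group_size: int):
    """ens: list of (pk, deposit) pairs; r: the epoch randomness.

    The grouping depends only on public data (pk, deposit, r), so every node
    recomputes the same groups; inside each group the largest depositor leads.
    """
    ranked = sorted(ens, key=lambda e: hashlib.sha256(r + e[0].encode()).digest())
    groups = [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]
    return [sorted(g, key=lambda e: -e[1]) for g in groups]

ens = [(f"pk{i}", d) for i, d in enumerate([32, 50, 8, 40, 12, 27])]
for g in shuffle_into_groups(ens, r=b"epoch-randomness", group_size=3):
    print([pk for pk, _ in g])    # the first entry of each group is its leader
```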
### _Running example: CryptoKitties_
We take CryptoKitties as a running example to explain how to use Saber for parallel and asynchronous Ethereum contract execution without hard-forks. CryptoKitties is a popular game built on the Ethereum blockchain [15], which allows players to buy, collect, breed and sell digital cats. Fig. 3 shows the pseudocode of its contract with only a _giveBirth_ function (adapted from [18]), which runs an expensive gene mixing algorithm to create a new cat (line 7). This complex genetic algorithm has clogged the Ethereum network recently: the number of unconfirmed transactions has remained consistently above 15,000 [38]. Next, we show how to improve the throughput by executing the _giveBirth_ function in a parallel and asynchronous manner.
Fig. 4 shows the Saber version of the CryptoKitties contract. It has a variable called \(em\), which is initialized with the contract address of _ExecutionManager_ (line 2). Therefore, we can directly use the _ExecutionManager_ contract via \(em\). A transaction TX calling the _giveBirth_ function will call the _giveBirth_\(\_\)_lock_ function instead (line 6). CNs first check if the targeted matron has been locked (line 7), i.e., whether it is being accessed by other transactions. If not, they check the matron's validity (line 10), e.g., whether it is a valid cat, whether it is pregnant, and whether its time has come. Then, they designate an idle execution group for the execution of TX (line 10). They also record the current block number, i.e., the height of the current blockchain (line 12). Next, they lock this matron by putting \(\langle\)_matronID_, _groupID_, _blockNum_, TX\(\rangle\) into the _locks_ array (line 13), and put TX into the _tasks_ array of the designated execution group (line 14). Finally, they put this transaction into the ledger and update the state, as normal Ethereum miners do.
Once TX is confirmed on the blockchain, each EN\({}_{i}\) in the designated execution group runs the _mixGenes_ function off-chain, signs the result _childGenes_, and sends the signature \(\sigma_{i}\) to the group leader _groups_[\(i\)][0]. The group leader combines the received \(2f^{\prime}+1\) signatures into a single multisignature \(\widetilde{\sigma}\), and issues another transaction TX\({}^{\prime}\) calling the _giveBirth_\(\_\)_unlock_ function. If other ENs in _groups_[\(i\)] do not see TX\({}^{\prime}\) after a timeout, they send their \(\sigma_{i}\)s to the second leader _groups_[\(i\)][1], and so on, until TX\({}^{\prime}\) appears.
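The off-chain side of this flow can be sketched as follows; the signature scheme is replaced by a toy hash-based stand-in and the submission of TX\({}^{\prime}\) to Ethereum is left out, so this only illustrates the leader collecting \(2f^{\prime}+1\) signatures before issuing _giveBirth_\(\_\)_unlock_.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class EN:
    pk: str

    def sign(self, msg: bytes) -> bytes:
        # Toy stand-in for a real signature, used only to illustrate the flow.
        return hashlib.sha256(self.pk.encode() + msg).digest()

def execute_and_unlock(group, child_genes: bytes, f_prime: int):
    """Every EN signs the (deterministic) mixGenes result; the leader gathers
    2f'+1 signatures and would then issue the giveBirth_unlock transaction.
    A real deployment would aggregate the signatures into a multisignature."""
    sigs = [en.sign(child_genes) for en in group]
    if len(sigs) < 2 * f_prime + 1:
        return None                   # wait, or fall back to group[1], group[2], ...
    leader = group[0]                 # groups are ordered by deposit
    return {"leader": leader.pk, "childGenes": child_genes.hex(), "sigs": sigs}

group = [EN(f"pk{i}") for i in range(5)]  # 2f'+1 ENs with f' = 2
print(execute_and_unlock(group, child_genes=b"\x01\x02\x03", f_prime=2) is not None)
```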
## 6 SaberLedger

Following the Saber paradigm, we propose SaberLedger, a new public and permissionless blockchain (Fig. 5). Its design combines:

1. the Saber paradigm for parallel and asynchronous contract execution;
2. proof-of-stake (PoS) for sybil-resistant identities;
3. BFT for the underlying consensus;
4. epoch transitions via a randomness beacon;
5. "state sharding" via a distributed storage.
### _Identity management and epoch transitions_
All nodes maintain a separate ledger called the _identity ledger_ to record the sybil-resistant identities. One can get all the required \(pk\)s from the identity ledger. This ledger can be implemented as a smart contract similar to Fig. 2. Any user can participate in SaberLedger (i.e., become a sybil-resistant identity) by generating a key pair locally and making a deposit to this contract. Their identities are recorded in a \(pk\) list. The identity ledger also has separate \(pk\) lists to record the CNs and the groups of ENs for all epochs.
We assume that an initial set of CNs was chosen in the bootstrapping phase of the system, and they run a distributed randomness generation protocol (i.e., a randomness beacon, cf. Section 2) to generate an unbiased random number to build \(m\) execution groups, each of which has \(n^{\prime}=2f^{\prime}+1\) ENs. All participants are ranked according to their deposits, and the execution groups are built in the same way as in Section 5 (line 15 in Fig. 2). Each execution group has a leader, which is the one who deposits the most in that group.
To prevent an adaptive adversary from compromising more than a threshold number of CNs, as well as ENs in each execution group, we need to periodically rotate them from the underlying sybil-resistant identities. After an epoch, CNs run the randomness generation protocol again to rotate CNs and ENs. The duration for each epoch depends on the required time for an adaptive adversary to compromise a node. Following OmniLedger [28], we only rotate a subset of nodes to minimize the chances of a temporary loss of liveness. After rotating, they move to the next epoch.
### _"State sharding" via a distributed storage_
SaberLedger further shards state by storing the blockchain in a distributed storage maintained by all nodes in the system. Specifically, all nodes run a distributed storage (e.g., IPFS [9]) that only supports _read_ and _write_ operations. CNs need to keep track of the version numbers and hashes of the last write operation for each state.
Upon receiving a complex transaction associated with some states ST (e.g., _matron_ in Fig. 3), CNs only forward TX and the version numbers of ST to ENs, who read ST from the distributed storage and make sure the version numbers match. Then ENs execute the transaction and write the updated states ST\({}^{\prime}\) back to the distributed storage, which accepts ST\({}^{\prime}\) only if it has been signed by all ENs in that execution group. ENs will notify CNs when the writing is done. After receiving the notification from ENs, CNs check the version numbers of the states from the storage and unlock the states if the version numbers are correct.
To reduce the write frequency, ENs _cache_ all the updated states and write them to the storage once. Naively, after execution, the execution nodes would need to write the state back to the storage immediately so that follow-up transactions accessing this state can be executed. However, this requires the execution nodes to read and write the state very frequently, introducing a large overhead. To avoid this, we have the execution nodes cache the state, and have the consensus nodes forward the follow-up transactions to the execution group holding the corresponding state. In this case, CNs keep track of which ST have been assigned to which execution group, and keep forwarding the transactions that are associated with that ST to the same execution group. This works unless a transaction is associated with both ST and ST\({}^{\prime}\), which have been assigned to two different execution groups. In this case, CNs will notify these two groups to write ST and ST\({}^{\prime}\) back to the storage.
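The version-checked read/write pattern can be sketched with an in-memory stand-in for the distributed storage; the interface below is an assumption for illustration and not the IPFS API.

```python
class StateStore:
    """In-memory stand-in for the distributed storage; CNs only track versions."""

    def __init__(self):
        self.data = {}                            # state id -> (version, value)

    def read(self, key, expected_version):
        version, value = self.data.get(key, (0, None))
        if version != expected_version:
            raise ValueError("stale version, wait for the next assignment")
        return value

    def write(self, key, value, base_version, group_signed: bool) -> bool:
        # A write is accepted only if it carries the group's signature and
        # extends exactly the version the CNs handed out with the transaction.
        version, _ = self.data.get(key, (0, None))
        if not group_signed or version != base_version:
            return False
        self.data[key] = (version + 1, value)
        return True

store = StateStore()
store.write("matron#7", {"genes": "0xabc..."}, base_version=0, group_signed=True)
print(store.read("matron#7", expected_version=1))
```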
### _Consensus and execution_
Figure 4: CryptoKitties in Saber paradigm.

Figure 5: Overview of SaberLedger.

As in BFT, clients send transactions to the primary consensus node CN\({}_{p}\) (cf. Section 2). If their transactions did not appear on the blockchain (in the distributed storage) after a timeout, they send those transactions again to all CNs.
After gathering enough transactions, \(\mathsf{CN}_{p}\) puts the valid ones into _transaction blocks_. For simple transactions like cryptocurrency payments, \(\mathsf{CN}_{p}\) puts them into a single transaction block \(B\). Then, all \(\mathsf{CN}\)s together run BFT to put \(B\) into the distributed storage.
For each complex transaction \(\mathsf{TX}_{i}\), \(\mathsf{CN}_{p}\) finds all its associated states \(\mathsf{ST}_{i}\) and locks them (in a way as shown in Fig. 4). Other transactions requiring \(\mathsf{ST}_{i}\) have to either be assigned to the same execution group or wait for the next round. Then, \(\mathsf{CN}_{p}\) puts all transactions arbitrarily and uniformly into \(m\) transaction blocks \(\langle B_{1},\ldots,B_{m}\rangle\) and assigns a sequence number \(sn_{i}\) to each \(B_{i}\). Recall that \(\mathsf{CN}\)s maintain a separate and independent sequence number for each execution group.
\(\mathsf{CN}_{p}\) sends \(\langle\langle B_{1},sn_{1}\rangle,\ldots,\langle B_{m},sn_{m}\rangle\rangle\) to all other \(\mathsf{CN}\)s, who will check the validity of all \(\langle\mathsf{TX}_{i},\mathsf{ST}_{i}\rangle\)s and also check if there are multiple transactions in different blocks accessing the same state. Then all CNs run BFT to agree on the proposal. At the end of the BFT round, they generate a commit certificate \(CC\) (cf. Section 2) for each \(\langle B_{i},sn_{i}\rangle\).
Next, \(\mathsf{CN}_{p}\) sends each \(\langle B_{i},sn_{i},CC\rangle\) to all \(\mathsf{EN}\)s in the \(i\)th execution group. To distribute the loads, the leader in each execution group coordinates the communication between \(\mathsf{CN}_{p}\) and \(\mathsf{EN}\)s. Specifically, \(\mathsf{CN}_{p}\) only sends \(\langle B_{i},sn_{i},CC_{i}\rangle\) to the group leader, who further distributes them to all \(\mathsf{EN}\)s in that group. Then, each \(\mathsf{EN}\) reads the states from the distributed storage and executes the transactions in \(B_{i}\) following the same order as they are being put into \(B_{i}\), updates the corresponding states, and puts the updated states into a _result block_\(R_{i}\). It also generates a signature for \(\langle R_{i},sn_{i}\rangle\). The group leader gathers signatures from \(\mathsf{EN}\)s and returns \(\langle R_{i},sn_{i},\widetilde{\sigma}_{i}\rangle\) to all \(\mathsf{EN}\)s, where \(\widetilde{\sigma}_{i}\) is a multisignature signaling that \(\langle R_{i},sn_{i}\rangle\) has been output by all \(\mathsf{EN}\)s in that execution group. \(\mathsf{EN}\)s write \(\langle R_{i},sn_{i},\widetilde{\sigma}_{i}\rangle\) back to the distributed storage or their local cache, and notify \(\mathsf{CN}_{p}\).
Recall that we follow the same architecture as [40]: the agreement job of consensus nodes is to order clients' requests via a standalone BFT protocol (e.g.,PBFT), send the ordered requests to the execution nodes, and relay replies to the clients. Then, \(\mathsf{CN}_{p}\) is exactly the BFT primary in [40], and the view change procedure of BFT works for \(\mathsf{CN}\) as well.
## 7 Implementation and Evaluation
### _Experimental setup_
In order to systematically evaluate the performance of SaberLedger, we build a simulation framework allowing us to easily define the conditions and control the experiments.
First, we set up a cluster of 3,467 Amazon EC2 t2.micro VMs across 15 regions and 5 continents to introduce real network latency. Each VM contains one 2.3 GHz vCPU and 1 GB of memory, and runs Amazon Linux 2.
Second, we assign 70 ENs to each execution group, which leads to a group failure probability of less than \(10^{-6}\), based on the analysis in [28]. As a comparison, existing sharding protocols require \(\sim\)600 nodes to reach the same failure probability, because they require at least \(3f^{\prime}+1\) nodes in each shard. We only require \(2f^{\prime}+1\) nodes in each group. We can easily change the group size to make a trade-off between robustness and efficiency.
Third, we run SputnikVM [17] - a blockchain virtual machine for Ethereum - on each \(\mathsf{EN}\), so that we can test SaberLedger with different types of Ethereum transactions, ranging from simple cryptocurrency payments to the CryptoKitties gene mixing algorithm. Furthermore, we store the blockchain state in IPFS [9], but we cache the state as we discussed in Section 6.2.
Last, recall that the bottleneck of SaberLedger is its consensus layer. Therefore, we treat the consensus layer as a parameter as well. We simulate the consensus layer by having it agree on a 1 MB block for every consensus round, with different rates:
* 0.1 rounds/s, which corresponds to the performance of current PoW-based or PoS-based consensus.
* 1 round/s, which corresponds to the performance of current BFT consensus, e.g., PBFT [13], Byzcoin [27].
* 10 rounds/s, which conjectures the future of consensus protocols, e.g., EOS [23] and others.
Assuming a 1 MB block contains 2,000 transactions (following Bitcoin and Omniledger [28]), these consensus layers correspond to throughputs of 200 TX/s, 2,000 TX/s and 20,000 TX/s, respectively. This setup allows us to easily plug in different consensus layers.
Footnote 2: Notice that this throughput cannot be achieved by current public blockchains. For example, current CPU can only verify \(\sim\)3,500 ECDSA signatures per second. However, this figure is used to conjecture the future of consensus protocols.
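A back-of-the-envelope model of where the bottleneck sits is given below; the per-group execution rate is a made-up number, and the model ignores the signature verification and network costs that the measured curves in Fig. 6 include.

```python
def peak_throughput(rounds_per_s: float, txs_per_block: int,
                    m: int, group_exec_tps: float) -> float:
    # The system saturates at the slower of the consensus layer and the
    # aggregate execution capacity of the m groups.
    consensus_tps = rounds_per_s * txs_per_block
    execution_tps = m * group_exec_tps
    return min(consensus_tps, execution_tps)

# Hypothetical: 1 round/s, 2,000 tx per 1 MB block, 100 complex tx/s per group.
for m in (1, 10, 20, 40):
    print(m, peak_throughput(1.0, 2000, m, 100.0))
```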
To better compare SaberLedger with previous state-of-the-art, we set three baselines for our benchmarks:
* Throughput of current Ethereum, i.e., 30 TX/s at most.
* Throughput of Ethereum-like systems with different consensus layers, i.e., 200 TX/s, 2,000 TX/s and 20,000 TX/s respectively (for simple transactions).
* Throughput of current sharding protocols with different consensus layers, i.e., 200 TX/s, 2,000 TX/s and 20,000 TX/s respectively (for simple transactions).
Our evaluation captures the setting with failures: if fewer than \(f^{\prime}\) nodes in a group crash, performance is unaffected. If there are more failures (or if a group cannot be formed), consensus nodes simply stop forwarding requests to that group and wait for the next epoch to rotate groups. This only results in fewer execution groups.
**The worst-case performance is therefore captured by the results with only one execution group.** However, we expect this case to happen rarely because faulty nodes will get punished.
### _Evaluation results_
**Performance with complex transactions.** We first assume that all the workloads are complex transactions (i.e., CryptoKitties gene mixing), which gives us an estimate of the lower-bound performance of SaberLedger. We run the experiments with a varying number of execution groups and measure the peak throughput (TX/s) when the system is saturated. Note that each \(\mathsf{EN}\) will receive a transaction block of size \(\frac{1}{m}\) MB, where \(m\) is the number of execution groups. Intuitively, the workload for each \(\mathsf{EN}\) decreases as \(m\) increases. The results shown in Fig. 6(a) validate this conjecture: as more execution groups are added, the
performance of SaberLedger keeps increasing until reaching the bottleneck of the consensus layer. Specifically, for a fast consensus layer (20,000 TX/s, blue line), the throughput of SaberLedger increases until reaching a throughput of 8,100 TX/s, after which the signature verification becomes a bottleneck. For a medium consensus layer (2,000 TX/s, red line), its throughput increases linearly until it reaches the bottleneck of its consensus layer when the number of execution groups is 20. For a slow consensus layer (200 TX/s, brown line), its throughput is almost the same as its consensus layer. As a baseline, we also show the throughput of Ethereum, which is below 30 TX/s. In principle, the throughput of Ethereum should be similar to the brown line. However, in Ethereum, each block on average only batches 100 transactions due to the total gas limit for each block. In SaberLedger, we can set a much higher gas limit and batch more transactions in one block, since each EN is only required to execute a subset of transactions. **Remarks:**
* _SaberLedger can achieve a high throughput even for complex transactions like CryptoKitties gene mixing._
* _When there is no separation (the case of one execution group), even if the consensus layer is fast (20,000 TX/s), the throughput is still very low (100 TX/s)._
* _Recall that sharding protocols require 600 nodes in each shard. With 3,467 nodes (5 shards), sharding protocols can reach a throughput of at most \(\sim\)500 TX/s, even with a fast consensus layer (20,000 TX/s). With the same number of nodes and consensus layer, SaberLedger can reach a throughput of 4,201 TX/s. This demonstrates the prominent advantages of separating execution from consensus._
Figure 6: Evaluation results.

**Performance with mixed workloads.** In SaberLedger, simple transactions like cryptocurrency payments are confirmed asynchronously, independent of complex transactions. Therefore, the advantage of SaberLedger will become more prominent if we consider real-world workloads that mix simple transactions with complex transactions. To this
end, we retrieve around 50,000 transactions from 500 recent Ethereum blocks (from height 5,998,827 to 5,999,326) via Etherscan [19], and run these transactions on SaberLedger. To be conservative, we treat all contract invocations as complex transactions and assign them to different execution groups, and we treat cryptocurrency payments as simple transactions and confirm them directly in the consensus layer. We check whether the sender or receiver address is a contract address by querying Etherscan's API. Among these transactions, 47% are simple and 53% are complex. Furthermore, we treat two transactions as conflicting as long as they invoke the same contract, in which case one of them is cached for the next round. Fig. 6(b) shows that the peak throughput of SaberLedger for mixed transactions is significantly higher than when only considering complex transactions (Fig. 6(a)). For example, when the number of execution groups is 32, SaberLedger can process another 1,000 simple transactions in addition to 3,200 complex transactions (for fast consensus). Fig. 6(c) shows that it takes 7 s to 7 min for SaberLedger to process all these 50,000 transactions, depending on the number of execution groups and the consensus layer. As a comparison, by inspecting the timestamps on the Ethereum blockchain, we found that Ethereum required 2 hours to finish processing these transactions. **Remarks:**
* _Asynchronous execution can effectively protect simple transactions from being starved by complex transactions, thus significantly improving the throughput._
* _In Ethereum-like systems or sharding protocols, complex transactions can block the processing of simple transactions. In the worst case, simple transactions have to wait until all complex transactions have been executed (at least 53 seconds)._
**Performance with a varying number of transactions in one 1 MB block.** As we mentioned, SaberLedger can have a higher gas limit and batch more transactions in one block. In principle, a 1 MB block can include around 9,000 Ethereum transactions. So the throughput of SaberLedger can be improved if we batch more transactions in every block. To this end, we fix both the number of execution groups (44) and the group size, and run experiments with different batch sizes. Fig. 6(d) shows that, for a slow consensus layer (200 TX/s, brown line), the throughput increases linearly as the batch size increases. For a medium consensus layer (2,000 TX/s, red line), the throughput increases linearly until it reaches the bottleneck of the execution layer (around 4,400 TX/s). For a fast consensus layer (20,000 TX/s, blue line), the throughput of SaberLedger is exactly the same as that of its execution layer. **Remarks:**
Footnote 3: For example, a CryptoKitties gene mixing transaction is 115 bytes.
* _As the throughput for consensus layer increases, the execution layer becomes a bottleneck. However, we conjecture that the blockchain network will become larger in the future. So we can introduce more execution groups._
* _For Ethereum-like systems and sharding protocols, increasing the batch size has no significant effect on throughput, as execution is the bottleneck and it blocks the processing._
## 8 Related Work
**Hybrid consensus.** Another solution to avoid having all nodes execute all transactions is _hybrid consensus_ [4, 27, 33], which uses a slow permissionless blockchain protocol to bootstrap a fast permissioned blockchain protocol. For example, in [27], a committee is elected by sliding a fixed-size window over a permissionless blockchain. Then, nodes in the committee run a BFT protocol to agree on the order of transactions _and_ execute them, and other nodes just follow the results. They achieve Visa-level throughput for cryptocurrency payments. However, execution is still a bottleneck for smart contracts that require expensive execution.
**Hyperledger Fabric.** Researchers at IBM propose an _execute-order-validate_ paradigm for their permissioned blockchains [7]. In their paradigm, clients send transactions to multiple execution nodes (called _endorsers_, which are specified by the smart contracts) first. The endorsers execute the transactions independently and return the signed results (called _endorsements_) to the clients. Each client collects endorsements until reaching the endorsement policy, and then submits them to a BFT-based ordering service, which establishes a total order on all endorsements and atomically broadcasts them. Compared with our consensus nodes, their ordering service is more generic: it only does ordering but leaves the validation and ledger updates to the receivers. This paradigm supports parallel execution, but suffers from the same livelock issues as the sharding approaches: different endorsers may execute the same set of transactions in different orders, in which case all transactions fail, as the endorsement policy requires multiple endorsers to produce the same result.
**ParBlockchain.** Amiri et al. [6] propose a similar order-execute paradigm called OXII, based on which they propose a permissioned blockchain called ParBlockchain. However, they do not have multiple execution groups; instead, all execution nodes execute all transactions (cf. Figures 2 and 3 of [6]). As a result, SaberLedger has a much higher level of parallelism. Furthermore, in ParBlockchain, each execution node needs to multicast the execution results to all others, which introduces \(O(n^{2})\) communication complexity, whereas we have \(O(n)\) communication complexity.
**TrueBit and Arbitrum.** TrueBit [37] and Arbitrum [25] also target the execution issues of smart contracts. They also delegate the execution to a set of execution nodes and use interactive verification to resolve disputes (cf. Section 4.2). As we discussed, the dispute resolution strategy in Saber requires much less communication as well as coordination between consensus nodes and execution nodes. In addition, Saber further considers lock handling, which was ignored in TrueBit and Arbitrum. Table II summarizes the comparison between SaberLedger and related work.
## 9 Limitations
Despite the various benefits brought by our paradigm, we have to admit that it has two limitations. First, it changes the coding paradigm of smart contracts: the contract developers need to enumerate all dependencies when they develop the contract. It would clearly be useful to provide assistance for lock handling at the compiler level, so that it is easier for developers to write their smart contracts. We leave this as future work. The second limitation is that
the monetary counter-incentive can only alleviate denial-of-service attacks, instead of totally eliminating them.
## 10 Conclusion
In this paper, we propose a novel paradigm for parallel and asynchronous smart contract execution. It neither requires extensive coordination nor suffers from (adversarial) livelocks, and it requires only a small group size. We propose two ways to put this paradigm into practice. We first apply it to Ethereum and show that we can make Ethereum support parallel and asynchronous execution without any hard-forks. Then, we propose a new public and permissionless blockchain, SaberLedger, and demonstrate its performance by implementing a prototype.
## Acknowledgments
The work was supported in part by Zhejiang Key R&D Plans (Grant No. 2021C01116, 2019C03133), National Natural Science Foundation of China (Grant No. 62002319, U20A20222) as well as a grant from China Zheshang Bank.
|
2302.10843 | Polarization Imaging of Back-Scattered Terahertz Speckle Fields | Speckle patterns observed in coherent optical imaging reflect important
characteristic information of the scattering object. To capture speckle
patterns, angular resolved or oblique illumination geometries are usually
employed in combination with Rayleigh statistical models. We present a portable
and handheld 2-channel polarization-sensitive imaging instrument to directly
resolve terahertz (THz) speckle fields in a collocated telecentric
back-scattering geometry. The polarization state of the THz light is measured
using two orthogonal photoconductive antennas and can be presented in the form
of the Stokes vectors of the THz beam upon interaction with the sample. We
report on the validation of the method in surface scattering from gold-coated
sandpapers, demonstrating a strong dependence of the polarization state on the
surface roughness and the frequency of the broadband THz illumination. We also
demonstrate non-Rayleigh first-order and second-order statistical parameters,
such as degree of polarization uniformity (DOPU) and phase difference, for
quantifying the randomness of polarization. This technique provides a fast
method for broadband THz polarimetric measurement in the field and has the
potential for detecting light depolarization in applications ranging from
biomedical imaging to non-destructive testing. | Kuangyi Xu, Zachery B. Harris, M. Hassan Arbab | 2023-02-21T17:49:54Z | http://arxiv.org/abs/2302.10843v1 | # Polarization Imaging of Back-Scattered Terahertz Speckle Fields
###### Abstract
Speckle patterns observed in coherent optical imaging reflect important characteristic information of the scattering object. To capture speckle patterns, angular resolved or oblique illumination geometries are usually employed in combination with Rayleigh statistical models. We present a portable and handheld 2-channel polarization-sensitive imaging instrument to directly resolve terahertz (THz) speckle fields in a collocated telecentric back-scattering geometry. The polarization state of the THz light is measured using two orthogonal photoconductive antennas and can be presented in the form of the Stokes vectors of the THz beam upon interaction with the sample. We report on the validation of the method in surface scattering from gold-coated sandpapers, demonstrating a strong dependence of the polarization state on the surface roughness and the frequency of the broadband THz illumination. We also demonstrate non-Rayleigh first-order and second-order statistical parameters, such as degree of polarization uniformity (DOPU) and phase difference, for quantifying the randomness of polarization. This technique provides a fast method for broadband THz polarimetric measurement in the field and has the potential for detecting light depolarization in applications ranging from biomedical imaging to non-destructive testing.
## 1 Introduction
Speckle patterns are usually formed when coherent light reflects off or transmits through a rough surface or a turbid medium. Their unique spatial feature is described by a random granular structure. Speckle patterns can be considered a hurdle that degrades the image quality, but they also carry characteristic information about the illuminated scattering object and have enabled innovative breakthroughs in many imaging applications. In recent years, speckle patterns with unique and tailored statistical properties [1], often described by their probability density function (PDF), have found promising applications in microscopy [2, 3], super-resolution imaging [4, 5, 6], ghost imaging [7], speckle-tracking echocardiography [8, 9], and diffuse biomedical spectroscopy and imaging [10, 11, 12].
Coherent terahertz (THz) spectroscopy has likewise enjoyed a wide range of promising biomedical [13, 14, 15, 16] and non-destructive testing applications [17, 18], which can give rise to scattering and speckle phenomena at different wavelength scales. Some of the first investigations of this phenomenon in the THz regime were conducted by analyzing the statistics of the amplitude and phase of the broadband diffuse scattering [19, 20]. The frequency-dependent scattering loss of THz pulses has been reported for different sample morphologies, including surface scattering due to roughness [21, 22, 23] and volume scattering in porous [24, 25] or granular [26, 27] media. There have been significant electromagnetic modeling and signal processing efforts in relating the THz spectra to a limited set of parameters describing the samples through turbid media [28, 29, 30, 31], which usually require approximations of the dielectric properties in order to distinguish between scattering and feature-less absorption losses. The alternative approach is to investigate phenomena unique to scattering, such as the optical diffusion measured via angular-resolved detection [32] and the change in polarization states characterized by the Mueller matrix of samples [33, 34]. However, these methods are not commonly used in THz spectroscopy systems due to their cumbersome alignment and time-consuming nature.
THz speckle patterns with broadband spectroscopic or polarimetric signatures have not been investigated due to technological challenges. For example, monochromatic THz speckle images with limited bandwidth were captured using a free-electron laser operating at 2.3 THz [35] and were also reported in imaging by radar systems working at around 600 GHz [36, 37]. Speckles formed by a broadband THz illumination were captured through the combination of electro-optic sampling with a CMOS camera [38]. However, spectroscopic signatures of the speckle were not investigated. In order to characterize the THz speckle that occurs in practical applications, it is necessary to extend single-point terahertz time-domain spectroscopy (THz-TDS) into a high-speed and portable spectral imaging technique. For instance, to capture _in vivo_ THz speckle patterns in biophotonics applications, the instrument should be robust and rapid such that it does not introduce additional grainy features due to the motion artifacts of the subject or the user. Further, the polarization change information of the THz beam can be used to discern biological information with higher sensitivity through light scattering mechanisms [39]. We have developed single-point THz time-domain polarimetry (THz-TDP) techniques with high measurement accuracy over a broad spectral bandwidth [40, 41]; however, they lacked beam steering or image formation capability. Until recently, broadband THz-TDS spectral or polarimetric imaging techniques that can achieve the large field of view (FOV) and fine spatial resolution necessary for studying speckle patterns were lacking.
In this paper, we present a polarization-sensitive and fast imaging method to directly measure the spatial distribution of THz speckle fields. Previously, we developed a PHASR (Portable HAndheld Spectral Reflection) Scanner, which adopted a telecentric beam steering strategy, to image the full 40 x 27 mm FOV in both Asynchronous Optical Sampling (ASOPS) and Electronically Controlled Optical Sampling (ECOPS) modes [42, 43]. By incorporating two Photoconductive Antenna (PCA) detectors and a polarizing beam-splitter to simultaneously record the two orthogonal polarization directions of the THz field, we have upgraded our PHASR Scanner to capture polarization-sensitive images of the target without sacrificing the scanning speed. We investigated the polarization speckle patterns formed by the Stokes vectors calculated from the THz-TDS images of gold-coated sandpapers. We observed that the polarization of the back-scattered THz fields became either partially (under-developed speckles) or fully randomized according to the degree of roughness and the wavelength of the THz illumination. The statistical properties of the back-scattered THz speckle fields are in agreement with the non-Rayleigh models of partially-developed speckles, which are valid when the sample roughness is smaller than the wavelength of illumination. Our results show that the normal-incidence THz polarimetric reflection imaging modality can be used for characterizing the depolarization caused by rough surfaces, which provides the potential for its application to other turbid media and highly scattering samples.
## 2 Methods
The THz-TDS measurements are obtained using an upgraded version of our PHASR (Portable HAndheld Spectral Reflection) Scanner, described in detail previously [43]. The upgraded PHASR Scanner, shown in Fig. 1, includes a polarizing beam splitter and two PCA detectors oriented along orthogonal directions. The PHASR Scanner incorporates the TERA ASOPS (Asynchronous OPtical Sampling) dual-fiber-laser THz spectrometer (Menlo Systems, Inc., Newton, NJ, USA) into a handheld, collocated, telecentric imaging system. A THz beam generated by the photoconductive antenna (PCA) in the emitter (E) is collimated using a TPX lens (CL) with 50 mm focal length. The collimated beam is directed towards a gimballed mirror (GM) using a high-resistivity silicon beam splitter (BS). The gimballed stage is a two-axis motorized system composed of a goniometer and a rotational stage. It raster scans the collimated beam over
the aperture of a custom-made telecentric \(f\)-\(\theta\) lens [44]. Therefore, the focused beam is always normally incident onto the target and has a constant focal spot size. A free-standing grid acts as a polarizing beam splitter (PBS) after the BS, which separates the back-scattered (or specularly reflected) radiation into the two orthogonal components denoted by X and Y. The signals from the two PCA detectors are converted by two transimpedance amplifiers and then collected synchronously with two data acquisition (DAQ) cards.
To investigate the depolarization of THz waves due to scattering, we prepared targets with high reflectivity and different surface roughness. These targets are made of sandpaper of grits 36, 60, 80 and 120 (a lower grit number corresponds to a larger average particle size and thus rougher sandpaper [45]), each coated with a 120-nm layer of gold by vacuum deposition. Figure 2 shows the microscope images of the gold-coated sandpapers with 5x magnification. These sandpapers are placed at the focal plane of the \(f\)-\(\theta\) lens and scanned over a 20\(\times\)20 \(mm^{2}\) FOV with a pixel size of 0.5\(\times\)0.5 \(mm^{2}\). Each pixel is recorded in 1 second by averaging 100 replicates in the time domain. Images of a flat mirror are taken under the same settings as references to establish the minimum polarimetric detection resolution of our instrument and the laser-induced speckle background. In the following measurements, the emitter PCA was oriented at 45 degrees to ensure sufficient THz power in both the X and Y detection channels.
Subsequently, the averaged THz-TDS waveforms for components X and Y are converted to complex frequency functions \(\mathbf{A}_{x}\) and \(\mathbf{A}_{y}\) via Fourier transform. These complex functions can be represented in terms of the real Stokes parameters by [46],
\[\begin{split} I=\mathbf{A}_{x}^{*}\mathbf{A}_{x}+\mathbf{A}_{y}^{*}\mathbf{A}_{y},\ Q=\mathbf{A}_{x}^{*}\mathbf{A}_{x}-\mathbf{A}_{y}^{*}\mathbf{A}_{y},\\ U=\mathbf{A}_{y}^{*}\mathbf{A}_{x}+\mathbf{A}_{x}^{*}\mathbf{A}_{y},\ V=i(\mathbf{A}_{y}^{*}\mathbf{A}_{x}-\mathbf{A}_{x}^{*}\mathbf{A}_{y}). \end{split} \tag{1}\]
The Stokes parameters derived above have fewer degrees of freedom than those obtained from intensity measurements, since \(I^{2}=Q^{2}+U^{2}+V^{2}\) is guaranteed by Eq. 1, which would indicate an apparently fully-polarized THz wave regardless of its actual state. However, it is well known that light can appear depolarized on average even if it is fully polarized locally [47]. The imaging capability of our instrument makes it possible to characterize the spatial variation of the Stokes parameters, which is closely relevant to the surface morphology of the sandpapers.
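As an illustrative sketch of the per-pixel computation of Eq. 1 (not part of the instrument software; the function name and array shapes are assumptions), the Stokes images can be obtained from the two-channel complex spectra as follows:

```python
import numpy as np

def stokes_images(Ax, Ay):
    """Per-pixel Stokes parameters from the complex Fourier coefficients
    of the X and Y detection channels at one THz frequency (Eq. 1).

    Ax, Ay : complex arrays of shape (rows, cols).
    Returns real-valued images I, Q, U, V of the same shape.
    """
    I = np.abs(Ax) ** 2 + np.abs(Ay) ** 2
    Q = np.abs(Ax) ** 2 - np.abs(Ay) ** 2
    cross = np.conj(Ay) * Ax            # A_y^* A_x
    U = 2.0 * cross.real                # A_y^* A_x + A_x^* A_y
    V = -2.0 * cross.imag               # i (A_y^* A_x - A_x^* A_y)
    return I, Q, U, V
```

By construction these satisfy \(I^{2}=Q^{2}+U^{2}+V^{2}\) at every pixel, which is why spatial statistics, rather than single-pixel values, are needed to quantify depolarization.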
## 3 Results and Discussions
### Imaging of THz Speckle Fields with Stokes Parameters
Figure 3 presents the spatial variation of the Stokes parameters in the THz images obtained from the normal-incidence backscattered wave from the surfaces of five targets, including the gold-coated sandpapers of four different grit numbers and a flat mirror. The signals reflected by the mirror are considered uniform in the contrasts of the Stokes parameters, despite some minor variations accounted for by the intrinsic systematic errors of our instrument. As the surface roughness of the target increases, the images gradually become more inhomogeneous and contain granular structures. The granularity of parameter \(I\) is a known feature of speckle noise, while the decrease of \(I\) with increasing target roughness suggests the transition from specular reflection to back-scattering. Meanwhile, the spatial fluctuations of \(Q\), \(U\) and \(V\) imply the random nature of the polarization states of the back-scattered THz fields. It can be observed that the THz speckle images based on the Stokes parameters are closely related to the surface structures of the targets.

Figure 3: The spatial variation of Stokes parameters \(I\), \(Q\), \(U\) and \(V\) for the speckles formed by different targets at 0.5 THz. \(I\) is normalized by the maximum pixel value in the image of the mirror, as the reference, while the other Stokes parameters are normalized by \(I\).
### First-order Statistics of the THz Speckle Fields
We investigated the quantitative relationship between the THz speckle images, the surface profile of the targets, and the frequency of the THz illumination. Since the images of the Stokes parameters are essentially joint functions of two separate data sets, comprised of the complex Fourier coefficients of the orthogonal components of the THz electric fields, i.e., \(\{\mathbf{A}_{x}\}\) and \(\{\mathbf{A}_{y}\}\) (Eq. 1), we start with the independent analysis of each, which is also referred to as the first-order statistics. For a fixed THz frequency, \(\{\mathbf{A}_{x}\}\) or \(\{\mathbf{A}_{y}\}\) always consists of \(N=1681\) complex scalars corresponding to different pixels in space. Figure 4 presents the distribution of \(\{|\mathbf{A}_{x}|\}\) (left column) and \(\{|\mathbf{A}_{y}|\}\) (right column) at 0.3 and 0.5 THz for the five targets. It is evident that with increasing surface roughness, these distributions vary from a narrow shape resembling a delta function to a Gaussian-like form with a smaller mean value and a larger standard deviation, finally approaching the Rayleigh distribution. The absolute values of the amplitudes in different histograms are usually not directly comparable due to the variation in the incident and backscattered THz intensity. Therefore, we have also calculated the PDF of the normalized amplitudes in Figure 4, showing that \(\{\mathbf{A}_{x}\}\) and \(\{\mathbf{A}_{y}\}\) follow a similar trend, i.e., the departure from the delta-like function increases with increasing roughness and frequency.

Figure 4: Distribution of electric field amplitude along the X and Y channels, \(|\mathbf{A}_{x}|\) and \(|\mathbf{A}_{y}|\), are shown in (a) and (b) respectively, at 0.3 THz using histograms for each gold-coated sandpaper and the reference mirror. The PDF of normalized \(|\mathbf{A}_{x}|\) and \(|\mathbf{A}_{y}|\) are summarized in (c) and (d) at 0.3 THz. The same histograms obtained for 0.5 THz are shown in (e) and (f), and the corresponding PDF in (g) and (h), respectively.
The contrast of the THz amplitude, defined as the relative standard deviation (RSD) [46] of the probability distribution function, is presented in Fig. 5 in the frequency range between 0 and 1 THz. The RSD values are calculated as the ratio of the standard deviation of the THz amplitude to the mean of the same quantity. The measurements obtained from the reference mirror have non-zero contrast values varying with frequency, which are attributed to the systematic errors of our instrument, such as the frequency-dependent performance of the PCAs, cross-talk between the channels and reflections from the internal walls, the fluctuations of the THz illumination with time, the polarization changes induced by our beam steering system [48], etc. The mirror contrast measurements shown by the red traces in Fig. 5(a and b) can be used to determine a useful frequency range below 0.6 THz, in which the RSD is still a good measure of the speckle noise relative to the surface roughness. For both the X and Y detection channels, the amplitude contrast rises with the THz frequency and finally saturates around \(\sqrt{(4-\pi)/\pi}\approx 0.523\), which is the theoretical RSD value of a Rayleigh distribution. Also, the curves for targets with increased roughness have larger contrast and reach the plateau value of 0.523 earlier compared to the smoother samples. In summary, under illumination from 0 to 0.6 THz, the gold-coated sandpaper sample of grit 120 generated only a small amount of speckle, whereas the grit 80 and 60 samples generated partially-developed speckles, and the grit 36 sample gave rise to fully-developed speckles at higher frequencies.
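As a worked example of the contrast metric, the short Python sketch below computes the frequency-resolved RSD of one detection channel and compares it with the fully-developed Rayleigh limit; the variable names and array layout are assumptions made for illustration.

```python
import numpy as np

RAYLEIGH_RSD = np.sqrt((4 - np.pi) / np.pi)      # ~0.523, fully-developed limit

def amplitude_contrast(A):
    """Relative standard deviation of the field amplitude, per frequency.

    A : complex array of shape (n_freq, n_pixels) holding the Fourier
        coefficients of one channel for every image pixel.
    """
    amp = np.abs(A)
    return amp.std(axis=1) / amp.mean(axis=1)    # one contrast value per bin
```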
Figure 5: Amplitude contrast of the speckles formed by different samples separated into two orthogonal polarizations of the THz waves, along the laboratory \(x-\) (a) and \(y-\) (b) directions. (c) Numerical simulation of the amplitude contrast of the speckles corresponding to different grits of sandpapers. The RMS height \(\sigma_{h}\) used for simulation is obtained from [49, 45].

The contrast variations in Fig. 5 can be explained by the sums of finite random phasors with a non-uniform distribution of phase functions, which is a classic treatment for partially-developed speckles [50, 51, 52]. We have adopted a numerical approach [53], based on generating 2D height profiles with Gaussian correlation functions and summing up the reflected wavefronts that have been dephased by the surface. The simulation results are summarized in Fig. 5(c), showing good agreement with the trends observed in Fig. 5(a and b).
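A minimal sketch of such a random-phasor-sum simulation is given below: a Gaussian-correlated height map dephases the normally incident wavefront by twice the local height, and the coherent sum over each focal spot yields one speckle amplitude. The spot size, correlation length, and sampling choices are illustrative assumptions, not the parameters used for Fig. 5(c).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def rough_surface(n, corr_px, sigma_h):
    """Gaussian-correlated random height map with RMS height sigma_h (m)."""
    h = gaussian_filter(rng.standard_normal((n, n)), corr_px)
    return h * sigma_h / h.std()

def simulated_contrast(freq_thz, sigma_h, spot_px=8, n=512, n_spots=2000):
    """Amplitude contrast of backscattered speckle at one frequency."""
    k = 2 * np.pi * freq_thz * 1e12 / 3e8            # wavenumber (rad/m)
    h = rough_surface(n, corr_px=4, sigma_h=sigma_h)
    amps = np.empty(n_spots)
    for j in range(n_spots):
        r, c = rng.integers(0, n - spot_px, size=2)
        patch = h[r:r + spot_px, c:c + spot_px]
        # round-trip dephasing of the reflected wavefront, summed over the spot
        amps[j] = np.abs(np.exp(1j * 2 * k * patch).sum())
    return amps.std() / amps.mean()
```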
### Second-order Statistics of the THz Speckle Fields
In addition to their modulus, the argument (or phase) of the complex \(\{\mathbf{A}_{x}\}\) or \(\{\mathbf{A}_{y}\}\) is another important parameter for the statistical analysis of speckle patterns. However, the phase measurements obtained by a single-channel THz-TDS system often suffer from errors caused by the time shift of the laser pulses (drift or jitter) or the placement of the sample. Instead, we choose to investigate the difference in phase between \(\{\mathbf{A}_{x}\}\) and \(\{\mathbf{A}_{y}\}\), which reduces the uncontrolled phase shifts that occur at the same point in time or space. Figure 6(a) and (b) show the PDF of \(\theta_{xy}=\arg(\mathbf{A}_{y}^{*}\mathbf{A}_{x})\) for the five targets at 0.3 and 0.5 THz, respectively. For different THz frequencies, the PDFs of \(\theta_{xy}\) generally have quasi-symmetric forms, while they become more dispersed over \([-\pi,\pi]\) as the roughness or frequency increases, finally approaching the uniform distribution.
For the slightly rough surface (Grit 120), \(\theta_{xy}\) still centers around a value similar to that of the incident THz beam (Mirror), yet for the rougher surfaces (e.g., Grit 36) the central value of \(\theta_{xy}\) tends to approach zero. In simplified models, such as the Kirchhoff Approximation [54] or the Small Perturbation Method [55], the normal-incidence backscatter from a perfectly conducting, but slightly rough, surface is predicted to retain the incident polarization. Therefore, the trend between \(\theta_{xy}\) and relative roughness shown in Fig. 6 involves a scattering phenomenon of higher complexity. Also, we have generated representations of the speckle fields in the polar coordinates of \(\sqrt{I}\) and \(\theta_{xy}\), which are shown in Fig. 6(c) and (d). Taking advantage of the second-order statistics, these representations provide adequate discrimination between the different targets.

Figure 6: Distribution of phase difference \(\theta_{xy}\) for the speckle fields scattered by the five targets at (a) 0.3 THz and (b) 0.5 THz. Representation of the speckle fields in the polar coordinates of \(\sqrt{I}\) and \(\theta_{xy}\) at (c) 0.3 THz and (d) 0.5 THz.
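For completeness, the phase-difference map and the polar representation of Fig. 6(c, d) follow directly from the two-channel spectra; the sketch below is an illustrative implementation with assumed names, not the analysis code used for the figures.

```python
import numpy as np

def phase_difference(Ax, Ay):
    """Per-pixel phase difference theta_xy = arg(A_y^* A_x), in [-pi, pi]."""
    return np.angle(np.conj(Ay) * Ax)

def polar_representation(Ax, Ay):
    """Radius sqrt(I) and angle theta_xy used in the polar scatter plots."""
    I = np.abs(Ax) ** 2 + np.abs(Ay) ** 2
    return np.sqrt(I), phase_difference(Ax, Ay)
```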
As defined in Eq. 1, the Stokes parameters also contain the statistics of \(\theta_{xy}\), since \(\theta_{xy}=\tan^{-1}(V/U)\). Figure 7(a) shows the Poincaré sphere representation of the normalized Stokes vectors of the speckle fields scattered by the sandpapers of grit 36, grit 60 and the flat mirror at 0.3 THz. It is clear that the Stokes vectors become more dispersed on the unit sphere as the target roughness increases, which simultaneously decreases the norm of the average Stokes vector from 1 to 0. This norm is thus a good measure of the spatial randomness of the polarization speckles, which has been named by Wang et al. [56] as the "spatial degree of polarization" and by Gotzinger et al. [57] as the "degree of polarization uniformity (DOPU)". For simplicity, we will use the abbreviation DOPU, defined as,
\[\text{DOPU}=\sqrt{\overline{Q}^{2}+\overline{U}^{2}+\overline{V}^{2}}/\overline{I}, \tag{2}\]
where the spatial average operation is applied over the N = 1681 pixels in the entire 20\(\times\)20 \(mm^{2}\) images. There is a slightly different definition of DOPU than given in Eq. 2, where the normalization by \(I\) is applied before the spatial averaging operation [57], yet we did not find any significant change using our data sets. Figure 7(b) presents the frequency-dependent DOPU values for the five targets. Despite the observable variations in the Stokes parameters of the reference mirror (as shown in Fig. 3), we determined that DOPU \(\in[0.98,1]\) is valid in the frequency range between 0.1 and 0.6 THz, corresponding to the high uniformity of fully-polarized fields. Instead, the departure of DOPU from 1 at higher frequencies suggests the limitation of our instrument due to other sources of polarization measurement noise, as explained earlier, rather than the speckles. As for the gold-coated sandpapers, DOPU gradually approaches 0 with increased roughness and THz frequency, indicating that the randomness of the THz polarization speckles is associated with the strength of scattering.
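Numerically, Eq. 2 reduces to a few lines; the sketch below assumes the Stokes images have already been computed per pixel and simply averages them over the field of view.

```python
import numpy as np

def dopu(I, Q, U, V):
    """Degree of polarization uniformity (Eq. 2): the norm of the spatially
    averaged Stokes vector, normalized by the spatially averaged intensity."""
    return np.sqrt(Q.mean() ** 2 + U.mean() ** 2 + V.mean() ** 2) / I.mean()
```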
The statistics of \(\theta_{xy}\) can be attributed to the scattering matrices \(\mathbf{S}\), expressed by
\[\begin{bmatrix}\mathbf{A}_{x}\\ \mathbf{A}_{y}\end{bmatrix}=\begin{bmatrix}\mathbf{S}_{xx}&\mathbf{S}_{xy}\\ \mathbf{S}_{yx}&\mathbf{S}_{yy}\end{bmatrix}\begin{bmatrix}\mathbf{A}_{x}^{i} \\ \mathbf{A}_{y}^{i}\end{bmatrix}, \tag{3}\]
where \(i\) labels the quantities for the incident waves. For a smooth surface, \(\mathbf{S}\) is the identity matrix and no change in \(\theta_{xy}\) is expected. As for rough surfaces, it has been substantiated that two types of changes in \(\mathbf{S}\) can both cause \(\theta_{xy}\) to vary: (i) the cross-polarization entries \(\mathbf{S}_{xy}\) and \(\mathbf{S}_{yx}\) become non-zero [58, 59]; (ii) the co-polarization entries \(\mathbf{S}_{xx}\) and \(\mathbf{S}_{yy}\) become out of phase (i.e., \(\arg(\mathbf{S}_{xx})-\arg(\mathbf{S}_{yy})\neq 0\)) [60, 61]. Therefore, characterizing the complete matrix \(\mathbf{S}\) is necessary for understanding which mechanism dominates in our experimental conditions. Distinguishing between these two mechanisms would require a higher degree of polarization control of the THz emission and calibration of the systematic errors induced by our instrument, which represents the limitation of our current PHASR Scanner design.

Figure 7: (a) The distribution of normalized Stokes vectors on the Poincaré sphere for the speckle fields at 0.3 THz. (b) Frequency-dependent degree of polarization uniformity (DOPU) for different samples.
## 4 conclusion
We have presented a handheld and fast polarization-sensitive THz spectral imaging method for resolving speckle fields in terms of the Stokes vectors of the backscattered light. This method requires two PCAs and a polarizing beam-splitter to be incorporated into the PHASR Scanner we have previously developed. We statistically analyzed the Stokes vector images formed by the new instrument from the gold-coated sandpapers of different grit sizes. The first-order statistics of the THz speckle fields gradually transition to a Rayleigh distribution as the target roughness becomes comparable to the wavelength of light, which is predicted by the models of partially-developed speckles. The second-order statistics of the THz speckle fields, i.e., the polarization states, reveal that the randomness of the phase difference or Stokes vectors serves as an accurate measure of the strength of scattering. This work can pave the way for THz polarimetric imaging of speckle patterns as a potential marker for discrimination of sample-induced scattering in biomedical imaging and non-destructive testing.
Funding. Stony Brook University; National Institute of General Medical Sciences (GM112693). Disclosures. The authors declare no conflicts of interest. Data availability. Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
|
2307.04030 | Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven
Terrain | Agile-legged robots have proven to be highly effective in navigating and
performing tasks in complex and challenging environments, including disaster
zones and industrial settings. However, these applications normally require the
capability of carrying heavy loads while maintaining dynamic motion. Therefore,
this paper presents a novel methodology for incorporating adaptive control into
a force-based control system. Recent advancements in the control of quadruped
robots show that force control can effectively realize dynamic locomotion over
rough terrain. By integrating adaptive control into the force-based controller,
our proposed approach can maintain the advantages of the baseline framework
while adapting to significant model uncertainties and unknown terrain impact
models. Experimental validation was successfully conducted on the Unitree A1
robot. With our approach, the robot can carry heavy loads (up to 50% of its
weight) while performing dynamic gaits such as fast trotting and bounding
across uneven terrains. | Mohsen Sombolestan, Quan Nguyen | 2023-07-08T18:46:19Z | http://arxiv.org/abs/2307.04030v2 | # Adaptive Force-Based Control of Dynamic Legged Locomotion over Uneven Terrain
###### Abstract
Agile-legged robots have proven to be highly effective in navigating and performing tasks in complex and challenging environments, including disaster zones and industrial settings. However, these applications normally require the capability of carrying heavy loads while maintaining dynamic motion. Therefore, this paper presents a novel methodology for incorporating adaptive control into a force-based control system. Recent advancements in the control of quadruped robots show that force control can effectively realize dynamic locomotion over rough terrain. By integrating adaptive control into the force-based controller, our proposed approach can maintain the advantages of the baseline framework while adapting to significant model uncertainties and unknown terrain impact models. Experimental validation was successfully conducted on the Unitree A1 robot. With our approach, the robot can carry heavy loads (up to 50% of its weight) while performing dynamic gaits such as fast trotting and bounding across uneven terrains.
Adaptive control, Model predictive control (MPC), Quadruped robots, Unknown impact model.
## I Introduction
Legged robots have numerous potential uses, from search and rescue operations to autonomous construction. To perform these tasks effectively, it is important for the robot to have an accurate understanding of the environment it will be operating in. However, due to the complexity of the robot and the environment, the model of the robot itself might contain a significant level of uncertainty and affect the robot's stability, particularly when performing agile movements. To overcome these challenges, there is a need for the development of a control framework that can effectively compensate for these uncertainties in real-time.
The utilization of convex model predictive control (MPC) with the single rigid body (SRB) model in legged robots [1] has greatly enhanced the real-time implementation of diverse walking gaits. Unlike the balance controller based on quadratic programming [2], MPC offers the capability to perform agile motions like jumping [3, 4] and high-speed bounding [5] for quadruped robots. Additionally, MPC exhibits robustness in traversing rough and uneven terrains. However, it is important to note that MPC assumes perfect knowledge of the dynamic model.
To enhance trajectory tracking in the presence of unknown and changing disturbances, researchers have explored the combination of MPC with adaptive control techniques [6, 7, 8]. Additionally, parameter estimation techniques have been employed to further improve the robustness of the control system [9]. These approaches aim to adapt the controller and estimate system parameters to effectively compensate for uncertainties and disturbances, leading to improved trajectory tracking performance. It is worth noting that all of these studies were conducted using a position-based controller model.
In this work, we tackle the legged robot locomotion issue in real-world scenarios with a significant level of uncertainty. The uncertainty can come from both the robot model and the environment. Since our proposed method is based on a force controller, it retains the advantage of robustness to uneven terrain. Thanks to MPC as our baseline controller, our framework can be extended to different locomotion gaits and trajectories without adjusting the controller parameters. Additionally, by incorporating the adaptive controller, our control system can handle significant model uncertainty. As a result, our approach enables legged robots to move across different terrains with unknown impact models.
### _Related Works_
#### I-A1 Offline Learning
The offline learner can either leverage a model-based control approach or learn the control system from scratch. Using a model-based method, researchers mainly target learning the dynamics to improve controller performance [10]. One example of this approach is the integration of deep learning with MPC, in which the proposed model tries to learn the cost or dynamics terms of an MPC [11]. This hybrid method shows considerable improvement for an aerial robot [12] when learning the dynamic model from experimental data.
Fig. 1: Our proposed adaptive MPC is successfully validated in an experiment on a Unitree A1 robot while carrying an unknown load of 5 kg (almost 50% of body weight) on rough terrain. Experimental results video: [https://youtu.be/QmwysJfM1k](https://youtu.be/QmwysJfM1k).
The major limitation of this method is that it is restricted to the dynamic model learned during the training phase. However, the dynamic model is prone to frequent changes in real-world scenarios due to environmental uncertainties and external disturbances.
To overcome the limitations of previous approaches, there has been growing interest in utilizing reinforcement learning (RL) to train models from scratch. The key advantage of RL models is their ability to adapt swiftly to changes in real-world environments due to being trained in diverse environments with varying properties. In the case of quadruped robots, an RL model can directly predict appropriate joint torques for traversing different types of terrain, as demonstrated by Chen et al. [13]. Additionally, Bellegarda et al. [14] enable quadrupeds to run quickly while carrying unknown loads by training the model to learn foot positions. However, these methods heavily rely on domain randomization during training to generalize well to challenging environments. Yang et al. [15] also propose an end-to-end RL method that utilizes proprioceptive states and visual feedback to predict environmental changes.
#### I-A2 Online Learning
To address inaccuracies in model-based controllers, researchers have explored an alternative approach using online learning, particularly supervised learning methods [16, 17, 18]. In this approach, the focus is on learning disturbances online [19], and in some cases, researchers also aim to learn the dynamics of the system itself [20]. Furthermore, this approach has been successfully applied for online calibration of kinematic parameters in legged robots [21]. In addition to that, in a recent study, a Lipschitz network method has been developed to bridge the model-reality gap in real-time [22]. The online learning method shares a close relationship with adaptive control, and numerous studies have explored the combination of these two approaches [23]. This combination aims to leverage the advantages of both methods, allowing for dynamic adaptation and continuous learning from real-time data to improve control system performance. Perhaps closest to our work in terms of online adaptation is the learning method presented in [24] for legged robots. The authors correct the model behind the controller using a supervised learner while the robot is walking in an unknown environment. The data is collected during the robot's operation to learn a linear residual model which can compensate for system errors. However, in the transition from simulation to experiment, the acceleration estimators produce noisy data for training the model. As a result, the method is only applied to estimate the linear terms, since the angular-term data proved too noisy to be helpful in the model.
#### I-A3 Adaptive Control
The goal of adaptive control is to tune the controller's variables online during deployment [25]. Adaptive control has been applied for manipulation tasks to robotic arms [26], mobile robots [27, 28, 29], and quadruped robots [30, 31]. The conventional Model Reference Adaptive Control (MRAC) architecture was originally designed for controlling linear systems in the presence of parametric uncertainties [32, 33]. However, it lacks the ability to characterize the input/output performance of the system during the transient phase. To address this limitation and improve the transient performance of adaptive controllers, the \(L_{1}\) adaptive control offers several advantages over traditional MRAC, such as decoupling adaptation and robustness within a control framework [34]. In addition, by incorporating a low-pass filter in adaptation law, the \(L_{1}\) adaptive control can provide stability [35] and transient performance [36]. Therefore, the \(L_{1}\) adaptive control technique guarantees robustness with fast adaptation [37], an essential criterion in dynamic robotics applications. Recently, by integrating \(L_{1}\) adaptive controller and Bayesian learner, researchers leverage the fast adaption performance of the \(L_{1}\) adaptive controllers and introduce a safe simultaneous control and learning framework [38, 39].
For legged robots, the adaptive controller has also been employed to find the value and location of the center of mass [40]. Our work on \(L_{1}\) adaptive control for bipedal robots [41] considers a Control Lyapunov Function (CLF)-based controller as a closed-loop nonlinear reference model for the \(L_{1}\) adaptive controller. It was validated for the robot's walking [42] and running [43]. However, the control framework in this prior work is based on Hybrid Zero Dynamics [44], which uses joint position control to track the desired trajectory from optimization for each robot joint. Moreover, in [45], an adaptive control based on a CLF is designed for quadrupeds to interact with unknown objects. Then, they combined the criteria derived by adaptive control as a constraint in an MPC framework. However, adding more inequality constraints to MPC makes the controller more complex in terms of computation. In our approach, we compute a residual vector for compensating dynamic uncertainty, which makes the controller more time-efficient. Additionally, by employing our method, the robot is able to adapt to terrains with unknown impact models.
### _Contributions_
A preliminary version of this research previously appeared in [46]; however, this paper presents several novel contributions to the prior work. This work incorporates the \(L_{1}\) adaptive controller into the model predictive control (MPC). The proposed control system leverages MPC due to its robustness to uneven terrain, contact constraint, and generalization to different locomotion gaits. Moreover, by integrating adaptive control into MPC, the proposed model can compensate for significant model uncertainty. In the previous work [46], the robot can only perform quasi-static walking; however, in this work, the robot can perform dynamic motions thanks to MPC. Finally, the authors present new hardware experiments to demonstrate the effectiveness of the proposed adaptive MPC (as illustrated in Fig. 1). The main contributions of the paper are as follows:
* We introduce a novel control system that combines the \(L_{1}\) adaptive control into the force-based control system, designed to address the challenges posed by model uncertainty in real-world applications.
* Thanks to MPC, our approach offers greater versatility as it can be adapted to a wide range of locomotion gaits and trajectories. Moreover, our method can handle terrain uncertainty, allowing the robot to navigate rough terrains, such as grass and gravel, as well as high-sloped terrain.
* By integrating the adaptive control into MPC, it is possible for quadruped robots to carry an unknown heavy load (up to 50% of the robot's weight) across challenging terrains, with the capability of executing dynamic gaits such as fast trotting and bounding. This is a significant improvement compared to our previous work, which only allowed the robot to perform quasi-static walking.
* The combination of using MPC for both the reference model and the real model in the adaptive controller makes the control system computationally expensive, leading to potential delays in computation. To ensure real-time performance, we have developed an update frequency scheme for the control system, which allows for the optimized allocation of processing resources to each control component.
* Our proposed approach enables the control system to adapt to terrains with unknown impact models, such as soft terrain. Traversing soft terrain is a challenging task for quadruped robots. The A1 robot can walk on double-foam terrain in different directions using our method. In comparison, the robot cannot maintain its balance using the baseline controller, resulting in a collapse.
The remainder of the paper is organized as follows. Sec. II presents the baseline control architecture for quadruped robots and provides some knowledge on force-based controllers. In Sec. III, we will briefly present an overview of our control approach. Then, our proposed adaptive force-based controller using balance controller and MPC will be elaborated in Sec. IV and Sec. V, respectively. Furthermore, the numerical and experimental validation are shown in Sec. VII. Finally, Sec. VIII provides concluding remarks.
## II Preliminaries
In this section, we present the background on the control architecture of quadruped robots and describe each control component. According to [47], the robot's control system consists of several modules, including a high-level controller, low-level controller, state estimation, and gait scheduler as presented in Fig. 2.
A reference trajectory can be generated for high-level control from user input and state estimation. The gait scheduler defines the gait timing and sequence to switch between each leg's swing and stance phases. The high-level part controls the position of the swing legs and optimal ground reaction force for stance legs based on the user commands and gait timing. As the baseline for the stance leg controller, we will use two common approaches: 1) quadratic program (QP) based balancing controller [2] and 2) model predictive control (MPC) [1]. The low-level leg control converts the command generated by high-level control into joint torques for each motor. These modules of the control architecture will be described briefly in the following subsections. More details can be found in [1, 2, 47].
### _Gait Scheduler_
The A1's gait is defined by a finite state machine using a leg-independent phase variable to schedule contact and swing phases for each leg [47]. The gait scheduler utilizes independent boolean variables to define the scheduled contact states \(\mathbf{s}_{\phi}\in\{1=\text{contact},0=\text{swing}\}\) and switch each leg between the swing and stance phases. Based on the contact schedule, the controller will execute either position control during swing or force control during stance for each leg.
In our previous work [46], we focus on the application of load-carrying tasks, where the load is unknown to the robot or the control system. Having more legs on the ground during walking could also mean that the robot could produce a larger total ground reaction force to support the heavy load. Therefore, we used a quasi-static walking gait to maximize the number of legs on the ground during walking (i.e., 3 stance legs and 1 swing leg throughout the gait). However, in this paper, our framework is not limited by any specific gait. Similar to the baseline MPC control approach [1], the approach can work for different gaits by only changing the gait definition in the gait scheduler.
### _Desired Trajectory_
The desired trajectory is generated based on the robot's velocity command. The robot operator commands the \(xy\)-velocity and yaw rate; the \(xy\)-position and yaw are then determined by integrating the corresponding velocities. The \(z\) position is held at a constant value of \(0.3~m\), and the remaining states (roll, roll rate, pitch, pitch rate, and \(z\)-velocity) are always zero.
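As an illustration of this integration step, the sketch below updates the desired states from the operator's commands at every control tick. It assumes, purely for illustration, that the commanded velocity is expressed in the body frame and is rotated by the current desired yaw; the data layout is not taken from any specific implementation.

```python
import numpy as np

def update_desired_state(des, cmd_vel_xy, cmd_yaw_rate, dt, z0=0.3):
    """Integrate commanded velocities into the desired trajectory.

    des : dict with 'pos' (3-vector) and 'yaw' (scalar), updated in place.
    """
    yaw = des["yaw"]
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    vel_world = rot @ np.asarray(cmd_vel_xy)
    des["pos"][:2] += vel_world * dt      # integrate xy position
    des["pos"][2] = z0                    # constant body height (0.3 m)
    des["yaw"] += cmd_yaw_rate * dt       # integrate yaw
    # roll, pitch, their rates, and z-velocity remain zero
    return des
```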
### _Single Rigid Body (SRB) Model of Robot_
Due to the complexity of the legged robot, a simplified rigid-body model has been used to represent the system dynamics. This model enables us to calculate the ground reaction forces (GRFs) in real-time. A few assumptions have been made to achieve simplified robot dynamics [1]:
**Assumption 1**: _The robot has low inertia legs, so their effect is negligible._
Fig. 2: **Baseline Control Structure.** Block diagram of a control architecture for a quadruped robot. For the stance leg control, we use two common baseline control systems: QP-based balancing controller and MPC.

**Assumption 2**: _For small values of roll (\(\phi\)) and pitch (\(\theta\)), the rotation matrix \(\mathbf{R}\), which transforms from the body to world coordinates, can be approximated as the rotation matrix corresponding to the yaw angle (\(\psi\)):_
\[\mathbf{R}\cong\mathbf{R}_{z}(\psi)=\left[\begin{array}{ccc}\cos(\psi)&-\sin(\psi)&0\\ \sin(\psi)&\cos(\psi)&0\\ 0&0&1\end{array}\right] \tag{1}\]
Therefore, by defining the robot's orientation as a vector of Z-Y-X Euler angles \(\mathbf{\Theta}=[\phi,\theta,\psi]^{T}\), the rate of change of the robot's orientation can be approximated as [1]:
\[\dot{\mathbf{\Theta}}\cong\mathbf{R}_{z}(\psi)\mathbf{\omega}_{b} \tag{2}\]
where \(\mathbf{\omega}_{b}\) is the robot's angular velocity in the world frame.
**Assumption 3**: _For small angular velocity, the following approximation can be made:_
\[\frac{d}{dt}(\mathbf{I}_{G}\mathbf{\omega}_{b})=\mathbf{I}_{G}\dot{\mathbf{\omega}}_{b}+\mathbf{ \omega}_{b}\times(\mathbf{I}_{G}\mathbf{\omega}_{b})\approx\mathbf{I}_{G}\dot{\mathbf{\omega}} _{b} \tag{3}\]
_where \(\mathbf{I}_{G}\in\mathbb{R}^{3\times 3}\) is the moment of inertia in the world frame._
Based on the above assumptions, the state representation of the system is as follows [1]:
\[\left[\begin{array}{c}\dot{\mathbf{p}}_{c}\\ \dot{\mathbf{\Theta}}\\ \dot{\mathbf{p}}_{c}\\ \dot{\mathbf{\omega}}_{b}\end{array}\right] =\underbrace{\left[\begin{array}{cccc}\mathbf{0}_{3}&\mathbf{0}_{3}& \mathbf{1}_{3}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{R}_{z}(\psi)\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}\end{array}\right]}_{\mathbf{D} \in\mathbb{R}^{12\times 12}}\underbrace{\left[\begin{array}{c}\mathbf{p}_{c}\\ \mathbf{\Theta}\\ \mathbf{\tilde{p}}_{c}\\ \mathbf{\omega}_{b}\end{array}\right]}_{\mathbf{X}\in\mathbb{R}^{12}}+ \tag{4}\] \[\underbrace{\left[\begin{array}{c}\mathbf{0}_{6\times 12}\\ \mathbf{M}^{-1}\mathbf{A}\end{array}\right]}_{\mathbf{H}\in\mathbb{R}^{12\times 12}}\mathbf{F}+ \left[\begin{array}{c}\mathbf{0}_{6\times 1}\\ \mathbf{G}\end{array}\right]\]
with
\[\mathbf{M} =\left[\begin{array}{cc}m\mathbf{1_{3}}&\mathbf{0}_{3}\\ \mathbf{0}_{3}&\mathbf{I}_{G}\end{array}\right]\in\mathbb{R}^{6\times 6} \tag{5}\] \[\mathbf{A} =\left[\begin{array}{ccc}\mathbf{1_{3}}&\cdots&\mathbf{1_{3}}\\ \mathbf{[}\mathbf{p}_{1}-\mathbf{p}_{c}]\times&\cdots&\mathbf{[}\mathbf{p}_{4}-\mathbf{p}_{c}]\times \end{array}\right]\in\mathbb{R}^{6\times 12}\] \[\mathbf{G} =\left[\begin{array}{c}\mathbf{g}\\ \mathbf{0}_{3\times 1}\end{array}\right]\in\mathbb{R}^{6}\]
where \(m\) is the robot's mass, \(\mathbf{g}\in\mathbb{R}^{3}\) is the gravity vector, \(\mathbf{p}_{c}\in\mathbb{R}^{3}\) is the position of the center of mass (COM), \(\mathbf{p}_{i}\in\mathbb{R}^{3}\) (\(i\in\{1,2,3,4\}\)) are the positions of the feet, \(\ddot{\mathbf{p}}_{c}\in\mathbb{R}^{3}\) is body's linear acceleration, \(\dot{\mathbf{\omega}}_{b}\in\mathbb{R}^{3}\) is angular acceleration, and \(\mathbf{F}=[\mathbf{F}_{1}^{T},\mathbf{F}_{2}^{T},\mathbf{F}_{3}^{T},\mathbf{F}_{4}^{T}]^{T}\in \mathbb{R}^{12}\) are the ground reaction forces acting on each of the robot's four feet. The term \([\mathbf{p}_{i}-\mathbf{p}_{c}]\times\) is the skew-symmetric matrix representing the cross product \((\mathbf{p}_{i}-\mathbf{p}_{c})\times\mathbf{F}_{i}\). Note that \(\mathbf{p}_{i}\) and \(\mathbf{F}_{i}\) are presented in the world frame. Therefore, the state representation of the system can be rewritten in the compact form:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+\mathbf{H}\mathbf{F}+\left[\begin{array}{c}\mathbf{0}_{6 \times 1}\\ \mathbf{G}\end{array}\right] \tag{6}\]
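For reference, a minimal Python sketch that assembles the matrices of Eqs. (4)–(6) from the current yaw, foot positions, and inertial parameters is shown below; the function and variable names are illustrative assumptions and not taken from any specific implementation.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u = v x u."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def srb_matrices(mass, inertia_world, p_c, foot_pos, yaw):
    """Continuous-time matrices D, H (and A, M) of the SRB model, Eq. (6)."""
    Rz = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                   [np.sin(yaw),  np.cos(yaw), 0],
                   [0, 0, 1]])
    D = np.zeros((12, 12))
    D[0:3, 6:9] = np.eye(3)                 # d(p_c)/dt = linear velocity
    D[3:6, 9:12] = Rz                       # d(Theta)/dt ~ Rz(psi) * omega_b
    A = np.zeros((6, 12))
    for i, p_i in enumerate(foot_pos):      # map GRFs to net force and moment
        A[0:3, 3*i:3*i+3] = np.eye(3)
        A[3:6, 3*i:3*i+3] = skew(np.asarray(p_i) - np.asarray(p_c))
    M = np.block([[mass * np.eye(3), np.zeros((3, 3))],
                  [np.zeros((3, 3)), inertia_world]])
    H = np.vstack([np.zeros((6, 12)), np.linalg.solve(M, A)])
    return D, H, A, M
```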
### _Balance Controller_
One of the baseline control approaches for calculating GRFs for quadruped robots is the balance controller presented in [2], based on a quadratic program (QP) solver. Based on the assumptions presented in Sec. II-C, the approximated dynamic model between the body acceleration and the GRFs is as follows:
\[\underbrace{\left[\begin{array}{ccc}\mathbf{1_{3}}&\cdots&\mathbf{1_{3}}\\ \left[\mathbf{p}_{1}-\mathbf{p}_{c}\right]\times&\cdots&\left[\mathbf{p}_{4}-\mathbf{p}_{c}\right]\times\end{array}\right]}_{\mathbf{A}\in\mathbb{R}^{6\times 12}}\mathbf{F}=\underbrace{\left[\begin{array}{c}m(\ddot{\mathbf{p}}_{c}+\mathbf{g})\\ \mathbf{I}_{G}\dot{\mathbf{\omega}}_{b}\end{array}\right]}_{\mathbf{b}\in\mathbb{R}^{6}} \tag{7}\]
and the vector \(\mathbf{b}\) in (7) can be rewritten as:
\[\mathbf{b}=\mathbf{M}(\left[\begin{array}{c}\ddot{\mathbf{p}}_{c}\\ \dot{\mathbf{\omega}}_{b}\end{array}\right]+\mathbf{G}). \tag{8}\]
Since the model (7) is linear, the controller can naturally be formulated as the following QP problem [48], which can be solved in real-time at \(1\)\(kHz\):
\[\begin{split}\mathbf{F}^{*}=\operatorname*{argmin}_{\mathbf{F}\in\mathbb{R}^{12}}&\left(\mathbf{A}\mathbf{F}-\mathbf{b}_{d}\right)^{T}\mathbf{S}(\mathbf{A}\mathbf{F}-\mathbf{b}_{d})+\gamma_{1}\|\mathbf{F}\|^{2}+\gamma_{2}\|\mathbf{F}-\mathbf{F}_{prev}^{*}\|^{2}\\ \text{s.t.}\quad&\underline{\mathbf{d}}\leq\mathbf{C}\mathbf{F}\leq\bar{\mathbf{d}}\\ &\mathbf{F}_{swing}^{z}=0\end{split} \tag{9}\]
where \(\mathbf{b}_{d}\) is the desired dynamics. The idea of designing \(\mathbf{b}_{d}\) will be elaborated in Sec. IV-A. The cost function in (9) includes terms that consider three goals, including (1) driving the COM position and orientation to the desired trajectories; (2) minimizing the force commands; and (3) minimizing the change of the current solution \(\mathbf{F}^{*}\) with respect to the solution from the previous time-step, \(\mathbf{F}_{prev}^{*}\). The priority of each goal in the cost function is defined by the weight parameters \(\mathbf{S}\in\mathbb{R}^{6\times 6}\), \(\gamma_{1}\), \(\gamma_{2}\) respectively.
The constraints in the QP formulation enforce friction constraints, input saturation, and contact constraints. The constraint \(\underline{\mathbf{d}}\leq\mathbf{C}\mathbf{F}\leq\bar{\mathbf{d}}\) ensures that the optimized forces lie inside the friction pyramid and the normal forces stay within a feasible range. More details can be found in [2]. Besides the friction constraint, we will enforce the force constraints for the swing legs, \(\mathbf{F}_{swing}=\mathbf{0}\). The swing legs are then kept in the posing position until they switch to the stance phase. More details on swing leg control are provided in Sec. II-F.
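The sketch below sets up the QP of Eq. (9) with CVXPY for readability only; a real-time implementation would use a dedicated embedded QP solver to reach the 1 kHz rate, and the function signature, default gains, and constraint data are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

def balance_qp(A, b_d, F_prev, C, d_lo, d_hi, swing, S, gamma1=1e-3, gamma2=1e-3):
    """Solve Eq. (9) for the stacked ground reaction forces F in R^12.

    swing : length-4 boolean list, True for legs currently in swing phase.
    """
    F = cp.Variable(12)
    cost = (cp.quad_form(A @ F - b_d, S)            # track desired dynamics
            + gamma1 * cp.sum_squares(F)            # minimize force magnitude
            + gamma2 * cp.sum_squares(F - F_prev))  # smooth w.r.t. last solution
    constraints = [C @ F >= d_lo, C @ F <= d_hi]    # friction pyramid / limits
    for i in range(4):
        if swing[i]:
            constraints.append(F[3*i:3*i+3] == 0)   # no force on swing legs
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return F.value
```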
### _SRB-based Convex MPC_
The calculation of GRFs in quadruped robots is often approached through Model Predictive Control (MPC) [1]. This method determines the optimal sequence of inputs over a finite-time horizon, taking into account any constraints within the dynamic model. Every time MPC is executed in the control system, only the first computed control input from the MPC cycle is applied. The inputs determined over the finite time horizon are only used for the optimization problem and are not directly applied in the control system.
To have the dynamic equation in a convenient state-space form, gravity should be added to the state. So, the system can be represented as:
\[\dot{\mathbf{X}}^{c}=\mathbf{D}^{c}\mathbf{X}^{c}+\mathbf{H}^{c}\mathbf{F} \tag{10}\]
where
\[\mathbf{X}^{c}=\left[\begin{array}{c}\mathbf{p}_{c}\\ \mathbf{\Theta}\\ \dot{\mathbf{p}}_{c}\\ \mathbf{\omega}_{b}\\ ||\mathbf{g}||\end{array}\right]\in\mathbb{R}^{13} \tag{11}\] \[\mathbf{D}^{c}=\left[\begin{array}{ccccc}\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{1}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{R}_{z}(\psi)&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\frac{\mathbf{g}}{||\mathbf{g}||}\\ \mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3}&\mathbf{0}_{3\times 1}\\ \mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&\mathbf{0}_{1\times 3}&0\end{array}\right]\in\mathbb{R}^{13\times 13}\] \[\mathbf{H}^{c}=\left[\begin{array}{c}\mathbf{0}_{6\times 12}\\ \mathbf{M}^{-1}\mathbf{A}\\ \mathbf{0}_{1\times 12}\end{array}\right]\in\mathbb{R}^{13\times 12}\]
We consider a linear MPC problem with horizon length \(k\) as follows:
\[\min_{\mathbf{F}_{i}} \sum_{i=0}^{k-1}\mathbf{e}_{i+1}{}^{T}\mathbf{Q}_{i}\mathbf{e}_{i+1}+\mathbf{F}_ {i}{}^{T}\mathbf{R}_{i}\mathbf{F}_{i}\] (12) s.t. \[\mathbf{X}^{c}_{i+1}=\mathbf{D}_{t,i}\mathbf{X}^{c}_{i}+\mathbf{H}_{t,i}\mathbf{F}_{i}\] \[\underline{\mathbf{d}}\leq\mathbf{C}\mathbf{F}_{i}\leq\bar{\mathbf{d}}\]
where \(\mathbf{F}_{i}\) are the computed ground reaction forces at time step \(i\), \(\mathbf{Q}_{i}\) and \(\mathbf{R}_{i}\) are diagonal positive semi-definite matrices, and \(\mathbf{D}_{t,i}\) and \(\mathbf{H}_{t,i}\) are the discrete-time system dynamics matrices. The term \(\mathbf{e}_{i+1}\) is the system state error at time step \(i\), defined as \(\mathbf{e}=[\mathbf{e}_{p},\ \dot{\mathbf{e}}_{p}]^{T}\in\mathbb{R}^{12}\), with
\[\mathbf{e}_{p}=\left[\begin{array}{c}\mathbf{p}_{c}-\mathbf{p}_{c,d}\\ \log(\mathbf{R}_{d}\mathbf{R}^{T})\end{array}\right]\in\mathbb{R}^{6},\quad\dot{\mathbf{ e}}_{p}=\left[\begin{array}{c}\dot{\mathbf{p}}_{c}-\dot{\mathbf{p}}_{c,d}\\ \mathbf{\omega}_{b}-\mathbf{\omega}_{b,d}\end{array}\right]\in\mathbb{R}^{6}, \tag{13}\]
where \(\mathbf{p}_{c,d}\in\mathbb{R}^{3}\) is the desired position of the COM, \(\dot{\mathbf{p}}_{c,d}\in\mathbb{R}^{3}\) is the desired body linear velocity, and \(\mathbf{\omega}_{b,d}\in\mathbb{R}^{3}\) is the desired body angular velocity. The desired and actual body orientations are described using rotation matrices \(\mathbf{R}_{d}\in\mathbb{R}^{3\times 3}\) and \(\mathbf{R}\in\mathbb{R}^{3\times 3}\), respectively. The orientation error is obtained using the exponential map representation of rotations [49, 50], where \(\log(\cdot):\mathbb{R}^{3\times 3}\rightarrow\mathbb{R}^{3}\) is a mapping from a rotation matrix to the associated rotation vector [2]. The constraint \(\underline{\mathbf{d}}\leq\mathbf{C}\mathbf{F}_{i}\leq\bar{\mathbf{d}}\) is equivalent to the constraint in equation (9) at time step \(i\).
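A small Python sketch of the orientation error used in Eq. (13), based on the standard log map of a rotation matrix, is shown below; it is a generic implementation for illustration and not the one used in [2].

```python
import numpy as np

def rotation_log(R):
    """Map a rotation matrix to its rotation-vector (axis * angle) form."""
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        return np.zeros(3)
    axis = np.array([R[2, 1] - R[1, 2],
                     R[0, 2] - R[2, 0],
                     R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
    return angle * axis

def state_error(p_c, R, v, w, p_d, R_d, v_d, w_d):
    """12-dimensional tracking error e = [e_p, e_p_dot] of Eq. (13)."""
    e_p = np.hstack([p_c - p_d, rotation_log(R_d @ R.T)])
    e_dp = np.hstack([v - v_d, w - w_d])
    return np.hstack([e_p, e_dp])
```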
### _Swing Leg Control_
For the swing legs, the final footstep location for each leg is calculated from the corresponding hip location using a linear combination of the Raibert heuristic [51] and a feedback term from the capture point formulation [47, 52]. The final footstep locations (\(\mathbf{p}_{f,i}\)) are projected onto an assumed ground plane and are calculated by:
\[\mathbf{p}_{f,i}=\mathbf{p}_{h,i}+\frac{T_{e_{o}}}{2}\dot{\mathbf{p}}_{c,d}+\sqrt{\frac{ \mathbf{z}_{0}}{\|\mathbf{g}\|}}(\dot{\mathbf{p}}_{c}-\dot{\mathbf{p}}_{c,d}) \tag{14}\]
where \(T_{e_{o}}\) is the scheduled stance time, \(\mathbf{z}_{0}\) is the height of locomotion, and \(\mathbf{p}_{h,i}\in\mathbb{R}^{3}\) is the position of the corresponding hip \(i\). A Bézier curve defines the desired swing trajectory (including the desired position \(\mathbf{p}_{d,i}\) and velocity \(\mathbf{v}_{d,i}\)) for each swing leg, which starts from the initial lift-off position \(\mathbf{p}_{0,i}\) and ends at the final touch-down location \(\mathbf{p}_{f,i}\).
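As an illustration, the sketch below evaluates the touch-down location of Eq. (14) and a cubic Bézier swing trajectory; the particular choice of Bézier control points (two intermediate points raised by the desired foot clearance) is an assumption made for the example, not the parameterization used in the paper.

```python
import numpy as np

def footstep_location(p_hip, v_com, v_des, t_stance, z0, g=9.81):
    """Touch-down position of Eq. (14): Raibert heuristic + capture point."""
    return p_hip + 0.5 * t_stance * v_des + np.sqrt(z0 / g) * (v_com - v_des)

def bezier_swing(p0, pf, clearance, s):
    """Point on a cubic Bezier swing trajectory at phase s in [0, 1]."""
    lift = np.array([0.0, 0.0, clearance])
    c1, c2 = p0 + lift, pf + lift                   # raised control points
    return ((1 - s)**3 * p0 + 3 * (1 - s)**2 * s * c1
            + 3 * (1 - s) * s**2 * c2 + s**3 * pf)
```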
### _Low-level Control_
The low-level leg control can generate joint torque commands from the high-level controller. For low-level force control, the controller transforms the force vector to the hip frame by rotation matrix \(\mathbf{R}\). Then, joint torques are calculated as follows:
\[\mathbf{\tau}_{stance,i}=-\mathbf{J}(\mathbf{q}_{i})^{T}\mathbf{R}^{T}\mathbf{F}_{i} \tag{15}\]
where \(\mathbf{J}(\mathbf{q}_{i})\in\mathbb{R}^{3\times 3}\) is the leg Jacobian matrix and \(\mathbf{q}_{i}\) denotes the joint angles of the \(i\)-th leg.
To track the desired swing trajectory for each foot, a PD controller with a feedforward term is used to compute joint torques [47]:
\[\mathbf{\tau}_{swing,i}=\mathbf{J}(\mathbf{q}_{i})^{T}[\mathbf{K}_{p,p}(\mathbf{p}_{d,i}-\mathbf{p}_{i })+\mathbf{K}_{d,p}(\mathbf{v}_{d,i}-\mathbf{v}_{i})] \tag{16}\]
where \(\mathbf{p}_{d,i}\) and \(\mathbf{v}_{d,i}\) are desired foot position and velocity, respectively, \(\mathbf{p}_{i}\) and \(\mathbf{v}_{i}\) are actual foot position and velocity in the robot's frame, \(\mathbf{K}_{p,p}\in\mathbb{R}^{3\times 3}\) and \(\mathbf{K}_{d,p}\in\mathbb{R}^{3\times 3}\) are the diagonal matrices of the proportional and derivative gains.
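Both low-level mappings reduce to a Jacobian-transpose product; a minimal per-leg sketch (with illustrative names) is:

```python
import numpy as np

def stance_torques(J, R, F):
    """Stance-leg joint torques, Eq. (15): tau = -J(q)^T R^T F."""
    return -J.T @ R.T @ F

def swing_torques(J, p_des, p, v_des, v, Kp, Kd):
    """Swing-leg joint torques, Eq. (16): Cartesian PD mapped through J^T."""
    return J.T @ (Kp @ (p_des - p) + Kd @ (v_des - v))
```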
## III Overview of the Proposed Approach
This section presents an overview of our novel control architecture that incorporates adaptive control into the force control framework. While our approach is not limited to any specific adaptive control technique, we choose to use \(L_{1}\) adaptive control [41, 37] thanks to its advantages in guaranteeing fast adaptation and smooth control signals. Note that our proposed control system is designed for the stance leg control part of the control architecture of the quadruped robot (see Fig. 2).
Our prior work [41] introduced an adaptive control based on Hybrid Zero Dynamics (HZD) [53] for bipedal robots. HZD is a common control approach for bipedal robots since it can handle hybrid and underactuated dynamics associated with this kind of robot. In this paper, however, our approach leverages the combination of the adaptive control and force control system, which calculates ground reaction forces (GRFs) to achieve highly dynamic locomotion for quadrupeds [47]. The use of force control in legged robot systems has several key benefits, including increased robustness in the presence of challenging terrains [2] and the ability to accommodate a wide range of dynamic movements [1], such as various types of locomotion gaits. By combining force control with adaptive control strategies that compensate for model uncertainty, achieving an enhanced control system with these advantages is possible.
The overview of our proposed adaptive force-based control system is presented in Fig. 3a. By incorporating an \(L_{1}\) adaptive controller, we aim to design a combined controller. The force-based controller calculates the optimal GRFs for following the desired trajectory. The adaptive controller calculates the residual parameters for compensating the nonlinear model uncertainty \(\mathbf{\theta}\) in the system dynamics. Therefore, the goal is to adjust the adaptive control signal \(\mathbf{u}_{a}\) as well as the adaptation law to estimate the model uncertainty (\(\mathbf{\theta}\)) correctly and make the real model follow the reference model. For the reference model, we employ a similar linear model described in (6), and we will
update the reference model in real-time using an ODE solver. Moreover, the vector of uncertainties estimation \(\hat{\mathbf{\theta}}\) typically has high frequency due to fast estimation in the adaptation law. Thus, we employ a low-pass filter to obtain smooth control signals. We use the same swing leg control to appropriately synchronize the reference and real models. This means that we also use the real model's foot position for the reference model.
In the following sections, we will elaborate on integrating two different force-based controllers, as the baseline, into the adaptive control. First, in Sec. IV, we will describe the proposed method using a QP-based balancing controller, as presented in Fig. 3b. Then, in Sec. V, we will show how to incorporate MPC into the adaptive controller in detail, as illustrated in Fig. 3c.
## IV Adaptive force-based control Using the Balance Controller
In this section, we use the balance controller as the force-based controller, previously demonstrated in [46]. In Sec. V, we will present our control framework for integrating the \(L_{1}\) adaptive control into MPC.
### _Closed-loop Dynamics_
The \(L_{1}\) adaptive control is basically designed for trajectory tracking; however, the goal of the balance controller is to compute optimal GRFs. Hence, to integrate the balance controller presented in Sec. II-D into the \(L_{1}\) adaptive control, we should relate the linear model described in (7) to the closed-loop dynamics.
Let us consider the system state error (\(\mathbf{e}\)) according to equation (13) as the state variable. Therefore, the closed-loop error dynamics in state-space form can be represented as follow:
\[\dot{\mathbf{e}}=\mathbf{D}_{l}\mathbf{e}+\mathbf{B}\mathbf{u}, \tag{17}\]
where
\[\mathbf{D}_{l}=\left[\begin{array}{cc}\mathbf{0}_{6}&\mathbf{1}_{6}\\ \mathbf{0}_{6}&\mathbf{0}_{6}\end{array}\right]\in\mathbb{R}^{12\times 12},\quad\mathbf{ B}=\left[\begin{array}{cc}\mathbf{0}_{6}\\ \mathbf{1}_{6}\end{array}\right]\in\mathbb{R}^{12\times 6} \tag{18}\]
and \(\mathbf{u}\in\mathbb{R}^{6}\) is the control input function. By employing a PD control law, we have
\[\mathbf{u}=\left[-\mathbf{K}_{P}\quad-\mathbf{K}_{D}\right]\mathbf{e}, \tag{19}\]
where \(\mathbf{K}_{P}\in\mathbb{R}^{6\times 6}\) and \(\mathbf{K}_{D}\in\mathbb{R}^{6\times 6}\) are diagonal positive definite matrices. According to definition of matrices \(\mathbf{D}_{l}\) and \(\mathbf{B}\), from equation (17) it can be obtained that:
\[\ddot{\mathbf{e}}_{p}=\left[\begin{array}{c}\ddot{\mathbf{p}}_{c}-\ddot{\mathbf{p}}_{c,d}\\ \dot{\mathbf{\omega}}_{b}-\dot{\mathbf{\omega}}_{b,d}\end{array}\right]=\mathbf{u}, \tag{20}\]
where \(\ddot{\mathbf{e}}_{p}\) is the derivative of \(\dot{\mathbf{e}}_{p}\) presented in (13), \(\ddot{\mathbf{p}}_{c,d}\) and \(\dot{\mathbf{\omega}}_{b,d}\) are the desired COM linear acceleration and the desired angular acceleration, respectively. Since the desired trajectory is obtained from the velocity command, both desired accelerations \(\ddot{\mathbf{p}}_{c,d}\) and \(\dot{\mathbf{\omega}}_{b,d}\) are zero vectors. Then from (8) and (20), the desired dynamics can be given by:
\[\mathbf{b}_{d}=\mathbf{M}(\mathbf{u}+\mathbf{G}), \tag{21}\]
where \(\mathbf{M}\) and \(\mathbf{G}\) are defined in (5). By substituting (21)
Fig. 3: **Proposed adaptive force-based control system diagram.** a) The main structure of the proposed adaptive force-based control system, b) Block diagram of the proposed adaptive QP-based balancing controller, c) Block diagram of the proposed adaptive MPC. Each dashed line indicates the update frequency for control components
into the QP problem (9), we can obtain the optimal GRFs as the input for the low-level leg controller. The objective of the QP formulation in equation (9) is to find a solution that ensures the actual dynamics \(\mathbf{A}\mathbf{F}\) match the desired dynamics \(\mathbf{b}_{d}\). In general, the QP-based balance controller is capable of achieving the desired control input function outlined in equation (19), thus keeping the error \(\mathbf{e}\) within a certain range. However, if the desired dynamics vector \(\mathbf{b}_{d}\) violates any of the inequality constraints, such as force limits or friction constraints, the controller may yield an optimal solution \(\mathbf{F}^{*}\) that may not completely align with the desired dynamics. With this solution, the optimal dynamic \(\mathbf{b}_{d}^{*}\) and \(\mathbf{u}^{*}\) can be written as:
\[{\mathbf{b}_{d}}^{*}=\mathbf{A}\mathbf{F}^{*}, \tag{22}\]
\[\mathbf{u}^{*}=\mathbf{M}^{-1}\,{\mathbf{b}_{d}}^{*}-\mathbf{G}. \tag{23}\]
In the Appendix, we show that \(\mathbf{u}^{*}\) remains within a bounded range.
Note that the optimal ground reaction force \(\mathbf{F}^{*}\) serves as the control input for the robot and the variable \(\mathbf{u}^{*}\) acts as an input for the closed-loop dynamics. The closed-loop structure for the robot is depicted in Fig. 3b (the green dashed line).
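To make the pipeline above concrete, the following minimal sketch (in Python/NumPy, with hypothetical variable names) illustrates one step of the balance controller: the PD law (19), the desired dynamics (21), and an unconstrained least-squares surrogate of the QP (9). The weighting matrix \(\mathbf{S}\), the regularization terms, and the friction-cone and swing-leg constraints of the actual QP are omitted here, so this is an illustration rather than the implementation used in the experiments.

```python
import numpy as np

def balance_controller_step(e, M, G, A, Kp, Kd):
    """One (simplified) step of the QP-based balance controller.

    e  : 12-dim state error [pose error; velocity error] as in (13)
    M  : 6x6 inertia-related matrix and G : 6-dim gravity term from (5)
    A  : 6x12 map from stacked foot forces F to the body wrench
    Kp, Kd : 6x6 diagonal positive definite PD gains
    """
    e_p, e_v = e[:6], e[6:]

    # PD control law (19): u = -Kp e_p - Kd e_v
    u = -Kp @ e_p - Kd @ e_v

    # Desired dynamics (21): b_d = M (u + G)
    b_d = M @ (u + G)

    # Unconstrained surrogate of the QP (9): min_F ||A F - b_d||^2
    F_star, *_ = np.linalg.lstsq(A, b_d, rcond=None)
    return F_star
```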
### _Effects of Uncertainty on the Dynamics_
If we consider uncertainty in the dynamic equation (6) and assume that the matrices \(\mathbf{D}\) and \(\mathbf{H}\) are not accurate, then we need to express the dynamics based on the nominal matrices \(\mathbf{\bar{D}}\), \(\mathbf{\bar{H}}\). The model uncertainty mostly comes from inaccurate values of the mass, the inertia, and the foot position with respect to the center of mass. In addition, various terrains (e.g., rough terrain or soft terrain) may affect the robot differently, and these effects are unknown in practical situations. Therefore, terrain uncertainty should also be considered in the dynamic model. In this section, we derive our control equations based solely on the model uncertainty. In Sec. VI, we will elaborate on how our proposed control system can also handle terrain uncertainty.
There is another parameter involved in the dynamic equation, namely the yaw angle. This angle is obtained through the state estimation, and we assume that the state estimation has minimal uncertainty. According to the definition of the matrices \(\mathbf{D}\) and \(\mathbf{H}\) in (4), the inaccurate values of the dynamic parameters mentioned above are reflected in the \(\mathbf{H}\) matrix. Therefore, the dynamic equation in the presence of uncertainty can be represented as:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+(\mathbf{\bar{H}}+\tilde{\mathbf{H}})\mathbf{F}+\left[\begin{array}{c}\mathbf{0}_{6\times 1}\\ \mathbf{G}\end{array}\right] \tag{24}\]
where \(\tilde{\mathbf{H}}\) represents the uncertainty in the matrix \(\mathbf{H}\). It is worth noting that, according to the definition of \(\mathbf{H}\) in equation (11), the first six rows of \(\mathbf{H}\) consist of zeros. Thus, we can rephrase the dynamic equation (24) as follows:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+\mathbf{\bar{H}}\mathbf{F}+\mathbf{B}\mathbf{G}+\mathbf{B}\mathbf{\theta} \tag{25}\]
where \(\mathbf{\theta}\in\mathbb{R}^{6}\) is the vector of uncertainty for six corresponding equations and is defined as follows:
\[\mathbf{\theta}\triangleq\mathbf{B}^{T}\tilde{\mathbf{H}}\mathbf{F} \tag{26}\]
With reference to the state representation given by equation (25), the vector \(\mathbf{\theta}\) can be interpreted as a time-varying disturbance affecting the body and orientation accelerations.
The uncertainty vector \(\mathbf{\theta}\) depends on both time \(t\) and \(\mathbf{F}\). Since \(\mathbf{F}\) is obtained through the QP problem (9), it is a function of \(\mathbf{b}_{d}\). Furthermore, \(\mathbf{b}_{d}\) is a function of \(\mathbf{u}\) according to (21). Considering that \(\mathbf{u}\) is determined by the PD control (19), we can conclude that \(\mathbf{\theta}\) is a function of both the tracking error \(\mathbf{e}\) and time \(t\). As a result, for any given time \(t\), it is always possible to find \(\mathbf{\alpha}(t)\in\mathbb{R}^{6}\) and \(\mathbf{\beta}(t)\in\mathbb{R}^{6}\) satisfying [34]:
\[\mathbf{\theta}(\mathbf{e},t)=\mathbf{\alpha}(t)||\mathbf{e}||+\mathbf{\beta}(t). \tag{27}\]
### _Designing Adaptive Controller for Compensating the Uncertainty_
By incorporating an \(L_{1}\) adaptive controller, we want to design a combined controller \(\mathbf{u}=\mathbf{u}_{1}+\mathbf{u}_{2}\), where \(\mathbf{u}_{1}\) is the control input that follows the desired trajectory for the nominal model as presented in (19), and \(\mathbf{u}_{2}\) compensates for the nonlinear model uncertainty \(\mathbf{\theta}\). Therefore, the goal is to adjust the control signal \(\mathbf{u}_{2}\) so that the real model can follow the reference model. For the reference model, we employ a similar linear model to the one described in (7), in which the nominal matrix \(\mathbf{\bar{M}}\) is used instead of \(\mathbf{M}\). The diagram of our proposed force-based adaptive control based on the balance controller is presented in Fig. 3b.
Rewriting equation (25) in the error state-space form of (17) with the combined controller \(\mathbf{u}=\mathbf{u}_{1}+\mathbf{u}_{2}\) yields:
\[\dot{\mathbf{e}}=\mathbf{D}_{l}\mathbf{e}+\mathbf{B}\mathbf{u}_{1}+\mathbf{B}(\mathbf{u}_{2}+\mathbf{\theta}). \tag{28}\]
Note that the vectors of uncertainty \(\mathbf{\theta}\) in equations (25) and (28) are not the same, since the state vector of equation (25) is \(\mathbf{X}\) while the state vector of equation (28) is the system error \(\mathbf{e}\).
The state representation for the reference model can be expressed as follows:
\[\dot{\hat{\mathbf{e}}}=\mathbf{D}_{l}\hat{\mathbf{e}}+\mathbf{B}\hat{\mathbf{u}}_{1}+\mathbf{B}(\mathbf{u} _{2}+\hat{\mathbf{\theta}}), \tag{29}\]
where,
\[\hat{\mathbf{\theta}}=\hat{\mathbf{\alpha}}||\mathbf{e}||+\hat{\mathbf{\beta}}, \tag{30}\]
and \(\hat{\mathbf{u}}_{1}\) is defined as:
\[\hat{\mathbf{u}}_{1}=\left[-\mathbf{K}_{P}\quad-\mathbf{K}_{D}\right]\hat{\mathbf{e}}. \tag{31}\]
To compensate for the estimated uncertainty \(\hat{\mathbf{\theta}}\), we could simply choose \(\mathbf{u}_{2}=-\hat{\mathbf{\theta}}\) to obtain
\[\dot{\hat{\mathbf{e}}}=\mathbf{D}_{l}\hat{\mathbf{e}}+\mathbf{B}\hat{\mathbf{u}}_{1}. \tag{32}\]
However, \(\hat{\mathbf{\theta}}\) typically has high-frequency content due to the fast estimation in the adaptation law. Therefore, we employ a low-pass filter to obtain smooth control signals as follows:
\[\mathbf{u}_{2}=-C(s)\hat{\mathbf{\theta}}, \tag{33}\]
where \(C(s)\) is a second-order low-pass filter with unity DC gain:
\[C(s)=\frac{{{\omega_{n}}^{2}}}{s^{2}+2\zeta{\omega_{n}}s+{{\omega_{n}}^{2}}}. \tag{34}\]
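For implementation, the continuous-time filter \(C(s)\) has to be discretized at the controller rate. The following sketch (Python with SciPy) shows one way to filter the six-dimensional uncertainty estimate sample by sample; the bilinear discretization, the sample time, and the class interface are our own illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy.signal import cont2discrete, lfilter

class SecondOrderLowPass:
    """Per-channel discrete version of C(s) = wn^2 / (s^2 + 2*zeta*wn*s + wn^2)."""

    def __init__(self, wn, zeta, dt, n_channels=6):
        num = [wn ** 2]
        den = [1.0, 2.0 * zeta * wn, wn ** 2]
        numd, dend, _ = cont2discrete((num, den), dt, method='bilinear')
        self.b, self.a = np.squeeze(numd), np.squeeze(dend)
        # one internal filter state per channel of theta_hat
        self.z = np.zeros((n_channels, len(self.a) - 1))

    def step(self, theta_hat):
        """Filter one sample of theta_hat and return u_2 = -C(s) theta_hat, eq. (33)."""
        out = np.empty_like(theta_hat, dtype=float)
        for i, x in enumerate(theta_hat):
            y, self.z[i] = lfilter(self.b, self.a, [x], zi=self.z[i])
            out[i] = y[0]
        return -out
```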
According to (21), \(\mathbf{b}_{d}\) for the real model in the presence of uncertainty takes the following form:
\[\mathbf{b}_{d}=\mathbf{\bar{M}}(\mathbf{u}_{1}+\mathbf{u}_{2}+\mathbf{G}). \tag{35}\]
Correspondingly, \(\hat{\mathbf{b}}_{d}\) for the reference model is:
\[\hat{\mathbf{b}}_{d}=\mathbf{\bar{M}}(\hat{\mathbf{u}}_{1}+\mathbf{u}_{2}+\hat{\mathbf{\theta}}+ \mathbf{G}). \tag{36}\]
The QP solver outlined in equation (9) allows us to obtain the optimal GRFs for the real model. Similarly, the optimal GRFs \(\mathbf{\hat{F}}\) for the reference model can be obtained as follows:
\[\hat{\mathbf{F}}^{*}=\underset{\hat{\mathbf{F}}\in\mathbb{R}^{12}}{\mathrm{ argmin}} (\mathbf{\hat{A}}\hat{\mathbf{F}}-\hat{\mathbf{b}}_{d})^{T}\mathbf{S}(\mathbf{\hat{A}}\hat{\mathbf{F}}- \hat{\mathbf{b}}_{d}) \tag{37}\] \[+\gamma_{1}\|\hat{\mathbf{F}}\|^{2}+\gamma_{2}\|\hat{\mathbf{F}}-\hat{ \mathbf{F}}^{*}_{\text{prev}}\|^{2}\] s.t. \[\mathbf{\underline{d}}\leq\mathbf{C}\hat{\mathbf{F}}\leq\mathbf{\bar{d}}\] \[\hat{\mathbf{F}}^{*}_{swing}=0.\]
Defining the difference between the real model and the reference model as \(\tilde{\mathbf{e}}=\hat{\mathbf{e}}-\mathbf{e}\), we then have
\[\dot{\tilde{\mathbf{e}}}=\mathbf{D}_{l}\tilde{\mathbf{e}}+\mathbf{B}\tilde{\mathbf{u}}_{1}+\mathbf{B}(\tilde{\mathbf{\alpha}}||\mathbf{e}||+\tilde{\mathbf{\beta}}), \tag{38}\]
where
\[\tilde{\mathbf{u}}_{1}=\hat{\mathbf{u}}_{1}-\mathbf{u}_{1},\ \tilde{\mathbf{\alpha}}=\hat{\mathbf{ \alpha}}-\mathbf{\alpha},\ \tilde{\mathbf{\beta}}=\hat{\mathbf{\beta}}-\mathbf{\beta}. \tag{39}\]
As a result, we estimate \(\mathbf{\theta}\) indirectly through \(\mathbf{\alpha}\) and \(\mathbf{\beta}\), i.e., through the estimates \(\hat{\mathbf{\alpha}}\) and \(\hat{\mathbf{\beta}}\) computed by the following adaptation laws based on the projection operators [54]:
\[\dot{\hat{\mathbf{\alpha}}}=\mathbf{\Gamma}\text{Proj}(\hat{\mathbf{\alpha}},\mathbf{y}_{ \alpha}),\ \dot{\hat{\mathbf{\beta}}}=\mathbf{\Gamma}\text{Proj}(\hat{\mathbf{\beta}},\mathbf{y}_{ \beta}) \tag{40}\]
where \(\mathbf{\Gamma}\in\mathbb{R}^{6\times 6}\) is a symmetric positive definite matrix. The projection functions \(\mathbf{y}_{\alpha}\in\mathbb{R}^{6}\) and \(\mathbf{y}_{\beta}\in\mathbb{R}^{6}\) are:
\[\mathbf{y}_{\alpha}=-\mathbf{B}^{T}\mathbf{P}\tilde{\mathbf{e}}||\mathbf{e}||,\] \[\mathbf{y}_{\beta}=-\mathbf{B}^{T}\mathbf{P}\tilde{\mathbf{e}}, \tag{41}\]
where \(\mathbf{P}\in\mathbb{R}^{12\times 12}\) is a positive definite matrix that is defined according to the stability criteria using the Lyapunov equation. Moreover, the stability proof of the system is provided in the appendix.
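As a rough illustration of how the adaptation laws (40)–(41) and the estimate (30) could be stepped forward in discrete time, consider the following sketch (Python/NumPy). The clipping-based projection, the bound value, and the Euler integration step are simplifying assumptions made here for clarity; they stand in for the projection operator of [54] and are not the implementation used in the experiments.

```python
import numpy as np

def proj_clip(param, update, bound):
    """Simplified stand-in for the projection operator in (40):
    apply the update, then clip the estimate to a known bound."""
    return np.clip(param + update, -bound, bound)

def adaptation_step(alpha_hat, beta_hat, e, e_tilde, B, P, Gamma, dt, bound=50.0):
    """One Euler step of the adaptation laws (40) with the projection functions (41).

    e       : 12-dim tracking error of the real model
    e_tilde : 12-dim error between the reference and real models
    B, P, Gamma : matrices defined in (18), (60), and (40)
    """
    y_alpha = -B.T @ P @ e_tilde * np.linalg.norm(e)   # eq. (41)
    y_beta = -B.T @ P @ e_tilde

    alpha_hat = proj_clip(alpha_hat, Gamma @ y_alpha * dt, bound)
    beta_hat = proj_clip(beta_hat, Gamma @ y_beta * dt, bound)

    theta_hat = alpha_hat * np.linalg.norm(e) + beta_hat   # eq. (30)
    return alpha_hat, beta_hat, theta_hat
```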
## V Adaptive Force-based Control using MPC
Model predictive control (MPC) has been widely used across various fields, from finance to robotics. One of MPC's main advantages is its ability to handle complex systems with multiple inputs and outputs while considering hard control constraints [55]. MPC has also been applied to quadruped robots, providing stable locomotion [1]. Thanks to its dynamic prediction, MPC can achieve different dynamic locomotion gaits within the same control framework. However, MPC's limitations become evident when dealing with significant uncertainty in the dynamic model. For instance, in the case of a quadruped robot carrying an unknown heavy load, MPC fails to track the desired state trajectory, resulting in unstable behavior and deviation from the desired trajectory, especially with dynamic gaits like bounding. Furthermore, traversing soft terrain, where the impact model is unknown, can present a significant challenge. Our proposed approach tackles this challenge effectively, and we discuss how it handles the unknown impact model of the terrain in Sec. VI.
In Sec. IV, we presented an adaptive force-based control framework based on the balance controller. The balance controller relies on a quadratic program (QP) solver, which is simple to put into practice and well-suited for slow and safe motions, such as standing and quasi-static walking. Additionally, the balance controller is an instantaneous control technique, meaning it does not predict the robot's future movement. As a result, the balance controller proves to be ineffective in fast-paced, highly dynamic scenarios. On the other hand, MPC has shown great potential in handling agile motions, even when it comes to underactuated gaits such as bounding.
In this section, we present a novel control architecture that integrates adaptive control into the MPC framework. With this framework, we can achieve fast and robust locomotion in the presence of uncertainties. The framework can also be extended to accommodate various dynamic gaits, such as trotting and bounding, in legged robots. As discussed in the previous section, our approach is not restricted to a specific type of adaptive control, but we have chosen to utilize \(L_{1}\) adaptive control, which has demonstrated advantages over other adaptive control techniques. The first step in integrating \(L_{1}\) adaptive control and MPC is to understand the importance of a reference model and the challenges in synchronizing the real model and the reference model. We then present our proposed adaptive MPC, which combines conventional MPC [1] with adaptive control. Finally, we address the challenge of real-time computation while having two MPCs in our control system. We elaborate on how to adjust the frequency of each control component in an optimized manner to allocate enough computation resources for critical control parts and achieve real-time computation.
### _Reference Model_
Our method aims to design a combined controller based on MPC and \(L_{1}\) adaptive control such that the real model follows the reference model. In accordance with our previous discussion in Sec. IV-C, the combined controller incorporates a control signal \(\mathbf{u}_{2}\) to account for model uncertainty, as indicated in equation (28). In this section, the auxiliary control signal for this purpose is \(\mathbf{u}_{a}\in\mathbb{R}^{6}\); thus, the uncertain dynamic equation (25) can be rewritten as follows:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+\mathbf{\bar{H}}\mathbf{F}+\mathbf{B}\mathbf{G}+\mathbf{B}(\mathbf{u}_{a}+\mathbf{\theta}). \tag{42}\]
The reference model is similar to the quasi-linear model described in (6), in which the nominal matrix \(\bar{\mathbf{H}}\) is used instead of \(\mathbf{H}\). The proposed adaptive MPC diagram is presented in Fig. 3c.
We consider a reference model for \(L_{1}\) adaptive control that arises from MPC. The MPC method is computationally expensive, but it cannot be replaced by simpler control methods, such as the balance controller, when the robot performs dynamic gaits such as bounding. The reason is that in the bounding gait only the robot's two feet on either the front or the rear side touch the ground at each time step, making it challenging to accurately control the height and pitch angle. The MPC approach balances the error in the height and pitch angle and, based on the predicted dynamics of the system in the future, computes the optimal ground reaction forces. As seen in Fig. 4, the center of mass (COM) height oscillates around the desired value. Thus, the underactuated nature of certain gaits like bounding necessitates the use of MPC as the control system for the reference model.
When implementing MPC for a reference model, one challenge is ensuring that the reference model is synchronized with the real model. This is particularly important when the robot performs a gait with a periodic behavior, such as bounding (see Fig. 4). In order to correctly compare the real model with the reference model, both should have the same gait schedule. Additionally, the adaptive MPC proposed for legs in the stance phase is independent of the swing leg control. However, the foot position is crucial in calculating the moment of ground reaction force around the center of mass. Therefore, to maintain consistency between the real and reference models, it is important to ensure that the real robot's foot position is fed into the reference model as shown in Fig. 3c.
The reference model can be expressed as follows:
\[\dot{\hat{\mathbf{X}}}=\mathbf{D}\hat{\mathbf{X}}+\bar{\mathbf{H}}\hat{\mathbf{F}}+\mathbf{B}\mathbf{G}+\mathbf{B}(\mathbf{u}_{a}+\hat{\mathbf{\theta}}), \tag{43}\]
where
\[\hat{\mathbf{\theta}}=\hat{\mathbf{\alpha}}||\mathbf{e}||+\hat{\mathbf{\beta}}. \tag{44}\]
In this case, similarly to Sec. IV, we use a second-order low-pass filter, the same as in (34). Therefore, the auxiliary control signal is:
\[\mathbf{u}_{a}=-C(s)\hat{\mathbf{\theta}}. \tag{45}\]
By defining the difference between the real model and the reference model \(\tilde{\mathbf{X}}=\hat{\mathbf{X}}-\mathbf{X}\), we then have:
\[\dot{\tilde{\mathbf{X}}}=\mathbf{D}\tilde{\mathbf{X}}+\bar{\mathbf{H}}\tilde{\mathbf{F}}+\mathbf{B}(\tilde{\mathbf{\alpha}}||\mathbf{e}||+\tilde{\mathbf{\beta}}), \tag{46}\]
where
\[\tilde{\mathbf{F}}=\hat{\mathbf{F}}-\mathbf{F},\ \tilde{\mathbf{\alpha}}=\hat{\mathbf{\alpha}}- \mathbf{\alpha},\ \tilde{\mathbf{\beta}}=\hat{\mathbf{\beta}}-\mathbf{\beta}. \tag{47}\]
Since the desired trajectory for both the real model and the reference model is the same (\(\mathbf{X}_{d}=\hat{\mathbf{X}}_{d}\)), the difference between the real model and reference model can be defined as:
\[\tilde{\mathbf{X}}=(\hat{\mathbf{X}}-\hat{\mathbf{X}}_{d})-(\mathbf{X}-\mathbf{X}_{d})=\hat{\mathbf{ e}}-\mathbf{e}=\tilde{\mathbf{e}}. \tag{48}\]
Therefore, equation (46) is equal to the following equation:
\[\dot{\tilde{\mathbf{e}}}=\mathbf{D}\tilde{\mathbf{e}}+\bar{\mathbf{H}}\tilde{\mathbf{F}}+\mathbf{B}(\tilde{\mathbf{\alpha}}||\mathbf{e}||+\tilde{\mathbf{\beta}}). \tag{49}\]
The adaptation laws and projection functions for computing the values of \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) are the same as equations (40) and (41), respectively. Moreover, the stability of the control system can be proven using the same logic provided in the appendix.
### _Adaptive MPC_
After computing the auxiliary control signal \(\mathbf{u}_{a}\) using the adaptive controller presented in the previous subsection, we integrate \(\mathbf{u}_{a}\) with the conventional MPC for legged locomotion [1] and propose our adaptive MPC framework. We treat the auxiliary control signal \(\mathbf{u}_{a}\) as a residual vector in the system's equation to compensate for dynamic uncertainty. Therefore, \(\mathbf{u}_{a}\) should be incorporated into the state vector, and equation (42) can be written as follows:
\[\dot{\mathbf{\eta}}=\mathbf{D}^{e}\mathbf{\eta}+\bar{\mathbf{H}}^{e}\mathbf{F}+\mathbf{B}^{e}\mathbf{\theta} \tag{50}\]
with the following extended matrices:
\[\mathbf{\eta}=\left[\begin{array}{c}\mathbf{X}^{c}\\ \mathbf{u}_{a}\end{array}\right]\in\mathbb{R}^{19},\qquad\mathbf{D}^{e}=\left[\begin{array}{c|c}\mathbf{D}^{c}_{13\times 13}&\begin{array}{c}\mathbf{0}_{6\times 6}\\ \mathbf{1}_{6\times 6}\\ \mathbf{0}_{1\times 6}\end{array}\\ \hline\mathbf{0}_{6\times 13}&\mathbf{0}_{6\times 6}\end{array}\right]\in\mathbb{R}^{19\times 19},\]
\[\bar{\mathbf{H}}^{e}=\left[\begin{array}{c}\bar{\mathbf{H}}^{c}\\ \hline\mathbf{0}_{6\times 12}\end{array}\right]\in\mathbb{R}^{19\times 12},\qquad\mathbf{B}^{e}=\left[\begin{array}{c}\mathbf{B}\\ \hline\mathbf{0}_{7\times 6}\end{array}\right]\in\mathbb{R}^{19\times 6} \tag{51}\]
where \(\bar{\mathbf{H}}^{c}\) is the nominal value of \(\mathbf{H}^{c}\). The definitions of \(\mathbf{X}^{c}\), \(\mathbf{D}^{c}\), and \(\mathbf{H}^{c}\) can be found in (11). Although \(\mathbf{u}_{a}\) is considered a part of the state vector in (50), it is just a residual vector for compensating dynamic uncertainty. Therefore, \(\mathbf{u}_{a}\) is constant in the state-space equation and over the horizons. To this end, the components associated with \(\mathbf{u}_{a}\) in the matrices \(\mathbf{D}^{e}\) and \(\bar{\mathbf{H}}^{e}\) are assigned zero, which means \(\dot{\mathbf{u}}_{a}=\mathbf{0}\). Note that the value of \(\mathbf{u}_{a}\) will be updated according to the adaptive law, but it is held constant during the prediction horizons.
The state representation in (50) is also convenient for discretization methods such as zero-order hold [56] for MPC. Therefore, our adaptive MPC can be designed according to (12) and based on the following discrete-time dynamic:
\[\mathbf{\eta}_{i+1}=\mathbf{D}^{e}_{t,i}\,\mathbf{\eta}_{i}+\bar{\mathbf{H}}^{e}_{t,i}\mathbf{F}_{i} \tag{52}\]
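As an illustration of how the extended matrices in (51) can be assembled and how (50) can be discretized for use in (52), the following sketch (Python/NumPy with SciPy) may be helpful; the function and variable names are ours, and the matrix-exponential zero-order hold simply treats \(\mathbf{\theta}\) as an external disturbance.

```python
import numpy as np
from scipy.linalg import expm

def build_extended_model(Dc, Hc_bar, B, dt):
    """Assemble the extended matrices of (51) and discretize (50) with zero-order hold.

    Dc : 13x13 matrix D^c, Hc_bar : 13x12 nominal H^c, B : 12x6 input map from (18).
    """
    # how the constant residual u_a enters the derivative of X^c (acceleration rows)
    E = np.vstack([np.zeros((6, 6)), np.eye(6), np.zeros((1, 6))])
    De = np.block([[Dc, E],
                   [np.zeros((6, 13)), np.zeros((6, 6))]])   # 19x19
    He = np.vstack([Hc_bar, np.zeros((6, 12))])              # 19x12
    Be = np.vstack([B, np.zeros((7, 6))])                    # 19x6

    # Zero-order-hold discretization of eta_dot = De*eta + He*F over one MPC step
    M = np.zeros((19 + 12, 19 + 12))
    M[:19, :19], M[:19, 19:] = De, He
    Md = expm(M * dt)
    De_d, He_d = Md[:19, :19], Md[:19, 19:]                  # matrices used in (52)
    return De, He, Be, De_d, He_d
```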
Fig. 4: **Motion snapshot of the robot with bounding gaits.** The quadruped’s center of mass motion (yellow line) cannot be easily predicted with a simple controller. This illustrates the importance of using MPC for the reference model.
### _Real-time Computation_
The main challenge in executing our proposed adaptive MPC framework is ensuring that the required computation is fast enough to be performed in real-time for hardware experiments. If the controller is unable to perform updates at a high frequency, the robot could collapse during dynamic motion. The control system comprises two MPCs, each with 13 to 19 states predicted over ten horizons. To ensure the robot's balance and allocate sufficient computation resources to each control component, we have devised a scheme, as depicted in Fig. 3c, to update each control component in an optimized manner.
The robot's sensory data is updated in real-time at a frequency of 1 kHz. Thus, the reference model should be updated at the same frequency to compare the reference model states (\(\hat{\mathbf{X}}\)) and the real model states (\(\mathbf{X}\)) correctly. The yellow dashed line in Fig. 3c indicates the update frequency for the reference model. We use the _odeint_ package from the Boost C++ libraries [57] to solve the ODE problem associated with the dynamic equation of the reference model.
One of the critical components in our proposed framework is the adaptive MPC, which is responsible for computing the ground reaction forces for the robot, as shown in Fig. 3c. Through our experimentation, we have determined that for robust locomotion with dynamic gaits, the optimal update frequency for the adaptive MPC is 300 Hz. In contrast, the reference MPC, which plays a supporting role in the control system, is less sensitive and runs at a slower rate of 30 Hz. In addition, there is a two-millisecond delay between the runs of the adaptive MPC and the reference MPC to ensure that sufficient computational resources are allocated to each component. This means that the two MPC frameworks do not run simultaneously in our control system.
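The multi-rate schedule described above can be pictured with the following sketch (Python); the counter arithmetic and the `ctrl` interface with its method names are purely illustrative placeholders, not the actual ROS/C++ implementation.

```python
# Illustrative multi-rate schedule: reference-model ODE update at 1 kHz,
# adaptive MPC at ~300 Hz, reference MPC at ~30 Hz with a 2 ms offset so the
# two MPCs never run in the same control tick.
DT = 0.001                                     # 1 kHz base loop
ADAPTIVE_MPC_EVERY = round(1.0 / 300 / DT)     # roughly every 3 ticks
REFERENCE_MPC_EVERY = round(1.0 / 30 / DT)     # roughly every 33 ticks
REFERENCE_MPC_OFFSET = 2                       # 2 ms delay w.r.t. the adaptive MPC

def control_tick(k, ctrl):
    ctrl.update_reference_model(DT)            # every tick (1 kHz)
    if k % ADAPTIVE_MPC_EVERY == 0:
        ctrl.run_adaptive_mpc()                # ~300 Hz
    if (k - REFERENCE_MPC_OFFSET) % REFERENCE_MPC_EVERY == 0:
        ctrl.run_reference_mpc()               # ~30 Hz, offset by 2 ms
```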
## VI Adaptation to Unknown Impact Model
The dynamic formulation presented in Sec. IV and Sec. V considers the presence of model uncertainty in real-world situations. It is assumed that the terrain is hard enough to allow the robot to receive the desired forces as ground reaction forces on its feet. However, this assumption may not hold if the robot walks on soft or elastic terrain with an unknown impact model, which may not generate the desired force needed for stable locomotion. Some previous studies have included terrain knowledge and contact models in their balancing controllers to address the soft terrain challenge, mainly using a spring-damper model to characterize the soft terrain [58, 59]. Some control frameworks for adapting to soft terrain in real-time have also been developed using iterative learning [60] and whole-body control [61], without prior knowledge about the terrain. This section demonstrates that the methods proposed in Sec. IV and Sec. V can also handle unknown impact models from the terrain, allowing the robot to maintain stability while walking on soft terrain.
Assume that the force \(\mathbf{F}\) computed by the MPC in (25) cannot be achieved perfectly due to walking on soft terrain. Then, equation (25) can be rewritten as follows:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+\bar{\mathbf{H}}(\mathbf{F}_{a}+\tilde{\mathbf{F}}_{a})+\mathbf{B}\mathbf{G}+\mathbf{B}\mathbf{\theta} \tag{53}\]
where \(\mathbf{F}_{a}\) is the actual ground reaction force exerted on the robot and \(\tilde{\mathbf{F}}_{a}\) is the difference between the desired and the actual ground reaction forces. Given that \(\tilde{\mathbf{F}}_{a}\) depends on the tracking error \(\mathbf{e}\) and time, the uncertainty vector arising from the ground reaction force can be incorporated with \(\mathbf{\theta}\). Therefore, we can reformulate equation (53) as follows:
\[\dot{\mathbf{X}}=\mathbf{D}\mathbf{X}+\bar{\mathbf{H}}\mathbf{F}_{a}+\mathbf{B}\mathbf{G}+\mathbf{B}(\mathbf{\theta}+\mathbf{\theta}_{F}), \tag{54}\]
where the uncertainty vector \(\mathbf{\theta}_{F}\) is defined as follows:
\[\mathbf{\theta}_{F}\triangleq\mathbf{B}^{T}\bar{\mathbf{H}}\tilde{\mathbf{F}}_{a} \tag{55}\]
Equation (54) has the same form as equation (25), but it uses the actual ground reaction force instead of the desired one. Therefore, all formulations for implementing the adaptive controllers remain valid in situations with an unknown impact model.
## VII Results
In this section, we validate our control approach in simulation and hardware experiments on a Unitree A1 robot. All computation for the hardware experiments runs on a single PC (Intel i7-6500U, 2.5 GHz, 64-bit). For simulation, the control system is implemented in ROS Noetic with the Gazebo 11 simulator, which provides a high-fidelity simulation of the A1 robot. A video showcasing the results accompanies this paper1.
Footnote 1: https://youtu.be/Qunwyys/fTk1k
We set the control parameters for the MPC, the adaptation law, and the low-pass filter as presented in Table I. We use one set of parameters for all the experiments with different locomotion gaits, indicating that our approach is easily generalizable. The following subsections introduce different experimental results in terms of model and environment uncertainty (see Fig. 5). In each experiment, the robot starts by using the balance controller to stand up and then switches to the MPC framework for walking or running.
### _Comparative Analysis_
In order to evaluate the performance of our proposed adaptive MPC method, we conduct a comparative experiment with the conventional MPC method presented in [1]. The objective is to understand the advantages of integrating the adaptive controller into MPC for quadrupedal locomotion.
#### VII-A1 Walking with significant model uncertainty
The experiment involves the robot walking and rotating in different directions, using both adaptive and non-adaptive controllers while carrying an unknown load. The results of the experiment show that the adaptive controller provides robust locomotion, with excellent tracking error, even when carrying an unknown 5 kg load. On the other hand, the non-adaptive controller results in a considerable error in the COM height and eventually collapses under the weight of just a 3 kg load. The comparative results for the adaptive and non-adaptive controllers are shown in Fig. 6.
#### VII-A2 Walking on soft terrain
To evaluate the capability of our proposed control method in handling unknown impact models, we conducted an experiment where the robot was made to walk on double foam, which represents a soft terrain. The performance of both the adaptive and non-adaptive controllers was evaluated and compared. The results are depicted in Fig. 7, which shows the robot's roll angle. The figure clearly illustrates that the adaptive controller was able to maintain the robot's balance on the soft terrain, while the non-adaptive controller was unable to do so, leading to the collapse of the robot.
### _Running with Multiple Gaits_
To demonstrate the superiority of our proposed approach for dynamic gaits, we conducted experiments with the robot running while carrying an unknown load. These experiments were carried out for both the trotting and bounding gaits, with an unknown load of 5 kg and 3 kg, respectively. The results of these experiments are shown in Fig. 8. It can be seen from the figure that the tracking of the center of mass height during
Fig. 5: **Navigating different terrain using our proposed adaptive MPC while carrying an unknown heavy load.** a) gravel, b) grass, c) rough terrain, d) high-sloped terrain.
Fig. 6: **Comparing performance of adaptive and non-adaptive controllers.** a) Snapshots of the A1 robot with the non-adaptive controller while carrying an unknown 3 kg load and it collapses, b) snapshots of the A1 robot walking robustly with the adaptive controller while carrying an unknown 5 kg load, c) Comparative plots of the COM height for adaptive and non-adaptive controllers.
Fig. 7: **Comparing performance of adaptive and non-adaptive controllers on soft terrain.** The A1 robot tries to walk on double soft foam using a) non-adaptive and b) adaptive controllers. c) Shows the plot of the robot’s roll angle.
the bounding gait is less stable than during the trotting gait, owing to the inherently underactuated nature of the bounding gait.
### _Time-varying Load_
To demonstrate the effectiveness of our proposed adaptive force control in adapting to model uncertainty, we conducted simulations where the robot carries a time-varying load of up to 92% of its weight during walking. As shown in Fig. 9, our approach can enable the robot to adapt to time-varying uncertainty. In the simulation, the robot starts with an unknown 5 kg load. While increasing the robot's velocity, the robot is subjected to a varying external force in the z-direction that rises to 60 N, resulting in an additional unknown 11 kg load. These results indicate that our proposed approach effectively handles high levels of model uncertainty.
### _Terrain Uncertainty_
To demonstrate the capability of our proposed method to handle terrain uncertainty, we tested the robot navigating various terrains while carrying an unknown 5 kg load. To this end, we conducted walking experiments on multiple rough terrains as well as high-sloped terrain and obtained robust performance.
#### VII-D1 Rough terrain
We tested the robot navigating various rough terrains such as grass and gravel. The robot walks and rotates in multiple directions while carrying an unknown 5 kg load. Some snapshots of the robot walking on diverse rough terrain are presented in Fig. 5. Our approach is based on a force controller and retains the robustness features of the baseline framework, allowing the robot to handle the rough terrain effectively.
#### VII-D2 Sloped terrain
To enable the robot to climb sloped terrain without vision, we adjust its orientation so that its body is parallel to the walking surface. This is done by using the footstep locations to estimate the slope of the ground. For each \(i\)-th leg, we can measure the foot position \(\mathbf{p}_{i}=(p_{x,i},p_{y,i},p_{z,i})\) and build the vectors of feet x-positions (\(\mathbf{p}_{x}\)), y-positions (\(\mathbf{p}_{y}\)), and z-positions (\(\mathbf{p}_{z}\)). Thus, we can model the walking surface as a plane:
\[z(x,y)=a_{0}+a_{1}x+a_{2}y \tag{56}\]
and the coefficients (\(a_{0}\), \(a_{1}\), and \(a_{2}\)) are obtained through the solution of a least-squares problem using the \(\mathbf{p}_{x}\), \(\mathbf{p}_{y}\), and \(\mathbf{p}_{z}\) data (see [47] for more details).
Note that the desired roll and pitch angles for the robot will be modified on the slope according to the following:
\[\text{roll}=\arctan(a_{2}),\quad\text{pitch}=\arctan(a_{1}). \tag{57}\]
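The slope estimation in (56)–(57) amounts to a small least-squares fit over the measured foot positions. A minimal sketch (Python/NumPy, with an illustrative function name) is given below:

```python
import numpy as np

def estimate_slope(foot_positions):
    """Fit the walking surface z = a0 + a1*x + a2*y to the measured foot
    positions (eq. 56) and return the desired roll and pitch angles (eq. 57)."""
    p = np.asarray(foot_positions)           # shape (4, 3): one (x, y, z) row per leg
    A = np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, p[:, 2], rcond=None)
    a0, a1, a2 = coeffs
    return np.arctan(a2), np.arctan(a1)      # roll, pitch
```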
As a result, the reference model's desired pitch and roll angles must be adjusted to the non-zero values determined as described above. It's important to note that the reference model utilizes the actual foot position of the robot, so there is
Fig. 8: **Running experiment.** The A1 robot runs with the velocity of 1 m/s using our proposed method. a) trotting gait with an unknown 5 kg load, b) bounding gait with an unknown 3 kg load, c) Plots of COM height.
Fig. 9: **Simulation results for the robot carrying a time-varying load.** a) The robot starts with an unknown 5 kg load, then gradually, an unknown time-varying force will be exerted on the robot as shown in (b) and (c) while the robot’s velocity increases. d) Plot of COM height, e) Robot velocity tracking in the x-direction.
no need to make any changes to the reference model's footstep planning when the robot is attempting to climb a slope.
## VIII Conclusion
In conclusion, a novel control system has been presented that incorporates adaptive control into force control for legged robots walking under significant uncertainties. We have demonstrated the effectiveness of our proposed approach using numerical and experimental validations. The experiments show the successful implementation of the proposed adaptive force control on quadruped robots, allowing them to walk and run while carrying an unknown heavy load on their trunk. The results are remarkable, with the robot being able to carry a load of up to 5 kg (50% of its weight) while keeping the tracking error within a small range and maintaining stability while moving in all directions. The experiments demonstrate that the proposed adaptive force control system can not only adapt to model uncertainty but also leverage the benefits of force control in navigating rough and soft terrains. In contrast, the baseline non-adaptive controller fails to track the desired trajectory and causes the robot to collapse under uncertainty.
## Acknowledgments
The authors would like to thank Yiyu Chen at Dynamic Robotics and Control lab (DRCL) for his help in conducting the hardware experiments.
### _Linear Quadratic Lyapunov Theory_
According to Lyapunov theory [62], the PD control described in (19) will asymptotically stabilize the system if
\[\mathbf{A}_{m}=\begin{bmatrix}\mathbf{0}_{6}&\mathbf{1}_{6}\\ -\mathbf{K}_{P}&-\mathbf{K}_{D}\end{bmatrix}\in\mathbb{R}^{12\times 12} \tag{58}\]
is Hurwitz. This means that by choosing a control Lyapunov function candidate as follows:
\[V(\mathbf{e})=\mathbf{e}^{T}\mathbf{P}\mathbf{e}, \tag{59}\]
where \(\mathbf{P}\in\mathbb{R}^{12\times 12}\) is the solution of the Lyapunov equation
\[\mathbf{A}_{m}{}^{T}\mathbf{P}+\mathbf{P}\mathbf{A}_{m}=-\mathbf{Q}_{L}, \tag{60}\]
and \(\mathbf{Q}_{L}\in\mathbb{R}^{12\times 12}\) is any symmetric positive-definite matrix. We then have:
\[\dot{V}(\mathbf{e},\mathbf{u})+\lambda V(\mathbf{e}) =\mathbf{e}^{T}(\mathbf{D}_{l}{}^{T}\mathbf{P}+\mathbf{P}\mathbf{D}_{l})\mathbf{e}\] \[\quad+\lambda V(\mathbf{e})+2\mathbf{e}^{T}\mathbf{P}\mathbf{B}\mathbf{u}\ \leq 0, \tag{61}\]
where,
\[\lambda=\frac{\lambda_{min}(\mathbf{Q}_{L})}{\lambda_{max}(\mathbf{P})}>0. \tag{62}\]
As a result, the state variable \(\mathbf{e}\) and the control input \(\mathbf{u}\) always remain bounded:
\[\|\mathbf{e}\|\leq\delta_{\eta},\quad\|\mathbf{u}\|\leq\delta_{u}. \tag{63}\]
However, the control signal \(\mathbf{u}^{*}\) in (23), which we construct by solving the QP problem (9), is not always the same as \(\mathbf{u}\). Based on the friction constraints present in equation (9), the value of \(\mathbf{F}^{*}\) is always bounded. Besides, according to the definitions of \(\mathbf{A}\), \(\mathbf{M}\), and \(\mathbf{G}\), these matrices also have bounded values. Thus, it implies that:
\[\|\mathbf{u}^{*}\|\leq\delta_{u^{*}}. \tag{64}\]
Therefore, the vector of difference between \(\mathbf{u}\) and \(\mathbf{u}^{*}\) can be defined as:
\[\mathbf{\Delta}=\mathbf{u}^{*}-\mathbf{u} \tag{65}\]
which is also bounded according to (64) and (63):
\[\|\mathbf{\Delta}\|\leq\delta_{\Delta}. \tag{66}\]
By substituting \(\mathbf{u}^{*}\) in (61), we have:
\[\dot{V}(\mathbf{e},\mathbf{u}^{*})+\lambda V(\mathbf{e})\leq 2\mathbf{e}^{T}\mathbf{P}\mathbf{B}\mathbf{ \Delta}\leq\epsilon_{V}, \tag{67}\]
where
\[\epsilon_{V}=2\|\mathbf{P}\|\delta_{\eta}\delta_{\Delta}. \tag{68}\]
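For a given choice of the PD gains and \(\mathbf{Q}_{L}\), the matrix \(\mathbf{P}\) in (60) and the rate \(\lambda\) in (62) can be computed numerically. The following sketch (Python with SciPy; the function name and the example gain values are illustrative placeholders) shows one way to do this:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_matrix(Kp, Kd, Q_L):
    """Solve A_m^T P + P A_m = -Q_L (eq. 60) and compute lambda (eq. 62)."""
    A_m = np.block([[np.zeros((6, 6)), np.eye(6)],
                    [-Kp, -Kd]])                      # eq. (58)
    # solve_continuous_lyapunov(a, q) solves a X + X a^H = q,
    # so we pass a = A_m^T and q = -Q_L.
    P = solve_continuous_lyapunov(A_m.T, -Q_L)
    lam = np.min(np.linalg.eigvalsh(Q_L)) / np.max(np.linalg.eigvalsh(P))
    return P, lam

# example with diagonal gains (values are arbitrary placeholders)
P, lam = lyapunov_matrix(100 * np.eye(6), 10 * np.eye(6), np.eye(12))
```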
### _Stability Analysis_
_Theorem_: Consider the system dynamics with uncertainty described by (28), and a reference model described by (29). Assume the use of an \(L_{1}\) adaptive controller with the optimal closed-loop control signal given by (23), the adaptive control signal given by (33), and the adaptation laws given by (40). Then, under the aforementioned \(L_{1}\) adaptive controller, the tracking error between the real model and the reference model, denoted as \(\tilde{\mathbf{e}}\), as well as the errors between the real and estimated uncertainties, denoted as \(\tilde{\mathbf{\alpha}}\) and \(\tilde{\mathbf{\beta}}\), respectively, are bounded.
_Proof_: Let us consider the following control Lyapunov candidate function:
\[\tilde{V}=\tilde{\mathbf{e}}^{T}\mathbf{P}\tilde{\mathbf{e}}+\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\alpha}}+\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\beta}}. \tag{69}\]
Therefore, its time derivative will be
\[\dot{\tilde{V}}=\dot{\tilde{\mathbf{e}}}^{T}\mathbf{P}\tilde{\mathbf{e}}+\tilde{\mathbf{e}}^{T}\mathbf{P}\dot{\tilde{\mathbf{e}}}+\dot{\tilde{\mathbf{\alpha}}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\alpha}}+\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\dot{\tilde{\mathbf{\alpha}}}+\dot{\tilde{\mathbf{\beta}}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\beta}}+\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\dot{\tilde{\mathbf{\beta}}}. \tag{70}\]
From (40) and (74), it follows that
\[\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\dot{\tilde{\mathbf{\alpha}}} \leq\tilde{\mathbf{\alpha}}^{T}\mathbf{y}_{\alpha}-\tilde{\mathbf{\alpha}}^{T} \mathbf{\Gamma}^{-1}\dot{\mathbf{\alpha}},\] \[\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\dot{\tilde{\mathbf{\beta}}} \leq\tilde{\mathbf{\beta}}^{T}\mathbf{y}_{\beta}-\tilde{\mathbf{\beta}}^{T}\mathbf{ \Gamma}^{-1}\dot{\mathbf{\beta}}. \tag{75}\]
We now substitute (71), (72), and (75) into (70), which results in
\[\dot{\tilde{V}}\leq-\lambda\tilde{\mathbf{e}}^{T}\mathbf{P}\tilde{\mathbf{e}}+\epsilon_{\tilde{V}}+\tilde{\mathbf{\alpha}}^{T}(\mathbf{y}_{\alpha}+\mathbf{B}^{T}\mathbf{P}\tilde{\mathbf{e}}||\mathbf{e}||)-\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\dot{\mathbf{\alpha}}+(\mathbf{y}_{\alpha}^{T}+\tilde{\mathbf{e}}^{T}\mathbf{P}\mathbf{B}||\mathbf{e}||)\tilde{\mathbf{\alpha}}-\dot{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\alpha}}+\tilde{\mathbf{\beta}}^{T}(\mathbf{y}_{\beta}+\mathbf{B}^{T}\mathbf{P}\tilde{\mathbf{e}})-\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\dot{\mathbf{\beta}}+(\mathbf{y}_{\beta}^{T}+\tilde{\mathbf{e}}^{T}\mathbf{P}\mathbf{B})\tilde{\mathbf{\beta}}-\dot{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\beta}} \tag{76}\]
Then, by using the chosen projection functions (41), we conclude that:
\[\dot{\tilde{V}}+\lambda\tilde{V}\leq\epsilon_{\tilde{V}}+\lambda\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\alpha}}+\lambda\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\beta}}-\tilde{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\dot{\mathbf{\alpha}}-\dot{\mathbf{\alpha}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\alpha}}-\tilde{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\dot{\mathbf{\beta}}-\dot{\mathbf{\beta}}^{T}\mathbf{\Gamma}^{-1}\tilde{\mathbf{\beta}}. \tag{77}\]
We assume that the uncertainties \(\mathbf{\alpha}\), \(\mathbf{\beta}\), and their time derivatives are bounded. Furthermore, the projection operators (40) also keep \(\tilde{\mathbf{\alpha}}\) and \(\tilde{\mathbf{\beta}}\) bounded (see [34] for a detailed proof of these properties). We define these bounds as follows:
\[||\tilde{\mathbf{\alpha}}||\leq\tilde{\mathbf{\alpha}}_{b},\ \ ||\tilde{\mathbf{\beta}}||\leq\tilde{\mathbf{\beta}}_{b},\quad||\dot{\mathbf{\alpha}}||\leq\dot{\mathbf{\alpha}}_{b},\ \ ||\dot{\mathbf{\beta}}||\leq\dot{\mathbf{\beta}}_{b}. \tag{78}\]
Combining this with (77), we have,
\[\dot{\tilde{V}}+\lambda\tilde{V}\leq\lambda\delta_{\tilde{V}}, \tag{79}\]
where
\[\delta_{\tilde{V}}=2||\mathbf{\Gamma}||^{-1}(\tilde{\mathbf{\alpha}}_{b}^{2}+\tilde{ \mathbf{\beta}}_{b}^{2}+\frac{1}{\lambda}\tilde{\mathbf{\alpha}}_{b}\dot{\mathbf{\alpha} }_{b}+\frac{1}{\lambda}\tilde{\mathbf{\beta}}_{b}\dot{\mathbf{\beta}}_{b})+\frac{1}{ \lambda}\epsilon_{\tilde{V}}. \tag{80}\]
Thus, if \(\tilde{V}\geq\delta_{\tilde{V}}\) then \(\dot{\tilde{V}}\leq 0\). As a result, we always have \(\tilde{V}\leq\delta_{\tilde{V}}\). In other words, by choosing the adaptation gain \(\mathbf{\Gamma}\) sufficiently large and \(\mathbf{P}\) relatively small, we can limit the Control Lyapunov Function (69) in an arbitrarily small neighborhood \(\delta_{\tilde{V}}\) of the origin. According to (58) and (60), achieving a small value for \(\mathbf{P}\) depends on choosing a proper value for \(\mathbf{K}_{P}\), \(\mathbf{K}_{D}\), and \(\mathbf{Q}_{L}\). Therefore, the value of PD gains affects the stability of the whole system. Finally, the tracking errors between the dynamics model (28) and the reference model (29), \(\tilde{\mathbf{e}}\), and the error between the real and estimated uncertainty, \(\tilde{\mathbf{\alpha}}\), \(\tilde{\mathbf{\beta}}\) are bounded as follows:
\[||\tilde{\mathbf{e}}||\leq\sqrt{\frac{\delta_{\tilde{V}}}{||\mathbf{P}||}},||\tilde{ \mathbf{\alpha}}||\leq\sqrt{||\mathbf{\Gamma}||\delta_{\tilde{V}}},||\tilde{\mathbf{ \beta}}||\leq\sqrt{||\mathbf{\Gamma}||\delta_{\tilde{V}}}. \tag{81}\]
|
2303.00979 | Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based
Weighting for Semantic Segmentation | This paper describes a method of domain adaptive training for semantic
segmentation using multiple source datasets that are not necessarily relevant
to the target dataset. We propose a soft pseudo-label generation method by
integrating predicted object probabilities from multiple source models. The
prediction of each source model is weighted based on the estimated domain
similarity between the source and the target datasets to emphasize contribution
of a model trained on a source that is more similar to the target and generate
reasonable pseudo-labels. We also propose a training method using the soft
pseudo-labels considering their entropy to fully exploit information from the
source datasets while suppressing the influence of possibly misclassified
pixels. The experiments show comparative or better performance than our
previous work and another existing multi-source domain adaptation method, and
applicability to a variety of target environments. | Shigemichi Matsuzaki, Hiroaki Masuzawa, Jun Miura | 2023-03-02T05:20:36Z | http://arxiv.org/abs/2303.00979v2 | # Multi-Source Soft Pseudo-Label Learning with Domain Similarity-based Weighting for Semantic Segmentation
###### Abstract
This paper describes a method of domain adaptive training for semantic segmentation using multiple source datasets that are not necessarily relevant to the target dataset. We propose a _soft_ pseudo-label generation method by integrating predicted object probabilities from multiple source models. The prediction of each source model is weighted based on the estimated domain similarity between the source and the target datasets to emphasize contribution of a model trained on a source that is more similar to the target and generate reasonable pseudo-labels. We also propose a training method using the soft pseudo-labels considering their entropy to fully exploit information from the source datasets while suppressing the influence of possibly misclassified pixels. The experiments show comparative or better performance than our previous work and another existing multi-source domain adaptation method, and applicability to a variety of target environments.
## I Introduction
Semantic segmentation based on deep neural networks (DNNs) has been used as a common and strong tool for scene recognition of autonomous mobile agents. However, the efficiency of training remains a critical problem. In usual practice, networks are trained on a large amount of manually labeled data collected via a laborious annotation process. Although rich datasets are publicly available for some actively studied tasks such as urban scenes, where autonomous driving is a hot topic [1, 2], there are far fewer datasets for specific scenes such as greenhouses and unstructured scenes.
Domain adaptation (DA) is a task to adapt a model pre-trained on a source dataset to a target dataset, and unsupervised DA (UDA) is a specific problem setting where no ground truth label for the target dataset is accessible. Especially in autonomous driving community, UDA for semantic segmentation has been actively studied as a promising approach to efficient training by exploiting simulated photo-realistic images that are generated by video games [3] or dedicated simulators [4, 5]. However, again, it is not easy to collect suitable source datasets for many other environments.
In our previous work [6], we proposed a method to train a semantic segmentation model for greenhouse images using multiple publicly available datasets of the scenes that are not very relevant to greenhouses, such as urban scenes and unstructured outdoor scenes, as source datasets to overcome the aforementioned problem. The method allows for utilizing source datasets with different scenes and label sets to train a model on the target dataset. In the method, the outputs from each source model are converted to pixel-wise one-hot labels in the common target label sets, and merged based on unanimity of all the models (see Fig. 1). This way, reliable labels can be effectively extracted.
While such a strict criterion contributes to excluding possibly wrong labels and results in good performance, it completely ignores the information of the many pixels excluded from the training, leading to sparsity of valid labels. A limitation of the method stemming from this problem is that the source datasets must have at least one corresponding object class for all target classes to enable unanimity-based label selection. This restricts the choice of source datasets.
In this paper, we extend our previous work [6] and propose a novel multi-source pseudo-label generation method. Instead of selecting one-hot pseudo-labels based on unanimity, we generate _soft_ pseudo-labels, which take the form of a class probability distribution on each pixel, by integrating the outputs from the source models. To integrate the outputs from the source models, we take into account quantitatively evaluated domain similarities between the target data and each source dataset, so that the predictions of a model trained on a dataset closer to the target are emphasized. In training, we weight the
Fig. 1: **Top:** Previous method [6] to generate pseudo-labels using multiple source models. A label is assigned only if all models agree with each other to remove wrong labels. If a class is not included in a source, the class never appears in the resulting pseudo-labels. **Bottom**: Proposed method. It generates _soft_ pseudo-labels by summing predicted scores weighted by inverse domain gap, i.e., domain similarity. The degree of agreement of the source models are represented by the inverse entropy of the soft pseudo-labels. It can also involve labels not present in some source datasets.
loss values on each pixel with a value inversely proportional to the entropy of the soft pseudo-label.
The contributions of the paper are as follows:
1. A soft pseudo-label generation method using multiple source datasets considering domain gap between the source datasets and the target dataset
2. A training method using the soft pseudo-labels considering entropy of the labels
## II Related Work
### _Domain adaptation for semantic segmentation_
DA is attracting attention as a method to workaround the necessity of manual annotation on the dataset of the target task. Specifically in training of DNNs for semantic segmentation, there are two major approaches: _domain alignment_ and _pseudo-label learning_[7, 8]. The former is realized via minimizing divergence metrics [9], adversarial learning [10, 11], etc. These two approaches are not mutually exclusive, but jointly used in many methods [12, 13].
Multi-source Domain Adaptation (MDA) for semantic segmentation has been actively studied in a last few years. Zhao et al. [14] pioneered an MDA method for semantic segmentation by extending the work by [10] to the multi-source setting. He et al. [15] proposed to use multiple collaboratively trained source models to generate pseudo-labels by simply summing their predicted probabilities. This approach assumes that the source datasets are equally similar to the target dataset, which is not the case in our problem setting. In our previous work [6], we proposed a multi-source pseudo label learning method specifically for training a model on greenhouse images leveraging multiple publicly available datasets. Unlike [15], our previous method showed effectiveness on transferring knowledge from source datasets structurally dissimilar to the target dataset. However, the pseudo-labels generated in [6] are based on unanimous outputs from the source models, which inherently discards much information and does not consider similarities of the source datasets with the target. The present work is an attempt to resolve the limitation of the previous work and to gain better applicability of the method.
### _Domain gap evaluation_
The domain gap, or domain shift, generally stems from the discrepancy between the data distributions of the two domains. There have been several metrics to measure the shift between two domains, such as the Kullback-Leibler divergence (KLD), Maximum Mean Discrepancy (MMD) [16], and \(\mathcal{H}\Delta\mathcal{H}\)-divergence [17]. Those metrics are often used as an objective to minimize in domain adaptive tasks to learn domain-invariant knowledge [18, 19].
Liu et al. [20] proposed a data-driven method of domain gap evaluation. While the discrepancy metrics such as MMD are used as an objective of minimization during adaptation, the method focuses on evaluating the discrepancy between the source and the target dataset themselves. We employ this method to evaluate relative domain gaps between different source datasets against the target dataset.
## III Preliminaries
### _Notations_
Formally, we assume \(M\) labeled source datasets \(S_{1},\cdots,S_{M}\) and an unlabeled target dataset \(S_{T}\). A source dataset \(S_{i}\) is a set of \(N_{i}\) input images \(X_{i}=\{x_{i,j}\}_{j=1}^{N_{i}}\) and corresponding pixel-wise semantic label maps \(Y_{i}=\{y_{i,j}\}_{j=1}^{N_{i}}\) with \(C_{i}\) classes. The target dataset \(S_{T}\) consists of a set of \(N_{T}\) unlabeled images \(X_{T}=\{x_{T,j}\}_{j=1}^{N_{T}}\). Let \(F\left(\cdot;\theta_{k}\right)\) denote a segmentation model with learnable weights \(\theta_{k}\) trained on a source dataset \(S_{k}\). In addition, let \({}^{k}p_{i,j}\in\mathbb{R}^{H\times W\times C_{k}}\) and \({}^{k}f_{i,j}\in\mathbb{R}^{H\times W\times D}\) denote a pixel-wise object probability and \(D\)-dimensional intermediate features produced by \(F\left(x_{i,j};\theta_{k}\right)\), respectively. For a tensor \(z\in\mathbb{R}^{H\times W\times C}\), \(z^{\left(h,w,c\right)}\) denotes an element at index \(\left(h,w,c\right)\).
### _Network architecture_
The proposed method does not rely on a specific network architecture. The method, however, assumes two parallel segmentation decoders, namely _main_ and _auxiliary_ branches to enable uncertainty-based loss rectification [12, 6]. Let \({}^{k}_{m}p_{i,j}\) and \({}^{k}_{a}p_{i,j}\) denote object scores predicted by the main branch and the auxiliary branch, respectively. The object probability \({}^{k}p_{i,j}\) is specifically given as follows [12]:
\[{}^{k}p_{i,j}=\text{Softmax}\left({}^{k}_{m}p_{i,j}+0.5\,{}^{k}_{a}p_{i,j} \right). \tag{1}\]
## IV Multi-source soft pseudo-label generation considering domain similarity
In this section, we describe the method of generating soft pseudo-labels utilizing segmentation models pre-trained on the source datasets as the first step of the proposed method.
### _Pseudo-label generation_
First, we train semantic segmentation models using the source datasets. Following [6], the models are trained using the ordinary cross entropy loss:
\[L_{ce}\left(p,y\right)^{\left(h,w\right)}=-\sum_{c\in C}y^{\left(h,w,c\right) }\log p^{\left(h,w,c\right)}, \tag{2}\]
where \(p\) denotes the predicted pixel-wise probability distributions, and \(y\) denotes a one-hot label map.
After pre-training, we generate pseudo-labels by integrating outputs on the target images from the source models. The outputs are weighted with domain similarity, i.e., inverse of domain gap between the source and the target datasets. To evaluate relative domain gap, we employ a method by Liu et al. [20] that uses entropy of predictions as a measure of the domain gap. The domain gap \({}^{i}G_{j}\) between source dataset \(S_{i}\) and a target image \(x_{T,j}\) is calculated as follows:
\[{}^{i}G_{j}=\frac{1}{\log C_{i}}E\left({}^{i}p_{T,j}\right), \tag{3}\]
where \(E\left(\cdot\right)\) is entropy of the prediction defined as follows:
\[E\left({}^{i}p_{T,j}\right)=-\sum_{h,w}\sum_{c=1}^{C_{i}}{}^{i}p_{T,j}^{\left( h,w,c\right)}\log\left({}^{i}p_{T,j}^{\left(h,w,c\right)}\right). \tag{4}\]
The larger \({}^{i}G_{j}\) is, the farther the source dataset \(S_{i}\) and the target image \(x_{T,j}\) are. This method of domain gap estimation
is based on the observation that the domain gap affects the distribution of the classification outputs: a large domain gap yields fuzzy classification scores [20], which eventually lead to high entropy of the distribution. Since the value range of the entropy depends on the number of classes, the value is normalized by its maximum, i.e., \(\log C_{i}\).
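As a minimal illustration of the normalized-entropy domain gap in (3)–(4), assuming a NumPy array holding the softmax output of one source model on a target image (the function and variable names are ours, not from the original implementation):

```python
import numpy as np

def domain_gap(prob_map):
    """Normalized prediction entropy used as the domain gap (eqs. 3-4).

    prob_map : array of shape (H, W, C_i), softmax output of source model i
               on one target image.
    """
    eps = 1e-12
    entropy = -np.sum(prob_map * np.log(prob_map + eps))   # eq. (4): sum over pixels and classes
    return entropy / np.log(prob_map.shape[-1])            # eq. (3): normalize by log C_i
```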
Using the estimated domain gaps, soft pseudo-labels are generated as follows:
\[\hat{y}_{j}^{(h,w)}=\text{Softmax}\left(\sum_{i=1}^{M}\frac{1}{{}^{i}G_{j}}\psi_{i}\left({}^{i}p_{T,j}^{(h,w)}\right)\right), \tag{5}\]
where \(\psi_{i}:\mathbb{R}^{C_{i}}\rightarrow\mathbb{R}^{C_{T}}\) denotes a function to convert a probability distribution in the source label space to the target label space (see Fig. 2). The label mapping is heuristically defined for each source as in [6].
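The merging step in (5), together with the mapping \(\psi_{i}\) illustrated in Fig. 2, can be sketched as follows (Python/NumPy). Representing each \(\psi_{i}\) by a binary source-to-target correspondence matrix is our own illustrative choice, and all names are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def generate_soft_pseudo_label(source_probs, domain_gaps, label_maps):
    """Merge source predictions into a soft pseudo-label (eq. 5).

    source_probs : list of (H, W, C_i) probability maps, one per source model
    domain_gaps  : list of scalars {}^iG_j from eqs. (3)-(4)
    label_maps   : list of (C_i, C_T) binary matrices marking which source
                   classes correspond to each target class (stand-in for psi_i)
    """
    merged = None
    for p, g, m in zip(source_probs, domain_gaps, label_maps):
        # psi_i: per target class, take the highest score among the
        # corresponding source classes, then renormalize (Fig. 2)
        mapped = np.max(p[..., :, None] * m[None, None, :, :], axis=2)
        mapped /= mapped.sum(axis=-1, keepdims=True) + 1e-12
        contrib = mapped / g                       # weight by domain similarity 1/G
        merged = contrib if merged is None else merged + contrib
    return softmax(merged, axis=-1)                # (H, W, C_T) soft labels
```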
### _Analysis of the soft pseudo-labels_
Fig. 3 shows examples of predictions from each source model and the predicted relative domain similarity (inverse of the domain gap) of the inputs, as well as the argmax and the inverse label entropy of the generated soft pseudo-labels. The first image of the TUT Park dataset, having a large area of _plant_ and _grass_ on both sides, was evaluated as similar to the Forest dataset, which shares a similar structure. In the second image, on the other hand, there is a large building, which is seen more often in urban scenes, resulting in higher similarity with Cityscapes. In cases where the similarity between the sources and the target is difficult to evaluate due to a large difference (e.g., greenhouse vs. urban / outdoor scenes), the domain similarity serves as a relative confidence measure. For example, in the second example of Greenhouse A, the predictions by the CamVid and Forest models are fairly reasonable and the estimated similarities are relatively high, while the opposite holds for Cityscapes. From the observations above, it seems reasonable to use the domain similarity scores as importance weights in pseudo-label generation.
By merging the source predictions using the proposed method described above, fairly accurate pseudo-labels are generated (as can be seen in the "argmax" column). In addition, entropy tends to be lower (i.e., the weight is higher) on pixels on which all source models agree with each other. This observation leads to our training method utilizing the inverse entropy of the soft pseudo-labels, which is a soft alternative to the unanimity-based label selection [6].
Notably, in the examples of the _TUT Park_ dataset, the _grass_ class is assigned in the pseudo-labels even though it is not present in the CamVid dataset. In the previous method [6], such flexible label assignment is not possible because of the strict unanimity-based label selection. This characteristic allows for a more flexible choice of source datasets.
## V Network training
We train a target model using the soft pseudo-labels with the loss function specifically tailored for the soft pseudo-labels. We also employ an existing method [13] for training robust to misclassified pseudo-labels.
### _Loss function_
As a base classification loss, we employ symmetric cross-entropy (SCE) loss [22], a variant of cross-entropy loss robust to noisy labels following [12].
\[L_{\text{sce}}^{(h,w)} =\alpha L_{\text{ce}}\left({}^{T}p_{T,j}^{(h,w)},\delta\left(\hat{y}_{j}^{(h,w)}\right)\right) \tag{6}\] \[+\beta L_{\text{ce}}\left(\delta\left(\hat{y}_{j}^{(h,w)}\right),{}^{T}p_{T,j}^{(h,w)}\right), \tag{7}\]
where \(\alpha\) and \(\beta\) are balancing parameters set to \(0.1\) and \(1.0\), respectively, and \(\delta\left(\cdot\right)\) denotes a function that converts a soft label to a one-hot label. In the implementation, the one-hot label is clamped to \([10^{-4},1.0]\) to avoid numerical errors [12].
Based on the observation in IV-B, we consider weighting loss values on each pixel with the inverse value of label entropy. The pixel-wise weight is calculated as follows:
\[W_{j}^{(h,w)}=\exp{\left(-\lambda_{scale}\cdot E\left(\hat{y}_{j}^{(h,w)} \right)\right)}, \tag{8}\]
where \(\lambda_{scale}\) is a scaling parameter. Using eq. (8), we calculate a weighted SCE loss:
\[L_{w,\text{sce}}^{(h,w)}=W_{j}^{(h,w)}\cdot L_{\text{sce}}^{(h,w)}. \tag{9}\]
This way, loss values on the pixels with low label entropy are weighted more, resulting in a similar effect to the previous unanimity-based _hard_ pseudo-labels.
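A minimal sketch of the pixel-wise weighted symmetric cross-entropy of Eqs. (6)-(9) is given below, assuming \((C,H,W)\) probability maps; the value of \(\lambda_{scale}\) is a placeholder.

```python
import torch


def weighted_sce_loss(pred, soft_label, alpha=0.1, beta=1.0, lambda_scale=1.0):
    """Pixel-wise weighted symmetric cross-entropy of Eqs. (6)-(9) (a sketch;
    the (C, H, W) tensor layout and lambda_scale are assumptions).

    pred:       predicted probabilities of the target model, (C, H, W)
    soft_label: soft pseudo-label y_hat, (C, H, W)
    """
    eps = 1e-12
    # delta(.): one-hot version of the soft label, clamped to avoid log(0) [12]
    hard = torch.zeros_like(soft_label).scatter_(
        0, soft_label.argmax(dim=0, keepdim=True), 1.0).clamp(1e-4, 1.0)

    ce = -(hard * torch.log(pred.clamp_min(eps))).sum(dim=0)    # L_ce(p, delta(y_hat))
    rce = -(pred * torch.log(hard)).sum(dim=0)                  # L_ce(delta(y_hat), p)
    sce = alpha * ce + beta * rce                               # Eqs. (6)-(7)

    ent = -(soft_label * torch.log(soft_label.clamp_min(eps))).sum(dim=0)
    weight = torch.exp(-lambda_scale * ent)                     # Eq. (8): inverse-entropy weight
    return weight * sce                                         # Eq. (9), per pixel
```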
Following [6], we also introduce a loss rectification method based on the pixel-wise uncertainty of the prediction proposed in [12] to suppress the effect of pixels with high uncertainty, whose pseudo-labels are more likely to be wrong. The uncertainty is estimated as the Kullback-Leibler divergence between the branches on each pixel as follows:
\[L_{kld}^{(h,w)}=\sum_{c\in C}{}^{T}_{m}p_{T,j}^{(h,w,c)}\log\frac{{}^{T}_{m}p_{T,j}^{(h,w,c)}}{{}^{T}_{a}p_{T,j}^{(h,w,c)}}, \tag{10}\]

where the subscripts \(m\) and \(a\) denote the main and auxiliary classification branches, respectively.
The rectified classification loss is defined as follows:
\[L_{rect}^{(h,w)}=\exp{\left(-L_{kld}^{(h,w)}\right)}L_{w,\text{sce}}^{(h,w)}. \tag{11}\]
Along with the classification loss, we add entropy loss of the predictions to force the network to clearly distinguish the classes, as used e.g., in [11]:
\[L_{ent}^{(h,w)}=E\left({}^{T}p_{T,j}^{(h,w)}\right), \tag{12}\]
The overall loss is as follows:
\[L_{all}=\sum_{h,w}\left(L_{rect}^{(h,w)}+\lambda_{ent}L_{ent}^{(h,w)}+\lambda_ {kld}L_{kld}^{(h,w)}\right), \tag{13}\]
where \(\lambda_{ent}\) and \(\lambda_{kld}\) denote balancing parameters.
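The rectified and overall losses of Eqs. (10)-(13) can then be assembled as in the following sketch, which reuses the per-pixel weighted SCE values from the previous snippet and treats the balancing parameters as placeholders.

```python
import torch


def overall_loss(p_main, p_aux, weighted_sce, lambda_ent=1.0, lambda_kld=1.0):
    """Overall training loss of Eqs. (10)-(13) (a sketch; p_main and p_aux are
    (C, H, W) probability maps of the main and auxiliary branches, weighted_sce
    is the (H, W) output of the previous snippet, and the balancing values are
    placeholders)."""
    eps = 1e-12
    # Eq. (10): pixel-wise KL divergence between the two branches
    kld = (p_main * (torch.log(p_main.clamp_min(eps))
                     - torch.log(p_aux.clamp_min(eps)))).sum(dim=0)
    rect = torch.exp(-kld) * weighted_sce                            # Eq. (11): rectified loss
    ent = -(p_main * torch.log(p_main.clamp_min(eps))).sum(dim=0)    # Eq. (12): entropy loss
    return (rect + lambda_ent * ent + lambda_kld * kld).sum()        # Eq. (13): sum over pixels
```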
Fig. 2: Label conversion function \(\psi\). The colors of the bars represent the corresponding target labels. For each target class, the highest score among a group of corresponding source classes is selected. The scores are then normalized to form a probability distribution.
### _Prototype-based pseudo-label rectification_
Along with the soft pseudo-labels, we employ ProDA [12] to make the training process robust to noisy pseudo-labels. The core of the method is to save the soft pseudo-labels \(\{\hat{y}_{j}\}_{j=1}^{N_{T}}\) from the predictions of the source model and, during training, rectify them with class-wise weights indicating the likelihood of each pixel belonging to a specific class, computed using the _prototypes_, i.e., the representative features of each class. For details, the readers are referred to [12].
The feature-wise weights are calculated as follows:
\[\omega_{j}^{(h,w,c)}=\frac{\exp\left(-\left\|{}^{T}\tilde{f}_{j}^{(h,w)}-\eta^{(c)}\right\|/\tau\right)}{\sum_{c^{\prime}}\exp\left(-\left\|{}^{T}\tilde{f}_{j}^{(h,w)}-\eta^{(c^{\prime})}\right\|/\tau\right)}, \tag{14}\]
where \(\eta^{(c)}\) denotes the prototype for class \(c\), initialized as a mean of the features predicted as \(c\) and updated during training. \(\left\|\cdot\right\|\) denotes the Euclidean norm, and \(\tau\) denotes a temperature parameter that controls the degree of bias of the distribution. \({}^{T}\!\!\tilde{f}_{j}\) is a feature vector from a momentum encoder [23], i.e., a model identical to \(F\left(\cdot;\theta_{T}\right)\) whose parameters are updated via exponential moving average (EMA).
Using \(\omega_{j}\), the soft pseudo-labels are rectified as follows:
\[\hat{y}_{j}^{(h,w,c)}=\frac{\omega_{j}^{(h,w,c)}\hat{y}_{j}^{(h,w,c)}}{\sum_{c ^{\prime}}\omega_{j}^{(h,w,c^{\prime})}\hat{y}_{j}^{(h,w,c^{\prime})}} \tag{15}\]
In our method, the soft pseudo-labels are generated by the method described in IV and rectified by eq. (15) in training.
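A minimal sketch of the prototype-based rectification of Eqs. (14)-(15) is shown below; the feature layout, the value of \(\tau\), and the omission of the prototype and EMA updates are simplifying assumptions.

```python
import torch


def rectify_with_prototypes(feat, prototypes, soft_label, tau=1.0):
    """Prototype-based rectification of Eqs. (14)-(15) (a sketch; prototype and
    momentum-encoder updates are omitted).

    feat:       momentum-encoder features, (D, H, W)
    prototypes: class prototypes eta, (C, D)
    soft_label: soft pseudo-label y_hat, (C, H, W)
    """
    d, h, w = feat.shape
    f = feat.reshape(d, -1).t()                    # (H*W, D) feature vectors
    dist = torch.cdist(f, prototypes)              # Euclidean distance to every prototype
    omega = torch.softmax(-dist / tau, dim=1)      # Eq. (14), (H*W, C)
    omega = omega.t().reshape(-1, h, w)            # back to (C, H, W)
    rectified = omega * soft_label                 # Eq. (15), numerator
    return rectified / rectified.sum(dim=0, keepdim=True).clamp_min(1e-12)
```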
## VI Experiments
### _Experimental setting_
#### Vi-A1 Training environment
We used a PyTorch implementation of ESPNetv2 [24] with two modifications: adding an auxiliary classification branch, and normalizing the features and classification weights. The models were trained and evaluated on one NVIDIA Quadro RTX 8000 with 48 GB of memory. The network is trained with an initial learning rate of \(2\times 10^{-2}\) and cyclic learning rate scheduling [25].
#### Vi-A2 Datasets
As source datasets, we use CamVid [2], Cityscapes [1], and Freiburg Forest [21]. As target datasets, we use _Greenhouse A_[6], _TUT Park_, a dataset of images around an on-campus park, and _Toyohashi Trail_, a dataset of images of unstructured mountain paths on the Toyohashi Nature Trail. The datasets used in the experiments are summarized in Table I.
#### Vi-A3 Baselines
In our comparative studies, we employ baseline methods as follows.
**Training without DA** As baselines without DA, we use two methods, namely _supervised_ and _ensemble_. In _supervised_, a segmentation model is trained with the three source datasets with labels converted to the common target label set using the label conversions. _Ensemble_ merges the predicted object probabilities from the source models in the same way as the proposed soft pseudo-label generation. In other words, the soft pseudo-labels are directly used as prediction results.
**Single-source DA** We also evaluate single-source DA methods. As source datasets, we use CV, CS, and FR in Greenhouse A, and CS and FR in TUT Park dataset. Pseudo-labels are generated from a model trained on each source followed by label conversion \(\psi_{i}\).
Fig. 3: Predictions of the source models, domain similarity (inverse domain gap), and pseudo-labels (labels with the maximum probability, and inverse of label entropy). We used CamVid [2], Cityscapes [1], and Freiburg Forest [21] as source datasets (descriptions about the datasets are in Table I). In inverse entropy images, a darker pixel indicates a lower weight, i.e., higher entropy.
**Multi-source DA** We use our previous method [6] as a baseline of multi-source DA, referred to as _MSPL_. We also use the method by He et al. [15], a state-of-the-art MDA for semantic segmentation, hereafter referred to as _MSDA_CL_. We used our own implementation of the method [15]. In multi-source DA, we evaluate both double-source and triple-source settings. For double-source training, we use Cityscapes (CS) and Freiburg Forest (FR) as source datasets.
### _Comparison with the baselines_
**Greenhouse A dataset** Table II shows the results of the baselines and the proposed method. The performance of the proposed method did not reach those of our previous method (MSPL). However, the proposed method resulted in the second-best mean IoU, and outperformed MSDA_CL [15]. In MSDA_CL, pseudo-labels are generated by simply adding predicted scores from the source models, followed by confidence-based label selection [7]. In contrast, the proposed method considers domain similarity between the source datasets and the target. Moreover, by loss weighting using label entropy, the effect of noise-prone pseudo-labels is suppressed, resulting in better performance. We further evaluate the effect of considering domain similarity in pseudo-label generation and label entropy weight in training in VI-C.
**TUT Park dataset** In this target dataset, we use five object classes that dominate the scenes, namely _plant_ (trees etc.), _grass_ (ground vegetation etc.), _artificial object_, _road_, and _sky_. CS and FR have a distinction between _plant_ (_vegetation_ in CS, and _tree_ in FR) and _grass_ (_terrain_ in CS, and _grass_ in FR). CamVid, however, only has a _tree_ class, which corresponds to _plant_ in the target dataset.
Table III shows the results. Interestingly, the proposed method trained with three sources did not perform the best on any classes, but resulted in the best mean IoU. While the baseline methods were biased towards a specific class and poorly performed on others, the proposed method realized well-balanced training. Notably, while MSPL by nature failed to capture _grass_ class due to its absence in one of the source datasets (CamVid), the proposed method successfully learned it. Unlike the previous method which excludes from pseudo-labels the pixels on which the source models did not agree with each other, our soft pseudo-labeling allowed for learning classes induced by only part of source datasets.
**Toyohashi Trail dataset** We use the same label set as the TUT Park dataset. Table IV shows the comparative results. The proposed method resulted in the best mean IoU. Although the domain gap estimation was highly unreliable due to the large domain gap between all the source datasets and the target dataset, which makes it difficult to give a theoretical justification for the effect of considering domain similarity, the results show a better capability of the proposed method to transfer knowledge, compared to the baselines, even from source datasets that are very different from the target dataset.
### _Ablation studies_
Next, we conducted an ablation study on loss weighting based on entropy of the pseudo-labels during training, and the domain similarity-based integration of source predictions in pseudo-label generation. We use TUT Park as the target dataset, and train the network using three source datasets.
Table V shows the results. Using domain similarity in pseudo-label generation improved the performance in both training settings, with and without the label entropy weight. Fig. 4 shows an example of a pseudo-label generated with and without domain similarity weighting. While a large area of _grass_ is wrongly assigned the _ground_ class when the source predictions are simply summed, as shown in Fig. 4(c), part of the area is correctly classified when domain similarity is incorporated, as shown in Fig. 4(d). By considering domain similarity, the prediction of a source model with larger domain similarity is emphasized and the pseudo-labels become more accurate. The effect of using domain similarity is more evident when the label entropy weight is not employed in training, because the improvement of the pseudo-labels occurs on pixels where the source predictions are fuzzy, which are suppressed in training by the label entropy weight. We conclude that incorporating domain similarity brings about a positive effect on pseudo-label generation.
## VII Conclusion
We proposed a method of soft pseudo-label generation for training semantic segmentation models on datasets of a variety of scenes without ground-truth labels, using not very relevant source datasets. Unlike our previous method [6] that uses the unanimity criterion for pseudo-label selection, our method allows for taking into account the domain similarity between each source dataset and the target dataset, utilizing the information on the prediction certainty of the source models, and involving target classes that do not have a corresponding class in some source datasets, which increases the applicability of the method to a variety of scenes.
|
2310.00841 | Drug Discovery with Dynamic Goal-aware Fragments | Fragment-based drug discovery is an effective strategy for discovering drug
candidates in the vast chemical space, and has been widely employed in
molecular generative models. However, many existing fragment extraction methods
in such models do not take the target chemical properties into account or rely
on heuristic rules. Additionally, the existing fragment-based generative models
cannot update the fragment vocabulary with goal-aware fragments newly
discovered during the generation. To this end, we propose a molecular
generative framework for drug discovery, named Goal-aware fragment Extraction,
Assembly, and Modification (GEAM). GEAM consists of three modules, each
responsible for goal-aware fragment extraction, fragment assembly, and fragment
modification. The fragment extraction module identifies important fragments
contributing to the desired target properties with the information bottleneck
principle, thereby constructing an effective goal-aware fragment vocabulary.
Moreover, GEAM can explore beyond the initial vocabulary with the fragment
modification module, and the exploration is further enhanced through the
dynamic goal-aware vocabulary update. We experimentally demonstrate that GEAM
effectively discovers drug candidates through the generative cycle of the three
modules in various drug discovery tasks. Our code is available at
https://github.com/SeulLee05/GEAM. | Seul Lee, Seanie Lee, Kenji Kawaguchi, Sung Ju Hwang | 2023-10-02T01:30:42Z | http://arxiv.org/abs/2310.00841v3 | # Drug Discovery with
###### Abstract
Fragment-based drug discovery is an effective strategy for discovering drug candidates in the vast chemical space, and has been widely employed in molecular generative models. However, many existing fragment extraction methods in such models do not take the target chemical properties into account or rely on heuristic rules. Additionally, the existing fragment-based generative models cannot update the fragment vocabulary with goal-aware fragments newly discovered during the generation. To this end, we propose a molecular generative framework for drug discovery, named _Goal-aware fragment Extraction, Assembly, and Modification_ (GEAM). GEAM consists of three modules, each responsible for goal-aware fragment extraction, fragment assembly, and fragment modification. The fragment extraction module identifies important fragments that contribute to the desired target properties with the information bottleneck principle, thereby constructing an effective goal-aware fragment vocabulary. Moreover, GEAM can explore beyond the initial vocabulary with the fragment modification module, and the exploration is further enhanced through the dynamic goal-aware vocabulary update. We experimentally demonstrate that GEAM effectively discovers drug candidates through the generative cycle of the three modules in various drug discovery tasks.
## 1 Introduction
The problem of drug discovery aims to find molecules with desired properties within the vast chemical space. Fragment-based drug discovery (FBDD) has been considered as an effective strategy in the recent decades as a means of exploring the chemical space and has led to the discovery of many potent compounds against various targets (Li, 2020). Inspired by the effectiveness of FBDD, many molecular generative models have also adopted it as a strategy to narrow down the search space and simplify the generation process, resulting in meaningful success (Jin et al., 2018, 2020; Xie et al., 2020; Maziarz et al., 2022; Kong et al., 2022; Geng et al., 2023).
In FBDD, the first step, fragment library construction, directly impacts the final generation results (Shi and von Itzstein, 2019) as the constructed fragments are used in the entire generation process. However, existing fragment extraction or motif mining methods suffer from two limitations: they 1) do not take the target chemical properties of drug discovery problems into account and/or 2) rely on heuristic fragment selection rules. For example, it is a common strategy to randomly select fragments (Yang et al., 2021) or extract fragments based on frequency (Kong et al., 2022; Geng et al., 2023) without considering the target properties. Jin et al. (2020) proposed to find molecular substructures that satisfy the given properties, but the extraction process is computationally very expensive and the substructures cannot be assembled together.
To this end, we first propose a novel deep learning-based goal-aware fragment extraction method, namely, _Fragment-wise Graph Information Bottleneck_ (FGIB, Figure 1(a)). There is a strong connection between molecular structures and their activity, which is referred to as structure-activity relationship (SAR) (Crum-Brown and Fraser, 1865; Bohack et al., 1996). Inspired by SAR, FGIB utilizes the graph information bottleneck theory to identify important subgraphs in the given molecular graphs for predicting the target chemical property. These identified subgraphs then serve as building blocks in the subsequent generation. As shown in Figure 1(b), the proposed usage of goal
aware fragments extracted by FGIB improves the optimization performance by a significant margin compared to existing FBDD methods.
To effectively utilize the extracted fragments in molecular generation, we next construct a generative model consisting of a fragment assembly module and a fragment modification module. In this work, we employ soft-actor critic (SAC) for the assembly module and a genetic algorithm (GA) for the modification module. Through the interplay of the two modules, the generative model can both exploit the extracted goal-aware fragments and explore beyond the initial fragment vocabulary. Moreover, to further enhance molecular novelty and diversity, we propose to extract new fragments on-the-fly during the generation using FGIB and dynamically update the fragment vocabulary.
Taken as a whole, the fragment extraction module, the fragment assembly module, and the fragment modification module in the form of FGIB, SAC, and GA, respectively, collectively constitute the generative framework which we refer to as _Goal-aware fragment Extraction_, _Assembly, and Modification_ (GEAM). As illustrated in Figure 2, GEAM generates molecules through the iterative process that sequentially runs each module as follows: 1) After FGIB constructs an initial goal-aware fragment vocabulary, SAC assembles these fragments and generates a new molecule. 2) GEAM keeps track of the top generated molecules as the initial population of GA, and GA generates an offspring molecule from the population. 3) As a consequence of the crossover and mutation procedures, the offspring molecule contains new subgraphs that cannot be constructed from the current fragment vocabulary, and FGIB extracts the meaningful subgraphs from the offspring molecule and update the vocabulary. Through the collaboration of the three modules where FGIB provides goal-aware fragments to SAC, SAC provides high-quality population to GA, and GA provides novel fragments to FGIB, GEAM effectively explores the chemical space to discover novel drug candidates.
We experimentally validate the proposed GEAM on various molecular optimization tasks that simulate real-world drug discovery scenarios. The experimental results show that GEAM significantly outperforms existing state-of-the-art methods, demonstrating its effectiveness in addressing real-world drug discovery problems. We summarize our contributions as follows:
* We propose FGIB, a novel goal-aware fragment extraction method that applies the GIB theory to construct a fragment vocabulary for target chemical properties.
* We propose to leverage SAC and GA jointly as a generative model to effectively utilize the extracted fragments while enabling exploration beyond the vocabulary.
* We propose GEAM, a generative framework that combines FGIB, SAC, and GA to dynamically update the fragment vocabulary by extracting goal-aware fragments on-the-fly to further improve diversity and novelty.
* We experimentally demonstrate that GEAM is highly effective in discovering drug candidates, outperforming existing molecular optimization methods.
Figure 1: (a) **The architecture of FGIB.** Using the graph information bottleneck theory, FGIB aims to identify the important subgraphs that contribute much to the target chemical property in the given molecular graphs. The trained FGIB is then used to extract fragments in a molecular dataset in the goal-aware manner. (b) **Performance comparison of GEAM and other FBDD methods** on the jak2 ligand generation task.
## 2 Related Work
**Fragment extraction** Fragment extraction methods fragmentize the given molecules into molecular substructures, i.e., fragments, for subsequent generation. Yang et al. (2021) chose to randomly select fragments after breaking bonds in the given molecules with a predefined rule. Xie et al. (2020) and Maziarz et al. (2022) proposed to obtain fragments by breaking some of the bonds with a pre-defined rule (e.g., acyclic single bonds), then select the most frequent fragments. Kong et al. (2022) and Geng et al. (2023) utilized merge-and-update rules to find the frequent fragments in the given molecules. All of these methods do not consider the target properties. On the other hand, Jin et al. (2020) proposed to find molecular substructures that satisfy the given properties, but the approach requires an expensive oracle call to examine each building block candidate in a brute-force manner, and the substructures are not actually fragments in that they are already full molecules that have chemical properties and are not assembled together. Consequently, the found substructures are large in size and often few in number, resulting in low novelty and diversity of the generated molecules.
**Fragment-based molecule generation** Fragment-based molecular generative models denote the models that use the extracted fragments as building blocks and learn to assemble the blocks into molecules. Xie et al. (2020) proposed to use MCMC sampling when assembling or deleting the fragments. Yang et al. (2021) proposed to use a reinforcement learning (RL) model and view fragment additions as actions. Maziarz et al. (2022), Kong et al. (2022) and Geng et al. (2023) proposed to use a VAE to assemble the fragments. The model of Jin et al. (2020) learns to complete the obtained molecular substructures into final molecules by adding molecular branches.
**Subgraph recognition** Given a graph, subgraph recognition aims to find a compressed subgraph that contains salient information to predict the property of the graph. Graph information bottleneck (GIB) (Wu et al., 2020) approached this problem by considering the subgraph as a bottleneck random variable and applying the information bottleneck theory. Yu et al. (2022) proposed to utilize Gaussian noise injection into node representations to confine the information and recognize important subgraphs, while Miao et al. (2022) proposed to consider the subgraph attention process as the information bottleneck. Lee et al. (2023) applied the GIB principle to molecular relational learning tasks. To the best of our knowledge, subgraph recognition by GIB has been only employed in classification and regression tasks, and this is the first work that applies GIB to fragment extraction.
## 3 Method
We now introduce our Goal-aware fragment Extraction, Assembly, and Modification (GEAM) framework which aims to generate molecules that satisfy the target properties with goal-aware fragments. We first describe the goal-aware fragment extraction method in Section 3.1. Then we describe the fragment assembly method in Section 3.2. Finally, we describe the fragment modification method, the dynamic vocabulary update, and the resulting GEAM in Section 3.3.
Figure 2: **The overall framework of GEAM.** GEAM consists of three modules, FGIB, SAC, and GA for fragment extraction, fragment assembly, and fragment modification, respectively.
### Goal-aware Fragment Extraction
Assume that we are given a set of \(N\) molecular graphs \(G_{i}\) with its corresponding properties \(Y_{i}\in[0,1]\), denoted as \(\mathcal{D}=\{(G_{i},Y_{i})\}_{i=1}^{N}\). Each graph \(G_{i}=(\mathbf{X}_{i},\mathbf{A}_{i})\) consists of \(n\) nodes with a node feature matrix \(\mathbf{X}_{i}\in\mathbb{R}^{n\times d}\) and an adjacency matrix \(\mathbf{A}_{i}\in\mathbb{R}^{n\times n}\). Let \(\mathcal{V}\) be a set of all nodes from the graphs \(\mathcal{G}=\{G_{i}\}_{i=1}^{N}\) and let \(\mathcal{E}\) be a set of all edges from \(\mathcal{G}\). Our goal is to extract goal-aware fragments from \(\mathcal{G}\) such that we can assemble these fragments to synthesize graphs with desired properties. In order to achieve this goal, we propose Fragment-wise Graph Information Bottleneck (FGIB), a model that learns to identify salient fragments of \(G_{i}\) for predicting the target property \(Y_{i}\).
Concretely, we first decompose a set of the graphs \(\mathcal{G}\) into \(M\) candidate fragments, denoted as \(\mathcal{F}\) with BRICS (Degen et al., 2008), a popular method that fragmentizes molecules into retrosynthetically interesting substructures. Each fragment \(F=(V,E)\in\mathcal{F}\) is comprised of vertices \(V\subset\mathcal{V}\) and edges \(E\subset\mathcal{E}\). Then each graph \(G\) can be represented as \(m\) fragments, \(\{F_{j}=(V_{j},E_{j})\}_{j=1}^{m}\), with \(F_{j}\in\mathcal{F}\). Inspired by graph information bottleneck (Wu et al., 2020), FGIB identifies a subgraph \(G^{\text{sub}}\) that is maximally informative for predicting the target property \(Y\) while maximally compressing the original graph \(G\):
\[\min_{G^{\text{sub}}}-I(G^{\text{sub}},Y)+\beta I(G^{\text{sub}},G), \tag{1}\]
where \(\beta>0\) and \(I(X,Y)\) denotes the mutual information between the random variables \(X\) and \(Y\).
FGIB first calculates the node embeddings \(\{\mathbf{h}_{i}\}_{i=1}^{n}\) from the graph \(G\) with an MPNN (Gilmer et al., 2017) and uses average pooling to obtain the fragment embedding \(\mathbf{e}_{j}\) of the fragment \(F_{j}\) as follows:
\[[\mathbf{h}_{1}\cdots\mathbf{h}_{n}]^{\top}=\text{MPNN}(\mathbf{X},\mathbf{A}),\quad \mathbf{e}_{j}=\text{AvgPool}(\{\mathbf{h}_{l}:v_{l}\in V_{j}\})\in\mathbb{R} ^{d}, \tag{2}\]
where \(v_{l}\) denotes the node whose corresponding node embedding is \(\mathbf{h}_{l}\). Using an MLP with a sigmoid activation function, we obtain \(w_{j}\in[0,1]\), the importance of the fragment \(F_{j}\) for predicting the target property \(Y\), as \(w_{j}=\text{MLP}(\mathbf{e}_{j})\). We denote \(\theta\) as the parameters of the MPNN and the MLP. Following Yu et al. (2022), we inject a noise to the fragment embedding \(\mathbf{e}_{j}\) according to \(w_{j}\) to control the information flow from \(G\) as follows:
\[\hat{\mathbf{e}}_{j}=w_{j}\mathbf{e}_{j}+(1-w_{j})\hat{\mathbf{\mu}}_{j}+\mathbf{ \epsilon},\quad w_{j}=\text{MLP}(\mathbf{e}_{j}),\quad\mathbf{\epsilon}\sim \mathcal{N}(\mathbf{0},(1-w_{j})\hat{\mathbf{\Sigma}}), \tag{3}\]
where \(\hat{\mathbf{\mu}}_{j}\in\mathbb{R}^{d}\) and \(\hat{\mathbf{\Sigma}}\in\mathbb{R}^{d\times d}\) denote an empirical mean vector and a diagonal covariance matrix estimated from \(\{\mathbf{e}_{j}\}_{j=1}^{m}\), respectively. Intuitively, the more a fragment is considered to be irrelevant for predicting the target property (i.e., small weight \(w\)), the more the transmission of the fragment information is blocked. Let \(Z=\text{vec}([\hat{\mathbf{e}}_{1}\cdots\hat{\mathbf{e}}_{m}])\) be the embedding of the perturbed fragments, which is a Gaussian-distributed random variable, i.e., \(p_{\theta}(Z|G)=\mathcal{N}(\mathbf{\mu}_{\theta}(G),\mathbf{\Sigma}_{\theta}(G))\). Here vec denotes the vectorization of a matrix, and \(\mathbf{\mu}_{\theta}(G)\) and \(\mathbf{\Sigma}_{\theta}(G)\) denote the mean and the covariance induced by the MPNN and the MLP with the noise \(\mathbf{\epsilon}\), respectively. Assuming that there is no information loss in the fragments after encoding them, our objective function in Eq. (1) becomes optimizing the parameters \(\theta\) such that we can still predict the property \(Y\) from the perturbed fragment embedding \(Z\) while minimizing the mutual information between \(G\) and \(Z\) as follows:
\[\min_{\theta}\underbrace{-I(Z,Y;\theta)+\beta I(Z,G;\theta)}_{\mathcal{L}_{ \text{IB}}(\theta)} \tag{4}\]
Following Alemi et al. (2017), we can derive the upper bound of \(\mathcal{L}_{\text{IB}}(\theta)\) with variational inference:
\[\mathcal{L}(\theta,\phi)\coloneqq\frac{1}{N}\sum_{i=1}^{N}\big{(}-\log q_{ \phi}(Y_{i}|Z_{i})+\beta D_{\text{KL}}(p_{\theta}(Z|G_{i})\parallel u(Z)) \big{)}, \tag{5}\]
where \(q_{\phi}\) is a property predictor that takes the perturbed fragment embedding \(Z\) as an input, \(u(Z)\) is a variational distribution that approximates the marginal \(p_{\theta}(Z)\), and \(Z_{i}\) is drawn from \(p_{\theta}(Z|G_{i})=\mathcal{N}(\mathbf{\mu}_{\theta}(G_{i}),\mathbf{\Sigma}_{\theta}(G_{i}))\) for \(i\in\{1,\dots,N\}\). We optimize \(\theta\) and \(\phi\) to minimize the objective function \(\mathcal{L}(\theta,\phi)\). Note that the variational distribution \(u(\cdot)\) is chosen to be Gaussian with respect to \(Z\), enabling analytic computation of the KL divergence. A detailed proof is included in Appendix B.
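As a concrete illustration of Eqs. (2)-(3), the following sketch computes fragment embeddings by average pooling and performs the importance-weighted noise injection; the importance MLP and the index-list representation of fragments are assumptions made for brevity.

```python
import torch


def perturb_fragment_embeddings(node_emb, frag_node_idx, importance_mlp):
    """Noise injection of Eqs. (2)-(3) in FGIB (a minimal sketch; `importance_mlp`
    is assumed to map a d-dimensional embedding to a scalar in [0, 1], e.g. an
    MLP ending with a sigmoid).

    node_emb:      (n, d) node embeddings produced by the MPNN
    frag_node_idx: list of node-index lists, one per fragment F_j
    """
    # Eq. (2): fragment embeddings by average pooling over member nodes
    e = torch.stack([node_emb[idx].mean(dim=0) for idx in frag_node_idx])   # (m, d)
    w = importance_mlp(e).squeeze(-1)                                       # (m,) importance weights
    mu = e.mean(dim=0, keepdim=True)                   # empirical mean over fragments
    var = e.var(dim=0, unbiased=False, keepdim=True)   # diagonal of the empirical covariance
    noise = torch.randn_like(e) * ((1.0 - w).unsqueeze(-1) * var).sqrt()
    # Eq. (3): keep informative fragments, replace the rest with noise
    e_hat = w.unsqueeze(-1) * e + (1.0 - w).unsqueeze(-1) * mu + noise
    return e_hat, w
```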
After training FGIB, we score each fragment \(F_{j}=(V_{j},E_{j})\in\mathcal{F}\) with FGIB as follows:
\[\texttt{score}(F_{j})=\frac{1}{|S(F_{j})|}\sum_{(G,Y)\in S(F_{j})}\frac{w_{j}(G,F_{j})}{\sqrt{|V_{j}|}}\cdot Y\in[0,1], \tag{6}\]
where \(S(F_{j})=\{(G,Y)\in\mathcal{D}:F_{j}\text{ is a subgraph of }G\}\) and \(w_{j}(G,F_{j})\) is an importance of the fragment \(F_{j}\) in the graph \(G\), computed as Eq. (3). Intuitively, the score quantifies the extent to which a fragment contributes to achieving a high target property. Specifically, the term \(w_{j}(G,F_{j})/\sqrt{|V_{j}|}\) measures how much a fragment contributes to its whole molecule in terms of the target property, while the term \(Y\) measures the property of the molecule. As the number of nodes of the fragment becomes larger, FGIB is more likely to consider it important when predicting the property. In order to normalize the effect of the fragment size, we include \(\sqrt{|V_{j}|}\) in the first term. Based on the scores of all fragments, we choose the top-\(K\) fragments as the goal-aware vocabulary \(\mathcal{S}\subset\mathcal{F}\) for the subsequent generation of molecular graphs with desired properties.
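The fragment scoring of Eq. (6) and the construction of the top-\(K\) vocabulary can be sketched as follows; the occurrence bookkeeping (fragment key, importance weight, fragment size, and molecule property) is a hypothetical data layout.

```python
import math


def build_vocabulary(occurrences, top_k=300):
    """Goal-aware fragment scoring of Eq. (6) (a sketch). `occurrences` maps each
    fragment key (e.g. its SMILES) to a list of (w, n_nodes, Y) tuples gathered
    from every training molecule that contains the fragment."""
    scores = {}
    for frag, occ in occurrences.items():
        # average of w / sqrt(|V_j|) * Y over all molecules containing the fragment
        scores[frag] = sum(w / math.sqrt(n) * y for w, n, y in occ) / len(occ)
    # keep the K highest-scoring fragments as the goal-aware vocabulary S
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```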
### Fragment Assembly
The next step is to generate molecules with the extracted goal-aware fragment vocabulary. For generation, we introduce the fragment assembly module, a soft actor-critic (SAC) model that learns to assemble the fragments into molecules with the desired properties.
We formulate fragment assembly as an RL problem, following Yang et al. (2021). Given a partially generated molecule \(g_{t}\) which becomes a state \(\mathbf{s}_{t}\) at time step \(t\), a policy network adds a fragment to \(g_{t}\) by sequentially selecting three actions: 1) the attachment site of \(g_{t}\) to use in forming a new bond, 2) the fragment \(F\in\mathcal{S}\) to be attached to \(g_{t}\), and 3) the attachment site of \(F\) to use in forming a new bond. Following Yang et al. (2021), we encode the nodes of the graph \(g_{t}\) with a GCN (Kipf and Welling, 2017) as \(\mathbf{H}=\text{GCN}(g_{t})\) and obtain the graph embedding with sum pooling as \(\mathbf{h}_{g_{t}}=\text{SumPool}(\mathbf{H})\). Given \(\mathbf{H}\) and \(\mathbf{h}_{g_{t}}\), we parameterize the policy network \(\pi\) with three sub-policy networks to sequentially choose actions conditioned on previous ones:
\[p_{\pi_{1}}(\cdot|\mathbf{s}_{t}) =\pi_{1}(\mathbf{Z}_{1}),\;\mathbf{Z}_{1}=[\mathbf{z}_{1,1}\cdots\mathbf{ z}_{1,n_{1}}]^{\top}=f_{1}(\mathbf{h}_{g_{t}},\mathbf{H}_{\text{att}}) \tag{7}\] \[p_{\pi_{2}}(\cdot|a_{1},\mathbf{s}_{t}) =\pi_{2}(\mathbf{Z}_{2}),\;\mathbf{Z}_{2}=[\mathbf{z}_{2,1}\cdots\mathbf{ z}_{2,n_{2}}]^{\top}=f_{2}(\mathbf{z}_{1,a_{1}},\text{ECFP}(\mathcal{S}))\] (8) \[p_{\pi_{3}}(\cdot|a_{1},a_{2},\mathbf{s}_{t}) =\pi_{3}(\mathbf{Z}_{3}),\;\mathbf{Z}_{3}=[\mathbf{z}_{3,1}\cdots\mathbf{ z}_{3,n_{3}}]^{\top}=f_{3}(\text{SumPool}(\text{GCN}(F_{a_{2}})),\mathbf{H}_{\text{att},F_{a _{2}}}), \tag{9}\]
where \(\mathbf{H}_{\text{att}}\) denotes the node embeddings of the attachment sites. We employ multiplicative interactions (Jayakumar et al., 2020) for \(f_{1},f_{2}\) and \(f_{3}\) to fuse two inputs from heterogeneous spaces. The first policy network \(\pi_{1}\) outputs categorical distribution over attachment sites of the current graph \(g_{t}\) conditioned on \(\mathbf{h}_{g_{t}}\) and \(\mathbf{H}_{\text{att}}\), and chooses the attachment site with \(a_{1}\sim p_{\pi_{1}}(\cdot|\mathbf{s}_{t})\). The second policy network \(\pi_{2}\) selects the fragment \(F_{a_{2}}\in\mathcal{S}\) with \(a_{2}\sim p_{\pi_{2}}(\cdot|a_{1},\mathbf{s}_{t})\), conditioned on the embedding of the previously chosen attachment site \(\mathbf{z}_{1,a_{1}}\) and the ECFPs of all the fragments \(\text{ECFP}(\mathcal{S})\). Then we encode the node embeddings of the fragment \(F_{a_{2}}\) with the same GCN as \(\mathbf{H}_{F_{a_{2}}}=\text{GCN}(F_{a_{2}})\), and get the fragment embedding \(\mathbf{h}_{F_{a_{2}}}=\text{SumPool}(\mathbf{H}_{F_{a_{2}}})\). The policy network \(\pi_{3}\) chooses the attachment site of the fragment \(F_{a_{2}}\) with \(a_{3}\sim p_{\pi_{3}}(\cdot|a_{1},a_{2},\mathbf{s}_{t})\), conditioned on the fragment embedding \(\mathbf{h}_{F_{a_{2}}}\) and the attachment site embeddings of the fragment \(\mathbf{H}_{\text{att},F_{a_{2}}}\). Finally, we attach the fragment \(F_{a_{2}}\) to the current graph \(g_{t}\) with the chosen attachment sites \(a_{1}\) and \(a_{3}\), resulting in a new graph \(g_{t+1}\). With \(T\) steps of sampling actions \((a_{1},a_{2},a_{3})\) using the policy network, we generate a new molecule \(g_{T}=G\), call the oracle to evaluate the molecule \(G\) and calculate the reward \(r_{T}\).
With the SAC objective (Haarnoja et al., 2018), we train the policy network \(\pi\) as follows:
\[\pi^{*}=\operatorname*{arg\,max}_{\pi}\sum_{t}\mathbb{E}_{(\mathbf{s}_{t}, \mathbf{a}_{t})\sim\rho_{\pi}}[r(\mathbf{s}_{t},\mathbf{a}_{t})+\alpha\mathcal{ H}(\pi(\cdot|\mathbf{s}_{t}))], \tag{10}\]
where \(r(\mathbf{s}_{t},\mathbf{a}_{t})\) is a reward function1, \(\mathcal{H}(\pi(\cdot|\mathbf{s}_{t}))\) is the entropy of the action probabilities given \(\mathbf{s}_{t}\) with a temperature parameter \(\alpha>0\), and \(\rho_{\pi}(\mathbf{s}_{t},\mathbf{a}_{t})\) is a state-action marginal of the trajectory distribution induced by the policy \(\pi(\mathbf{a}_{t}|\mathbf{s}_{t})=p_{\pi_{3}}(a_{3,t}|a_{2,t},a_{1,t},\mathbf{s}_{t})\cdot p_{\pi_{2}}(a_{2,t}|a_{1,t},\mathbf{s}_{t})\cdot p_{\pi_{1}}(a_{1,t}|\mathbf{s}_{t})\) with \(\mathbf{a}_{t}=(a_{1,t},a_{2,t},a_{3,t})\). In order to make the sampled discrete actions differentiable for backpropagation, we use Gumbel-Softmax (Jang et al., 2017; Maddison et al., 2017) to optimize Eq. (10).
Footnote 1: We set the intermediate rewards to 0.05, so that only final molecules are evaluated by the oracle.
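The sequential sampling of the composite action \((a_{1},a_{2},a_{3})\) and the factorized policy probability can be sketched as below; the sub-policy heads are passed in as stand-in callables, and the Gumbel-Softmax relaxation used for backpropagation is omitted for brevity.

```python
import torch
from torch.distributions import Categorical


def sample_assembly_action(site_logits, frag_logits_fn, frag_site_logits_fn):
    """Sample one composite action (a1, a2, a3) of Eqs. (7)-(9) and return its
    log-probability under the factorized policy
    pi(a|s) = p1(a1|s) * p2(a2|a1, s) * p3(a3|a1, a2, s).
    The arguments are stand-ins for the sub-policy heads: precomputed logits over
    the attachment sites of g_t, and two callables that produce logits conditioned
    on the previously sampled actions (a sketch of the control flow only)."""
    d1 = Categorical(logits=site_logits)
    a1 = d1.sample()                                    # attachment site of g_t
    d2 = Categorical(logits=frag_logits_fn(a1))
    a2 = d2.sample()                                    # fragment from the vocabulary S
    d3 = Categorical(logits=frag_site_logits_fn(a1, a2))
    a3 = d3.sample()                                    # attachment site of the fragment
    log_prob = d1.log_prob(a1) + d2.log_prob(a2) + d3.log_prob(a3)
    return (a1, a2, a3), log_prob
```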
### Fragment Modification and Dynamic Vocabulary Update
With the fragment assembly module only, we are unable to generate molecules consisting of fragments not included in the predefined vocabulary, which hinders generation of diverse molecules and
precludes exploration beyond the vocabulary. In order to overcome this problem, we introduce the fragment modification module, which utilizes a genetic algorithm (GA) to generate molecules that contain novel fragments.
Specifically, we employ a graph-based genetic algorithm (GA) (Jensen, 2019). At the first round of the GA, we initialize the population with the top-\(P\) molecules generated by the fragment assembly module. The GA then selects parent molecules from the population and generates offspring molecules by performing crossover and mutation. As a consequence of the crossover and mutation operations, the generated offspring molecules contain novel fragments not in the initial vocabulary. In the subsequent rounds, we choose the top-\(P\) molecules generated so far by both SAC and GA to construct the GA population of the next round.
We iteratively run the fragment assembly module described in Section 3.2 and the fragment modification in turn, and this generative scheme is referred to as GEAM-static. To further enhance molecular diversity and novelty, we propose incorporating the fragment extraction module into this generative cycle. Concretely, in each cycle after the fragment assembly and the fragment modification modules generate molecules, FGIB extracts novel goal-aware fragments \(\mathcal{S}^{\prime}\) from the offspring molecules as described in Section 3.1. Then the vocabulary is dynamically updated as \(\mathcal{S}\cup\mathcal{S}^{\prime}\). When the size of the vocabulary becomes larger than the maximum size \(L\), we choose the top-\(L\) fragments as the vocabulary based on the scores in Eq. (6). The fragment assembly module assembles fragments of the updated vocabulary in the next iteration, and we refer to this generative framework as GEAM. The single generation cycle of GEAM is described in Algorithm 1 in Section A.
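The dynamic vocabulary update can be summarized by the following sketch, which merges newly extracted fragments into the current vocabulary and truncates it to the top-\(L\) fragments by the Eq. (6) score; representing fragments by a key such as their SMILES is an assumption.

```python
def update_vocabulary(vocab_scores, new_frag_scores, max_size=1000):
    """Dynamic vocabulary update of GEAM (a sketch). Both arguments map a
    fragment key (e.g. a SMILES string) to its Eq. (6) score; when the merged
    vocabulary S ∪ S' exceeds the maximum size L, only the L highest-scoring
    fragments are kept."""
    merged = {**vocab_scores, **new_frag_scores}        # S ∪ S'
    if len(merged) <= max_size:
        return merged
    top = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)[:max_size]
    return dict(top)
```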
## 4 Experiments
We demonstrate the efficacy of our proposed GEAM in two sets of multi-objective molecular optimization tasks that simulate real-world drug discovery problems. We first conduct the experiment to generate novel molecules that have high binding affinity, drug-likeness, and synthesizability in Section 4.1. We then experiment on the practical molecular optimization (PMO) benchmark in Section 4.2. We further conduct extensive ablation studies and qualitative analysis in Section 4.3.
### Optimization of Binding Affinity under QED, SA and Novelty Constraints
Experimental setupFollowing Lee et al. (2023), we validate GEAM in the five docking score (DS) optimization tasks under the quantitative estimate of drug-likeness (QED) (Bickerton et al., 2012), synthetic accessibility (SA) (Ertl and Schuffenhauer, 2009), and novelty constraints. In these tasks, the goal is to generate novel, drug-like, and synthesizable molecules that have a high absolute value of the docking score. Following Lee et al. (2023), we set the property \(Y\) as follows:
\[Y(G)=\widehat{\text{DS}}(G)\times\text{QED}(G)\times\widehat{\text{SA}}(G) \in[0,1], \tag{11}\]
where \(\widehat{\text{DS}}\) and \(\widehat{\text{SA}}\) are the normalized DS and the normalized SA, respectively (Eq. (16)). We use ZINC250k (Irwin et al., 2012) to train FGIB to predict \(Y\) and extract initial fragments. Optimization performance is evaluated with 3,000 generated molecules using the following metrics. **Novel hit ratio (%)** measures the fraction of unique and novel hits among the generated molecules. Here, _novel_ molecules are defined as the molecules that have a maximum Tanimoto similarity of less than \(0.4\) with the molecules in the training set, and _hits_ are the molecules that satisfy the following criteria: DS \(<\) (the median DS of known active molecules), QED \(>0.5\), and SA \(<5\). **Novel top 5% DS (kcal/mol)** measures the average DS of the top 5% unique, novel hits. parp1, fa7, 5ht1b, braf and jak2 are used as the protein targets for which the docking scores are calculated. In addition, we evaluate the fraction of novel molecules, **novelty (%)**, and the extent of chemical space covered, **#Circles** (Xie et al., 2023), of the generated hits. The details are provided in Section C.1 and Section C.2.
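For illustration, a sketch of the combined objective of Eq. (11) is given below; the docking and SA scores are assumed to come from external oracles, and the normalizations are placeholders standing in for Eq. (16), which is not reproduced here.

```python
from rdkit import Chem
from rdkit.Chem import QED


def property_y(smiles, docking_score, sa_score):
    """Combined objective of Eq. (11) (a sketch). The docking score is assumed to
    come from an external docking oracle and the SA score from the usual SA
    estimator; the clipping-based normalizations below are illustrative
    placeholders for the paper's Eq. (16), chosen only to map DS and SA
    into [0, 1]."""
    mol = Chem.MolFromSmiles(smiles)
    qed = QED.qed(mol)
    ds_hat = min(max(-docking_score / 20.0, 0.0), 1.0)     # more negative DS -> closer to 1
    sa_hat = min(max((10.0 - sa_score) / 9.0, 0.0), 1.0)   # SA in [1, 10] -> [0, 1]
    return ds_hat * qed * sa_hat
```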
BaselinesREINVENT(Oliverona et al., 2017) is a SMILES-based RL model with a pretrained prior. **Graph GA**(Jensen, 2019) is a GA-based model that utilizes predefined crossover and mutation rules. **MORLD**(Jeon and Kim, 2020) is an RL model that uses the MolDQN algorithm (Zhou et al., 2019). **HierVAE**(Jin et al., 2020) is a VAE-based model that uses the hierarchical motif representation of molecules. **RationaleRL**(Jin et al., 2020) is an RL model that first identifies subgraphs that are likely responsible for the target properties (i.e., rationale) and then extends those
to complete molecules. **FREED**(Yang et al., 2021) is an RL model that assembles the fragments obtained using CReM (Polishchuk, 2020). **PS-VAE**(Kong et al., 2022) is a VAE-based model that uses the mined principal subgraphs as the building blocks. **MOOD**(Lee et al., 2023b) is a diffusion model that incorporates an out-of-distribution (OOD) control to enhance novelty. The details are provided in Section C.2, and the results of additional baselines are included in Table 7 and Table 8.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Target protein} \\ \cline{2-7} & parpl & fa7 & Stntlb & braf & jak2 \\ \hline REINVENT (Olivercoma et al., 2017) & 44.2 (\(\pm\) 15.5) & 23.2 (\(\pm\) 6.60) & 138.8 (\(\pm\) 19.16) & 18.0 (\(\pm\) 2.1) & 59.6 (\(\pm\) 8.1) \\ MORLD (Jeon \& Kim, 2020) & 1.4 (\(\pm\) 1.5) & 0.2 (\(\pm\) 0.40) & 22.2 (\(\pm\) 16.1) & 1.4 (\(\pm\) 1.2) & 6.6 (\(\pm\) 3.7) \\ HierVAE (Jin et al., 2020a) & 4.8 (\(\pm\) 1.6) & 0.8 (\(\pm\) 0.7) & 5.8 (\(\pm\) 1.0 & 3.6 (\(\pm\) 1.4) & 4.8 (\(\pm\) 0.7) \\ RationalERL (Jin et al., 2020b) & 61.3 (\(\pm\) 1.2) & 2.0 (\(\pm\) 0.0) & **312.7** (\(\pm\) 6.3) & 1.0 (\(\pm\) 0.0) & **199.3** (\(\pm\) 7.1) \\ FEED (Yang et al., 2021) & 34.8 (\(\pm\) 4.9) & 21.2 (\(\pm\) 4.0) & 88.2 (\(\pm\) 13.4) & 34.4 (\(\pm\) 1.8) & 59.6 (\(\pm\) 8.2) \\ PS-VAE (Kong et al., 2022) & 38.0 (\(\pm\) 6.4) & 18.0 (\(\pm\) 5.5) & 180.7 (\(\pm\) 11.6) & 16.0 (\(\pm\) 0.8) & 83.7 (\(\pm\) 11.9) \\ MOOD (Lee et al., 2023b) & 86.4 (\(\pm\) 11.2) & 19.2 (\(\pm\) 4.0) & 144.4 (\(\pm\) 15.1) & 50.8 (\(\pm\) 3.8) & 81.8 (\(\pm\) 5.7) \\ \hline GEAM-static (ours) & 114.0 (\(\pm\) 2.9) & 60.7 (\(\pm\) 4.0) & 134.7 (\(\pm\) 8.5) & 70.0 (\(\pm\) 2.2) & 99.3 (\(\pm\) 1.7) \\ GEAM (ours) & **123.0 (\(\pm\) 7.8)** & **79.0 (\(\pm\) 9.2)** & 144.3 (\(\pm\) 8.6) & **84.7** (\(\pm\) 8.6) & 118.3 (\(\pm\) 0.9) \\ \hline \hline \end{tabular}
\end{table}
Table 4: **#Circles of generated hit molecules.** The #Circles threshold is set to 0.75. The results are the means and the standard deviations of 3 runs. The results for the baselines except for RationaleRL and PS-VAE are taken from Lee et al. (2023b). The best results are highlighted in bold.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Target protein} \\ \cline{2-7} & parpl & fa7 & Stntlb & braf & jak2 \\ \hline REINVENT (Olivercoma et al., 2017) & 8.70 (\(\pm\) 0.523) & 7.26 (\(\pm\) 0.264) & 8.70 (\(\pm\) 0.316) & 8.92 (\(\pm\) 0.460) & -8.16 (\(\pm\) 0.277) \\ Graph GA (Jensen, 2019) & -10.949 (\(\pm\) 0.532) & -7.365 (\(\pm\) 0.236) & -10.422 (\(\pm\) 0.620) & -10.789 (\(\pm\) 0.341) & -10.167 (\(\pm\) 0.535) \\ MORLD (Jeon \& Kim, 2020) & -7.532 (\(\pm\) 0.204) & -6.263 (\(\pm\) 0.165) & 7.589 (\(\pm\) 0.650) & 8.40 (\(\pm\) 0.373) & -7.816 (\(\pm\) 0.133) \\ HFeVAE (Jin et al., 2020a) & 9.487 (\(\pm\) 0.279) & 8.612 (\(\pm\) 0.274) & 8.001 (\(\pm\) 0.253) & 8.978 (\(\pm\) 0.335) & -8.285 (\(\pm\) 0.330) \\ RationaleRL (Jin et al., 2020b) & -10.663 (\(\pm\) 0.086) & -8.129 (\(\pm\) 0.084) & -9.005 (\(\pm\) 0.155) & 8.0 (\(\pm\) 19.6) & -9.735 (\(\pm\) 0.202) \\ FRED (Yang et al., 2021) & -10.579 (\(\pm\) 0.091) & -8.028 (\(\pm\) 0.080) & -9.887 (\(\pm\) 0.115) & -9.637 (\(\pm\) 0.049) & -9.464 (\(\pm\) 1.219) \\ PS-VAE (Kong et al., 2022) & -9.978 (\(\pm\) 0.091) & -8.028 (\(\pm\) 0.060) & -9.887 (\(\pm\) 0.115) & -9.637 (\(\pm\) 0.049) & -9.464 (\(\pm\) 1.219) \\ MOOD (Lee et al., 2023b) & -10.865 (\(\pm\) 0.113) & -8.160 (\(\pm\) 0.071) & -11.145 (\(\pm\) 0.042) & -11.063 (\(\pm\) 0.034) & -10.147 (\(\pm\) 0.060) \\ \hline GEAM-static (ours) & -12.810 (\(\pm\) 0.124) & -9.682 (\(\pm\) 0.250) & -12.369 (\(\pm\) 0.086) & -12.336 (\(\pm\) 0.157) & -11.812 (\(\pm\) 0.085) \\ GEAM (ours) & **-12.891** (\(\pm\) 0.159) & **-9.890** (\(\pm\) 0.116) & **-12.374** (\(\pm\) 0.060) & **-12.342** (\(\pm\) 0.095) & **-11.816** (\(\pm\) 0.05) \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Novel top 5% docking score (kcal/mol) results.** The results are the means and the standard deviations of 3 runs. The results for the baselines except for RationaleRL and PS-VAE are taken from Lee et al. (2023b). The best results are highlighted in bold.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{6}{c}{Target protein} \\ \cline{2-7} & parpl & fa7 & Stntlb & braf & jak2 \\ \hline REINVENT (Olivercoma et al., 2017) & 44.2 (\(\pm\) 15.5) & 23.2 (\(\pm\) 6.60) & 138.8 (\(\pm\) 19.16) & 18.0 (\(\pm\) 2.1) & 59.6 (\(\pm\) 8.1) \\ MORLD (Jeon \& Kim, 2020) & 1.4 (\(\pm\) 1.5) & 0.2 (\(\pm\) 0.40) & 22.2 (\(\pm\) 16.1) & 1.4 (\(\pm\) 1.2) & 6.6 (\(\pm\) 3.7) \\ HierVAE (Jin et al., 2020a) & 4.8 (\(\pm\) 1.6) & 0.8 (\(\pm\) 0.7) & 5.8 (\(\pm\) 1.0 & 3.6 (\pm\) 1.4) & 4.8 (\(\pm\) 0.7) \\ RationaleRL (Jin et al., 2020b) & 61.3 (\(\pm\) 1.2) & 2.0 (\(\pm\) 0.0) & **312.7** (\(\pm\) 6.3) & 1.0 (\(\pm\) 0.0) & **199.3** (\(\pm\) 7.1) \\ FEED (Yang et al., 2021) & 34.8 (\(
ResultsThe results are shown in Table 1 and Table 2. GEAM and GEAM-static significantly outperform all the baselines in all the tasks, demonstrating that the proposed goal-aware extraction method and the proposed combination of SAC and GA are highly effective in discovering novel, drug-like, and synthesizable drug candidates that have high binding affinity. GEAM shows comparable or better performance than GEAM-static, and as shown in Table 3 and Table 4, the usage of the dynamic vocabulary update enhances novelty and diversity without degrading optimization performance. There is a general trend that the more powerful the molecular optimization model, the less likely it is to generate diverse molecules (Gao et al., 2022), but GEAM effectively overcomes this trade-off by discovering novel and high-quality goal-aware fragments on-the-fly. Note that the high novelty values of MORLD are trivial due to its poor optimization performance and very low diversity. In the same vein, the high diversity values of RationaleRL on the target proteins 5ht1b and jak2 are not meaningful due to its poor optimization performance and novelty.
### Optimization of Multi-property Objectives in PMO Benchmark
Experimental setupWe validate GEAM in the seven multi-property objective (MPO) optimization tasks in the practical molecular optimization (PMO) benchmark (Gao et al., 2022), which are the tasks in the Guacamol benchmark (Brown et al., 2019) that additionally take the number of oracle calls into account for realistic drug discovery. The details are provided in Section C.1 and C.3.
**Baselines** We use the top three models reported by Gao et al. (2022) as our baselines. In addition to **REINVENT** (Olivecrona et al., 2017) and **Graph GA** (Jensen, 2019), we compare against **STONED** (Nigam et al., 2021), a GA-based model that manipulates SELFIES strings.
**Results** As shown in Table 5, GEAM outperforms the baselines in most of the tasks, demonstrating its applicability to various drug discovery problems. Note that GEAM distinctly improves the performance over GEAM-static in some tasks. Furthermore, as shown in Table 6, GEAM shows higher novelty and diversity than the other methods. In particular, GEAM generates more novel and diverse molecules than GEAM-static, again verifying that the dynamic vocabulary update of GEAM effectively improves novelty and diversity without degrading optimization performance.
ulary, respectively. GEAM (property) is GEAM which only uses the property instead of Eq. (6) when scoring fragments, i.e., \(\texttt{score}(F_{j})=\frac{1}{|S(F_{j})|}\sum_{(G,Y)\in S(F_{j})}Y\). GEAM significantly outperforms all the variants, verifying the importance of our goal-aware fragment vocabulary. Notably, GEAM (property) uses the topmost fragments in terms of the target property, but performs worse than GEAM because it does not use FGIB to find important subgraphs that contribute to the property.
Effect of the fragment assembly and modificationTo examine the effect of the proposed combinatorial use of the assembly and the modification modules, we compare GEAM with GEAM-w/o A and GEAM-w/o M in Figure 3(c). GEAM-w/o A does not use the assembly module and constructs its population as the top-\(P\) molecules from ZINC250k, while GEAM-w/o M does not use the modification module. GEAM-random A uses random fragment assembly instead of SAC. We can observe GEAM-w/o A significantly underperforms as the fragment modification module alone cannot take the advantage of the goal-aware fragments, and GEAM-random A largely improves over GEAM-w/o A. GEAM outperforms all the ablated variants, demonstrating that jointly leveraging the fragment assembly module and the fragment modification module is crucial to the performance.
Effect of the dynamic vocabulary updateTo thoroughly examine the effect of the proposed dynamic update of the fragment vocabulary, we compare the generation progress of GEAM with that of GEAM-static in Figure 4. GEAM-static-1000 is GEAM-static with the vocabulary size \(K=\) 1,000. When the initial vocabulary size \(K=300\) and the maximum vocabulary size \(L=\) 1,000, the vocabulary size of GEAM increases during generation from 300 to 1,000 as GEAM dynamically collects fragments on-the-fly while the vocabulary sizes of GEAM-static and GEAM-static-1000 are fixed to 300 and 1,000, respectively. As expected, GEAM-static-1000 shows the worst optimization performance since its vocabulary consists of top-1000 fragments instead of top-300 from the same training molecules, and shows the highest diversity as it utilizes more fragments than GEAM and GEAM-static throughout the generation process. GEAM shows the best optimization performance and novelty thanks to its vocabulary update strategy that constantly incorporates novel fragments outside the training molecules, as well as improved diversity compared to GEAM-static.
**Qualitative analysis** We qualitatively analyze the extracted goal-aware fragments. In Figure 3(d), we present an example of the binding interactions between a molecule and the target protein jak2 using the protein-ligand interaction profiler (PLIP) (Adasme et al., 2021). Additionally, we show the fragments of the molecule and their importance weights \(w\) calculated by FGIB. We observe that the important fragments identified by FGIB with high \(w\) (red and blue) indeed play a crucial role in interacting with the target protein, while the fragments with low \(w\) (gray) are not involved in the interactions. This analysis validates the efficacy of the proposed goal-aware fragment extraction method using FGIB and suggests the application of FGIB as a means to improve the explainability of drug discovery.
Figure 4: **The generation progress of GEAM and GEAM-static** on the ligand generation task against jak2.
Figure 3: (a-c) **Ablation studies** on **FGIB**, **SAC and GA** on the ligand generation task with the target protein jak2 and (d) **the PLIP image** showing hydrophobic interactions between an example molecule and jak2.
## 5 Conclusion
In this paper, we proposed GEAM, a fragment-based molecular generative framework for drug discovery. GEAM consists of three modules, FGIB, SAC, and GA, responsible for goal-aware fragment extraction, fragment assembly, and fragment modification, respectively. In the generative cycle of the three modules, FGIB provides goal-aware fragments to SAC, SAC provides high-quality population to GA, and GA provides novel fragments to FGIB, enabling GEAM to achieve superior optimization performance with high molecular novelty and diversity on a variety of drug discovery tasks. These results highlight its strong applicability to real-world drug discovery.
|
2304.02111 | Observational signatures of a static $f(R)$ black hole with thin
accretion disk | In this study, we focus on a static spherically symmetric $f(R)$ black hole
spacetime characterized by a linear dark matter-related parameter. Our
investigation delves into understanding the influence of different assumed
values of this parameter on the observable characteristics of the black hole.
To fulfill this task, we investigate the light deflection angles, which are
inferred from direct analytical calculations of null geodesics. To examine the
black hole's properties further, we assume an optically thin accretion disk and
explore various emission profiles. Additionally, we investigate the shadow cast
by the illuminated black hole when affected by the disk. Furthermore, we
simulate the brightness of an infalling spherical accretion in the context of
silhouette imaging for the black hole. Our findings indicate that, except for
some specific cases, the observed brightness of the accretion disk
predominantly arises from direct emission, rather than lensing and photon
rings. Moreover, we reveal that the linear dark parameter of the black hole
significantly influences the shadow size and brightness. Our discussion covers
both analytical and numerical approaches, and we utilize ray-tracing methods to
produce accurate visualizations. | Mohsen Fathi, Norman Cruz | 2023-04-04T20:27:41Z | http://arxiv.org/abs/2304.02111v4 | Study of deflection angles, thin accretion structure, and the observational signatures of a static \(f(R)\) black hole
###### Abstract
In this paper, we constrain the linear dark-matter related parameter of a static spherically symmetric \(f(R)\) black hole spacetime regarding the observed angular diameters of M87* and Sgr A* from the EHT. We then investigate the light deflection angles inferred from direct analytical calculation of null geodesics and that obtained from the Gauss-Bonnet theorem. Assuming an optically thin accretion disk for the black hole and after discussing its properties, we conceive different emission profiles and investigate the shadow cast of this black hole when it is illuminated by the disk. Furthermore, we simulate the brightness of an infalling spherical accretion in the context of the silhouette imaging of the black hole. We find that, excluding some specific cases, the specific observed brightness of the accretion disk consists of the direct emission, rather than that for the lensing and photon rings. Furthermore, it is revealed that the linear dark parameter of the black hole has considerable effects on the size of the shadow and its brightness. The discussion is done both analytically and numerically, and ray-tracing methods are employed to generate proper visualizations.
_keywords_: Black holes, \(f(R)\) gravity, thin accretion, shadow
###### Contents
* I Introduction and Motivation
* II A particular model of \(f(R)\) gravity and its black hole solution
* III Propagation of light and unstable photon orbits
* III.1 Motion of mass-less particles
* III.1.1 Deflecting trajectories
* III.1.2 Deflection angle
* III.1.3 Critical trajectories
* IV Weak deflection angle using the GBT
* V Thin accretion disk model and emission from the black hole
* VI Shadow and rings of the black hole with thin accretion
* VI.1 Direct emission, lensing rings and photon rings
* VI.2 Transfer functions and the observed intensities
* VI.3 Observational signatures of emissions from the accretion disk
* VI.4 Observational signatures of infalling spherical accretion
* VII Conclusion
* A The full expression of \(\mathscr{J}(r)\)
## I Introduction and Motivation
Ever since the fundamental concepts of black holes were theoretically established by Schwarzschild [1] and Finkelstein [2], the search for the identification of these strange objects has been on a rising course. In this sense, from the first observational evidence obtained for Cygnus X-1 in 1971 [3; 4], to the recent shadow images of M87* [5] and Sgr A* [6] captured by the Event Horizon Telescope (EHT), the quest for gaining more knowledge about black holes has continued and advanced. In fact, by comparing theoretical predictions with observed shadows, one can obtain invaluable knowledge about how light behaves in strongly gravitating systems. Furthermore, the EHT results revealed the existence of a magnetic field around M87*, which could be related to the formation of jets emerging from the black hole [7; 8; 9]. The shadow images can also provide information about the geometric structure in the near-horizon regions [10] and the physical characteristics of the black hole [11]. On the other hand, although these findings have provided strong evidence in favor of the general theory of relativity, there are some contexts in cosmology in which general relativity ceases to be fully descriptive. These include the problems of flat galactic rotation curves, anti-lensing, the universe's accelerated expansion, the observed anisotropies in the cosmic microwave background radiation, and the coincidence problem [12; 13; 14; 15; 16; 17].
Many scientists believe that the above phenomena emerge from the dark side of the universe, which has not yet been explained properly. For example, by adding the cosmological constant term to the Einstein field equations as a nonzero vacuum energy, the acceleration of the universe can be reproduced, but the reason for the small value of the cosmological constant remains unknown. This is why some believe that, in order to explain unresolved cosmological problems such as the late-time acceleration of the universe, one should turn to modified theories of gravity, which can mimic the effects of dark matter and dark energy and provide an effective time-varying equation of state. In such models, the Einstein-Hilbert action is generalized or extended as needed, so as to explain the dynamics of the universe on cosmic, galactic, or astrophysical scales. For example, by replacing the Einstein-Hilbert action with a generic \(f(R)\) theory, one of the most intuitive extensions of general relativity is obtained [18; 19; 20]. Hence, \(f(R)\) theories of gravity have attracted interest during the last decades and have been scrutinized for their consistency (see for example Refs. [21; 22; 23; 24; 25; 26; 27] and the reviews [28; 29]). On the other hand, as in general relativity, we are also interested in the black hole solutions proposed by \(f(R)\) gravity. One primary solution obtained from this theory in its Palatini formalism is the Schwarzschild-(anti-)de Sitter metric with an effective cosmological constant, which appears to suffer from incompatibilities with primary general relativistic tests, since the cosmological constant plays no effective role on solar system scales [30]. In fact, one can avoid this problem by manipulating the action in such a way that the effect of the cosmological constant becomes negligible on solar system scales while remaining significant on cosmological scales [31; 32]. Accordingly, a suitable \(f(R)\) action has been proposed in Refs. [33; 34; 35] which is consistent with both galactic and cosmological scales, and in Ref. [36] this model was elaborated by means of a generic function in the gravitational action, so as to be consistent with solar system tests, as well as with galactic rotation curves and the late-time acceleration of the universe. There, the authors also propose a static spherically symmetric black hole solution, which is of interest in this paper with regard to light propagation in its geometry and its shadow.
In fact, theoretically constraining black hole shadows with observational data has been of special interest, and numerous publications have been devoted to this subject (see for example Refs. [37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71]). The recent silhouette imaging by the EHT has made it even more important for the scientific community to have at hand reliable methods of visualizing black holes with accretion disks as their illumination sources. This line of work was ignited by Luminet in 1979 [72], who calculated the radiation emitted from a thin accretion disk surrounding a Schwarzschild black hole and proposed a ray-traced image of the disk. In general, this type of accretion is based on the Shakura-Sunyaev [73], Novikov-Thorne [74], and Page-Thorne [75] models, in which the disk is assumed to be geometrically and optically thin. Based on these assumptions, and following the interest in black hole imaging, a new method of simulating the higher-order light rings of a black hole with a thin accretion disk was proposed in Ref. [76], and it has since been applied in several publications (see for example Refs. [77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87]). This method also forms an important part of our paper.
We organize our discussion as follows: In Sect. II, we briefly review the \(f(R)\) black hole solution and introduce its cosmological parameters. In Sect. III, we begin our investigation by studying the causal structure of the spacetime, followed by a Lagrangian formalism used to derive the equations of motion for mass-less particles (light rays). There, we calculate the critical impact parameter of photon trajectories, at which the orbits become unstable. This way, we are able to constrain the first-order dark parameter \(\beta\) of the spacetime by comparing the angular diameter of the theoretical black hole with those inferred from the shadow images of M87* and Sgr A* by the EHT. We continue this section by finding the turning points of the light ray trajectories as they approach the black hole, and then obtain the exact analytical solutions to the angular equation of motion for deflecting and critical trajectories. These solutions are applied to find the lens equation, and the deflection angle is calculated analytically in terms of the Weierstrass elliptic function. In fact, gravitational lensing is a remarkable tool for examining black hole solutions in the strong-field regime [88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98]. Weak lensing, in turn, is of importance to astrophysicists and cosmologists, as it enables them to estimate the matter distribution profiles inside galaxies, as well as inside other portions of the observable universe [99; 100; 101; 102; 103]. Hence, weak lensing appears as a powerful tool in studying the properties of dark matter and dark energy. Accordingly, we once again turn to the calculation of the light deflection angle in Sect. IV, this time through a different mathematical method. This method was proposed by Gibbons and Werner in Ref. [104], where they apply the
Gauss-Bonnet theorem (GBT) to calculate the weak deflection angle of light. This geometrical theorem has proven to be widely applicable in mathematics and physics, and here we apply it to calculate the weak deflection angle of light around the \(f(R)\) black hole through direct calculation. In Sect. V, we construct a thin accretion disk for the black hole in the context of the Novikov-Thorne model. We calculate the dynamical characteristics of accreting particles in their stable orbits and obtain the radial profiles of the disk's radiation flux and temperature. We continue this section by employing the method introduced in Ref. [76] to visualize the light rings and the accretion disk of the \(f(R)\) black hole for three different disk emission profiles. Furthermore, we calculate the thickness of the rings, which is also inferred from the observed effective intensity profiles. At the end of this section, the black hole is assumed to be endowed with a spherically symmetric infalling accretion, and the observed emission is calculated. We close our discussion by simulating the shadow of the black hole under this condition. We conclude in Sect. VI. Throughout this work, we use the signature convention \((-+++)\), and wherever they appear on functions, primes denote differentiation with respect to the radial coordinate. We apply the geometrized unit system, in which \(G=c=1\).
## II A particular model of \(f(R)\) gravity and its black hole solution
The gravitational action of the theory can be written in its most generic form as
\[\mathcal{S}=\frac{1}{2\kappa}\int\mathrm{d}x^{4}\sqrt{-g}\;f(R)+\mathcal{S}_{ m}, \tag{1}\]
in which, \(\kappa\) is a coupling constant, \(f(R)\) is a function of the Ricci scalar of the spacetime with the metric determinant \(g\), and \(\mathcal{S}_{m}\) is a matter field action. Accordingly, the field equations are derived as
\[F(R)R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}f(R)-\left(\nabla_{\mu}\nabla_{\nu}-g_{\mu \nu}\Box\right)F(R)=\kappa T_{\mu\nu}, \tag{2}\]
by varying the action \(\mathcal{S}\) with respect to the metric, where \(F(R)=\frac{\mathrm{d}f(R)}{\mathrm{d}R}\), \(\Box=\nabla_{\lambda}\nabla^{\lambda}\) and \(T_{\mu\nu}\) is the energy-momentum tensor. In Ref. [36], the particular expression
\[f(R)=R+\Lambda+\frac{R+\Lambda}{\frac{R}{R_{0}}+\frac{2}{\alpha}}\ln\frac{R+ \Lambda}{R_{c}}, \tag{3}\]
was considered, where \(\Lambda\) is the cosmological constant having the value \(|\Lambda|\leq 10^{-52}\) m\({}^{-2}\)[105]1, \(R_{0}=\frac{6\alpha^{2}}{d^{2}}\) with \(\alpha\) and \(d\) being free parameters of the action, and \(R_{c}\) is an integration constant. Here \(\alpha\) is dimensionless whereas \([d]=\mathrm{m}\). The proposed static spherically symmetric solution to the field equations (2) is given by the line element
Footnote 1: Unless otherwise stated, this value of \(\Lambda\) is used in the forthcoming sections of the paper.
\[\mathrm{d}s^{2}=-B(r)\mathrm{d}t^{2}+B(r)^{-1}\mathrm{d}r^{2}+r^{2}\left(\mathrm{d}\theta^{2}+\sin^{2}\theta\,\mathrm{d}\phi^{2}\right), \tag{4}\]
in the usual Schwarzschild coordinates \(x^{\mu}=(t,r,\theta,\phi)\), where the lapse function is given by [36]
\[B(r)=1-\frac{2M}{r}+\beta r-\frac{1}{3}\Lambda r^{2}, \tag{5}\]
in the first-order approximation with respect to the free parameters of the action, and describes the exterior geometry of a static spherically symmetric object of mass \(M\), where \(\beta=\frac{\alpha}{d}\) is a non-negative real constant. Note that the above expression has remarkable similarities with the Mannheim-Kazanas vacuum solution of fourth-order Weyl conformal gravity [106], where the linear term \(\beta r\) plays the role of an extra potential compensating for the flat galactic rotation curves. In the same sense, the model given in Eq. (5) proposes that small values of \(\alpha\) can account for the flat rotation curves of a typical galaxy. Note that the model (3) reduces to
\[f(R)=R+R_{0}\ln\frac{R}{R_{c}}, \tag{6}\]
for strong curvature, where \(R\gg\Lambda\) and \(\frac{R}{R_{0}}\gg\frac{2}{\alpha}\), which is relevant to the case of stellar black holes. In the cosmological regime, where \(R\simeq R_{0}\simeq\Lambda\) and \(\alpha\ll 1\), the model instead reduces to \(f(R)=R+\Lambda\), which corresponds to the Einstein-Hilbert action with a cosmological constant that describes the accelerated expansion of the universe. Hence, the small-valued
free parameter \(\beta\) in Eq. (5) can cover both large- and small-scale phenomena in the universe. In what follows, we estimate this parameter using the data obtained from the EHT for M87* and Sgr A*, so as to calibrate the spacetime parameters for the strong-gravity regime in the vicinity of a stellar black hole.
We begin by studying the black hole exterior geometry and its causal structure. For convenience in the calculations and demonstrations, we adimensionalize the parameters by introducing the quantities
\[\tilde{r}\rightarrow\frac{r}{M},\quad\tilde{\beta}\rightarrow\beta M,\quad \tilde{\Lambda}\rightarrow\frac{1}{3}\Lambda M^{2}. \tag{7}\]
In the forthcoming sections, however, we remove the "tilde" overscript from the dimensionless parameters in Eq. (7), which is equivalent to setting \(M=1\).
## III Propagation of light and unstable photon orbits
The causal structure of the spacetime described by the lapse function (5) can be studied in terms of the hypersurfaces at which the condition \(B(r)=0\) is satisfied, namely, the black hole horizons. This latter results in a cubic equation, whose solutions are
\[r_{1} =\frac{\beta}{3\Lambda}-\frac{4}{\Lambda}\sqrt{\frac{g_{2}}{3}} \cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}} \right)-\frac{4\pi}{3}\right), \tag{8}\] \[r_{2} =\frac{\beta}{3\Lambda}-\frac{4}{\Lambda}\sqrt{\frac{g_{2}}{3}} \cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}} \right)-\frac{2\pi}{3}\right),\] (9) \[r_{3} =\frac{\beta}{3\Lambda}-\frac{4}{\Lambda}\sqrt{\frac{g_{2}}{3}} \cos\left(\frac{1}{3}\arccos\left(\frac{3g_{3}}{g_{2}}\sqrt{\frac{3}{g_{2}}} \right)\right), \tag{10}\]
where
\[g_{2} =\frac{1}{12}\left(\beta^{2}+3\Lambda\right), \tag{11a}\] \[g_{3} =-\frac{1}{16}\left(\frac{2\beta^{3}}{27}+\frac{\beta\Lambda}{3} -2\Lambda^{2}\right). \tag{11b}\]
The existence of real values for the above radii, however, depends on the sign of the polynomial's discriminant, i.e. \(\Delta=g_{2}^{3}-27g_{3}^{2}=\frac{\Lambda^{2}}{256}[\beta^{2}(1+8\beta)+4(1+9\beta)-108\Lambda^{2}]\), which is always positive for \(\beta,\Lambda\ll 1\). We therefore infer that all the solutions (8)-(10) are real-valued. It is straightforward to verify that \(r_{1}>r_{2}>0\) and \(r_{3}<0\). Hence we identify \(r_{++}=r_{1}\) as the cosmological horizon of the black hole, where the infinite blueshift occurs, and \(r_{+}=r_{2}\) as its event horizon, where the infinite redshift occurs. This way, the lapse function can be rewritten as
\[B(r)=\frac{\Lambda}{r}\left(r_{++}-r\right)\left(r-r_{+}\right)\left(r-r_{3} \right). \tag{12}\]
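For quick numerical reference (a minimal sketch in Python, not part of the analysis above; the values \(\beta=0.022\) and \(\Lambda=10^{-8}\) are illustrative choices), the horizon radii can be located directly as the roots of \(rB(r)\) and checked against the factorized form of Eq. (12):

```python
import numpy as np

# Illustrative (dimensionless) parameters, M = 1 as in Eq. (7)
beta, Lam = 0.022, 1e-8

def B(r):
    """Lapse function of Eq. (5) in dimensionless form."""
    return 1.0 - 2.0/r + beta*r - Lam*r**2

# r*B(r) = -Lam*r^3 + beta*r^2 + r - 2 : its roots are the horizons
roots = np.roots([-Lam, beta, 1.0, -2.0])
roots = np.sort(roots.real[np.abs(roots.imag) < 1e-6])
r3, r_plus, r_pp = roots                    # r3 < 0 < r_+ < r_++
print("event horizon r_+         =", r_plus)
print("cosmological horizon r_++ =", r_pp)

# consistency check of the factorized lapse, Eq. (12)
r = 10.0
B_fact = Lam/r * (r_pp - r) * (r - r_plus) * (r - r3)
print("B(10) direct vs factorized:", B(r), B_fact)
```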
### Motion of mass-less particles
The motion of test particles can be described by the Lagrangian
\[2\mathscr{L} = g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu} \tag{13}\] \[= -B(r)\dot{t}^{2}+\frac{\dot{r}^{2}}{B(r)}+r^{2}\dot{\theta}^{2}+r ^{2}\sin^{2}\theta\dot{\phi}^{2},\]
where "dot" stands for differentiation with respect to the affine parameter \(\tau\) of the geodesic curves. Enjoying the spherical symmetry of the spacetime, we confine the motion of particles to the equatorial plane (i.e. \(\theta=\frac{\pi}{2}\)) without loss of generality. One can then define the conjugate momenta
\[\Pi_{\mu}=\frac{\partial\mathscr{L}}{\partial\dot{x}^{\mu}}, \tag{14}\]
which provides the two constants of motion
\[\Pi_{t}=-B(r)\dot{t}=-E, \tag{15a}\] \[\Pi_{\phi}=r^{2}\dot{\phi}=L, \tag{15b}\]
in accordance with the Killing symmetries of the spacetime; we refer to them, respectively, as the energy and the angular momentum of the test particles. These two quantities allow us to define the impact parameter \(b\equiv\frac{L}{E}\). This parameter corresponds to the perpendicular distance between the tangent to the null geodesic and the line passing through the black hole singularity, and it is important in identifying the possible photon trajectories. In fact, the motion of photons is described by the condition \(\mathscr{L}=0\), which characterizes the null geodesics. Thus, by means of Eq. (13), the equations of motion are obtained as
\[\dot{r}^{2}=E^{2}-V(r), \tag{16}\] \[\left(\frac{\mathrm{d}r}{\mathrm{d}\phi}\right)^{2}=\frac{r^{4}} {b^{2}}\left[1-\frac{V(r)}{E^{2}}\right], \tag{17}\]
in which
\[V(r)=L^{2}\frac{B(r)}{r^{2}}, \tag{18}\]
represents the effective potential for photons. Then, the turning points \(r_{t}\) in the orbits correspond to \(\dot{r}=0\), which is encountered when \(V(r_{t})=E^{2}\). This potential has a maximum at
\[r_{p}=\frac{\beta_{0}-1}{\beta}, \tag{19}\]
given that \(\beta_{0}=\sqrt{1+6\beta}\), which is where the photon orbits become unstable. Hence, this maximum corresponds to the radius of the photon sphere. As it is observed, this radius is independent of \(\Lambda\), and decreases as \(\beta\) increases. One can also verify that
\[\lim_{\beta\to 0}r_{p}=3, \tag{20}\]
which is the radius of unstable photon orbits for a Schwarzschild-de Sitter black hole. It is straightforward to calculate the critical value of the impact parameter which is obtained as
\[b_{p}=\frac{\sqrt{3}\left(\beta_{0}-1\right)}{\sqrt{\beta^{2}\left(2\beta_{0} -1\right)-18\beta\Lambda+6\left(\beta_{0}-1\right)\Lambda}}, \tag{21}\]
and identifies the radius of the black hole shadow for a distant observer. In fact, the shadow of black holes is a dark region confined by the lensed images of their luminous background, which constitute bright rings. For an observer at the distance \(D\) (in Mpc) from the black hole, the shadow is identified by its angular diameter (in \(\mu\)as) [107]
\[\Omega=6.191165\times 10^{-8}\,\frac{\gamma b_{p}}{\pi D}, \tag{22}\]
where \(\gamma\) is the mass ratio of the black hole and the Sun, which is \(\gamma=6.2\times 10^{9}\) for M87* at the distance \(D=16.8\,\mathrm{Mpc}\) [5], and \(\gamma=4.14\times 10^{6}\) for Sgr A* at \(D=8.127\,\mathrm{kpc}\) [6]. Hence, one can use Eq. (21) in Eq. (22) in order to put constraints on the parameter \(\beta\). In Fig. 1, we have used these equations to obtain the profiles of \(\Omega(\beta)\) for the above black holes. As observed in the figure, the value of \(\Omega\) decreases along the curves and constrains \(0<\beta<0.023\) (with the mean value \(\beta\approx 0.011\)) for M87*, and \(0.022<\beta<0.041\) (with the mean value \(\beta\approx 0.031\)) for Sgr A*. It is now possible to visualize the behavior of the effective potential (18), which is shown in Fig. 2. In the left panel, five radial profiles of the effective potential have been plotted, corresponding to different allowed values of the \(\beta\)-parameter. As inferred from the diagram, the potential possesses one maximum, at which unstable orbits can occur. This is shown in more detail in the right panel of Fig. 2, where the orbits are categorized according to the values of the impact parameter \(b\). When \(b>b_{p}\), the photons may encounter either of the turning points \(r_{d}\) (where they are deflected away from the black hole) or \(r_{f}\) (where they are deflected towards the event horizon). In fact, the turning points can be obtained analytically for the spacetime of the \(f(R)\) black hole by solving the equation \((\frac{\mathrm{d}r}{\mathrm{d}\phi})^{2}=0\). Applying Eqs. (17) and (18), this results in
\[\left(\frac{\mathrm{d}r}{\mathrm{d}\phi}\right)^{2}=\mathcal{P}_{4}(r)\equiv r \left(\frac{r^{3}}{\lambda^{2}}-\beta r^{2}-r+2\right)=0, \tag{23}\]
which beside the trivial solution at \(r=0\), has one negative root and the two positive roots \(r_{d}=x_{d}^{-1}\) and \(r_{f}=x_{f}^{-1}\), where
\[x_{f}=\frac{1}{6}-2\sqrt{\frac{\bar{g}_{2}}{3}}\sin\left(\frac{1 }{3}\arcsin\left(\frac{3\bar{g}_{3}}{\bar{g}_{2}}\sqrt{\frac{3}{\bar{g}_{2}}} \right)-\frac{2\pi}{3}\right), \tag{24}\] \[x_{d}=\frac{1}{6}-2\sqrt{\frac{\bar{g}_{2}}{3}}\sin\left(\frac{1 }{3}\arcsin\left(\frac{3\bar{g}_{3}}{\bar{g}_{2}}\sqrt{\frac{3}{\bar{g}_{2}}} \right)\right), \tag{25}\]
with \(\frac{1}{\lambda^{2}}=\frac{1}{b^{2}}+\Lambda\), and
\[\bar{g}_{2} =\frac{1}{4}\left(\frac{1}{3}+2\beta\right), \tag{26a}\] \[\bar{g}_{3} =-\frac{1}{4}\left(\frac{1}{\lambda^{2}}-\frac{\beta}{6}-\frac{1} {54}\right). \tag{26b}\]
For the case of \(b=b_{p}\), the photons encounter the turning point \(r_{p}\) (see the right panel of Fig. 2), which has been determined in Eq. (19). At this stage, the photons travel on unstable (or critical) orbits, which form the black hole shadow. In Table 1, the turning points are given for different values of \(\beta\). As inferred from the table, an increase in the \(\beta\)-parameter leads to a smaller black hole (decrease in \(r_{+}\)), a wider effective potential (increase in the distance between \(r_{d}\) and \(r_{f}\)), and a lower potential maximum. Unstable orbits are therefore less likely to occur for larger \(\beta\) (decrease in \(b_{p}\)), and the black hole shadow decreases in size. In such cases, the light rays detected by a distant observer are mostly dominated by the direct emission, which is a simply lensed image of the black hole's emitting disk or its luminous background. This will be discussed in more detail in the forthcoming sections.
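For reference, the quantities entering this constraint can be evaluated with a few lines of code (a minimal sketch; \(b_{p}\) is computed from its defining relation \(b_{p}=r_{p}/\sqrt{B(r_{p})}\) rather than from the closed form (21), \(\Lambda\) is taken to be negligible on these scales, and the mass ratios and distances are those quoted below Eq. (22)):

```python
import numpy as np

def shadow_quantities(beta, Lam=0.0):
    """Photon sphere radius, Eq. (19), and critical impact parameter b_p = r_p/sqrt(B(r_p))."""
    r_p = (np.sqrt(1.0 + 6.0*beta) - 1.0)/beta if beta > 0 else 3.0
    B_p = 1.0 - 2.0/r_p + beta*r_p - Lam*r_p**2
    return r_p, r_p/np.sqrt(B_p)

def angular_diameter(beta, gamma, D_Mpc):
    """Angular diameter in micro-arcseconds, Eq. (22)."""
    _, b_p = shadow_quantities(beta)
    return 6.191165e-8 * gamma * b_p / (np.pi * D_Mpc)

# mass ratios and distances quoted below Eq. (22)
m87  = dict(gamma=6.2e9,  D_Mpc=16.8)
sgrA = dict(gamma=4.14e6, D_Mpc=8.127e-3)

for beta in (0.0, 0.011, 0.022, 0.031, 0.041):
    r_p, b_p = shadow_quantities(beta)
    print(f"beta = {beta:6.3f}:  r_p = {r_p:.4f}, b_p = {b_p:.4f}, "
          f"Omega(M87*) = {angular_diameter(beta, **m87):.2f} uas, "
          f"Omega(Sgr A*) = {angular_diameter(beta, **sgrA):.2f} uas")
```

The resulting values of \(b_{p}\) can be compared with the corresponding row of Table 1.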
Now that the turning points have been obtained and analyzed, we proceed with the determination of the exact solutions for
Figure 1: The black and blue curves show the \(\beta\)-profiles of the angular diameter \(\Omega\) in Eq. (22). The green and red regions correspond, respectively, to the observed angular diameters of M87* (\(42\pm 3\)\(\mu\)as [5]) and Sgr A* (\(51.8\pm 2.3\)\(\mu\)as [6]). The intersection of the \(\beta\)-profiles with the aforementioned regions, constrains the \(\beta\)-parameter within \(0<\beta<0.023\) for M87*, and \(0.022<\beta<0.041\) for Sgr A*.
Figure 2: In panel (a), the radial profile of the effective potential is shown for various values of \(\beta\). In panel (b), by adopting \(\beta=0.022\), a typical effective potential has been shown together with the turning points and their corresponding impact parameters.
the aforementioned possible photon orbits around the \(f(R)\) black hole. In fact, the null geodesics of this black hole have been studied in their most general form in Ref. [108]. In what follows, however, we base our exact solutions on the analytically known turning points, which allows us to investigate, separately, the deflecting and critical trajectories that are of importance for the purpose of this paper.
#### III.1.1 Deflecting trajectories
As mentioned above, photon trajectories are deflected at the turning points \(r_{d}\) and \(r_{f}\), which lead to different fates for the photons. Accordingly, the possible deflecting trajectories can be classified as the orbit of the first kind (OFK) at \(r_{d}\), and the orbit of the second kind (OSK) at \(r_{f}\). Since both of these turning points have been identified analytically, by applying the change of variable \(z\doteq\frac{r_{i}}{r}\) (\(r_{i}=r_{d},r_{f}\)) one can rewrite the differential equation (23) as
\[\left(\frac{\mathrm{d}z}{\mathrm{d}\phi}\right)^{2}=\mathcal{P}_{3}(z)\equiv \frac{2}{r_{i}}z^{3}-z^{2}-r_{i}\beta z+\frac{r_{i}^{2}}{\lambda^{2}}. \tag{27}\]
A further change of variable \(u\doteq\frac{1}{2}(\frac{z}{r_{i}}-\frac{1}{6})\) provides us with the Weierstrassian differential equation
\[\left(\frac{\mathrm{d}u}{\mathrm{d}\phi}\right)^{2}=\tilde{\mathcal{P}}_{3}(u )\equiv 4u^{3}-\tilde{g}_{2}u-\tilde{g}_{3}, \tag{28}\]
in which
\[\tilde{g}_{2} =\frac{1}{12}(1+6\beta), \tag{29a}\] \[\tilde{g}_{3} =-\frac{1}{216}\left(\frac{54}{\lambda^{2}}-9\beta-1\right), \tag{29b}\]
are known as the Weierstrass invariants. This leads to the integrals
\[\phi-\phi_{0} =\int_{u_{d}}^{u}\frac{\mathrm{d}u^{\prime}}{\sqrt{\tilde{ \mathcal{P}}_{3}(u^{\prime})}}\;\;(\mathrm{with}\;u_{d}<u), \tag{30}\] \[\phi-\phi_{0} =\int_{u}^{u_{f}}\frac{\mathrm{d}u^{\prime}}{\sqrt{\tilde{ \mathcal{P}}_{3}(u^{\prime})}}\;\;(\mathrm{with}\;u_{f}>u), \tag{31}\]
respectively, for the OFK and OSK, in which \(\phi_{0}\) is the initial azimuth angle, and \(u_{i}=\frac{1}{2}(\frac{1}{r_{i}}-\frac{1}{6})\). Taking into account the applied changes of variables, the above integrals yield
\[r(\phi)=\frac{6}{1+12\wp\left(\omega_{d}-(\phi-\phi_{0})\right)}, \tag{32}\]
for the OFK and
\[r(\phi)=\frac{6}{1+12\wp\left(\omega_{f}+(\phi-\phi_{0})\right)}, \tag{33}\]
for the OSK, where \(\wp(x)\equiv\wp(x;\tilde{g}_{2},\tilde{g}_{3})\) is the \(\wp\)-Weierstrassian elliptic function [109], and we have defined
\[\omega_{i}=\wp^{-1}\left(\frac{1}{2r_{i}}-\frac{1}{12}\right). \tag{34}\]
| \(\beta\) | 0.0 | 0.011 | 0.022 | 0.031 | 0.041 |
| --- | --- | --- | --- | --- | --- |
| \(r_{+}\) | 2.0001 | 1.9563 | 1.9196 | 1.8880 | 1.8583 |
| \(r_{p}\) | 3.0000 | 2.9503 | 2.9077 | 2.8704 | 2.8350 |
| \(b_{p}\) | 5.1968 | 4.9468 | 4.7445 | 4.5762 | 4.4228 |
| \(r_{d}\) | 6.3692 | 6.8091 | 7.2156 | 7.5957 | 7.9813 |
| \(r_{f}\) | 2.1797 | 2.1076 | 2.0544 | 2.0103 | 1.9699 |

Table 1: The turning points of photonic trajectories together with their corresponding values of \(b_{p}\).
#### III.1.2 Deflection angle
The OFK is in fact related to the gravitational lensing that is caused by the black hole and is, in part, responsible for the formation of the black hole shadow. Hence, by having at hand the integral equation (30), one can calculate the deflection angle \(\hat{\Theta}\) that an observer \(\mathbb{O}\) at the radial position \(r_{\mathbb{O}}\) from the black hole (the lens) measures. Accordingly, we have [110]
\[\hat{\Theta} = 2\Delta\phi-\pi \tag{35}\] \[= 2\int_{u_{\mathbb{O}}}^{u_{d}}\frac{\mathrm{d}u^{\prime}}{\sqrt{\tilde{\mathcal{P}}_{3}(u^{\prime})}}-\pi=2\left[\wp^{-1}(u_{\mathbb{O}})-\wp^{-1}(u_{d})\right]-\pi,\]
with the Weierstrass invariants given in Eqs. (29). Using the analytical expression for \(r_{d}\), we have plotted the behavior of \(\hat{\Theta}\) in terms of the impact parameter \(b\) in Fig. 3.
As can be inferred from the diagram, the deflection angle is not very sensitive to the small allowed changes in the \(\beta\)-parameter. In general, however, \(\hat{\Theta}\) decreases for a given \(b\) as the \(\beta\)-parameter increases from 0 to its maximum value. In this sense, the Schwarzschild-de Sitter spacetime causes the largest deflection angle. Naturally, by approaching the black hole (smaller \(b\)) the deflection angle increases until it diverges at \(b_{p}\) for each of the cases. Strong lensing therefore occurs in the near-horizon regions, whereas farther from the black hole the light deflection corresponds to weak lensing. Since weak lensing is an interesting phenomenon with many astrophysical applications, in Sect. IV we apply another mathematical approach to recalculate the weak deflection angle and compare the results with the direct integration method used here.
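The curves in Fig. 3 can also be cross-checked by evaluating the integral in Eq. (35) through direct numerical quadrature instead of the \(\wp\)-function (a sketch; the substitution \(r=r_{d}+x^{2}\) removes the square-root singularity at the turning point, and the parameter values below are illustrative):

```python
import numpy as np
from scipy.integrate import quad

beta, Lam = 0.022, 1e-8
r_obs = 1e5                                   # observer radius, as in Fig. 3

def theta_hat(b):
    inv_lam2 = 1.0/b**2 + Lam
    # quartic of Eq. (23); its largest positive root is the turning point r_d
    roots = np.roots([inv_lam2, -beta, -1.0, 2.0, 0.0])
    r_d = max(x.real for x in roots if abs(x.imag) < 1e-10 and x.real > 0)

    def dphi_dr(r):
        P4 = r*(inv_lam2*r**3 - beta*r**2 - r + 2.0)
        return 1.0/np.sqrt(P4)

    # substitution r = r_d + x**2 regularises the integrand at the turning point
    integrand = lambda x: 2.0*x*dphi_dr(r_d + x**2)
    dphi, _ = quad(integrand, 0.0, np.sqrt(r_obs - r_d), limit=200)
    return 2.0*dphi - np.pi                   # Eq. (35)

for b in (5.0, 6.0, 8.0, 12.0):
    print(f"b = {b:5.1f}   deflection angle = {theta_hat(b):.5f} rad")
```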
#### III.1.3 Critical trajectories
The unstable circular orbits at \(r_{p}\) have the proper and coordinate periods
\[T_{\tau} =\frac{2\pi}{b_{p}}r_{p}^{2}, \tag{36}\] \[T_{t} =2\pi b_{p}, \tag{37}\]
satisfying the relation
\[T_{\tau}=\frac{1}{3\beta^{2}}\left[\beta^{2}(2\beta_{0}-1)-18\beta\Lambda+6( \beta_{0}-1)\Lambda\right]T_{t}, \tag{38}\]
which implies that \(T_{\tau}<T_{t}\) (see also Fig. 4). Similar to the deflecting trajectories, unstable orbits can also lead to different fates, which we refer to as the critical orbits of the first kind (COFK), occurring when photons approach \(r_{p}\) from an initial distance \(r_{p}<r_{\mathrm{in}}<r_{++}\), and the critical orbits of the second kind (COSK), corresponding to photons approaching \(r_{p}\) from \(r_{+}<r_{\mathrm{in}}<r_{p}\). When \(\frac{1}{\lambda^{2}}\rightarrow\frac{1}{\lambda_{p}^{2}}=\frac{1}{b_{p}^{2}}+\Lambda\), the point \(r=r_{p}\) is a double root of \(\mathcal{P}_{4}(r)\) in Eq. (23). The differential equation of motion can then be factorized as
\[\frac{\mathrm{d}r}{\mathrm{d}\phi}\equiv\mathcal{P}_{4}^{p}(r)=\left|r-r_{p} \right|\sqrt{\frac{r^{2}}{\lambda_{p}^{2}}+\left(\frac{r_{p}}{\lambda_{p}^{2} }+\chi_{1}\right)r+\frac{r_{p}^{2}}{\lambda_{p}^{2}}+\chi_{1}r_{p}+\chi_{0}}, \tag{39}\]
Figure 3: The plot of the deflection angle \(\hat{\Theta}\) (in \(\mu\)as) versus the changes in the impact parameter \(b\), plotted for the allowed values of the \(\beta\)-parameter and \(r_{\mathbb{O}}=10^{5}\).
by means of the method of synthetic division, in which
\[\chi_{0} =r_{p}\left(\frac{r_{p}}{\lambda_{p}^{2}}-\beta\right)-1, \tag{40a}\] \[\chi_{1} =\frac{r_{p}}{\lambda_{p}^{2}}-\beta. \tag{40b}\]
One can therefore obtain the exact solutions to Eq. (39), by means of direct integration and applying the inversion. This yields the two solutions \(r_{\rm I}(\phi)\) for the COFK and \(r_{\rm II}(\phi)\) for the COSK, which are given as
\[r_{\rm I}(\phi)=\frac{1}{\mathcal{A}^{2}+8r_{p}^{2}+(\mathcal{A }^{2}-4\mathcal{B})\cosh\Phi}\Big{[}2r_{p}\left(\mathcal{A}^{2}-4\mathcal{B} \right)\cosh^{2}\Phi\pm[r_{p}(r_{p}+\mathcal{A})+\mathcal{B}]\\ \times\sqrt{(\mathcal{A}^{2}-4\mathcal{B})\operatorname{sech}^{ 2}\Phi\tanh^{2}\Phi}\big{(}\cosh(2\Phi)\mp 2\mathcal{A}\big{)}\Big{]}, \tag{41}\]
where \(\Phi=\lambda_{p}^{-1}(\phi-\phi_{0})\sqrt{r_{p}(\mathcal{A}+r_{p})+\mathcal{B}}\), and
\[\mathcal{A} =r_{p}+\chi_{1}\lambda_{p}^{2}, \tag{42a}\] \[\mathcal{B} =r_{p}^{2}+\lambda_{p}^{2}\left(\chi_{0}+\chi_{1}r_{p}\right). \tag{42b}\]
In Fig. 5, some examples of the possible photon orbits have been shown in the context of allowed values for the \(\beta\)-parameter. As it can be inferred from the diagrams, the boundary of the black hole shadow is based on these four types of orbits. Together, these photon orbits are able to produce the bright ring surrounding the dark shadow in the observer's sky, which is produced as a result of the strong gravitational lensing in the near-horizon regions. On the other hand, the weakly lensed luminous background of black holes (such as galaxies, stars, etc.), is of great importance in the context of extragalactic astronomy, because it helps astrophysicists to estimate the matter distribution in distant quasars, or in active galactic nuclei (AGNs) in general. Hence, in the next section we investigate weak gravitational lensing for the \(f(R)\) black hole.
## IV Weak deflection angle using the GBT
On the equatorial plane and for mass-less particles, Eq. (13) can be recast as
\[\mathrm{d}t^{2}=\frac{\mathrm{d}r^{2}}{B(r)^{2}}+\frac{r^{2}}{B(r)}\mathrm{d} \phi^{2}, \tag{43}\]
which is known as the optical line element, and describes a two-dimensional space-like subspace of the base manifold given by the line element (4). This way, the optical metric is inferred as \(\mathfrak{g}_{ij}=\mathrm{diag}(B^{-2},r^{2}B^{-1})\) with \(i=1,2\), that possesses the determinant \(\mathfrak{g}=r^{2}B^{-3}\). In this section, following the method introduced in Ref. [104], we apply the GBT as a method of calculating the light deflection angle. In this method, the light ray trajectories are supposed to exist in a domain described by the optical metric, and hence, they are treated as spatial geodesics. As explained in Ref. [104], to calculate the deflection angle one can consider a non-singular domain \((\mathcal{D},\chi,\mathbf{g})\) as a subset of an oriented Riemannian surface with the Euler characteristic \(\chi\) and
Figure 4: The \(\beta\)-profile of the ratio \(\frac{T_{r}}{T_{t}}\). The green region corresponds to the allowed values of \(\beta\) within the M87* and Sgr A* observational data, which implies that \(\frac{T_{r}}{T_{t}}<1\).
the metric \(\mathbf{g}\) whose relevant Gaussian curvature is \(\mathcal{K}\). Now if the boundary of this domain, \(\partial\mathcal{D}\), has a geodesic curvature \(\mathscr{K}\), then the GBT can be given as [111, 112]
\[\int\int_{\mathcal{D}}\mathcal{K}\,\mathrm{d}S+\int_{\partial\mathcal{D}} \mathscr{K}\,\mathrm{d}t+\sum_{i}\theta_{i}=2\pi\chi(\mathcal{D}), \tag{44}\]
in which \(\mathrm{d}S\) is the infinitesimal area element of \(\mathcal{D}\), \(\theta_{i}\) is an exterior angle at the \(i\)th vertex, and \(\chi(\mathcal{D})=1\) since \(\mathcal{D}\) is non-singular. In this sense, a smooth congruence \(\mathbf{\gamma}:\{t\}\to\mathcal{D}\) of unit speed, i.e. \(\mathbf{g}(\dot{\mathbf{\gamma}},\dot{\mathbf{\gamma}})\equiv\dot{\mathbf{\gamma}}\cdot\dot{ \mathbf{\gamma}}=1\), and its acceleration \(\ddot{\mathbf{\gamma}}\), span a Frenet frame. This way, the geodesic curvature of \(\mathbf{\gamma}\) is given by [111]
\[\mathscr{K}=\mathbf{g}(\nabla_{\dot{\mathbf{\gamma}}}\dot{\mathbf{\gamma}},\ddot{\mathbf{ \gamma}}), \tag{45}\]
which vanishes iff \(\mathbf{\gamma}\) is a geodesic congruence. We assume that the observer \(\mathbb{O}\) at the interior angle \(\theta_{\mathbb{O}}\), the lens \(\mathbb{L}\) and the source \(\mathbb{S}\) at \(\theta_{\mathbb{S}}\), are located in the same two-dimensional surface. Then these two angles sum up to
\[\theta_{\mathbb{O}}+\theta_{\mathbb{S}}=\int\int_{\mathcal{D}_{1}}\mathcal{K} \,\mathrm{d}S, \tag{46}\]
where \(\mathcal{D}_{1}\) is a non-singular subspace of \(\mathcal{D}\) which is bounded by the geodesics that connect \(\mathbb{S}\) to \(\mathbb{O}\). In Ref. [113], it has been proved that for a static spherically symmetric spacetime with the optical metric of the form (43), the weak deflection angle can be given by means of Eq. (46), which yields
\[\hat{\vartheta}=\int\int_{{}^{\mathbb{O}}\triangle^{\mathbb{S}}_{\mathbb{L}}}\mathcal{K}\,\mathrm{d}S, \tag{47}\]
Figure 5: Examples of (a) OFK, (b) OSK, (c) COFK and (d) COSK, plotted for \(\beta=0.022\). In the panels (a) and (b), the red circle at the center denotes \(r_{+}\), while the blue and purple dashed circles correspond to \(r_{f}\) and \(r_{d}\). In panels (c) and (d), the exterior dashed circles indicate \(r_{p}\).
where \({}^{\mathbb{O}}\triangle^{\mathbb{S}}_{\mathbb{L}}\) is the triangle formed by the observer, the lens and the source. According to the characteristics of the spacetime under study, this can be recast as
\[\hat{\vartheta}=\int_{0}^{\pi}\int_{r_{\rm c}}^{r_{\mathbb{O}}}\mathcal{K}\,{\rm d}S, \tag{48}\]
where \(r_{\rm c}\) is the distance of closest approach, \({\rm d}S=\sqrt{\mathfrak{g}}\,{\rm d}r{\rm d}\phi\), and the Gaussian curvature can be related to the Riemann tensor as [113; 114]
\[\mathcal{K}=\frac{R_{r\phi r\phi}}{\mathfrak{g}}=-\frac{1}{4}B^{ \prime}(r)^{2}+\frac{1}{2}B(r)B^{\prime\prime}(r). \tag{49}\]
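As a consistency check (an illustrative sketch, not part of the derivation), the closed form above can be compared symbolically with the Gaussian curvature computed directly from the optical metric (43), using the standard curvature formula for a metric of the form \(E(r)\,\mathrm{d}r^{2}+G(r)\,\mathrm{d}\phi^{2}\) written in a radical-free way:

```python
import sympy as sp

r, beta, Lam = sp.symbols('r beta Lambda', positive=True)
B = 1 - 2/r + beta*r - Lam*r**2                 # lapse function, Eq. (5)

# optical metric of Eq. (43): E(r) dr^2 + G(r) dphi^2
E = 1/B**2
G = r**2/B

# Gaussian curvature of a metric E(r) dr^2 + G(r) dphi^2,
# K = -G''/(2 E G) + G' (E G)'/(4 (E G)^2)  (Liouville formula, radical-free form)
EG = E*G
K_direct = -sp.diff(G, r, 2)/(2*EG) + sp.diff(G, r)*sp.diff(EG, r)/(4*EG**2)

# closed form quoted in Eq. (49)
K_paper = -sp.Rational(1, 4)*sp.diff(B, r)**2 + sp.Rational(1, 2)*B*sp.diff(B, r, 2)

print(sp.simplify(K_direct - K_paper))          # expected output: 0
```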
Since we are concerned with deflecting trajectories that escape from the black hole, we can identify \(r_{\rm c}=r_{d}\). This way, and using Eq. (5), direct integration of Eq. (48) results in the following exact expression for the weak deflection angle:
\[\hat{\vartheta}=\frac{\pi}{2}\left.\left(\frac{\beta r^{2}+2r-6}{\sqrt{r\left(\beta r^{2}+r-2-\Lambda r^{3}\right)}}\right)\right|_{r_{d}}^{r_{\mathbb{O}}}, \tag{50}\]
with \(r_{d}=x_{d}^{-1}\), and \(x_{d}\) given analytically in Eq. (25). In Fig. 6, we have used the expressions in Eqs. (25) and (50), to plot the \(b\)-profile of \(\hat{\vartheta}\).
Comparing the behaviors of the deflection angles demonstrated in Figs. 6 and 3, one observes a steeper fall of the \(b\)-profile curves for \(\hat{\Theta}\) compared with those for \(\hat{\vartheta}\). On the other hand, when \(\beta\) increases, the decrease in the deflection angle at fixed \(b\) occurs at a larger rate for \(\hat{\vartheta}\). This difference stems from the methods employed in calculating \(\hat{\Theta}\) and \(\hat{\vartheta}\), namely the direct integration of the angular geodesic equation and the GBT, respectively.
As mentioned before, the lensing phenomena and the critical photon orbits are responsible for confining the black hole shadow and for the formation of the photon rings. In the next section, we first construct a thin accretion disk model and discuss, analytically and numerically, the emission process from this disk that gives rise to the photon rings.
## V Thin accretion disk model and emission from the black hole
In this section, we study the observational signatures of the black hole in the presence of a thin accretion disk. We assume that the accretion process is described by a generalized version of the well-known Shakura-Sunyaev model [73], proposed by Novikov and Thorne in Ref. [74]. To apply this model, let us return to the Lagrangian (13), which is now subject to \(2\mathscr{L}=-1\) for the massive particles of energy \(\mathcal{E}\) and angular momentum \(\mathcal{L}\) that constitute the accretion disk. This way, one can rewrite the equations of motion (16) and (17) as
\[\dot{r}^{2}=\mathcal{E}^{2}-\mathcal{V}(r), \tag{51}\] \[\left(\frac{{\rm d}r}{{\rm d}\phi}\right)^{2}=\frac{\mathscr{P}_{6}(r)}{\mathcal{L}^{2}}, \tag{52}\]
Figure 6: The plot of the weak deflection angle \(\hat{\vartheta}\) (in \(\mu\)as) versus the changes in the impact parameter \(b\), plotted for the allowed values of the \(\beta\)-parameter and \(r_{\rm 0}=10^{5}\).
in which \(\mathscr{P}_{6}(r)=r\left[\Lambda r^{5}-\beta r^{4}-(1-\mathcal{E}^{2}-\mathcal{L}^{2}\Lambda)r^{3}+(2-\mathcal{L}^{2}\beta)r^{2}-\mathcal{L}^{2}r+2\mathcal{L}^{2}\right]\), and
\[\mathcal{V}(r)=B(r)\left(1+\frac{\mathcal{L}^{2}}{r^{2}}\right), \tag{53}\]
is the effective potential for massive particles orbiting the black hole in the equatorial plane. The left panel of Fig. 7 shows a typical radial profile of \(\mathcal{V}(r)\), plotted for the allowed values of \(\beta\).
According to the diagram, the effective potential possesses a minimum, which allows for stable circular orbits of the particles. The latter is a necessary condition for the formation of accretion disks in the context of the innermost stable circular orbit (ISCO), whose corresponding radius, \(r_{c}\), can be obtained from the conditions \(\mathscr{P}_{6}(r)=0=\mathscr{P}_{6}^{\prime}(r)\). This radius corresponds to the inner edge of the accretion disk, and as we move away from the black hole, particles move on bound Keplerian orbits. In the right panel of Fig. 7, the position of the ISCO has been indicated for each of the cases. As can be inferred, an increase in the \(\beta\)-parameter decreases \(r_{c}\), and hence affects the structure of the accretion disk. Furthermore, the above conditions make it possible to obtain the analytical expressions
\[\mathcal{E}_{c}(r)=\frac{B(r)}{\sqrt{B(r)-r^{2}\varpi_{c}(r)^{2}}}=\frac{\sqrt{2\Lambda}\,(r_{++}-r)(r-r_{+})(r-r_{3})}{r\sqrt{\frac{3r_{++}r_{+}r_{3}}{r}+r\left(r_{++}+r_{+}+r_{3}\right)-2\left[r_{++}(r_{+}+r_{3})+r_{+}r_{3}\right]}}, \tag{54}\] \[\mathcal{L}_{c}(r)=\frac{r^{2}\varpi_{c}(r)}{\sqrt{B(r)-r^{2}\varpi_{c}(r)^{2}}}=\frac{r^{\frac{3}{2}}\sqrt{r_{++}+r_{+}+r_{3}-\frac{r_{++}r_{+}r_{3}}{r^{2}}-2r}}{\sqrt{\frac{3r_{++}r_{+}r_{3}}{r}+r\left(r_{++}+r_{+}+r_{3}\right)-2\left[r_{++}(r_{+}+r_{3})+r_{+}r_{3}\right]}}, \tag{55}\]
for the energy and angular momentum of particles on circular orbits, in which
\[\varpi_{c}(r)=\frac{\mathrm{d}\phi}{\mathrm{d}t}=\sqrt{\frac{B^{\prime}(r)}{2 r}}=\sqrt{\frac{\Lambda}{2}}\left(\frac{r_{++}+r_{+}+r_{3}}{r}-\frac{r_{++}r_{+}r_{3 }}{r^{3}}-2\right)^{\frac{1}{2}}, \tag{56}\]
is the angular velocity of orbiting particles, and we have used the expression given in Eq. (12). In Fig. 8, the radial profile of the above quantities has been plotted for given values of the \(\beta\)-parameter. It is observed that by increasing \(\beta\), all the quantities increase which is a consequence of the relevant changes in the effective potential. Note that, on the ISCO, one can recast the characteristic polynomial as \(\mathscr{P}_{6}(r)=\Lambda r(r-r_{c})^{3}(r-r_{4})(r-r_{5})\), where \(r_{4}>0\) and \(r_{5}<0\) are the remaining two real roots of the characteristic polynomial, which can be expressed in terms of \(r_{c}\) by means of the method of synthetic division.
For an accretion disk to be thin, its radius must be large compared to its thickness. In addition, the disk is considered to be in local hydrodynamical equilibrium at each point, which implies low pressure and small vertical gradients within the disk. We assume that the cooling process in the disk is fast enough to prevent heat buildup due to particle friction. To ensure the stability of the disk, we assume that the accretion rate along the radial direction, \(\mathscr{A}^{r}\), is constant in time, such that
\[\mathscr{A}^{r}=-2\pi\sqrt{-g}\,\Sigma\,\mathcal{U}^{r}=\mathrm{const.}, \tag{57}\]
in which \(\sqrt{-g}=r^{2}\), \(\Sigma\) is the surface density of the disk, and \(\mathcal{U}^{r}=\dot{r}\) is radial component of the four-velocity of the accreting particles. From the conservation of energy and angular momentum, one can obtain the differential of the luminosity as [115; 75]
\[\frac{\mathrm{d}\ell}{\mathrm{d}\ln r}=4\pi r\sqrt{-g}\,\mathcal{E}_{c}(r) \mathcal{F}(r), \tag{58}\]
Figure 7: (a) The radial profile of the effective potential \(\mathcal{V}(r)\) plotted for the allowed values of the \(\beta\)-parameter and \(\mathcal{L}=10\). (b) The position of ISCO (shown by a point on each of the curves) for the same values of \(\beta\). From bottom to top, the corresponding values of the angular momentum are \(\mathcal{L}=3.46,3.64,3.74,3.81\) and \(3.86\).
where \(\mathcal{F}(r)\) is the flux of the radiated energy from the disk, and is given by
\[\mathcal{F}(r)=-\frac{\mathscr{A}^{r}}{4\pi\sqrt{-g}}\frac{1}{\left[\mathcal{E}_ {c}(r)-\varpi_{c}(r)\mathcal{L}_{c}(r)\right]^{2}}\varpi_{c}^{\prime}(r)\int_{ r_{c}}^{r}\left[\mathcal{E}_{c}(r)-\varpi_{c}(r)\mathcal{L}_{c}(r)\right] \mathcal{L}_{c}^{\prime}(r)\,\mathrm{d}r. \tag{59}\]
By considering the fact that \(\mathcal{E}_{c}^{\prime}(r)=\varpi_{c}(r)\mathcal{L}_{c}^{\prime}(r)\), one can rewrite [75]
\[\int_{r_{c}}^{r}\left[\mathcal{E}_{c}(r)-\varpi_{c}(r)\mathcal{L}_{c}(r) \right]\mathcal{L}_{c}^{\prime}(r)\,\mathrm{d}r=\mathcal{E}_{c}(r)\mathcal{L }_{c}(r)-\mathcal{E}_{c}(r_{c})\mathcal{L}_{c}(r_{c})-2\int_{r_{c}}^{r} \mathcal{L}_{c}(r)\mathcal{E}_{c}^{\prime}(r)\,\mathrm{d}r. \tag{60}\]
Now taking the expressions in Eqs. (54), (55) and (56) up to the first order in \(\Lambda\), and employing them in the integrand of the above relation, one can obtain the analytical solution
\[\mathcal{F}(r)=-\frac{\mathscr{A}^{r}}{4\pi\sqrt{-g}}\frac{\varpi_{c}^{\prime} (r)}{\left[\mathcal{E}_{c}(r)-\varpi_{c}(r)\mathcal{L}_{c}(r)\right]^{2}} \left[\mathcal{E}_{c}(r)\mathcal{L}_{c}(r)-\mathcal{E}_{c}(r_{c})\mathcal{L }_{c}(r_{c})-2\mathscr{J}(r)+2\mathscr{J}(r_{c})\right], \tag{61}\]
for the flux, for which \(\mathscr{J}(r)\) has been given in appendix A. Since the disk is thin, we can assume that the emission follows the radiation of a black body whose temperature profile is given by
\[\mathcal{T}(r)^{4}=\frac{\mathcal{F}(r)}{\sigma}, \tag{62}\]
in which \(\sigma\) is the Stefan-Boltzmann constant. In Fig. 9, the above relations have been employed to plot the radial profiles of the flux, temperature and differential luminosity for different values of the \(\beta\)-parameter. It can be observed that all of the above quantities increase with \(\beta\), which implies that the more the black hole departs from the Schwarzschild-de Sitter solution, the more intense the radiation from the accretion disk and the higher the disk's temperature.
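The profiles of Fig. 9 can be sketched numerically from the expressions above (a minimal illustration assuming \(\beta=0.022\), \(\Lambda=10^{-8}\) and arbitrary normalisations \(\mathscr{A}^{r}=\sigma=1\); here the ISCO is located as the minimum of \(\mathcal{L}_{c}(r)\), a standard criterion for marginal stability):

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import minimize_scalar

beta, Lam, Mdot = 0.022, 1e-8, 1.0        # accretion rate set to 1 (arbitrary units)

def B(r):  return 1 - 2/r + beta*r - Lam*r**2
def dB(r): return 2/r**2 + beta - 2*Lam*r

def omega_c(r):  return np.sqrt(dB(r)/(2*r))                  # Eq. (56)
def denom(r):    return np.sqrt(B(r) - r**2*omega_c(r)**2)
def E_c(r):      return B(r)/denom(r)                         # Eq. (54)
def L_c(r):      return r**2*omega_c(r)/denom(r)              # Eq. (55)

# ISCO: the angular momentum of circular orbits is minimized at r_c
r_isco = minimize_scalar(L_c, bounds=(3.05, 20.0), method='bounded').x
print("ISCO radius r_c =", r_isco)

# radial grid and numerical derivatives
r = np.linspace(r_isco, 60.0, 4000)
E, L, W = E_c(r), L_c(r), omega_c(r)
dW = np.gradient(W, r)
dL = np.gradient(L, r)

# flux of Eq. (59); sqrt(-g) = r^2 on the equatorial plane
integral = cumulative_trapezoid((E - W*L)*dL, r, initial=0.0)
F = -Mdot/(4*np.pi*r**2) * dW/(E - W*L)**2 * integral
T = F.clip(min=0.0)**0.25                                     # Eq. (62) with sigma = 1
print("flux peaks at r =", r[np.argmax(F)], "with maximum temperature", T.max())
```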
### Shadow and rings of the black hole with thin accretion
To a distant observer, a real black hole appears as a dark shaded region surrounded by an illuminated area. This area is generated by the light rays that originate from different parts of the accretion disk but are able to escape from the black hole. As discussed in Subsect. III.1, photons with certain impact parameters can escape from the black hole, in the context of the OFK and COFK (see diagrams (a) and (c) of Fig. 5). In this sense, the photons coming from the accretion disk can perform different numbers of orbits around the black hole before leaving it, and together they may generate several light rings that confine the shadow.
Figure 9: Radial profiles of (a) flux, (b) differential luminosity and (c) temperature, for the allowed values of the \(\beta\)-parameter and the same color-coding as in Fig. 7.
#### V.1.1 Direct emission, lensing rings and photon rings
Here, we follow the method introduced in Ref. [76], to characterize the light rings in the observer's sky, where the number of orbits is defined as
\[n=\frac{\phi}{2\pi}, \tag{63}\]
in which \(\phi\) is now the final azimuth angle of photons right before escaping the black hole. In this sense, \(n\) corresponds to the number of times that the light ray geodesics cross the plane of the accretion disk. In Ref. [76], these rays have been classified into the cases \(0.25<n<0.75\), where the light rays hit the accretion disk only once and constitute the direct emission; \(0.75<n<1.25\), for which the rays cross the accretion disk twice and together form the lensed (lensing) ring; and \(n>1.25\), which corresponds to the formation of the photon ring, for which the rays intersect the accretion disk more than twice. In Fig. 10, the behavior of \(n\) with respect to the impact parameter \(b\) has been demonstrated for the allowed values of the parameter \(\beta\), in which the domains of direct emission, lensing rings and photon rings have been color-coded distinctively. In this diagram, we have also simulated a large number of geodesics for each of the cases, including the OFK, OSK, COFK and COSK. Note that, from now on until the end of this section, the value \(\Lambda=10^{-8}\) is assumed2. As can be inferred from the diagrams, by increasing \(b\), the total number of orbits increases in the domain \(b<b_{p}\) until it reaches a narrow peak, and then decreases in the domain \(b>b_{p}\).
Footnote 2: In fact, switching to this value does not produce any appreciable changes in the photon orbit properties or in the characteristic distances of the spacetime. It is, however, important for our ray-tracing codes to work properly.
On the other hand, for larger values of the \(\beta\)-parameter, the width of the lensing and photon rings shrinks. In Table 2, this is also shown numerically by listing the ranges of \(b\) for the direct, lensing ring, and photon ring emissions for different values of \(\beta\). According to the data presented in this table, the range of \(b\) for all emission types shrinks as the \(\beta\)-parameter increases; the thickness of the photon and lensing rings therefore decreases. Accordingly, the angular size of the shadow also decreases for larger \(\beta\), and hence the contribution of the rings to the brightness is reduced. We continue by studying the observed emission intensity from the thin accretion disk in the framework of the \(f(R)\) black hole model.
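The ranges in Table 2 can be illustrated with a short backwards ray-tracing sketch (not the code used to produce Fig. 10; \(\beta=0.022\) and \(\Lambda=10^{-8}\) are assumed, and \(n\) is obtained as the total swept azimuth divided by \(2\pi\), as in Eq. (63)):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, Lam = 0.022, 1e-8
r_start = 1e3                                   # launch radius of the backwards-traced rays

# event horizon: smallest positive root of r*B(r) = -Lam r^3 + beta r^2 + r - 2
hor_roots = np.roots([-Lam, beta, 1.0, -2.0])
r_hor = min(x.real for x in hor_roots if abs(x.imag) < 1e-8 and x.real > 0)

def total_orbits(b):
    """n = (total swept azimuth)/(2*pi) for a ray of impact parameter b, cf. Eq. (63)."""
    inv_lam2 = 1.0/b**2 + Lam
    u0 = 1.0/r_start
    du0 = np.sqrt(inv_lam2 - beta*u0 - u0**2 + 2.0*u0**3)   # ingoing branch of Eq. (23)

    def rhs(phi, y):                            # u'' = 3u^2 - u - beta/2, with u = 1/r
        return [y[1], 3.0*y[0]**2 - y[0] - 0.5*beta]

    def escaped(phi, y):  return y[0] - u0      # back out to the launch radius
    def captured(phi, y): return y[0] - 1.0/r_hor
    escaped.terminal, escaped.direction = True, -1
    captured.terminal, captured.direction = True, +1

    sol = solve_ivp(rhs, [0.0, 40.0*np.pi], [u0, du0],
                    events=(escaped, captured), rtol=1e-10, atol=1e-12)
    return sol.t[-1]/(2.0*np.pi)

for b in (4.0, 4.74, 4.80, 5.00, 5.50, 7.00):
    n = total_orbits(b)
    kind = "photon ring" if n > 1.25 else "lensing ring" if n > 0.75 else "direct"
    print(f"b = {b:5.2f}   n = {n:6.3f}   -> {kind}")
```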
#### V.1.2 Transfer functions and the observed intensities
The radiation of the accretion disk is supposed to be isotropic in its rest frame. By \(I_{\rm e}(r)\), we denote the specific intensity of the radiation emitted at frequency \(\nu_{\rm e}\) from the disk. From Liouville's theorem, we know that the quantity \(\frac{I_{\rm e}(r)}{\nu_{\rm e}^{3}}\) is conserved along the entire path of light propagation. Hence, the emitted intensity and the observed intensity \(I_{\rm o}\) at frequency \(\nu_{\rm o}\) are related by \(\frac{I_{\rm e}(r)}{\nu_{\rm e}^{3}}=\frac{I_{\rm o}(r)}{\nu_{\rm o}^{3}}\) [116]. Accordingly, we have
\[I_{\rm o}(r)=\mathfrak{h}^{3}I_{\rm e}(r), \tag{64}\]
in which \(\mathfrak{h}=\nu_{\rm o}/\nu_{\rm e}\), which in our model equals \(\sqrt{B(r)}\). Now, by integrating over all the observed frequencies, the total observed specific intensity is obtained as
\[I_{\rm o}^{t}(r)=\int I_{\rm o}(r)\,{\rm d}\nu_{\rm o}=\mathfrak{h}^{4}I_{ \rm emit}(r), \tag{65}\]
in which the total emission intensity is given by \(I_{\rm emit}(r)=\int I_{e}(r){\rm d}\nu_{e}\). Note that, since each intersection of the light rays with the accretion disk adds to the observed brightness, the total observed intensity of the direct emission and the rings is
| \(\beta\) | Direct emission (\(0.25<n<0.75\)) | Lensing ring (\(0.75<n<1.25\)) | Photon ring (\(n>1.25\)) |
| --- | --- | --- | --- |
| 0.0 | \(b<5.01685\); \(b>6.14685\) | \(5.01685<b<b_{p}\); \(5.23685<b<6.14685\) | \(b_{p}<b<5.23685\) |
| 0.011 | \(b<4.80681\); \(b>5.71683\) | \(4.80681<b<b_{p}\); \(4.98682<b<5.71683\) | \(b_{p}<b<4.98682\) |
| 0.022 | \(b<4.62449\); \(b>5.38449\) | \(4.62449<b<b_{p}\); \(4.78449<b<5.38449\) | \(b_{p}<b<4.78449\) |
| 0.031 | \(b<4.47621\); \(b>5.13621\) | \(4.47621<b<b_{p}\); \(4.61621<b<5.13621\) | \(b_{p}<b<4.61621\) |
| 0.041 | \(b<4.33281\); \(b>4.91281\) | \(4.33281<b<b_{p}\); \(4.46281<b<4.91281\) | \(b_{p}<b<4.46281\) |

Table 2: The impact parameter domains corresponding to the direct emission, lensing rings and photon rings of the black hole, given for different values of the \(\beta\)-parameter.
given by
\[I_{\rm obs}(r)=\sum_{m}I_{\rm o}^{t}(r)|_{r=r_{m}(b)}, \tag{66}\]
where \(r_{m}(b)\) is the transfer function that relates the impact parameter of the light ray trajectories with the radial coordinate of the \(m\)th intersection of light rays with the accretion disk3. Hence, the slope of the transfer function indicates its (de)magnification
Figure 10: The \(b\)-profile of the total number of photon orbits \(n\), together with the behavior of the null geodesics in the near-horizon regions. In these diagrams, the direct, lensing ring and photon ring emissions have been color-coded appropriately. The black disk indicates the event horizon of the black hole whereas the green dashed circle denotes the radius of unstable (critical) orbits, \(r_{p}\). The diagrams correspond to the cases of (a,d) \(\beta=0\), (b,e) \(\beta=0.011\), (c,f) \(\beta=0.022\), (g,i) \(\beta=0.031\), and (h,j) \(\beta=0.041\).
scale [117, 76]. Therefore, this slope is called the (de)magnification factor. In Fig. 11, we have demonstrated the \(b\)-profile of the transfer function for different values of \(\beta\). In these diagrams, the black points lying on a line with an approximately constant slope correspond to the case of \(m=1\) and indicate the direct emission. This slope is almost equal to one and therefore indicates a redshifted source. In the case of the lensing ring, for \(m=2\), the impact parameter \(b_{c}\) is approached, and along the way the slope increases significantly above one. This shows that the back-side image of the accretion disk is demagnified. For the case of \(m=3\), the photon ring is formed and the slope tends to infinity. Hence, the front-side image of the accretion disk is extremely demagnified. Based on the above, one can infer that the contribution of the lensing and photon rings to the observed intensity is negligible, and the observed intensity mainly consists of the direct emission. Note that higher-order rings, for which \(n\geq 4\) (black hole subrings), do not have significant observational features, although they have appeared to produce some interferometric signatures [118].
#### V.1.3 Observational signatures of emissions from the accretion disk
In this part of the paper, we apply a ray-tracing procedure to produce the shadow of the black hole together with its accretion disk image. We consider a face-on view, which is sufficiently general and informative for the silhouette imaging of black holes.
To a distant observer, the accretion disk constitutes the main light source that illuminates the black hole. The brightness of this source is only a function of \(r\) and, as discussed earlier, it can be expressed in terms of the emitted intensity \(I_{\rm emit}\). To proceed further with the observational signatures of the \(f(R)\) black hole, we consider three toy models for the intensity profile of the thin accretion disk, described as follows (a brief numerical transcription of these profiles is given after the list):
* **Model 1**: In this model, the emission comes from the ISCO and the intensity profile is given by the decaying function \[I_{\rm emit}(r)=\begin{cases}\frac{1}{[r-(r_{p}-1)]^{2}}&\text{for }r>r_{c}\\ 0&\text{for }r\leq r_{c}\end{cases}.\] (67)
* **Model 2**: We assume that the radiation is originated from the photon sphere of the radius \(r_{p}\), and the emission intensity profile is expressed as \[I_{\rm emit}(r)=\begin{cases}\frac{1}{[r-(r_{p}-1)]^{3}}&\text{for }r>r_{p}\\ 0&\text{for }r\leq r_{p}\end{cases}.\] (68)
* **Model 3**: For the case that the emission starts from the event horizon radius \(r_{+}\), we consider an emission profile in which,
Figure 11: The transfer function \(r_{m}(b)\) plotted for different values of \(\beta\). The panels (a)–(e) correspond respectively to the cases of \(\beta=0.011,0.022,0.031\) and \(0.041\). The color coding is the same as that in Fig. 10.
the decay is more moderate compared with the last two models, and is given by
\[I_{\rm emit}(r)=\begin{cases}\frac{\frac{\pi}{2}-\arctan(r-(r_{c}-1))}{\frac{\pi}{2}-\arctan(r_{p})}&\text{for }r>r_{+}\\ 0&\text{for }r\leq r_{+}\end{cases}. \tag{69}\]
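For concreteness, the three profiles can be transcribed directly (a minimal sketch; the radii below are illustrative placeholders roughly corresponding to \(\beta=0.022\), not outputs of the ray tracing used for the figures):

```python
import numpy as np

# illustrative radii (units of M); roughly the beta = 0.022 case
r_hor, r_ph, r_isco = 1.92, 2.91, 5.6

def I_model1(r):
    """Eq. (67): emission starting at the ISCO."""
    return np.where(r > r_isco, 1.0/(r - (r_ph - 1.0))**2, 0.0)

def I_model2(r):
    """Eq. (68): emission starting at the photon sphere."""
    return np.where(r > r_ph, 1.0/(r - (r_ph - 1.0))**3, 0.0)

def I_model3(r):
    """Eq. (69): emission starting at the event horizon, with a milder decay."""
    num = np.pi/2 - np.arctan(r - (r_isco - 1.0))
    den = np.pi/2 - np.arctan(r_ph)
    return np.where(r > r_hor, num/den, 0.0)

r = np.linspace(1.0, 20.0, 5)
print(I_model1(r), I_model2(r), I_model3(r), sep="\n")
```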
Each of the above models has its own specific properties with respect to the black hole shadow, the second model's emission profile showing the fastest decay. These models, despite being rather idealized, can nevertheless provide useful insights into the light propagation in the exterior of black holes. In Figs. 12-16, the observational appearance of the accretion disk around the \(f(R)\) black hole is shown for each of the above models, together with the plots of the emitted and observed intensities for each of the cases.
In each of the figures, the first, second and third rows correspond, respectively, to models 1, 2 and 3, each showing the emitted intensity, the observed intensity and the shadow of the \(f(R)\) black hole. For model 1, the emitted intensity behaves asymptotically near \(b_{c}\) and afterwards falls off with the radial distance, approaching zero. In this case, the spherical photon orbits occur inside the emitting part of the disk. For this model, the observed intensity has two independent peaks within the domains of the lensing ring and the photon ring. For all values of the \(\beta\)-parameter, except for the case of \(\beta=0.031\), the photon ring intensity is smaller than that of
Figure 12: Observational signatures of the accretion disk around the \(f(R)\) black hole for the case of \(\beta=0\) (the Schwarzschild-de Sitter black hole). From top to bottom, the panels correspond to (a) model 1, (b) model 2, and (c) model 3 emission profiles. The left and the middle panels in each row show, respectively, the \(b\)-profiles of emitted and observed intensities. The right panels present a 2-dimensional faced-on ray-traced shadow image for each of the models.
the direct emission. This is while the lensing ring intensity is always larger than the latter4. On the other hand, both peaks span a remarkably narrow observational range, which means that at large distances (where the observer is located), the contribution of the lensing and photon rings to the observed intensity is dominated by that of the direct emission. So the observed emission from the black hole in this model mostly consists of the direct emission, as can also be inferred from the shadow images presented for this model. Moreover, by comparing the diagrams for different values of \(\beta\), we infer that the size of the shadow is decreased by an increase in this parameter. Hence, the Schwarzschild-de Sitter black hole has the largest shadow for this model. In model 2, the emitted intensity peaks at \(r_{p}\) and then drops sharply with increasing radial distance. For the observed intensity, the first peak corresponds to the direct emission; the intensity then decreases until it reaches another peak, corresponding to a ring that is a combination of the lensing and photon rings. This second peak is more intense than the first, but has a much smaller observational range. An exception is the case of \(\beta=0.031\), where both peaks have significant ranges and are approximately equal in intensity. In this particular case, the contributions of the direct emission and the rings to the observed intensity are comparable (as is also evident from the corresponding shadow image). For the other values of \(\beta\), this contribution is clearly dominated by the direct emission, although the rings are strongly demagnified. For model 3, the peak of the emitted intensity occurs at the event horizon \(r_{+}\) and declines with increasing radial distance. In this case, the direct emission, lensing ring and photon ring merge and occupy a significant range in the observational domain. According to the diagrams of the observed intensity for this model, there is a smooth rise of the profiles in the regions outside the event horizon where the direct emission dominates, and accordingly, the shadow of the black hole in this model is bounded by the direct emission. Afterwards, the profiles reach an
Figure 13: The case of \(\beta=0.011\).
intense peak in the region corresponding to the lensing and photon rings. In the case of \(\beta=0\) (the Schwarzschild-de Sitter black hole), the profile first reaches a rather narrow but intense peak for the photon ring, and then falls to a smaller peak where both of the rings contribute and form a wide, bright ring. The latter is observed in the observed intensities and the shadow images for all the cases of the \(\beta\)-parameter. Furthermore, we notice that the brightness of the accretion disk is elevated by an increase in the \(\beta\)-parameter. It is important to note that a thin accretion disk has a remarkable influence on the size of the observed black hole shadow. For example, in Fig. 17 we have reconsidered the shadow image in Fig. 14(a) as a reference, to which we have applied a Gaussian filter in order to simulate the angular resolution achieved by the EHT. According to the image, the radius of the direct emission is estimated as \(5.69\), which appears to be the size of the black hole shadow after applying the Gaussian blurring, when the photon and lensing rings disappear. This radius is significantly larger than the theoretical value (\(b_{p}=4.7445\) for \(\beta=0.022\)). Note that this difference stems both from the changes in the \(\beta\)-parameter and from changes in the disk's emission profile. Accordingly, the value of \(b_{p}\) cannot be inferred directly from the size of the black hole shadow, which makes it difficult to test general relativity using the results from the EHT.
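The blurring step behind Fig. 17 can be mimicked on a toy axisymmetric image (an illustrative sketch only: the intensity profile below is a crude stand-in for the ray-traced image, and the blur width is an assumed resolution, not the actual EHT beam):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# toy face-on image: a narrow bright ring at b_p plus a broad direct-emission annulus
b_p, b_direct = 4.74, 5.7             # illustrative radii (units of M), cf. beta = 0.022
N, fov = 512, 12.0                    # number of pixels and half field of view
x = np.linspace(-fov, fov, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

image = np.exp(-0.5*((R - b_p)/0.05)**2)                                 # thin ring
image += 0.6*np.where(R > b_direct, 1.0/(R - b_direct + 1.0)**2, 0.0)    # direct emission

# blur with a Gaussian kernel to emulate a finite instrumental resolution (~1.5 M here)
pix = 2*fov/N
blurred = gaussian_filter(image, sigma=1.5/pix)

def peak_radius(img):
    prof = img[N//2, N//2:]           # radial cut through the centre
    return x[N//2:][np.argmax(prof)]

print("brightest radius before blurring:", peak_radius(image))
print("brightest radius after  blurring:", peak_radius(blurred))
```

In this toy setup, the brightest radius of the blurred image is set by the direct emission rather than by the thin rings, which illustrates why the blurred shadow appears larger than \(b_{p}\).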
#### iv.2.4 Observational signatures of infalling spherical accretion
Here, we investigate the shadow cast by the \(f(R)\) black hole when it spherically accretes the radiative gas that constitutes its thin emission disk [119]. In this model, the observed intensity is expressed as
\[I_{\rm obs}=\int_{\mathbf{\gamma}}\mathscr{R}^{3}\mathcal{J}(\nu_{\rm e})\,{\rm d}I_{\rm prop}, \tag{70}\]
over the null geodesic congruence \(\mathbf{\gamma}\), in which \(\mathscr{R}\) is the redshift factor, \(\nu_{\rm e}\) is the frequency of emitted photons from the accretion disk, \({\rm d}I_{\rm prop}\) is the infinitesimal proper length, and
\[\mathcal{J}(\nu_{\rm e})\propto\frac{\delta(\nu_{\rm e}-\nu_{\rm f})}{r^{2}}, \tag{71}\]
is the emissivity per unit volume in the emitter's rest frame, in which \(\nu_{\rm f}\) is the monochromatic rest-frame emission frequency, and \(\delta\) is the delta function. In this construction, the redshift factor is given by
\[\mathscr{R}=\frac{\Pi_{\mu}u_{\rm o}^{\mu}}{\Pi_{\nu}u_{\rm e}^{\nu}}, \tag{72}\]
where \(\mathbf{u}_{\rm o}\) and \(\mathbf{u}_{\rm e}\) are, respectively, the four-velocities associated with a distant static observer, and the infalling accreting matter. Accordingly, \(u_{\rm o}^{\mu}=(1,0,0,0)\), and in the spacetime of the \(f(R)\) black hole we can write
\[u_{\rm e}^{\mu}=\left(\frac{1}{B(r)},-\sqrt{1-B(r)},0,0\right). \tag{73}\]
The \(\mathbf{\Pi}\) covector in Eq. (72) is the four-momentum of the emitted photons from the accretion disk, and has the same definition as in Eq. (14). Since the accretion is supposed to be only in the radial direction, it is then sufficient to recalculate the fraction of the temporal and radial components of \(\mathbf{\Pi}\), which yields [119]
\[\frac{\Pi_{r}}{\Pi_{t}}=\pm\frac{1}{B(r)}\sqrt{1-B(r)\,\frac{b^{2}}{r^{2}}}, \tag{74}\]
Figure 15: The case of \(\beta=0.031\).
Figure 16: The case of \(\beta=0.041\).
Figure 17: Blurring the shadow in Fig. 14(a) for \(\beta=0.022\) using a Gaussian filter, to emulate the EHT nominal resolutions for the images of M87* and Sgr A*. In the left panel, the starting radius of the direct emission has been shown to be about 5.69, which forms the boundary of the shadow. After applying the Gaussian filter, the lensing and photon rings disappear in the right panel, and hence, the radius of the black hole shadow is estimated as 5.69. This value is much larger than the radius of the photon ring, which is \(b_{p}=4.74\) for the case of \(\beta=0.022\).
in which the \(\pm\) signs correspond, respectively, to whether the photons approach or recede from the black hole. One can therefore recast the redshift factor as
\[\mathscr{R} = \left(u_{\rm e}^{t}+\frac{\Pi_{r}}{\Pi_{t}}u_{\rm e}^{r}\right)^{-1} \tag{75}\] \[= \left[\frac{1}{B(r)}\pm\sqrt{\left(\frac{1}{B(r)}-1\right)\left( \frac{1}{B(r)}-\frac{b^{2}}{r^{2}}\right)}\,\right]^{-1},\]
and from here, the infinitesimal proper length is obtained as
\[{\rm d}I_{\rm prop}=\Pi_{\mu}u_{\rm e}^{\mu}{\rm d}\tau=\frac{\Pi_{t}}{\mathscr{ R}|\Pi_{r}|}{\rm d}r. \tag{76}\]
Accordingly, Eq. (70) takes the form
\[I_{\rm obs}\propto\int_{\mathbf{\gamma}}\frac{\mathscr{R}^{3}}{r^{2}}\frac{\Pi_{ t}}{|\Pi_{r}|}{\rm d}r, \tag{77}\]
and the observed intensity is, therefore, obtained by doing the above integration over all the frequencies. Using this method, we can study the brightness of the infalling accretion and the shadow of the \(f(R)\) black hole. As shown in Fig. 18, for all cases of the \(\beta\)-parameter, as the impact parameter increases and we move away from the origin, the specific observed intensity increases until it reaches a peak around \(b_{p}\), then falls off remarkably in the region \(b>b_{c}\) and goes to zero. On the other hand, by altering the \(\beta\)-parameter from zero, the peak becomes slightly higher and the bottom line of the profile is lifted by the same value, whereas its width is noticeably decreased. This means that for larger \(\beta\), the accretion disk appears brighter to the observer, and the silhouette becomes less dark but smaller in size. Hence, to a distant observer, the Schwarzschild-de Sitter black hole has the darkest and the largest silhouette, and the least bright accretion disk. In Fig. 19, these effects have been visualized for the \(f(R)\) black hole.
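Profiles such as those of Fig. 18 can be reproduced numerically from Eqs. (74)-(77). The following is a minimal Python sketch of such a computation; the explicit form of the lapse function \(B(r)=1-2M/r+\beta r-\Lambda r^{2}/3\), the parameter values, the branch convention for the \(\pm\) sign and the overall normalisation are all illustrative assumptions of ours, not prescriptions taken from the paper.

```python
# Minimal sketch: observed intensity of infalling spherical accretion, Eqs. (74)-(77).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M, beta, Lam = 1.0, 0.022, 1.0e-5          # assumed, illustrative parameters

def B(r):                                   # assumed metric function
    return 1.0 - 2.0 * M / r + beta * r - Lam * r**2 / 3.0

def redshift(r, b, sign):
    """Eq. (75); sign = -1 / +1 selects the two photon branches."""
    rad = max((1.0 / B(r) - 1.0) * (1.0 / B(r) - b**2 / r**2), 0.0)
    return 1.0 / (1.0 / B(r) + sign * np.sqrt(rad))

def integrand(r, b, sign):
    """R^3 / r^2 * Pi_t / |Pi_r|, cf. Eqs. (74) and (77), with Pi_t normalised to 1."""
    pr = np.sqrt(max(1.0 - B(r) * b**2 / r**2, 0.0)) / B(r)
    return 0.0 if pr == 0.0 else redshift(r, b, sign)**3 / (r**2 * pr)

def turning_point(b, r_plus, r_max):
    """Outermost root of 1 - B(r) b^2 / r^2 = 0, or None if the ray is captured."""
    f = lambda r: 1.0 - B(r) * b**2 / r**2
    rs = np.linspace(1.001 * r_plus, r_max, 4000)
    vals = f(rs)
    for i in range(len(rs) - 1, 0, -1):
        if vals[i - 1] <= 0.0 < vals[i]:
            return brentq(f, rs[i - 1], rs[i])
    return None

def observed_intensity(b, r_plus, r_obs=50.0):
    rt = turning_point(b, r_plus, r_obs)
    if rt is None:   # captured ray: only the branch plunging towards the horizon
        return quad(integrand, 1.001 * r_plus, r_obs, args=(b, -1), limit=200)[0]
    ingoing = quad(integrand, 1.001 * rt, r_obs, args=(b, -1), limit=200)[0]
    outgoing = quad(integrand, 1.001 * rt, r_obs, args=(b, +1), limit=200)[0]
    return ingoing + outgoing

r_plus = brentq(B, 1.5, 3.5)                        # event horizon of the assumed B(r)
bs = np.linspace(1.0, 10.0, 60)
profile = [observed_intensity(b, r_plus) for b in bs]   # b-profile, cf. Fig. 18
```

Plotting `profile` against `bs` gives a curve that peaks near the critical impact parameter and then decays, qualitatively like the \(b\)-profiles discussed above.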
## VI Conclusion
Theoretical study of black holes with accretion disks provides a more realistic realm in which to carry out reliable comparisons with observational data and to constrain the spacetime's theoretical parameters. In this work, we were concerned with a static spherically symmetric black hole inferred from a special \(f(R)\) theory of gravity which is compatible with both small and large scale structure tests. This black hole has a linear first order term with a coefficient \(\beta\), as well as a cosmological constant. While the latter is supposed to compensate for the vacuum energy and the accelerated expansion of the universe, the former is responsible for mimicking the flat galactic rotation curves. First of all, we aimed at constraining the \(\beta\)-parameter by calculating the theoretical predictions of the angular size of the black hole shadow and comparing them with the EHT results for M87* and Sgr A*. This way, we found that this parameter lies in the domains \(0<\beta<0.023\) for M87* and \(0.022<\beta<0.041\) for Sgr A*. We then solved the angular equations of motion for deflecting and critical trajectories, and presented exact analytical solutions for each of them. These types of orbits are of fundamental importance in the formation of the shadow of black holes when they are illuminated by an accretion disk. We used the obtained analytical solutions for deflecting trajectories to find the lens equation, and the deflection angle was calculated to be about 10 \(\mu\)as for the allowed values of the \(\beta\)-parameter. This deflection angle was then
Figure 18: (a) The \(b\)-profiles of the observed intensity of the infalling spherical accretion, which from bottom to top correspond to the cases of \(\beta=0,0.011,0.022,0.031\) and \(0.041\). (b) The same as panel (a), but showing only a part of the \(b\) domain within the positive values.
recalculated by means of the GBT. In the last section of the paper, we constructed a geometrically and optically thin accretion disk around the black hole based on the Novikov-Thorne model, and calculated the characteristics of the disk. We then applied the method introduced in Ref. [76], to classify the light rings and the types of accretion emission profiles. In this sense, and based on the number of half orbits \(n\) that the light rays complete around the black hole, the rings are classified into lensing rings and photon rings, both of which are demagnified because of the extreme gravitational lensing. We calculated the ranges of the impact parameter for each of these rings, which give their proper thickness. Taking into account three types of disk emission profiles, we found that by increase in the \(\beta\)-parameter, the size of the shadow is decreased from that in the case of the Schwarzschild-de Sitter black hole, but the brightness of the rings may alter in accordance with the value of \(\beta\), as well as the radial position of the direct emission. Furthermore, the brightness of the accretion disk is elevated by increase in the \(\beta\)-parameter. As an example, we applied a Gaussian filter to a particular case to mimic the EHT images, and showed that the size of the black hole cannot be inferred directly from the observations. Finally, we considered a spherically symmetric infalling accretion for the black hole and obtained the observed intensity profiles for different values of \(\beta\). We showed that by increase in this parameter, the black hole becomes brighter regarding the accretion disk, but the silhouette shrinks gradually. This, as well, was shown visually by employing appropriate ray-tracing methods. It would be an interesting subject to consider a plasma medium with a specific index profile to surround the black hole. This way, the results from the EHT could help with the determination of the components of the plasma as a more reliable candidate to constitute black hole accretion disks.
## Acknowledgements
We would like to thank Mert Okyay and Ali Ovgun for providing us with some _Mathematica_ codes they have used in Ref. [79]. The authors acknowledge Universidad de Santiago de Chile for financial support through the Proyecto POSTDOCIDCYT, Codigo 042331 CM-Postdoc.
## Appendix A The full expression of \(\mathcal{J}(r)\)
Direct integration of the integral in Eq. (60) results in
\[\int\mathcal{L}_{c}(r)\mathcal{E}^{\prime}_{c}(r)\,\mathrm{d}r\equiv \mathcal{J}(r)\]
Figure 19: The images of the disk and the silhouette of the black hole with infalling spherical accretion, given for (a) \(\beta=0\), (b) \(\beta=0.011\), (c) \(\beta=0.022\), (d) \(\beta=0.031\) and (e) \(\beta=0.041\).
\[+225\,2^{\frac{3}{4}}\Lambda\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta\left(6\beta+1\right)\left(\beta r^{2}+2 \right)}\] \[+15\,2^{\frac{3}{4}}\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta^{3}\left(\beta r^{2}+2\right)}\] \[+270\,2^{\frac{3}{4}}\Lambda\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta}} {\sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta^{3}\left(\beta r^{2}+2\right)}\] \[+90\,2^{\frac{3}{4}}\Lambda\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta}} {\sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta^{5}\left(\beta r^{2}+2\right)}\] \[+60\,2^{\frac{1}{4}}\Lambda\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta} }{\sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta^{5}\left(\beta r^{2}+2\right)}\] \[+135\,2^{\frac{3}{4}}\Lambda\,{\bf\Pi}\left(-\frac{3\sqrt{2\beta} }{\sqrt{6\beta+1}-1};\,\mbox{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{\tau\beta(6\beta+1)\left(\beta r^{2}+2)}\]
\[+135\;2^{\frac{3}{4}}\Lambda\,\mathbf{\Pi}\left(\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}+1};\,\mathrm{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{r\beta(6\beta+1)\left(\beta r^{2}+2\right)}\] \[+15\;2^{\frac{3}{4}}\,\mathbf{\Pi}\left(-\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}-1};\,\mathrm{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{r\beta^{3}(6\beta+1)\left(\beta r^{2}+2\right)}\] \[+15\;2^{\frac{3}{4}}\,\mathbf{\Pi}\left(\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}+1};\,\mathrm{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{r\beta^{3}(6\beta+1)\left(\beta r^{2}+2\right)}\] \[+60\;2^{\frac{3}{4}}\,\mathbf{\Pi}\left(-\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}-1};\,\mathrm{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{r\beta^{5}(6\beta+1)\left(\beta r^{2}+2)}\] \[+60\;2^{\frac{3}{4}}\,\mathbf{\Pi}\left(\frac{3\sqrt{2\beta}}{ \sqrt{6\beta+1}+1};\,\mathrm{arcsinh}\left(\frac{1}{\sqrt[4]{2\beta}\sqrt{r}} \right)\right|-1\right)\sqrt{r\beta^{5}(6\beta+1)\left(\beta r^{2}+2)}\bigg{]} \bigg{\}}\,, \tag{101}\]
where \(\mathbf{F}(\varphi|\mathfrak{m})\), \(\mathbf{E}(\varphi|\mathfrak{m})\) and \(\mathbf{\Pi}(\mathfrak{n};\varphi|\mathfrak{m})\) are, respectively, the incomplete elliptic integrals of the first, second and third kind of argument \(\varphi\), modulus \(\mathfrak{m}\) and characteristic \(\mathfrak{n}\)[109]. Note that the above expression does not account for the case of \(\beta=0\), so that the corresponding profile has to be obtained by doing numerical integration of Eq. (60).
|
2307.14223 | Rewriting and Completeness of Sum-Over-Paths in Dyadic Fragments of
Quantum Computing | The "Sum-Over-Paths" formalism is a way to symbolically manipulate linear
maps that describe quantum systems, and is a tool that is used in formal
verification of such systems. We give here a new set of rewrite rules for the
formalism, and show that it is complete for "Toffoli-Hadamard", the simplest
approximately universal fragment of quantum mechanics. We show that the
rewriting is terminating, but not confluent (which is expected from the
universality of the fragment). We do so using the connection between
Sum-over-Paths and graphical language ZH-calculus, and also show how the
axiomatisation translates into the latter. We provide generalisations of the
presented rewrite rules, that can prove useful when trying to reduce terms in
practice, and we show how to graphically make sense of these new rules. We show
how to enrich the rewrite system to reach completeness for the dyadic fragments
of quantum computation, used in particular in the Quantum Fourier Transform,
and obtained by adding phase gates with dyadic multiples of $\pi$ to the
Toffoli-Hadamard gate-set. Finally, we show how to perform sums and
concatenation of arbitrary terms, something which is not native in a system
designed for analysing gate-based quantum computation, but necessary when
considering Hamiltonian-based quantum computation. | Renaud Vilmart | 2023-07-26T14:40:21Z | http://arxiv.org/abs/2307.14223v4 | # Rewriting and completeness of sum-over-paths in dyadic fragments of quantum computing\({}^{*}\)
###### Abstract.
The "Sum-Over-Paths" formalism is a way to symbolically manipulate linear maps that describe quantum systems, and is a tool that is used in formal verification of such systems.
We give here a new set of rewrite rules for the formalism, and show that it is complete for "Toffoli-Hadamard", the simplest approximately universal fragment of quantum mechanics. We show that the rewriting is terminating, but not confluent (which is expected from the universality of the fragment). We do so using the connection between Sum-over-Paths and graphical language ZH-calculus, and also show how the axiomatisation translates into the latter. We provide generalisations of the presented rewrite rules, that can prove useful when trying to reduce terms in practice, and we show how to graphically make sense of these new rules.
We show how to enrich the rewrite system to reach completeness for the dyadic fragments of quantum computation, used in particular in the Quantum Fourier Transform, and obtained by adding phase gates with dyadic multiples of \(\pi\) to the Toffoli-Hadamard gate-set.
Finally, we show how to perform sums and concatenation of arbitrary terms, something which is not native in a system designed for analysing gate-based quantum computation, but necessary when considering Hamiltonian-based quantum computation.
Key words and phrases:Quantum Computation, Verification, Sum-Over-Paths, Rewrite Strategy, Toffoli-Hadamard, Completeness. \({}^{*}\) Extended version of the CSL'23 paper [20]. The author acknowledges support from the PEPR integrated project EPiQ ANR-22-PETQ-0007 part of Plan France 2030, the ANR projects TaQC ANR-22-CE47-0012 and HQI ANR-22-PNCQ-0002, as well as the European project HPCQS.
Despite its links [1, 11] with graphical languages such as the ZH-calculus [1] - which will be used in the following -, it provides a different view on the quantum processes, representing them as weighted sums of Dirac kets and bras (a very familiar notation in quantum mechanics).
The formalism has seen several applications, the first of which being verification. Verification is a crucial aspect of computations in the quantum realm, where physical constraints (like no-cloning, or the fundamental probabilistic nature of quantum) make it impossible to do debugging the way we do on classical algorithms. More specifically, the SOP formalism was introduced as a solution to circuit equivalence: To check the equivalence between two circuits \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\), the system represents \(\mathcal{C}_{2}^{\dagger}\circ\mathcal{C}_{1}\) as an SOP term (where \(\mathcal{C}_{2}^{\dagger}\) can be seen as the inverse of \(\mathcal{C}_{2}\), easy to describe from it). It then tries to reduce it to the identity. If successful, this proves \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\) represent the same unitary. Otherwise, the system searches for a witness that the term at hand does not represent the identity. As such, the system has been used in several different projects (e.g. [1, 11]) to check precisely for circuit equivalence. It was later extended to account for families of morphisms and used within the Qbricks environment [3, 4] together with automated solvers to verify algorithms and routines such as quantum phase estimation, Grover's search and Shor's algorithm.
Amongst other applications of the Sum-Over-Paths, we may cite noiseless simulation of quantum processes, where the rewrite strategy is used to reduce the number of variables in the term, effectively decreasing the number of summands when expanding the term to actually compute its semantics. It is for instance one of the simulators implemented in the supercomputer Atos QLM [12].
While the initial suggestion for Sum-Over-Paths focussed on the Clifford+T fragment - a universal fragment of quantum computing, i.e. a restriction still capable of approximating with arbitrary precision any quantum process -, it also provided some interesting result for the Clifford fragment. It is known that the latter is not universal [1], and actually efficiently simulable with a classical computer, so it is a good test for the relevance of a formalism to check how it handles them. And indeed, it was shown [1] a "weak" form of confluence of the rewrite system in the Clifford fragment. More precisely, in this fragment, \(\mathcal{C}_{2}^{\dagger}\circ\mathcal{C}_{1}\) reduces (in polynomial time) to the identity if and only if \(\mathcal{C}_{2}\) and \(\mathcal{C}_{1}\) represent the same unitary operator.
However, SOP terms may represent more than unitary operator, but actually any linear map. With those, it is still possible to define the above restrictions, and the rewrite system was extended in [13] to get confluence for the - not necessarily unitary - Clifford fragment. When moving to a universal fragment - like Clifford+T - it is expected that we cannot provide a rewrite system with all the good properties of the Clifford case: either reduction is not polynomial, or there is no confluence, or we need an infinite number of rewrites,... The reason for this is that if we could provide such a system, deciding circuit equivalence would become polynomial, while we know that it is QMA-complete - a quantum variant of NP-complete - [1, 2]. A weaker property than that of confluence we can ask for is completeness: the question here is to decide whether two equivalent terms can be turned into one another, _with the assumption that rewrites can be used in both directions_ (in that case, we rather speak of an equational theory, or axiomatisation, than a rewrite system).
Contributions.In this paper, we address the problem of completeness first for arguably the simplest universal fragment of quantum computing, which is _Toffoli-Hadamard_. We
provide a fairly simple rewrite system that we show complete for the fragment, and also exhibit two important drawbacks: the non-confluence of the rewrite strategy and the potential explosion of the size of the morphisms during the rewrite. We then show how the rewrite strategy can be tweaked to reach completeness for every dyadic fragment - where we allow phase gates with phase a multiple of \(\frac{\pi}{2^{k}}\) for some \(k\) -, a restriction that encompasses Clifford, Clifford+T and Toffoli-Hadamard, and is crucially used in the Quantum Fourier Transform, a central block for algorithms such as Shor's and Quantum Phase Estimation. This paper extends [23] which was presented at the Computer Science and Logic 2023 (CSL'23). Here we slightly simplify the rewrite system, provide further potential simplification rules, of which we give a graphical treatment, and finally show how to perform two non-trivial operations on SOP terms, namely their sum and their concatenation, both very useful when building arbitrary maps or when considering Hamiltonian-based computation, and which remain in the dyadic fragment at hand. Since the presentation of [23], a complete rewrite system was given to Sum-Over-Paths with arbitrary phases and complex entries [1], although in the "unbalanced" framework, which is noticeably different.
### Structure of the paper
We first review the Sum-Over-Paths formalism in section 2, providing a rewrite strategy for Toffoli-Hadamard, as well as generalised rules. We then present the ZH-calculus in section 3 and the links between the two in section 4, and we show how to interpret the new rewrite rules of SOP graphically. We then show the completeness result for the Toffoli-Hadamard fragment in section 5. The extension to the dyadic fragments is then handled in section 6. Finally, in section 7, we show a way to "control" arbitrary terms allowing for their sum and their concatenation.
## 2. Sums-Over-Paths
### The Morphisms
Sums-Over-Paths [1] are a way to symbolically describe linear maps of dimensions powers of \(2\) over the complex numbers. These linear maps form a \(\dagger\)-compact monoidal category [11, 23] denoted **Qubit** where the objects are natural numbers (this makes the category a PROP [14, 15]), where morphisms from \(n\) to \(m\) are linear maps \(\mathbb{C}^{2^{n}}\to\mathbb{C}^{2^{m}}\), and where \(\big{(}\cdot\circ\cdot\big{)}\) (resp. \((\cdot\otimes\cdot)\)) is the usual composition (resp. tensor product) of linear maps. The category is endowed with a _symmetric braiding_\(\sigma_{n,m}:n+m\to m+n\), as well as a _compact structure_\((\eta_{n}:0\to 2n,\epsilon_{n}:2n\to 0)\). Furthermore, there exists an inductive contravariant endofunctor \((\cdot)^{\dagger}\), that behaves properly with the symmetric braiding and the compact structure. For more information on these structures, see [23].
The formalism of SOP relies heavily on the Dirac notation for quantum states and operators of **Qubit**. The two canonical states of a single qubit are denoted \(\ket{0}\) and \(\ket{1}\). They form a basis of \(\mathbb{C}^{2}\), and can be viewed as vectors \(\ket{0}=\begin{pmatrix}1&0\end{pmatrix}^{\intercal}\) and \(\ket{1}=\begin{pmatrix}0&1\end{pmatrix}^{\intercal}\). A \(1\)-qubit state is then merely a normalised linear combination of these two elements. Using \((\cdot\otimes\cdot)\), they can be used to build the basis states of larger systems, e.g. \(\ket{010}:=\ket{0}\otimes\ket{1}\otimes\ket{0}\) is a basis state of a \(3\)-qubit system. Again, the state of an arbitrary \(n\)-qubit system is a normalised linear combination of the \(2^{n}\) basis states. We will use extensively the following notation \(\langle x|\) to represent the dagger (transpose conjugate) of \(\ket{x}\). The identity on a qubit can then be expressed in Dirac notation as \(\mathbb{I}:=\ket{0}\!\!\bra{0}+\ket{1}\!\!\bra{1}\), where \(\ket{x}\!\!\bra{y}:=\ket{x}\circ\bra{y}\).
We give in the following the definition of Sum-Over-Paths of [21], which differs from [1] in the way the input qubits are treated, by making them more symmetric with the outputs. This makes some concepts, like the \(\dagger\) or the compact structure, more natural.
**Definition 2.1** (**Sop**).: We define **SOP** as the collection of objects \(\mathbb{N}\) and morphisms between them that are tuples \(f:n\to m:=(s,\vec{y},P,\vec{O},\vec{I})\), which we write:
\[s\sum_{\vec{y}\in V^{k}}e^{2i\pi P(\vec{y})}\left|\vec{O}(\vec{y})\right\rangle \!\!\left\langle\vec{I}(\vec{y})\right|\]
where \(s\in\mathbb{R}\), \(\vec{y}\in V^{k}\) with \(V\) a set of variables, \(P\in\mathbb{R}[X_{1},\ldots,X_{k}]/(X_{i}^{2}-X_{i})\) is called the _phase polynomial_ of \(f\)1, \(\vec{O}\in(\mathbb{F}_{2}[X_{1},\ldots,X_{k}])^{m}\) and \(\vec{I}\in(\mathbb{F}_{2}[X_{1},\ldots,X_{k}])^{n}\) - \(\mathbb{F}_{2}\) being the binary field, whose only two elements are its additive and multiplicative identities, denoted \(0\) and \(1\).
Footnote 1: The quotient in the phase polynomial means that we consider each occurrence of the square of a variable to be equal to the variable itself \(X_{i}^{2}-X_{i}=0\), since they will be evaluated over \(\{0,1\}\). We can further constrain the polynomial by taking it modulo \(1\), but only when considered as an element of a group, once all the products have been evaluated, as otherwise all phase polynomials would be evaluated to \(0\) as \(P=P\times 1=P\times 0=0\).
Compositions are obtained as2:
Footnote 2: To avoid further clutter, we may not specify the variables of polynomials, e.g. \(P_{g}\) actually stands for \(P_{g}(\vec{y}_{g})\), \(\vec{O}_{g}\) for \(\vec{O}_{g}(\vec{y}_{g})\) etc...
* \(f\circ g:=\frac{s_{f}s_{g}}{2^{m}}\sum\limits_{\begin{subarray}{c}\vec{y}_{f}, \vec{y}_{g}\\ \vec{y}\in V^{m}\end{subarray}}e^{2i\pi\left(P_{g}+P_{f}+\frac{\widehat{\vec{O}_{g}}\cdot\vec{y}+\widehat{\vec{I}_{f}}\cdot\vec{y}}{2}\right)}\left|\vec{O}_{f}\right\rangle\!\!\left\langle\vec{I}_{g}\right|\) where \(m=\left|\vec{I}_{f}\right|=\left|\vec{O}_{g}\right|\)
* \(f\otimes g:=s_{f}s_{g}\sum\limits_{\begin{subarray}{c}\vec{y}_{f},\vec{y}_{g }\end{subarray}}e^{2i\pi(P_{g}+P_{f})}\left|\vec{O}_{f}\vec{O}_{g}\right\rangle \!\!\left\langle\vec{I}_{f}\vec{I}_{g}\right|\)
We distinguish particular morphisms:
* Identity morphisms \(id_{n}:\sum\limits_{\begin{subarray}{c}\vec{y}\in V^{n}\end{subarray}}|\vec{ y}\rangle\!\!\left\langle\vec{y}\right|\)
* Symmetric braidings \(\sigma_{n,m}=\sum\limits_{\begin{subarray}{c}\vec{y}_{1},\vec{y}_{2}\end{subarray}}| \vec{y}_{2},\vec{y}_{1}\rangle\!\!\left\langle\vec{y}_{1},\vec{y}_{2}\right|\)
* Morphisms for compact structure \(\eta_{n}=\sum\limits_{\vec{y}}|\vec{y},\vec{y}\rangle\!\langle\,|\) and \(\epsilon_{n}=\sum\limits_{\vec{y}}|\,\rangle\!\langle\vec{y},\vec{y}|\)
We also distinguish two functors that have **SOP** as a domain:
* The \(\dagger\)-functor is given by: \(f^{\dagger}:=s\sum\limits_{\begin{subarray}{c}\vec{y}\end{subarray}}e^{-2i\pi P }\left|\vec{I}\right\rangle\!\!\left\langle\vec{O}\right|\)
* The functor \(\llbracket\cdot\rrbracket:\textbf{SOP}\rightarrow\textbf{Qubit}\) is defined as: \(\llbracket f\rrbracket:=s\sum\limits_{\begin{subarray}{c}\vec{y}\in\{0,1\}^{k }\end{subarray}}e^{2i\pi P(\vec{y})}\left|\vec{O}(\vec{y})\right\rangle\!\! \left\langle\vec{I}(\vec{y})\right|\)
The \(\dagger\)-functor is particularly important to characterise maps that are unitary - the pure transformations that are allowed by quantum mechanics: \(f\) is called unitary if \(\llbracket f^{\dagger}\circ f\rrbracket=id\).
**Remark 2.2**.: In the sequential composition \((\cdot\circ\cdot)\), the term \(\widehat{\vec{O}_{g}}\cdot\vec{y}\) is to be understood as
\[\widehat{\vec{O}_{g}}\cdot\vec{y}=\widehat{O_{g1}}\cdot y_{1}+...+\widehat{O_ {gm}}\cdot y_{m}\]
where \(\vec{O_{g}}=(O_{g1},...,O_{gm})\) and \(\vec{y}=(y_{1},...,y_{m})\); and where the map \(\widehat{(\cdot)}:\mathbb{F}_{2}[X_{1},\ldots,X_{k}]\rightarrow\mathbb{R}[X_{1 },\ldots,X_{k}]/(X_{i}^{2}-X_{i})\), which translates a boolean polynomial into a phase polynomial,
is inductively defined as:
\[\widehat{Q_{1}Q_{2}}=\widehat{Q_{1}}\,\widehat{Q_{2}}\qquad\qquad\widehat{Q_{1}\oplus Q_{2}}=\widehat{Q_{1}}+\widehat{Q_{2}}-2\widehat{Q_{1}Q_{2}}\qquad\qquad\widehat{y_{i}}=y_{i}\qquad\qquad\widehat{\alpha}=\alpha\]
This translation will also be used in most of the upcoming rewrite rules of the system.
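To make the action of \(\widehat{(\cdot)}\) concrete, here is a small Python sketch of the translation; the encoding of boolean polynomials as sets of monomials and of phase polynomials as dictionaries is our own illustrative choice, not a construction from the paper.

```python
from fractions import Fraction

# A boolean polynomial (in expanded form) is encoded as a frozenset of monomials,
# each monomial being a frozenset of variable indices; the empty monomial is the
# constant 1, and the empty polynomial is the constant 0.  A phase polynomial is
# a dict mapping monomials to rational coefficients.

def bool_mul(P, Q):
    """Product of boolean polynomials: XOR of pairwise products (mod 2)."""
    out = set()
    for m in P:
        for n in Q:
            out ^= {m | n}          # symmetric difference implements the mod-2 sum
    return frozenset(out)

def hat(Q):
    """The translation (^) of Remark 2.2, from boolean to phase polynomials."""
    Q = list(Q)
    if not Q:
        return {}                                  # 0^ = 0
    if len(Q) == 1:
        return {Q[0]: Fraction(1)}                 # (y_i1 ... y_ik)^ = y_i1 ... y_ik
    m, rest = frozenset({Q[0]}), frozenset(Q[1:])
    parts = ((hat(m), 1), (hat(rest), 1), (hat(bool_mul(m, rest)), -2))
    out = {}                                       # Q1^ + Q2^ - 2 (Q1 Q2)^
    for d, sign in parts:
        for mono, coef in d.items():
            out[mono] = out.get(mono, Fraction(0)) + sign * coef
    return {mono: c for mono, c in out.items() if c != 0}

y0, y1 = frozenset({0}), frozenset({1})
print(hat(frozenset({y0, y1})))            # (y0 (+) y1)^ = y0 + y1 - 2*y0*y1
print(hat(frozenset({frozenset(), y0})))   # (1 (+) y0)^  = 1 - y0
```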
**Example 2.3**.: The Hadamard and Toffoli gates (which justify the name of the first fragment we will consider in the following), can be represented in this formalism as:
\[H:=\frac{1}{\sqrt{2}}\sum_{y_{0},y_{1}}e^{2i\pi\frac{y_{0}y_{1}}{2}}\,|y_{1} \rangle\!\langle y_{0}|\qquad\text{Tof}:=\sum_{y_{0},y_{1},y_{2}}|y_{0},y_{1}, y_{2}\oplus y_{0}y_{1}\rangle\!\langle y_{0},y_{1},y_{2}|\]
It can be checked that both operators are unitary. To illustrate the compositions of **SOP** terms, we can focus on the following quantum circuit:
which is a graphical representation of \((H\otimes id\otimes H)\circ\text{Tof}\) (spatial composition represents the tensor product, while sequential composition represents the usual composition). The term \(H\otimes id\otimes H\) can be built as follows (renaming the variables so as to avoid collisions):
\[H\otimes id\otimes H =\Bigg{(}\tfrac{1}{\sqrt{2}}\!\sum_{y_{0},y_{1}}e^{2i\pi\frac{y_ {0}y_{1}}{2}}\,|y_{1}\rangle\!\langle y_{0}|\Bigg{)}\!\otimes\!\Bigg{(}\sum_{ y_{2}}e^{2i\pi\times 0}\,|y_{2}\rangle\!\langle y_{2}|\Bigg{)}\!\otimes\!\Bigg{(} \tfrac{1}{\sqrt{2}}\!\sum_{y_{3},y_{4}}e^{2i\pi\frac{y_{3}y_{4}}{2}}\,|y_{4} \rangle\!\langle y_{3}|\Bigg{)}\] \[=\frac{1}{2}\sum_{y_{0},\ldots,y_{4}}e^{2i\pi(\frac{y_{0}y_{1}}{ 2}+\frac{y_{3}y_{4}}{2})}\,|y_{1},y_{2},y_{4}\rangle\!\langle y_{0},y_{2},y_{3}|\]
The **SOP** term \(t=(H\otimes id\otimes H)\circ\text{Tof}\) is then computed as:
\[t =\Bigg{(}\frac{1}{2}\sum_{y_{0},\ldots,y_{4}}e^{2i\pi(\frac{y_{0} y_{1}}{2}+\frac{y_{3}y_{4}}{2})}\,|y_{1},y_{2},y_{4}\rangle\!\langle y_{0},y_{2},y_{3}| \Bigg{)}\circ\Bigg{(}\sum_{y_{5},y_{6},y_{7}}|y_{5},y_{6},y_{7}\oplus y_{5}y_ {6}\rangle\!\langle y_{5},y_{6},y_{7}|\Bigg{)}\] \[=\frac{1}{2^{4}}\sum_{y_{0},\ldots,y_{10}}e^{2i\pi(\frac{y_{0}y_ {1}}{2}+\frac{y_{3}y_{4}}{2}+y_{8}\frac{y_{0}+y_{5}}{2}+y_{9}\frac{y_{2}+y_{6 }}{2}+y_{10}\frac{y_{3}+\widehat{y_{7}\oplus y_{5}y_{6}}}{2})}\,|y_{1},y_{2},y_{4}\rangle\! \langle y_{5},y_{6},y_{7}|\] \[=\frac{1}{2^{4}}\sum_{y_{0},\ldots,y_{10}}e^{2i\pi(\frac{y_{0}y_ {1}}{2}+\frac{y_{3}y_{4}}{2}+y_{8}\frac{y_{0}+y_{5}}{2}+y_{9}\frac{y_{2}+y_{6 }}{2}+y_{10}\frac{y_{3}+y_{7}+y_{5}y_{6}}{2})}\,|y_{1},y_{2},y_{4}\rangle\! \langle y_{5},y_{6},y_{7}|\]
where the last equality is obtained thanks to \(y_{10}\frac{\widehat{y_{7}\oplus y_{5}y_{6}}}{2}=y_{10}(\frac{y_{7}+y_{5}y_{6}}{2}-y_{5}y_{6}y_{7})=y_{10}\frac{y_{7}+y_{5}y_{6}}{2}\) when taken modulo 1.
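The brute-force semantics \(\llbracket\cdot\rrbracket\) of such terms is easy to compute mechanically, which is convenient for sanity-checking the examples above. Below is a minimal Python sketch; the encoding of a term by its scalar, its number of variables, and Python callables for the phase and for the output/input boolean polynomials is our own illustration, not a construction from the paper.

```python
# Brute-force evaluation of the functor [[.]] : SOP -> Qubit (Definition 2.1),
# checked on the two gates of Example 2.3.
import numpy as np
from itertools import product

def ket(bits):
    """Column vector |b1,...,bn> in the computational basis."""
    v = np.zeros((2 ** len(bits), 1))
    idx = int("".join(map(str, bits)), 2) if bits else 0
    v[idx, 0] = 1.0
    return v

def sop_matrix(s, nvars, phase, outs, ins):
    """[[ s * sum_y e^{2 i pi phase(y)} |outs(y)><ins(y)| ]] as a dense matrix."""
    M = np.zeros((2 ** len(outs), 2 ** len(ins)), dtype=complex)
    for y in product((0, 1), repeat=nvars):
        O = [f(y) for f in outs]
        I = [f(y) for f in ins]
        M += np.exp(2j * np.pi * phase(y)) * (ket(O) @ ket(I).T)
    return s * M

# Hadamard:  1/sqrt(2) * sum_{y0,y1} e^{2 i pi y0 y1 / 2} |y1><y0|
H = sop_matrix(1 / np.sqrt(2), 2, lambda y: y[0] * y[1] / 2,
               [lambda y: y[1]], [lambda y: y[0]])
assert np.allclose(H, np.array([[1, 1], [1, -1]]) / np.sqrt(2))

# Toffoli:  sum_{y0,y1,y2} |y0, y1, y2 XOR y0 y1><y0, y1, y2|
Tof = sop_matrix(1, 3, lambda y: 0,
                 [lambda y: y[0], lambda y: y[1], lambda y: y[2] ^ (y[0] & y[1])],
                 [lambda y: y[0], lambda y: y[1], lambda y: y[2]])
assert np.allclose(Tof, np.eye(8)[:, [0, 1, 2, 3, 4, 5, 7, 6]])
```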
As is customary, and as was already done in the previous example, we consider equality of the SOP morphisms up to \(\alpha\)-conversion, i.e. renaming of the variables. Notice that the definition of the composition \((\cdot\circ\cdot)\) gets somewhat involved. This is to cope with the way we deal with the inputs, which can be any boolean polynomial. The additional terms \(\frac{\widehat{\vec{O}_{g}}\cdot\vec{y}+\widehat{\vec{I}_{f}}\cdot\vec{y}}{2}\) enforce that \(O_{gi}=I_{fi}\) for all \(0\leq i<m\). Indeed, when summing over the variable \(y_{i}\), we get \((1+e^{i\pi(\widehat{O_{gi}}+\widehat{I_{fi}})})\) - which is non-null only when \(O_{gi}=I_{fi}\) - as a factor of the whole morphism. This presentation has the advantage of keeping the size of the morphism polynomial in the size of the quantum circuit - or ZH-diagram, see below - it can be built from, no matter what gate set is used. A downside, however, is that the above does not directly constitute a category, as for instance \(id\circ id\neq id\). However, it suffices to quotient the formalism with
rewrite rules to turn it into a proper category [20], hence justifying the use of the term "functor" for the last two maps.
### A Rewrite System
We hence give in Figure 1 a set of rewrite rules denoted \(\underset{\text{TH}}{\longrightarrow}\) that induces an equational theory \(\underset{\text{TH}}{\sim}\) (the symmetric and transitive closure of \(\underset{\text{TH}}{\longrightarrow}\)).
\[\sum_{\vec{y}}e^{2i\pi P}\left|\vec{O}\right\rangle\!\!\left\langle \vec{I}\right|\xrightarrow[y_{0}\notin\text{Var}(P,\vec{O},\vec{I})]{}2\sum_{ \vec{y}\setminus\{y_{0}\}}e^{2i\pi P}\left|\vec{O}\right\rangle\!\!\left\langle \vec{I}\right|\] (Elim) \[t=\sum e^{2i\pi\left(\frac{y_{0}}{2}(y_{i}\widehat{Q}+\widehat{Q ^{\prime}}+1)+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I} \right|\xrightarrow[\begin{subarray}{c}y_{0}\notin\text{Var}(Q,Q^{\prime},R,\vec{O},\vec{I})\\ y_{i}\notin\text{Var}(Q,Q^{\prime}),\ QQ^{\prime}=Q^{\prime}\end{subarray}]{}t [y_{i}\gets 1\oplus Q^{\prime}]\] (HHgen) \[t=\sum_{\vec{y}}e^{2i\pi(P)}|\cdots,\overset{O_{i}}{\overbrace {y_{0}\oplus O_{i}^{\prime}}},\cdots\rangle\!\!\left\langle\vec{I}\right| \xrightarrow[y_{0}\notin\text{Var}(O_{1},\ldots,O_{i-1},O_{i}^{\prime})]{}t[y_ {0}\gets y_{0}\oplus O_{i}^{\prime}]\] (ket) \[t=\sum_{\vec{y}}e^{2i\pi(P)}\left|\vec{O}\right\rangle\!\!\left\langle \cdots,\overset{I_{i}}{\overbrace{y_{0}\oplus I_{i}^{\prime}}},\cdots \right|\xrightarrow[y_{0}\notin\text{Var}(\vec{O},I_{1},\ldots,I_{i-1},I_{i}^{ \prime})]{}t[y_{0}\gets y_{0}\oplus I_{i}^{\prime}]\] (bra) \[s\sum_{\vec{y}}e^{2i\pi\left(\frac{y_{0}}{2}+R\right)}\left| \vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[y_{0}\notin \text{Var}(R,\vec{O},\vec{I})]{}\sum_{y_{0}}e^{2i\pi\left(\frac{y_{0}}{2} \right)}|0,\cdots,0\rangle\!\!\left\langle 0,\cdots,0\right|\] (Z)

Figure 1: The rewrite system \(\underset{\text{TH}}{\longrightarrow}\).
We need in the conditions of all the rules the function \(\text{Var}\), that, given a set or list of polynomials, gives the set of all variables used in them. We call _internal variable_ a variable that is present in the morphism \(t\) but not in its inputs/outputs, i.e. a variable \(y_{0}\) such that \(y_{0}\in\text{Var}(t)\setminus\text{Var}(\vec{O},\vec{I})\). It is worth noting that searching for an occurrence of, and applying any of these rules _once_, can be done in polynomial time.
The rules (ket) and (bra) correspond to changes of variables that are necessary to get a unique normal form in the Clifford case [20], and the rule (Elim) simply gets rid of a variable that is used nowhere in the term and simply contributes to a global phase (since that variable is supposed to range over two values, it contributes to a multiplicative scalar 2).
The cornerstone rule of [1] and [20] denoted (HH) was given as:
\[t=\sum e^{2i\pi\left(\frac{y_{0}}{2}(y_{i}+\widehat{Q})+R\right)}\left|\vec{O }\right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[y_{0}\notin\text{ Var}(Q,R,\vec{O},\vec{I})]{}t[y_{i}\gets Q]\] (HH)
It has been generalised into (HHgen) here ((HH) is the special case where \(Q=1\)). It is important to notice that the rule (HHgen) requires that \(QQ^{\prime}=Q^{\prime}\). Rule (HH) is so often used that we may, in the following, distinguish it from (HHgen).
**Example 2.4**.: Recall the term:
\[t=\frac{1}{2^{4}}\sum_{y_{0},\ldots,y_{10}}e^{2i\pi(\frac{y_{0}y_{1}}{2}+\frac{y_{ 3}y_{4}}{2}+y_{8}\frac{y_{0}+y_{5}}{2}+y_{9}\frac{y_{2}+y_{6}}{2}+y_{10}\frac{y _{3}+y_{7}+y_{5}y_{6}}{2})}\,|y_{1},y_{2},y_{4}\rangle\!\langle y_{5},y_{6},y_ {7}|\]
we got from Example 2.3. Its internal variables are \(y_{0},y_{3},y_{8},y_{9},y_{10}\). It is often the case that the variables created by the sequential composition \((y_{8},y_{9}\) and \(y_{10}\) here), can be used right away to rewrite the term using (HH). For instance, the term \(\frac{y_{8}}{2}(y_{0}+y_{5})\) where \(y_{8}\) appears nowhere else in the morphism, allows for an application of the rule, that will replace \(y_{0}\) by \(y_{5}\) (or vice-versa). We get:
\[t\underset{\text{HH}}{\longrightarrow}\frac{1}{2^{4}}\sum_{y_{1},\ldots,y_{1 0}}e^{2i\pi(\frac{y_{5}y_{1}}{2}+\frac{y_{3}y_{4}}{2}+y_{9}\frac{y_{2}+y_{6}} {2}+y_{10}\frac{y_{3}+y_{7}+y_{5}y_{6}}{2})}\,|y_{1},y_{2},y_{4}\rangle\! \langle y_{5},y_{6},y_{7}|\]
The rule (HH) can always be followed by (Elim): here \(y_{8}\) remains in the variables, but isn't used anywhere anymore. \(t\) can then be further reduced to:
\[\frac{1}{2^{3}}\sum_{\begin{subarray}{c}y_{i}\\ i\in\{1,2,3,4,5,6,7,9,10\}\end{subarray}}e^{2i\pi(\frac{y_{5}y_{1}}{2}+\frac{y _{3}y_{4}}{2}+y_{9}\frac{y_{2}+y_{6}}{2}+y_{10}\frac{y_{3}+y_{7}+y_{5}y_{6}}{2 })}\,|y_{1},y_{2},y_{4}\rangle\!\langle y_{5},y_{6},y_{7}|\]
Proceeding similarly for \(y_{9}\) and \(y_{10}\), we get:
\[t\rightarrow^{*}\frac{1}{2^{2}}\sum_{\begin{subarray}{c}y_{i}\\ i\in\{1,3,4,5,6,7,10\}\end{subarray}}e^{2i\pi(\frac{y_{5}y_{1}}{2}+\frac{y_{3 }y_{4}}{2}+y_{10}\frac{y_{3}+y_{7}+y_{5}y_{6}}{2})}\,|y_{1},y_{6},y_{4}\rangle\! \langle y_{5},y_{6},y_{7}|\] \[\rightarrow^{*}\frac{1}{2}\sum_{\begin{subarray}{c}y_{i}\\ i\in\{1,4,5,6,7\}\end{subarray}}e^{2i\pi(\frac{y_{5}y_{1}}{2}+\frac{y_{4}y_{7} }{2}+\frac{y_{4}y_{5}y_{6}}{2})}\,|y_{1},y_{6},y_{4}\rangle\!\langle y_{5},y_{6},y_{7}|\]
The term cannot be reduced further with the rewrite system \(\underset{\text{TH}}{\longrightarrow}\).
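Continuing the brute-force evaluator sketched after Example 2.3 (an illustration of ours, not part of the paper), one can check numerically that these reduction steps indeed preserve the semantics of \(t=(H\otimes id\otimes H)\circ\text{Tof}\):

```python
# Reuses sop_matrix, H and Tof from the sketch after Example 2.3.
t_initial = sop_matrix(
    1 / 2**4, 11,   # variables y0,...,y10
    lambda y: (y[0]*y[1] + y[3]*y[4] + y[8]*(y[0]+y[5])
               + y[9]*(y[2]+y[6]) + y[10]*(y[3]+y[7]+y[5]*y[6])) / 2,
    [lambda y: y[1], lambda y: y[2], lambda y: y[4]],     # |y1, y2, y4>
    [lambda y: y[5], lambda y: y[6], lambda y: y[7]])     # <y5, y6, y7|

t_reduced = sop_matrix(
    1 / 2, 5,       # remaining variables, in the order (y1, y4, y5, y6, y7)
    lambda y: (y[2]*y[0] + y[1]*y[4] + y[1]*y[2]*y[3]) / 2,
    [lambda y: y[0], lambda y: y[3], lambda y: y[1]],     # |y1, y6, y4>
    [lambda y: y[2], lambda y: y[3], lambda y: y[4]])     # <y5, y6, y7|

circuit = np.kron(np.kron(H, np.eye(2)), H) @ Tof         # (H (x) id (x) H) o Tof
assert np.allclose(t_initial, circuit) and np.allclose(t_reduced, circuit)
```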
In the following, we assume that (Elim) is always applied after (HH), without necessarily mentioning it.
The rules (HH), (HHgen) and (Z) all stem from a particular observation: In the morphism \(t=\sum e^{2i\pi(\frac{y_{0}}{2}\widehat{Q}+R)}\,\Big{|}\vec{O}\Big{\rangle}\!\Big{\langle}\vec{I}\Big{|}\) where \(y_{0}\) is internal and not in \(R\), if \(Q\) is evaluated to \(1\), then the whole morphism is interpreted as null. This is exactly what (Z) captures - and the conditions on \(R\), \(\vec{O}\) and \(\vec{I}\) are simply here to avoid applying the rule indefinitely.
The rule (HH) deals with a case where the polynomial \(Q\) can be forced to \(0\), whilst the rule (HHgen) more generally deals with a case where one of the variables in the polynomial \(Q\) is forced to get a precise value due to the form of the polynomial.
The rule (HH) is of particular interest. It was introduced in [1] and gives enough power to the formalism to become a \(\dagger\)-compact PROP [20]. We can extend this result here thanks to:
**Proposition 2.5**.: \[\forall t_{1},t_{2}\in\mathbf{SOP},\ t_{1}\underset{\text{TH}}{\sim}t_{2}\implies \begin{cases}A\circ t_{1}\circ B\underset{\text{TH}}{\sim}A\circ t_{2}\circ B& \text{for all $A$, $B$ composable}\\ A\otimes t_{1}\otimes B\underset{\text{TH}}{\sim}A\otimes t_{2}\otimes B&\text{ for all $A$, $B$}\end{cases}\]
Proof.: The result is obvious for the tensor product \((.\otimes.)\). For the composition, we show that if \(t_{1}\xrightarrow[\mathrm{TH}]{}t_{2}\) in one step, then \(A\circ t_{1}\circ B\underset{\mathrm{TH}}{\sim}A\circ t_{2}\circ B\). In other words, we have to show it for every rule in \(\xrightarrow[\mathrm{TH}]{}\):
\(\bullet\) (Elim): Obvious.
\(\bullet\) (HHgen):
\[A\circ t_{1}\circ B=\sum e^{2i\pi\left(P_{A}+P_{B}+\frac{y_{0}}{2}(\widehat{ Q}+1)+R[y_{i}+\widehat{1\oplus Q^{\prime}}]+\frac{\mathcal{O}[y_{i}\leftarrow \widehat{1\oplus Q^{\prime}}]\cdot\mathcal{E}+\underline{I}_{A}\cdot\underline {x}+\widehat{I}[y_{i}+\widehat{1\oplus Q^{\prime}}]\cdot\mathcal{E}+\underline {G}_{B}\cdot\underline{x}^{\prime}}{2}\right)}\left|\vec{O}_{A}\right\rangle \!\!\left\langle\vec{I}_{B}\right|\]
\(\bullet\) (ket):
\(A\circ t_{1}\circ B=\)
\[\sum e^{2i\pi\left(P_{A}+P_{B}+P+\frac{(\widehat{Q}_{1}+\widehat{I}_{A1})x_{ 1}+\cdots+(y_{0}+\widehat{Q}_{i}^{\prime}+\widehat{I}_{A1})x_{i}+\cdots+( \widehat{Q}_{m}+\widehat{I}_{Am})x_{m}+\mathcal{I}\cdot\mathcal{E}+\underline {G}_{B}\cdot\underline{x}^{\prime}}{2}\right)}\left|\vec{O}_{A}\right\rangle \!\!\left\langle\vec{I}_{B}\right|\]
\(\bullet\) (bra): Similar to (ket).
\(\bullet\) (Z):
\[A\circ t_{1}\circ B=\sum e^{2i\pi\left(P_{A}+P_{B}+\frac{y_{0}}{2}+R+\frac{ \mathcal{G}\cdot\mathcal{E}+\underline{I}_{A}\cdot\underline{x}+\mathcal{I} \cdot\mathcal{E}+\underline{G}_{B}\cdot\underline{x}^{\prime}}{2}\right)}\left| \vec{O}_{A}\right\rangle\!\!\left\langle\vec{I}_{B}\right|\underset{\mathrm{Z }}{\sim}\sum e^{2i\pi\left(\frac{y_{0}}{2}\right)}\left|\vec{0}\right\rangle \!\!\left\langle\vec{0}\right|\]
Thanks to this Proposition, and since \(\mathbf{SOP}/\underset{\mathrm{HH}}{\sim}\) is a \(\dagger\)-compact PROP by [20], we get:
**Corollary 2.6**.: \(\mathbf{SOP}/\underset{\mathrm{TH}}{\sim}\) _is a \(\dagger\)-compact PROP._
The set of rules was obviously chosen so as to preserve the semantics:
**Proposition 2.7** (Soundness).: _For any two \(\mathbf{SOP}\) morphisms \(t_{1}\) and \(t_{2}\), if \(t_{1}\xrightarrow[\mathrm{TH}]{}t_{2}\), then \([\![t_{1}]\!]=[\![t_{2}]\!]\)._
Proof.: We mean to show that for any single step rewrite, the interpretation is preserved. As most of the rules were present in [1] or [20], this was proven for them. It remains to show the result for (HHgen). The soundness of a stronger version of (HHgen) is proven in upcoming Lemma 2.11. The result is then a direct consequence of this.
The addition of the rule (HHgen) allows for further reductions:
**Example 2.8**.: The following morphism:
\[\sum_{\vec{y}}e^{2i\pi(\frac{y_{0}y_{1}y_{2}}{2}+\frac{y_{2}}{2}+\frac{y_{1}y_{2}y _{3}}{2}+\frac{y_{0}y_{1}y_{2}y_{3}}{2})}\,|y_{3}\rangle\!\langle y_{0}|\]
is irreducible using the rules of [1] and [21]. However, here it can be reduced to:
\[\sum_{\vec{y}}e^{2i\pi(\frac{y_{0}y_{1}y_{2}}{2}+\frac{y_{2}}{2}+\frac{y_{1}y _{2}y_{3}}{2}+\frac{y_{0}y_{1}y_{2}y_{3}}{2})}\,|y_{3}\rangle\!\langle y_{0}| \xrightarrow[y_{1}\!\leftrightarrow 1]{}\sum_{y_{0},y_{2},y_{3}}e^{2i\pi(\frac{y_{ 0}y_{2}}{2}+\frac{y_{2}}{2}+\frac{y_{2}y_{3}}{2}+\frac{y_{0}y_{2}y_{3}}{2})}\,| y_{3}\rangle\!\langle y_{0}|\]
where the rewrite can be made clearer by writing the phase polynomial as \(\frac{y_{2}}{2}(y_{1}(y_{0}+y_{3}+y_{2}y_{3})+0+1)\) and checking that (obviously) \((y_{0}+y_{3}+y_{2}y_{3})\times 0=0\).
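Again continuing the brute-force evaluator sketched after Example 2.3 (our own illustration), one can confirm numerically that this (HHgen) step preserves the semantics:

```python
# Reuses sop_matrix and numpy from the sketch after Example 2.3.
lhs = sop_matrix(1, 4,   # variables (y0, y1, y2, y3)
                 lambda y: (y[0]*y[1]*y[2] + y[2] + y[1]*y[2]*y[3] + y[0]*y[1]*y[2]*y[3]) / 2,
                 [lambda y: y[3]], [lambda y: y[0]])
rhs = sop_matrix(1, 3,   # variables (y0, y2, y3)
                 lambda y: (y[0]*y[1] + y[1] + y[1]*y[2] + y[0]*y[1]*y[2]) / 2,
                 [lambda y: y[2]], [lambda y: y[0]])
assert np.allclose(lhs, rhs)
```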
### Generalising Rules
In [21] another rule was introduced to allow for the proof of completeness:
\[t=\sum e^{2i\pi\left(\frac{y_{0}}{2}\widehat{Q}+\frac{y_{0}^{\prime}}{2}\widehat{Q^{\prime}}+R\right)}\,\Big{|}\vec{O}\Big{\rangle}\!\Big{\langle}\vec{I}\Big{|}\xrightarrow[y_{0},y_{0}^{\prime}\notin\mathrm{Var}(Q,Q^{\prime},R,\vec{O},\vec{I})]{}2\,t[y_{0}^{\prime}\gets y_{0}\oplus y_{0}Q]\] (HHnl)
This rule was only used when proving that one of the axioms of \(\mathrm{ZH}_{\mathrm{TH}}\) from [21] was provable with \(\underset{\mathrm{TH}}{\sim}\). However, it was proven in [1] that this axiom could be replaced by two simpler ones and otherwise provable, hence making (HHnl) unnecessary for completeness, as will be shown in the following of the paper.
Completeness however assumes rules can be used both ways, while we ideally only want to reduce a term when analysing it. With this in mind, it is relevant to look for rules that allow for further reduction, and (HHnl) is one of them.
**Example 2.9**.: The following term, while irreducible with \(\underset{\mathrm{TH}}{\longrightarrow}\), can be reduced using (HHnl):
\[\sum_{y_{0},y_{1},y_{2},y_{3},y_{4}}e^{2i\pi(\frac{y_{0}y_{1}y_{2 }}{2}+\frac{y_{2}}{2}+\frac{y_{2}y_{3}y_{4}}{2})}\,|y_{4}\rangle\!\langle y_{0}|\] \[\xrightarrow[y_{3}\!\leftarrow\!y_{1}\oplus y_{0}y_{1}y_{2}]\,2 \sum_{y_{0},y_{1},y_{2},y_{4}}e^{2i\pi(\frac{y_{0}y_{1}y_{2}}{2}+\frac{y_{2}} {2}+\frac{y_{1}y_{2}y_{4}}{2}+\frac{y_{0}y_{1}y_{2}y_{4}}{2})}\,|y_{4}\rangle \!\langle y_{0}|\]
As (HHnl) is not necessary for completeness, it should be possible to derive it from the rules of \(\underset{\mathrm{TH}}{\longrightarrow}\):
**Lemma 2.10**.: _(HHnl) can be derived from \(\underset{\mathrm{TH}}{\longrightarrow}\)._
Proof.: Consider the term:
\[\sum e^{2i\pi\left(\frac{y_{0}}{2}+\frac{y_{0}y_{1}(\widetilde{Q}+1)}{2}+\frac {y_{1}y_{0}^{\prime}}{2}+\frac{y_{0}^{\prime}(\widetilde{Q}^{\prime}+1)}{2}+R \right)}\,\Big{|}\vec{O}\Big{\rangle}\!\Big{\langle}\vec{I}\Big{|}\]
We can rewrite it in two different ways, which give both sides of the (HHnl) rule:
\[\sum e^{2i\pi\left(\frac{y_{0}}{2}+\frac{y_{0}y_{1}(\widetilde{Q}+1)}{2}+\frac {y_{1}y_{0}^{\prime}}{2}+\frac{y_{0}(\widetilde{Q}^{\prime}+1)}{2}+R\right)} \,\Big{|}\vec{O}\Big{\rangle}\!\Big{\langle}\vec{I}\Big{|}\] \[\Big{\downarrow}\mathrm{HHgen}(y_{1}\gets 1)\,\Big{\downarrow} \mathrm{HH}(y_{1}\gets Q^{\prime}\oplus 1)\] \[\sum e^{2i\pi\left(\frac{y_{0}}{2}\widetilde{Q}+\frac{y_{0}^{ \prime}}{2}\widetilde{Q}^{\prime}+R\right)}\,\Big{|}\vec{O}\Big{\rangle}\! \Big{\langle}\vec{I}\,
When rewriting SOP-morphisms for simplification or verification, it can be beneficial to not only reduce the number of variables - which is what all rules but (ket/bra) do -, but also to keep the size of the phase polynomial as short as possible. In that respect, the rule (HHgen) itself can be generalised to:
\[t=\sum e^{2i\pi\left(\frac{y_{0}}{2}(y_{i}\widehat{Q}+\widehat{QQ^{\prime}}+1)+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[\begin{subarray}{c}y_{0}\notin\operatorname{Var}(Q,Q^{\prime},R,\vec{O},\vec{I})\\ y_{i}\notin\operatorname{Var}(Q,Q^{\prime})\end{subarray}]{}t[y_{i}\gets 1\oplus Q^{\prime}]\] (HHgen')
where the polynomial \(Q^{\prime}\) can here be smaller (in the number of terms) than the one in (HHgen). However, finding a "minimal" \(Q^{\prime}\) for this rule is a hard problem, as it requires the use of boolean Groebner bases [1], while instances of (HHgen) can easily be found. (HHgen) can be seen as a particular case of (HHgen'), where \(Q^{\prime}\gets QQ^{\prime}\), as \(Q\times QQ^{\prime}=QQ^{\prime}\).
The previous observation can be made into another rule, which again uses the fact that when a term has a phase polynomial of the form \(\frac{y_{0}}{2}\widehat{Q}+R\) with \(y_{0}\) internal, then we can force \(Q\) to be \(0\). This means we can replace \(R\) by a remainder in the euclidean division of \(R\) by \(Q\):
\[\sum e^{2i\pi\left(\frac{y_{0}}{2}\widehat{Q}+\widehat{SQ}+\widehat{R}\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[\begin{subarray}{c}y_{0}\notin\operatorname{Var}(Q,S,R,\vec{O},\vec{I})\end{subarray}]{}\sum e^{2i\pi\left(\frac{y_{0}}{2}\widehat{Q}+\widehat{R}\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\] (Rem)
Notice that, in contrast with the (HH), (HHgen) and (HHgen'), this rule does not reduce the number of variables, but instead equates polynomials that are equivalent "modulo \(Q\)". In practice, this rule is also hard to use as it again uses Groebner bases to properly implement. The proximity between the last two aforementioned rules is not surprising once we realise that (Rem) can be deduced from (HHgen) and (HHgen'):
\[\sum e^{2i\pi\left(\frac{y_{0}}{2}\left(y_{i}(1+\widehat{Q})+(1+\widehat{Q})\widehat{S}+1\right)+\frac{\widehat{S}}{2}+\frac{y_{i}}{2}+\frac{1}{2}+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\] \[\operatorname{HHgen}(y_{i}\leftarrow(1\oplus Q)S\oplus 1)\Big{\downarrow}\qquad\qquad\Big{\downarrow}\operatorname{HHgen}'(y_{i}\gets S\oplus 1)\] \[\sum e^{2i\pi\left(\frac{y_{0}}{2}\widehat{Q}+\widehat{SQ}+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\qquad\qquad\sum e^{2i\pi\left(\frac{y_{0}}{2}\widehat{Q}+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\]
We can show that (HHgen') is sound, which implies that (HHgen) - as a particular case - and both (HHnl) and (Rem) - as a compositions of particular cases of (HHgen') - are also sound:
**Lemma 2.11**.: _(HHgen') is sound._
Proof.: If \(t=\sum_{\vec{y}\in V^{k}}e^{2i\pi\left(\frac{y_{0}}{2}(y_{i}\widehat{Q}+\widehat{QQ^{\prime}}+1)+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\), then:
\[\llbracket t\rrbracket =\sum_{\vec{y}\in\{0,1\}^{k}}e^{2i\pi\left(\frac{y_{0}}{2}(y_{i}\widehat{Q}+\widehat{QQ^{\prime}}+1)+R\right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|=\sum_{\vec{y}\in\{0,1\}^{k-1}}(1+e^{i\pi(y_{i}\widehat{Q}+\widehat{QQ^{\prime}}+1)})e^{2i\pi R}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\] \[=\sum_{\vec{y}\in\{0,1\}^{k-2}}(1+e^{i\pi(\widehat{Q^{\prime}}\widehat{Q}+\widehat{QQ^{\prime}}+1)})e^{2i\pi R[y_{i}\leftarrow\widehat{Q^{\prime}}]}(\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|)[y_{i}\gets Q^{\prime}]\] \[\qquad+\sum_{\vec{y}\in\{0,1\}^{k-2}}(1+e^{i\pi(\widehat{1\oplus Q^{\prime}}\widehat{Q}+\widehat{QQ^{\prime}}+1)})e^{2i\pi R[y_{i}\leftarrow\widehat{1\oplus Q^{\prime}}]}(\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|)[y_{i}\gets 1\oplus Q^{\prime}]\] \[=0+\sum_{\vec{y}\in\{0,1\}^{k-2}}(1+e^{i\pi(\widehat{1\oplus Q^{\prime}}\widehat{Q}+\widehat{QQ^{\prime}}+1)})e^{2i\pi R[y_{i}\leftarrow\widehat{1\oplus Q^{\prime}}]}(\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|)[y_{i}\gets 1\oplus Q^{\prime}]\]
\(=\left[\!\left[t[y_{i}\gets 1\oplus Q^{\prime}]\right]\!\right]\)
While we have identified stronger rewrite rules than the ones in \(\underset{\text{TH}}{\sim}\), as the rest of the paper is mainly focussed on completeness, we stick to the rules of \(\underset{\text{TH}}{\sim}\), since, as we shall see later, they are enough for that particular problem.
## 3. The **Zh**-Calculus
As a foundation towards completeness of the Toffoli-Hadamard fragment of **SOP**, we will use a similar result on another formalism: the graphical calculus ZH.
The graphical calculi ZX, ZW and ZH [1, 1, 1] are calculi for quantum computing, with a tight link with the Sum-Over-Paths formalism [1, 2, 2], and whose completeness was proven in particular for the Toffoli-Hadamard fragment [1, 2, 2, 3, 4, 5].
This fragment of quantum mechanics is approximately universal [1, 2], and it is arguably the simplest one with this property. This is the fragment we will be interested in, in most of the following of the paper; and the associated completeness result will be paramount in the development of the following.
We choose to present here the ZH-calculus, because of its proximity with both **SOP** and the Toffoli-Hadamard fragment. Notice however that there exist translations between all the aforementioned graphical calculi, so by composition, we can connect **SOP** to all of them.
### The Diagrams and their Interpretation
**ZH** is a PROP whose morphisms - read here from top to bottom - are composed (sequentially \((\cdot\circ\cdot)\) or in parallel \((\cdot\otimes\cdot)\)) from Z-spiders and H-spiders:
* Z-spiders \(Z_{m}^{n}:n\to m\), drawn as white dots with \(n\) inputs and \(m\) outputs;
* H-spiders \(H_{m}^{n}(r):n\to m\), drawn as white boxes labelled by a parameter \(r\in\mathbb{C}\) (omitted when \(r=-1\)), with \(n\) inputs and \(m\) outputs;

together with the structural identities and swaps (the generating diagrams are not reproduced here). The standard interpretation \(\llbracket\cdot\rrbracket:\mathbf{ZH}\rightarrow\mathbf{Qubit}\) maps the Z-spider to \(|0,\ldots,0\rangle\!\langle 0,\ldots,0|+|1,\ldots,1\rangle\!\langle 1,\ldots,1|\), and the H-spider and the swap respectively to:
\[\llbracket H_{m}^{n}(r)\rrbracket=\sum_{j_{k},i_{k}\in\{0,1\}}r^{j_{1}\ldots j_{m}i_{1}\ldots i_{n}}\left|j_{1},\ldots,j_{m}\right\rangle\!\left\langle i_{1},\ldots,i_{n}\right|\]
\[\llbracket\sigma_{n,m}\rrbracket=\sum_{i_{k},j_{k}\in\{0,1\}}\left|j_{1},\ldots,j_{m},i_{1},\ldots,i_{n}\right\rangle\!\left\langle i_{1},\ldots,i_{n},j_{1},\ldots,j_{m}\right|\]
Notice that we used the same symbol for two different functors: the two interpretations \(\llbracket\cdot\rrbracket:\mathbf{SOP}\rightarrow\mathbf{Qubit}\) and \(\llbracket\cdot\rrbracket:\mathbf{ZH}\rightarrow\mathbf{Qubit}\). It should be clear from the context which one is to be used.
The language is universal: \(\forall f\in\mathbf{Qubit},\ \exists D_{f}\in\mathbf{ZH},\ \llbracket D_{f} \rrbracket=f\). In other words, the interpretation \(\llbracket\cdot\rrbracket\) is surjective.
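As a quick illustration of these interpretations, the generators can be written down as explicit matrices; the following numpy sketch uses our own basis ordering and naming conventions, and is not part of the paper.

```python
# Interpretations of the ZH generators as matrices (our own encoding).
import numpy as np

def z_spider(n, m):
    """[[Z_m^n]] = |0...0><0...0| + |1...1><1...1|  (shape 2^m x 2^n)."""
    M = np.zeros((2 ** m, 2 ** n))
    M[0, 0] = 1.0
    M[-1, -1] = 1.0
    return M

def h_spider(n, m, r=-1):
    """[[H_m^n(r)]]: every entry is 1, except r at the all-ones position."""
    M = np.ones((2 ** m, 2 ** n), dtype=complex)
    M[-1, -1] = r
    return M

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
assert np.allclose(h_spider(1, 1, -1), np.sqrt(2) * H)                # 1 -> 1 H-box is sqrt(2) H
assert np.allclose(z_spider(1, 2) @ np.array([1, 0]), [1, 0, 0, 0])   # Z-spider copies |0>
assert np.allclose(z_spider(1, 2) @ np.array([0, 1]), [0, 0, 0, 1])   # Z-spider copies |1>
```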
### Equational Theory
The language comes with an equational theory, which in particular gives the axioms for a \(\dagger\)-compact PROP. We will not present it here.
We can easily define a restriction of \(\mathbf{ZH}\) that exactly captures the Toffoli-Hadamard fragment of quantum mechanics [BKMB\({}^{+}\)23, vdWW19], as the language generated by:
Notice that the two black spiders can still be defined if we also define \(\framebox{\frac{1}{\sqrt{2^{p}}}}:=\framebox{\frac{1}{\sqrt{2}}}^{\otimes p}\). We denote this restriction by \(\mathbf{ZH}_{\mathrm{TH}}\).
This restriction is provided with an equational theory, given in Figure 2, that makes it complete3.
Footnote 3: The axiomatisation provided here is based on that of [BKMB\({}^{+}\)23] together with [BKMB\({}^{+}\)23, Thm. 8.6] which simplifies one of the rules. It simplifies the axiomatisation found in [vdWW19], however the fragment considered is slightly different, as [BKMB\({}^{+}\)23] does _not_ contain the scalar \(\frac{1}{\sqrt{2}}\), but only \(\frac{1}{2}\). It is however very easy to extend the completeness to the language obtained by adjoining a generator for the \(\frac{1}{\sqrt{2}}\) scalar, using the additional rule that states that \(2*\frac{1}{\sqrt{2}}*\frac{1}{\sqrt{2}}=1\). This is what is done in the axiomatisation provided here. A proof that completeness does extend can be adapted from e.g. [JPV18, Prop. 1].
**Theorem 3.1** [BKMB\({}^{+}\)23] Completeness of \(\mathbf{ZH}_{\mathrm{TH}}/\operatorname{ZH}_{\mathrm{TH}}\)**.**
\[\forall D_{1},D_{2}\in\mathbf{ZH}_{\mathrm{TH}},\ \llbracket D_{1} \rrbracket=\llbracket D_{2}\rrbracket\iff\operatorname{ZH}_{\mathrm{TH}} \vdash D_{1}=D_{2}\]
## 4. Translations between \(\mathbf{SOP}\) and \(\mathbf{ZH}\)
### From \(\mathbf{SOP}\) to \(\mathbf{ZH}\)
It is possible to translate \(\mathbf{SOP}\) morphisms to \(\mathrm{ZH}\)-diagrams using an interpretation \([\cdot]^{\mathrm{ZH}}:\mathbf{SOP}\rightarrow\mathbf{ZH}\). Such an interpretation was defined in [11, 12] and in [13]. We choose the latter definition as it fits our definition of **SOP**.
\[\left[s\sum_{\vec{y}}e^{2i\pi P}\left|O_{1},\ldots,O_{m}\right\rangle\!\!\left\langle I_{1},\ldots,I_{n}\right|\right]^{\mathrm{ZH}}:=\;\text{(ZH-diagram, not reproduced here)}\]
The boolean polynomials as defined above are given in their (unique) expanded form. These can easily be shown to be copied through the white node:
**Lemma 4.2**.: _Expanded boolean polynomials are copied through the white node (diagrammatic equation not reproduced here)._
Proof.: We prove the result where \(Q\)'s constant is \(0\). The proof where \(Q\)'s constant is \(1\) is very similar, and simply uses the fact that is copied through the white node [BKMB\({}^{+}\)23]:
The above translation preserves the semantics:
**Proposition 4.3** [Vil21].: \(\llbracket\![\cdot]^{\mathrm{ZH}}\rrbracket=\llbracket\![\cdot]\!\rrbracket\)_.
### From \(\mathbf{ZH}\) to \(\mathbf{SOP}\)
Any \(\mathbf{ZH}\)-diagram can be understood as a \(\mathbf{SOP}\)-morphism. To do so, we use the PROP-functor \([\cdot]^{\mathrm{sop}}:\mathbf{ZH}\to\mathbf{SOP}\) defined as:
_(The action of \([\cdot]^{\mathrm{sop}}\) on the generators of \(\mathbf{ZH}\) is given by a table of path-sums; figure omitted in this extraction.)_
This does not give a full description of \([\cdot]^{\mathrm{sop}}\), as we did not describe the interpretation of the H-spider for all parameters, but only for phases and \(0\). However, any H-spider can be decomposed using the previous ones:
**Lemma 4.4**.: _For any \(r\in\mathbb{C}\) such that \(|r|\notin\{0,1\}\), there exist \(s\in\mathbb{C}\), \(\alpha,\beta\in\mathbb{R}\) such that:_
Proof.: First, thanks to rule (HS1), the \(r\)-labelled H-spider can be rewritten so that an explicit \(\frac{1}{2}\) scalar appears (diagrammatic step; figure omitted in this extraction). Then, interpreting the two diagrams that we want to identify, we have:
\[\frac{1}{2}\begin{pmatrix}1+r\\ 1-r\end{pmatrix}=\frac{1+r}{2}\begin{pmatrix}1\\ \frac{1-r}{1+r}\end{pmatrix}\]
and
\[2se^{i\frac{\alpha}{2}}\begin{pmatrix}\cos\frac{\alpha}{2}\\ -ie^{i\beta}\sin\frac{\alpha}{2}\end{pmatrix}=2se^{i\frac{\alpha}{2}}\cos\frac{\alpha}{2}\begin{pmatrix}1\\ e^{i(\beta-\frac{\pi}{2})}\tan\frac{\alpha}{2}\end{pmatrix}\]
Hence, when \(|r|\notin\{0,1\}\), we have equality between the two with \(\alpha:=2\arctan\left(\frac{1-r}{1+r}\right)\), \(\beta=\arg\left(\frac{1-r}{1+r}\right)+\frac{\pi}{2}\) and \(s:=\frac{1+r}{4e^{i\frac{\alpha}{2}}\cos\frac{\alpha}{2}}\) (since \(r\neq 1\), \(\alpha\) is well defined and \(\alpha\neq\pi\mod 2\pi\) so \(s\) is also well-defined). From this, we get:
_(diagrammatic equation giving the claimed decomposition of the \(r\)-labelled H-spider; figure omitted in this extraction)_
As a consequence, we extend the definition of \([\cdot]^{\mathrm{sop}}\) by:
_(The extended definition sends the \(r\)-labelled H-spider to \([\cdot]^{\mathrm{sop}}\) applied to its decomposition from Lemma 4.4; diagrammatic equation, figure omitted in this extraction.)_
This interpretation of ZH-diagrams as **SOP**-morphisms preserves the semantics:
**Proposition 4.5** [25].: \(\llbracket[\cdot]^{\mathrm{sop}}\rrbracket=\llbracket\cdot\rrbracket\). _In other words, the corresponding diagram of functors commutes (figure omitted in this extraction)._
The composition of the two interpretations is the identity up to small rewrites:
**Proposition 4.6** [25].: _The composition of the two interpretations is the identity up to small rewrites (precise diagrammatic statement given as a figure in the original)._
### Restrictions of \(\mathbf{SOP}\)
Recall that \(\mathbf{ZH}_{\mathrm{TH}}\) exactly captures the Toffoli-Hadamard fragment of quantum mechanics. We can then use the two interpretations to define the Toffoli-Hadamard fragment of \(\mathbf{SOP}\). We actually go a step beyond and define a family of fragments indexed by \(n\):
**Definition 4.7** (\(\mathbf{SOP}[\frac{1}{2^{n}}]\)).: We define \(\mathbf{SOP}[\frac{1}{2^{n}}]\) as the restriction of \(\mathbf{SOP}\) to morphisms of the form: \({t=\frac{1}{\sqrt{2^{p}}}\sum e^{2i\pi\frac{P}{2^{n}}}\left|\vec{O} \right\rangle\!\!\left\langle I\right|}\) where \(p\in\mathbb{Z}\) and \(P\) has integer coefficients.
The Toffoli-Hadamard fragment is then the first such restriction (\(n=1\)):
**Proposition 4.8**.: \(\mathbf{SOP}[\frac{1}{2}]\) _captures exactly the Toffoli-Hadamard fragment of quantum mechanics._
Proof.: We can prove this by showing that \(\left[\mathbf{ZH}_{\mathrm{TH}}\right]^{\mathrm{sop}}\subseteq\mathbf{SOP}[ \frac{1}{2}]\) and that \(\left[\mathbf{SOP}[\frac{1}{2}]\right]^{\mathrm{ZH}}\subseteq\mathbf{ZH}_{ \mathrm{TH}}\). The two claims are straightforward verifications, and use the fact that compositions of \(\mathbf{SOP}[\frac{1}{2}]\)-morphisms give \(\mathbf{SOP}[\frac{1}{2}]\)-morphisms.
Then, \(\llbracket\mathbf{ZH}_{\mathrm{TH}}\rrbracket=\llbracket\llbracket\mathbf{ZH}_{ \mathrm{TH}}\rrbracket^{\mathrm{sop}}\rrbracket\subseteq\llbracket\mathbf{SOP}[ \frac{1}{2}]\rrbracket=\llbracket\llbracket\mathbf{SOP}[\frac{1}{2}]\rrbracket^{ \mathrm{ZH}}\rrbracket\subseteq\llbracket\mathbf{ZH}_{\mathrm{TH}}\rrbracket\), so:
\[\llbracket\mathbf{SOP}[\frac{1}{2}]\rrbracket=\llbracket\mathbf{ZH}_{ \mathrm{TH}}\rrbracket\]
Notice in particular that the Hadamard and Toffoli gates given in Example 2.3 lie in this fragment; and that compositions of elements of this fragment remain in the fragment. Not all of \(\mathbf{SOP}[\frac{1}{2}]\) can be generated by the two gates \(H\) and \(\mathrm{Tof}\), however, as \(\mathbf{SOP}[\frac{1}{2}]\) comprises linear maps that are not unitary, i.e. such that \(\llbracket t^{\dagger}\circ t\rrbracket\neq id\).
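As a concrete illustration (ours, recalling the standard path-sum presentation of these gates, possibly up to the exact variable names used in Example 2.3), the Hadamard gate can be written as
\[H=\frac{1}{\sqrt{2}}\sum e^{2i\pi\frac{y_{0}y_{1}}{2}}\left|y_{1}\right\rangle\!\!\left\langle y_{0}\right|\]
which is of the required form with \(p=1\) and the integer-coefficient polynomial \(P=y_{0}y_{1}\); and the Toffoli gate, being a permutation of the computational basis, can be written with trivial phase polynomial as \(\sum\left|y_{0},y_{1},y_{2}\oplus y_{0}y_{1}\right\rangle\!\!\left\langle y_{0},y_{1},y_{2}\right|\). Both therefore lie in \(\mathbf{SOP}[\frac{1}{2}]\).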
### The Rewrite System \(\underset{\mathrm{TH}}{\longrightarrow}\), Graphically
Before moving on to the completeness result for the Toffoli-Hadamard fragment of \(\mathbf{SOP}\), it is worth checking how the rules of \(\underset{\mathrm{TH}}{\longrightarrow}\) translate in \(\mathbf{ZH}\). We will focus on the rules mentioned above, that were not present in the previous works on \(\mathbf{SOP}\), namely (HHgen), (HHgen'), (HHnl) and (Rem).
Rule (HHgen) uses a side condition, that is \(QQ^{\prime}=Q\), which translates to
(4.1) _(diagrammatic equation; figure omitted in this extraction)_
The pattern \(\frac{y_{0}}{2}(y_{i}\widehat{Q}+\widehat{Q}^{\prime}+1)\) is represented by a diagram that can be rewritten in a few steps (figures omitted in this extraction), where the last equality (that uses Lemma 4.2) is the substitution \([y_{i}\gets Q^{\prime}\oplus 1]\).
Rule (HHgen') can be proven graphically exactly in the same way, except we start from the second diagram in the derivation above.
Rule (HHnl) turns an occurrence of \(\frac{y_{0}}{2}\widehat{Q}+\frac{y_{0}^{\prime}}{2}\widehat{Q}^{\prime}\) into \(\frac{y_{0}}{2}(\widehat{Q}+\widehat{Q}^{\prime}+\widehat{Q}\widehat{Q}^{\prime})\), when the two variables are linked to nothing else than their respective polynomials \(Q\) and \(Q^{\prime}\). The induced \(\mathbf{ZH}\) identity can be derived using its rules (derivation given as a figure in the original).
Although the overall number of nodes usually increases, the number of white nodes that amount to \(\mathbf{SOP}\)-variables (i.e. white nodes that are not part of a polynomial) decreases.
Finally, Rule (Rem) turns the phase polynomial \(\frac{y_{0}}{2}\widehat{Q}+\widehat{SQ}+\widehat{R}\) into \(\frac{y_{0}}{2}\widehat{Q}+\widehat{R}\), which can be derived graphically (derivation given as a figure in the original).
## 5. Completeness for Toffoli-Hadamard
In this section, we aim to show that the set of rules \(\underset{\mathrm{TH}}{\longrightarrow}\) captures the whole Toffoli-Hadamard fragment of quantum mechanics. We do so by transporting the similar result from \(\mathbf{Z}\mathbf{H}_{\mathrm{TH}}\) to \(\mathbf{SOP}[\frac{1}{2}]\). First, we show:
**Proposition 5.1**.: \(\forall D_{1},D_{2}\in\mathbf{Z}\mathbf{H}_{\mathrm{TH}},\ \mathrm{Z}\mathbf{H}_{ \mathrm{TH}}\vdash D_{1}=D_{2}\implies[D_{1}]^{\mathrm{sop}}\underset{ \mathrm{TH}}{\sim}[D_{2}]^{\mathrm{sop}}\)__
Proof.: We show that all the rules of \(\mathrm{Z}\mathbf{H}_{\mathrm{TH}}\) hold in \(\mathbf{SOP}[\frac{1}{2}]\), which together with Proposition 2.5 proves the result.
The translation of rule (IP) is implicit in \(\mathbf{SOP}\): it essentially states that \(y_{i}\cdot y_{i}=y_{i}\) for boolean variable \(y_{i}\). Checking the rules (ZS1), (ZS2), (HS1), (HS2) and (M) is straightforward using the rule (HH). We give for instance a check of the rule (ZS1):
_(derivation omitted in this extraction)_
We give derivations to prove the remaining rules of \(\mathrm{Z}\mathbf{H}_{\mathrm{TH}}\). Recall that equality is up to \(\alpha\)-conversion.
(IV): _(derivation omitted in this extraction)_
(Z): _(derivation omitted in this extraction)_
The two rules (BA1) and (BA2) are fairly easy to check, once one realises that the translation of the relevant sub-diagram rewrites, via (HH), to \(\sum|y_{0}\oplus y_{1}\rangle\!\langle y_{0},y_{1}|\):
_(The remaining derivations, and the beginning of the following example, are omitted in this extraction.)_
\[\begin{array}{cc}\Big{\downarrow}\operatorname{HHgen}(y_{1}\gets 1)&\Big{\downarrow}\operatorname{HH}(y_{1}\gets y_{4}y_{5}\oplus 1)\\ \sum e^{2i\pi\left(\frac{y_{0}}{2}y_{2}y_{3}+\frac{y_{0}^{\prime}}{2}y_{4}y_{5}\right)}\left|y_{2},y_{3},y_{4},y_{5}\right\rangle&2\sum e^{2i\pi\left(\frac{y_{0}}{2}(y_{2}y_{3}+y_{4}y_{5}+y_{2}y_{3}y_{4}y_{5})\right)}\left|y_{2},y_{3},y_{4},y_{5}\right\rangle\end{array}\]
Another important downside is the potential explosion of the size of the phase polynomial:
**Lemma 5.5**.: _Applying (HH) \(k\) times in a row on an SOP morphism with phase polynomial of size \(O(k)\) may give a morphism with phase polynomial of size \(O(2^{k})\)._
Proof.: For any \(k\geq 1\) we can define the following term:
\[t_{k}:=\sum e^{2i\pi\left(y_{1}\dots\cdot y_{k}+\sum\limits_{i=1}^{k}\frac{y_{ i}^{\prime}}{2}(y_{i}+x_{i}+x_{i}^{\prime})\right)}\]
on which we can apply (HH) \(k\) times in a row with \([y_{i}\gets x_{i}\oplus x_{i}^{\prime}]\). In that case we end up with:
\[t_{k}\xrightarrow{}^{k}2^{k}\sum e^{2i\pi\left(\prod\limits_{i=1}^{k}\left(x_{ i}+x_{i}^{\prime}\right)\right)}\]
While \(t_{k}\) has only \(3(k+1)\) terms (each of degree \(2\) except one of degree \(k\)) in its phase polynomial, it can rewrite into a morphism with \(2^{k+1}\) terms (each of degree \(k\)).
Hence, anyone performing simplifications with this rewrite system ought to pay special attention to where, and in which order, the rules are applied; a small illustration of the growth follows.
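To make the growth of Lemma 5.5 concrete, here is a small, self-contained Python sketch (not part of the original development; the helper name is ours) that expands the product \(\prod_{i=1}^{k}(x_{i}+x_{i}^{\prime})\) symbolically and counts its monomials.

```python
from itertools import product

def expanded_monomials(k):
    """Monomials of prod_{i=1..k} (x_i + x'_i) in expanded form.

    Each factor contributes either x_i or x'_i, so the expansion
    has exactly 2**k monomials, each of degree k."""
    factors = [(f"x{i}", f"xp{i}") for i in range(1, k + 1)]
    return [frozenset(choice) for choice in product(*factors)]

for k in range(1, 7):
    # The rewritten term of Lemma 5.5 has this many monomials in its phase.
    print(k, len(expanded_monomials(k)))  # 2, 4, 8, 16, 32, 64
```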
## 6. Completeness for the Dyadic Fragment
We show here how we can turn an \(\mathbf{SOP}[\frac{1}{2^{n+1}}]\)-morphism into an \(\mathbf{SOP}[\frac{1}{2^{n}}]\)-morphism in a "reversible" manner. This will allow us to extend the completeness result to all the restrictions \(\mathbf{SOP}[\frac{1}{2^{n}}]\). This is particularly interesting as the phase gates with dyadic multiples of \(\pi\), used in particular in the quantum Fourier transform, belong in these fragments:
\[R_{Z}\left(p\frac{\pi}{2^{k}}\right):=\sum_{y_{0}}e^{2i\pi\cdot\frac{p\,y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\!\langle y_{0}|\]
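For instance (a routine instantiation of the above, assuming the \(\operatorname{diag}(1,e^{i\theta})\) convention for \(R_{Z}\)), the \(T\) gate is
\[T=R_{Z}\left(\frac{\pi}{4}\right)=\sum_{y_{0}}e^{2i\pi\frac{y_{0}}{8}}\left|y_{0}\right\rangle\!\langle y_{0}|\]
obtained with \(p=1\) and \(k=2\), and it lies in \(\mathbf{SOP}[\frac{1}{2^{3}}]\).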
### Ascending the Dyadic Levels
These transformations between restrictions of \(\mathbf{SOP}\) are more easily defined on \(\mathbf{SOP}\)-morphisms of a particular shape, namely, when their phase polynomial is reduced to a single monomial. Because of this, we show how a \(\mathbf{SOP}\)-morphism can be turned into a composition of these.
**Lemma 6.1**.: _Let \(P=\sum m_{i}\in\mathbb{R}[X_{1},\dots,X_{k}]/(X_{i}^{2}-X_{i})\), and \(t=s\sum e^{2i\pi P}\left|{\vec{O}}\right\rangle\!\left\langle{}{\vec{I}}\right|\). Then:_
\[\left(s\sum\left|\vec{O}\right\rangle\!\langle y_{0},...,y_{k}|\right)\circ\left(\sum e^{2i\pi m_{1}}\left|y_{0},...,y_{k}\right\rangle\!\langle y_{0},...,y_{k}|\right)\circ\dots\circ\left(\sum e^{2i\pi m_{\ell}}\left|y_{0},...,y_{k}\right\rangle\!\langle y_{0},...,y_{k}|\right)\circ\left(\sum\left|y_{0},...,y_{k}\right\rangle\!\left\langle\vec{I}\right|\right)\;\xrightarrow[\text{HH}]{}\;t\]
Proof.: Let us start by composing the two first diagonal terms:
\[\left(\sum e^{2i\pi m_{1}}\,|y_{0},...,y_{k}\rangle\!\langle y_{0},...,y_{k}|\right)\circ\left(\sum e^{2i\pi m_{2}}\,|y_{0},...,y_{k}\rangle\!\langle y_{0},...,y_{k}|\right)\] \[=\frac{1}{2^{k+1}}\sum e^{2i\pi\left(m_{1}+m_{2}[y_{i}\gets y_{i}^{\prime}]+\frac{y_{0}y_{0}^{\prime\prime}+y_{0}^{\prime\prime}y_{0}^{\prime}}{2}+...+\frac{y_{k}y_{k}^{\prime\prime}+y_{k}^{\prime\prime}y_{k}^{\prime}}{2}\right)}\,|y_{0},...,y_{k}\rangle\!\langle y_{0}^{\prime},...,y_{k}^{\prime}|\] \[\xrightarrow[\text{HH}(y_{i}^{\prime}\gets y_{i})]{}\;\sum e^{2i\pi(m_{1}+m_{2})}\,|y_{0},...,y_{k}\rangle\!\langle y_{0},...,y_{k}|\]
Doing so repeatedly with all the diagonal terms gives
\[\sum e^{2i\pi P}\,|y_{0},...,y_{k}\rangle\!\langle y_{0},...,y_{k}|\]
Finally, applying the extremal terms (one at a time) and removing the newly created variables with (HH), just as we did for the diagonal terms, yields \(t\).
Notice that this decomposed form is not unique, as different orderings on the monomials of \(P\) define different orderings of the compositions. However, this will not matter.
A particular care is sadly needed for the overall scalar. Because of this, we will first focus on a slightly different notion of restriction of \(\mathbf{SOP}\).
**Definition 6.2** (\(\mathbf{SOP}[\frac{1}{2^{n}}]^{\prime}\)).: We define \(\mathbf{SOP}[\frac{1}{2^{n}}]^{\prime}\) as the restriction of \(\mathbf{SOP}\) to morphisms of the form: \(t=\frac{1}{2^{p}}\sum e^{2i\pi\frac{P}{2^{n}}}\left|\vec{O}\right\rangle\! \!\left\langle\vec{I}\right|\) where \(P\) has integer coefficients.
The only difference with \(\mathbf{SOP}[\frac{1}{2^{n}}]\) is that the overall scalar is now a power of \(\frac{1}{2}\) and not of \(\frac{1}{\sqrt{2}}\). There always exists a \(\mathbf{SOP}[\frac{1}{2^{n}}]^{\prime}\)-morphism that represents the same linear map as any \(\mathbf{SOP}[\frac{1}{2^{n}}]\)-morphism.
**Lemma 6.3**.: \(\left\llbracket\frac{1}{\sqrt{2}}\sum\limits_{y_{0}}e^{2i\pi\left(\frac{1}{8}+\frac{3}{4}y_{0}\right)}\right\rrbracket=1\)_. Hence:_
\[\forall t\in\mathbf{SOP}[\frac{1}{2^{n}}],\ \exists t^{\prime}\in\mathbf{SOP}[ \frac{1}{2^{\max(3,n)}}]^{\prime},\ [\![t]\!]=[\![t^{\prime}]\!]\]
Proof.: If \(t\in\mathbf{SOP}[\frac{1}{2^{n}}]\) and \(t\notin\mathbf{SOP}[\frac{1}{2^{n}}]^{\prime}\), then:
\[t^{\prime}:=t\otimes\left(\frac{1}{\sqrt{2}}\sum e^{2i\pi\left(\frac{1}{8}+ \frac{3}{4}y_{0}\right)}\right)\in\mathbf{SOP}[\frac{1}{2^{\max(3,n)}}]^{ \prime}\quad\text{ and }\quad[\![t^{\prime}]\!]=[\![t]\!]\,.\qed\]
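For concreteness, the scalar identity of Lemma 6.3 can be checked directly (a one-line computation we add here):
\[\frac{1}{\sqrt{2}}\sum_{y_{0}}e^{2i\pi\left(\frac{1}{8}+\frac{3}{4}y_{0}\right)}=\frac{1}{\sqrt{2}}\left(e^{\frac{i\pi}{4}}+e^{\frac{7i\pi}{4}}\right)=\frac{1}{\sqrt{2}}\cdot 2\cos\frac{\pi}{4}=1\]
so tensoring with this term changes the semantics in no way, at the cost of possibly raising the dyadic level to \(\max(3,n)\).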
We can now define the family of maps that will link the different levels of the "dyadic levels":
**Definition 6.4**.: For any \(k\geq 1\), we define the functor \(\lfloor\cdot\rfloor_{k}:\mathbf{SOP}[\frac{1}{2^{k+1}}]^{\prime}\to\mathbf{SOP}[\frac{1}{2^{k}}]^{\prime}\), first for morphisms \(t=s\sum e^{2i\pi\frac{\ell}{2^{k+1}}y_{i_{1}}...y_{i_{q}}}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\) with phase polynomial of size \(0\) or \(1\):
\[t\mapsto\begin{cases}s\sum e^{2i\pi\frac{\ell/2}{2^{k}}y_{i_{1}}...y_{i_{q}}} \left|\vec{O},y^{\prime}\right\rangle\!\!\left\langle\vec{I},y^{\prime}\right| =t\otimes id&\text{ if }\ell\bmod 2=0\\ s\sum e^{2i\pi\frac{y_{i_{1}}...y_{i_{q}}}{2^{k}}((\ell-1)/2+y^{\prime})}\left| \vec{O},y^{\prime}\right\rangle\!\!\left\langle\vec{I},y^{\prime}\!\oplus y_{ i_{1}}...y_{i_{q}}\right|&\text{ if }\ell\bmod 2=1\end{cases}\]
The functor is then extended to any \(\mathbf{SOP}[\frac{1}{2^{k+1}}]^{\prime}\)-morphism by the decomposition of Lemma 6.1 (and given a particular ordering on the monomials of the phase polynomial).
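As a small worked instance (ours, using the \(T\) gate from the dyadic phase gates above): for \(t=\sum e^{2i\pi\frac{y_{0}}{8}}\left|y_{0}\right\rangle\!\langle y_{0}|\in\mathbf{SOP}[\frac{1}{2^{3}}]^{\prime}\) we have a single monomial with \(\ell=1\) and \(k=2\), so the odd case applies and
\[\lfloor t\rfloor_{2}=\sum e^{2i\pi\frac{y_{0}y^{\prime}}{4}}\left|y_{0},y^{\prime}\right\rangle\!\!\left\langle y_{0},y^{\prime}\oplus y_{0}\right|\in\mathbf{SOP}[\tfrac{1}{2^{2}}]^{\prime}.\]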
Since \(\lfloor\cdot\rfloor_{k}\) is defined to be a functor, we have \(\lfloor\cdot\circ\cdot\rfloor_{k}=\lfloor\cdot\rfloor_{k}\circ\lfloor\cdot\rfloor_{k}\). We can show that the ordering of the monomials has no real importance. Indeed, suppose \(t_{1}=\sum e^{2i\pi\frac{\ell_{1}}{2^{k+1}}y_{i_{1}}\cdots y_{i_{q}}}\left|\vec{y}\right\rangle\!\!\left\langle\vec{y}\right|\) and \(t_{2}=\sum e^{2i\pi\frac{\ell_{2}}{2^{k+1}}y_{j_{1}}\cdots y_{j_{r}}}\left|\vec{y}\right\rangle\!\!\left\langle\vec{y}\right|\). Then: \(\lfloor t_{1}\circ t_{2}\rfloor_{k}=\lfloor t_{2}\circ t_{1}\rfloor_{k}\) quite obviously when either \(\ell_{1}\bmod 2=0\) or \(\ell_{2}\bmod 2=0\), but also when \(\ell_{1}\bmod 2=\ell_{2}\bmod 2=1\):
\[\lfloor t_{1}\circ t_{2}\rfloor_{k}\xrightarrow[\text{HH}]{}\sum e^{2i\pi\left(\begin{array}{c}\frac{y_{i_{1}}\cdots y_{i_{q}}}{2^{k}}((\ell_{1}-1)/2+y^{\prime})+\frac{y_{j_{1}}\cdots y_{j_{r}}}{2^{k}}((\ell_{2}-1)/2+y^{\prime})\\ +\frac{y_{i_{1}}\cdots y_{i_{q}}y_{j_{1}}\cdots y_{j_{r}}}{2^{k}}(1-2y^{\prime})\end{array}\right)}\left|\vec{y},y^{\prime}\right\rangle\!\!\left\langle\vec{y},y^{\prime}\oplus y_{i_{1}}\cdots y_{i_{q}}\oplus y_{j_{1}}\cdots y_{j_{r}}\right|\]
which is symmetric under exchanging \((y_{i_{1}}\cdots y_{i_{q}},\ell_{1})\) and \((y_{j_{1}}\cdots y_{j_{r}},\ell_{2})\), so the same term is reached from \(\lfloor t_{2}\circ t_{1}\rfloor_{k}\).
3. We now need to define \(\psi_{k}\). We are going to define it first on scalars, and on the basis \((1,e^{i\frac{\pi}{2^{k}}})\): \[\psi_{k}(1):=I_{2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\qquad\quad\text{and}\qquad\quad\psi_{k}(e^{i\frac{\pi}{2^{k}} }):=X_{k}=\begin{pmatrix}0&1\\ e^{i\frac{\pi}{2^{k-1}}}&0\end{pmatrix}\] By linearity, \(\psi_{k}\) is defined on all elements of \(\mathbb{Z}[\frac{1}{2},e^{i\frac{\pi}{2^{k}}}]\). We then naturally extend this definition to any matrix over these elements. Formally: \(\psi_{k}:A+Be^{i\frac{\pi}{2^{k}}}\mapsto A\otimes I_{2}+B\otimes X_{k}\) where \(A+Be^{i\frac{\pi}{2^{k}}}\) is the aforementioned decomposition extended to matrices. One can check that \(\psi_{k}\) is a homomorphism, i.e. \(\psi_{k}(.+.)=\psi_{k}(.)+\psi_{k}(.)\) and \(\psi_{k}(.\circ.)=\psi_{k}(.)\circ\psi_{k}(.)\). It remains to show that \(\llbracket\cdot\rrbracket_{k}\rrbracket=\psi_{k}\left(\llbracket\cdot \rrbracket\right)\). Since \(\psi_{k}\) is a homomorphism, it is enough to show the result on the terms in the decomposed form of Lemma 6.1. Let \(t=s\sum e^{2i\pi\frac{\ell}{2^{k+1}}y_{i_{1}}\cdots y_{iq}}\left|\vec{G} \right\rangle\!\!\left\langle\vec{I}\right|\) be such a term. If \(\ell\bmod 2=0\), then \(\llbracket t\rrbracket\in\mathcal{M}(\mathbb{Z}[\frac{1}{2},e^{i\frac{\pi}{2^{k -1}}}])\) so \(\psi_{k}(\llbracket t\rrbracket)=\llbracket t\rrbracket\otimes I_{2}\) and: \[\llbracket t\rrbracket_{k}\rrbracket=\left\llbracket s\sum e^{2i\pi\frac{\ell/ 2}{2^{k}}y_{i_{1}}\cdots y_{iq}}\left|\vec{G},y^{\prime}\right\rangle\!\! \left\langle\vec{I},y^{\prime}\right|\right\rrbracket=\llbracket t \rrbracket\otimes I_{2}.\] If \(\ell\bmod 2=1\), then: \[\llbracket t\rrbracket=se^{i\frac{\pi}{2^{k}}}\sum_{y_{i_{1}}\cdots y_{iq_{q}} =1}e^{2i\pi\frac{(\ell-1)/2}{2^{k}}}\left|\vec{G}\right\rangle\!\!\left\langle \vec{I}\right|+s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=0}\left|\vec{G}\right\rangle \!\!\left\langle\vec{I}\right|\] so: \[\psi_{k}(\llbracket t\rrbracket)=\left(s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=1}e^ {2i\pi\frac{(\ell-1)/2}{2^{k}}}\left|\vec{G}\right\rangle\!\!\left\langle\vec {I}\right|\right)\otimes X_{k}+\left(s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=0} \left|\vec{G}\right\rangle\!\!\left\langle\vec{I}\right|\right)\otimes I_{2}\] and \[\llbracket t\rrbracket_{k}\rrbracket=s\sum e^{2i\pi\frac{y_{i_{1}} \cdots y_{iq_{q}}}{2^{k}}((\ell-1)/2+y^{\prime})}\left|\vec{G},y^{\prime} \right\rangle\!\!\left\langle\vec{I},y^{\prime}\oplus y_{i_{1}}...y_{iq_{q}}\right|\] \[=s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=1}e^{2i\pi\frac{(\ell-1)/2+y^ {\prime}}{2^{k}}}\left|\vec{G},y^{\prime}\right\rangle\!\!\left\langle\vec{I},y^{\prime}\oplus 1\right|+s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=0}\left|\vec{G},y^{ \prime}\right\rangle\!\!\left\langle\vec{I},y^{\prime}\right|\] \[=\left(s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=1}e^{2i\pi\frac{(\ell-1) /2}{2^{k}}}\left|\vec{G}\right\rangle\!\!\left\langle\vec{I}\right|\right) \otimes X_{k}+\left(s\sum_{y_{i_{1}}\cdots y_{iq_{q}}=0}\left|\vec{G}\right \rangle\!\!\left\langle\vec{I}\right|\right)\otimes I_{2}=\psi_{k}(\llbracket t \rrbracket)\qed\]
### Going Back
We now show how to reverse the functors \(\lfloor\cdot\rfloor_{k}\).
**Definition 6.6**.: For any \(k\geq 1\), we define the (partial) map \(\llbracket\cdot\rrbracket_{k}:\mathbf{SOP}[\frac{1}{2^{k}}]^{\prime}\to \mathbf{SOP}[\frac{1}{2^{k+1}}]^{\prime}\) as:
\[\forall t:n+1\to m+1\in\mathbf{SOP}[\tfrac{1}{2^{k}}]^{\prime},\ \lceil t\rceil_{k}:=(id_{m}\otimes\langle 0 \rvert)\circ t\circ(id_{n}\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0} \right\rangle)\]
Notice that \(\llbracket\cdot\rrbracket_{k}\) can only be applied on morphisms that have at least one input and one output.
\(\lceil\cdot\rceil_{k}\) reverses the action of \(\lfloor\cdot\rfloor_{k}\) (up to some rewrites):
**Proposition 6.7**.: \(\lceil\lfloor\cdot\rfloor_{k}\rceil_{k}\underset{\mathrm{TH}}{\sim}(\cdot)\) _and \(t_{1}\underset{\mathrm{TH}}{\sim}t_{2}\implies\lceil t_{1}\rceil_{k}\underset{\mathrm{TH}}{\sim}\lceil t_{2}\rceil_{k}\) for any two terms \(t_{1},t_{2}\)._
Proof.: Again, we can use the decomposition given in Lemma 6.1. We can show that if \(t=s\sum e^{2i\pi\frac{\ell}{2^{k+1}}y_{i_{1}}...y_{i_{q}}}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\), then \(\lfloor t\rfloor_{k}\circ\left(id_{n}\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\right)\underset{\mathrm{TH}}{\sim}t\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\):
If \(\ell\bmod 2=0\), then \(\lfloor t\rfloor_{k}=t\otimes id\) so \(\lfloor t\rfloor_{k}\circ\left(id_{n}\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\right)\underset{\mathrm{TH}}{\sim}t\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\).
If \(\ell\bmod 2=1\), then:
\[\lfloor t\rfloor_{k}\circ\left(id_{n}\otimes\sum e^{2i\pi\frac{y_{0}}{2^{k+1}}}\left|y_{0}\right\rangle\right)=\frac{s}{2}\sum e^{2i\pi\left(\frac{y_{i_{1}}...y_{i_{q}}}{2^{k}}((\ell-1)/2+y^{\prime})+\frac{y^{\prime}+y_{i_{1}}...y_{i_{q}}+y_{0}}{2}y^{\prime\prime}+\frac{y_{0}}{2^{k+1}}\right)}\left|\vec{O},y^{\prime}\right\rangle\!\!\left\langle\vec{I}\right|\]
_(the remaining rewriting steps of this computation are omitted in this extraction)_
However, adding a single rule:
\[\sum_{\vec{y}}e^{2i\pi\left(\frac{1}{8}+\frac{3}{4}y_{0}+R\right)}\left|\vec{O} \right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[y_{0}\not\in\text{Var} (R,\vec{O},\vec{I})]{}\sqrt{2}\sum_{\vec{y}\setminus\{y_{0}\}}e^{2i\pi R} \left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\] ( \[\sqrt{2}\] )
fixes this caveat. This rule can also be recovered from the more general one:
\[\sum_{\vec{y}}e^{2i\pi\left(\frac{y_{0}}{4}+\frac{y_{0}}{2}\widehat{Q}+R \right)}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|\xrightarrow[y_ {0}\not\in\text{Var}(Q,R,\vec{O},\vec{I})]{}\sqrt{2}\sum_{\vec{y}\setminus\{y _{0}\}}e^{2i\pi\left(\frac{1}{8}-\frac{1}{4}\widehat{Q}+R\right)}\left|\vec{O} \right\rangle\!\!\left\langle\vec{I}\right|\] ( \[\omega\] )
which was already used in [1, 2, 3] to deal with the Clifford fragment of quantum mechanics.
With this additional rule at hand, we can derive the general completeness theorem:
**Theorem 6.10** (Completeness of \(\mathbf{SOP}[\frac{1}{2^{k+1}}]/\underset{\text{TH}^{\prime}}{\sim}\)).: _Let us write \(\xrightarrow[\text{TH}^{\prime}]{}:=\xrightarrow[\text{TH}]{}+\{(\sqrt{2})\}\). Then: \(\forall t_{1},t_{2}\in\mathbf{SOP}[\frac{1}{2^{k+1}}],\ [\![t_{1}]\!]=[\![t_{2}]\!]\iff t_{1}\underset{\text{TH}^{\prime}}{\sim}t_{2}\)_
Proof.: Let \(t_{1},t_{2}\in\mathbf{SOP}[\frac{1}{2^{k+1}}]\) such that \([\![t_{1}]\!]=[\![t_{2}]\!]\). Let us also write:
\[t_{\sqrt{2}}:=\frac{1}{\sqrt{2}}\sum e^{2i\pi\left(\frac{1}{8}+\frac{3}{4}y_{0 }\right)}\]
We define \(t_{i}^{\prime}\) as:
\[t_{i}^{\prime}:=\begin{cases}t_{i}&\text{ if }t_{i}\in\mathbf{SOP}[\frac{1}{2^{k+ 1}}]^{\prime}\\ t_{i}\otimes t_{\sqrt{2}}&\text{ if }t_{i}\notin\mathbf{SOP}[\frac{1}{2^{k+1}}]^{ \prime}\end{cases}.\]
It is easy to check that \(t_{i}^{\prime}\in\mathbf{SOP}[\frac{1}{2^{\max(3,k+1)}}]^{\prime}\) and that \(t_{i}\underset{\text{TH}}{\sim}t_{i}^{\prime}\). By Theorem 6.8:
\[t_{1}\underset{\text{TH}}{\sim}t_{1}^{\prime}\underset{\text{TH}}{\sim}t_{2}^ {\prime}\underset{\text{TH}}{\sim}t_{2}\]
We hence have completeness for all dyadic fragments of quantum computation. By taking their union, we can get completeness for the "whole dyadic fragment".
**Definition 6.11**.: Let \(\mathbf{SOP}[\mathbb{D}]:=\bigcup\limits_{k=1}^{\infty}\mathbf{SOP}[\frac{1}{2 ^{k}}]\) be the whole dyadic fragment of quantum computation.
**Corollary 6.12** (Completeness of \(\mathbf{SOP}[\mathbb{D}]/\underset{\text{TH}^{\prime}}{\sim}\)).:
\[\forall t_{1},t_{2}\in\mathbf{SOP}[\mathbb{D}],\ [\![t_{1}]\!]=[\![t_{2}]\!]\iff t_{1}\underset{\text{TH}^{\prime}}{\sim}t_{2}\]
## 7. Summing and Concatenating \(\mathbf{SOP}\)-Morphisms
We show in this section two interesting constructions that allow us to respectively sum two \(\mathbf{SOP}\)-morphisms or "concatenate" them, something that is not primitively doable in \(\mathbf{SOP}\) or more generally in gate-based quantum computation, but necessary when considering Hamiltonian-based computation [10]. These are however well suited to the dyadic fragments, as they can be performed entirely inside them.
To do so, we need a notion of controlled \(\mathbf{SOP}\)-morphism, inferred from [10] and made systematic in the graphical framework in [10].
**Definition 7.1**.: A **SOP**-morphism \(t:n+1\to m\) is called a _controlled morphism_ if:
\[\llbracket t\circ(|0\rangle\otimes id_{n})\rrbracket=\sum_{\vec{y}\in\{0,1\}^{n+ m}}|y_{0},...,y_{m}\rangle\!\langle y_{m+1},...,y_{m+n}|=\llbracket H^{n}_{m}(1)\rrbracket\]
where \(H^{n}_{m}(1)\) is the H-spider from **ZH** with parameter \(1\). It will be used as a shortcut notation to represent the linear map \(\sum_{\vec{y}\in\{0,1\}^{n+m}}|y_{0},...,y_{m}\rangle\!\langle y_{m+1},...,y_{ m+n}|\) or equivalently the **SOP**-morphism \(\sum_{\vec{y}}|y_{0},...,y_{m}\rangle\!\langle y_{m+1},...,y_{m+n}|\).
In a controlled morphism, the rightmost input is called the _control input_. We also call the morphism \(t\circ(|1\rangle\otimes id_{n})\) the _controlee_.
**Example 7.2**.: \(t:=\frac{1}{2}\sum e^{2i\pi(\frac{1}{2}y_{0}y_{1}y_{6}+\frac{1}{2}y_{1}y_{2}y _{6})}\,|y_{0}\rangle\!\langle y_{1},y_{2}|\) is a controlled morphism, as \(\llbracket t\rrbracket=\langle 0|\otimes\llbracket H^{1}_{1}(1)\rrbracket+ \langle 1|\otimes id_{1}\).
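To see this concretely (a direct check, spelled out here for convenience): substituting \(y_{1}=0\) (the input picked out by \(|0\rangle\otimes id_{1}\)) makes the phase vanish, and the sum over \(y_{6}\) contributes a factor \(2\) that cancels the \(\frac{1}{2}\), so \(\llbracket t\circ(|0\rangle\otimes id_{1})\rrbracket=\sum_{y_{0},y_{2}}|y_{0}\rangle\!\langle y_{2}|=\llbracket H_{1}^{1}(1)\rrbracket\); substituting \(y_{1}=1\), the sum \(\sum_{y_{6}}e^{2i\pi\frac{(y_{0}+y_{2})y_{6}}{2}}\) equals \(2\) when \(y_{0}=y_{2}\) and \(0\) otherwise, so \(\llbracket t\circ(|1\rangle\otimes id_{1})\rrbracket=\sum_{y_{0}}|y_{0}\rangle\!\langle y_{0}|=id_{1}\).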
### The Constructions
Using this notion of controlled morphism, there exists a construction that will allow us to perform a sum of terms. In this construction, we need in particular:
\[t_{+}:=\frac{1}{2}\sum e^{2i\pi(\frac{1}{2}y_{0}y_{1}y_{3})}\,|y_{0},y_{1} \rangle\!\langle y_{0}\oplus y_{1}|\]
and the family of morphisms:
\[\mathrm{cp}_{n}:n\to 2n:=\sum_{\vec{y}}|\vec{y},\vec{y}\rangle\!\langle\vec{y}|\]
We also need the following identities:
**Lemma 7.3**.: \(\llbracket H^{n_{1}}_{m_{1}}(1)\rrbracket\otimes\llbracket H^{n_{2}}_{m_{2}}(1 )\rrbracket=\llbracket H^{n_{1}+n_{2}}_{m_{1}+m_{2}}(1)\rrbracket\)__
Proof.: This is obvious from the fact that:
\[\llbracket H^{n}_{m}(1)\rrbracket=\sum_{\vec{y}\in\{0,1\}^{n+m}}|y_{0},...,y_{ m}\rangle\!\langle y_{m+1},...,y_{m+n}|\qed\]
**Lemma 7.4**.: \((\llbracket H^{n}_{0}(1)\rrbracket\otimes\.\ )\circ\llbracket\mathrm{cp}_{n}\rrbracket=(.)\)__\((\.\ \otimes\llbracket H^{n}_{0}(1)\rrbracket)\circ\llbracket\mathrm{cp}_{n}\rrbracket=(.)\)__
\[\llbracket\mathrm{cp}_{m}\rrbracket^{\dagger}\circ\big{(}\llbracket H^{0}_{m}( 1)\rrbracket\otimes\.\ \big{)}=(.)\hskip 56.905512pt\llbracket\mathrm{cp}_{m}\rrbracket^{ \dagger}\circ\big{(}\.\ \otimes\ \llbracket H^{0}_{m}(1)\rrbracket\big{)}=(.)\]
Proof.: The first equation is obtained by:
\[(\llbracket H^{n}_{0}(1)\rrbracket\otimes\.\ )\circ\llbracket\mathrm{cp}_{n} \rrbracket=\left(\sum_{\vec{y}_{1},\vec{y}_{2}}|\vec{y}_{2}\rangle\!\langle \vec{y}_{1},\vec{y}_{2}|\right)\circ\left(\sum_{\vec{y}}|\vec{y},\vec{y} \rangle\!\langle\vec{y}|\right)=\sum_{\vec{y}}|\vec{y}\rangle\!\langle\vec{y}| =id_{n}\]
The other three equations can be obtained similarly.
We may now build, from two controlled morphisms, a third controlled morphism whose controlee is the sum of the two first controlees:
**Proposition 7.5**.: _Let \(t_{1},t_{2}:n+1\to m\) be two controlled morphisms. We define:_
\[t:=\mathrm{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ(id_{1}\otimes \sigma_{1,n}\otimes id_{n})\circ(t_{+}\otimes\mathrm{cp}_{n})\]
_Then:_
\[\llbracket t\rrbracket=\langle 0|\otimes\llbracket H^{n}_{m}(1)\rrbracket+ \langle 1|\otimes\Big{(}\llbracket t_{1}\circ(|1\rangle\otimes id_{n})\rrbracket+ \llbracket t_{2}\circ(|1\rangle\otimes id_{n})\rrbracket\Big{)}\]
Proof.: First, notice that \(t_{+}\circ|0\rangle\xrightarrow[\text{HH}]{}|0,0\rangle\) and \(t_{+}\circ|1\rangle\xrightarrow[\text{HH}]{}\sum|y_{0},y_{0}\oplus 1\rangle\).
Then:
\[\llbracket t\circ(|0\rangle\otimes id_{n})\rrbracket =\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ( id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ((t_{+}\circ|0\rangle)\otimes \text{cp}_{n})\rrbracket\] \[=\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ( id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ(|0,0\rangle\otimes\text{cp}_{n})\rrbracket\] \[=\llbracket\text{cp}_{m}\rrbracket^{\dagger}\circ(\llbracket t_{1 }\circ(|0\rangle\otimes id_{n})\rrbracket\otimes\llbracket t_{2}\circ(|0 \rangle\otimes id_{n})\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\] \[=\llbracket\text{cp}_{m}\rrbracket^{\dagger}\circ(\llbracket H_{ m}^{n}(1)\rrbracket\otimes\llbracket H_{m}^{n}(1)\rrbracket)\circ\llbracket\text{cp}_{n} \rrbracket=\llbracket H_{m}^{n}(1)\rrbracket\]
and:
\[\llbracket t\circ(|1\rangle\otimes id_{n})\rrbracket =\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ (id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ((t_{+}\circ|1\rangle)\otimes \text{cp}_{n})\rrbracket\] \[=\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ (id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ\left(\left(\sum|y_{0},y_{0} \oplus 1\rangle\right)\otimes\text{cp}_{n}\right)\rrbracket\] \[=\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ (id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ(|0,1\rangle\otimes\text{cp}_{ n})\rrbracket\] \[\qquad\qquad\qquad\qquad+\left\llbracket\text{cp}_{m}^{\dagger} \circ(t_{1}\otimes t_{2})\circ(id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ( |1,0\rangle\otimes\text{cp}_{n})\right\rrbracket\] \[=\llbracket\text{cp}_{m}\rrbracket^{\dagger}\circ(\llbracket t _{1}\circ(|0\rangle\otimes id_{n})\rrbracket\otimes\llbracket t_{2}\circ(|1 \rangle\otimes id_{n})\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\] \[\qquad\qquad\qquad\qquad+\llbracket\text{cp}_{m}\rrbracket^{ \dagger}\circ(\llbracket t_{1}\circ(|1\rangle\otimes id_{n})\rrbracket \otimes\llbracket t_{2}\circ(|0\rangle\otimes id_{n})\rrbracket)\circ \llbracket\text{cp}_{n}\rrbracket\] \[=\llbracket t_{2}\circ(|1\rangle\otimes id_{n})\rrbracket)+ \llbracket t_{1}\circ(|1\rangle\otimes id_{n})\rrbracket\]
**Example 7.6**.: \(t_{1}=\frac{1}{2}\sum e^{2i\pi\left(\frac{1}{2}y_{0}y_{1}y_{6}+\frac{1}{2}y_{ 1}y_{2}y_{6}\right)}\,|y_{0}\rangle\!\langle y_{1},y_{2}|\) is a controlled morphism by Example 7.2. The morphism \(t_{2}=\sum|y_{0}\rangle\!\langle y_{1},y_{2}|\) is also obviously a controlled morphism. \(t_{1}\) controls \(\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\) and \(t_{2}\) controls \(\begin{pmatrix}1&1\\ 1&1\end{pmatrix}\). The above construction on \(t_{1}\) and \(t_{2}\) yields
\[t\xrightarrow[\text{HH}]{}\frac{1}{4}\sum e^{2i\pi\left(\frac{1}{2}y_{2}y_{11} y_{6}+\frac{1}{2}y_{2}y_{3}y_{4}+\frac{1}{2}y_{0}y_{2}y_{4}\right)}\,|y_{0}\rangle\! \langle y_{2}\oplus y_{6},y_{3}|\quad\text{which controls }\begin{pmatrix}2&1\\ 1&2\end{pmatrix}.\]
To get the controlee, it then suffices to apply \(|1\rangle\) on the first input of \(t\):
\[t\circ(|1\rangle\otimes id)\xrightarrow[\text{HH}]{}\frac{1}{2}\sum e^{2i\pi \left(\frac{1}{2}y_{3}y_{4}y_{6}+\frac{1}{2}y_{0}y_{4}+\frac{1}{2}y_{3}y_{4}+ \frac{1}{2}y_{0}y_{4}y_{6}\right)}\,|y_{0}\rangle\!\langle y_{3}|\]
The notion of controlled morphism also allows us to define a construction that will perform the concatenation of the controlees. Here again, we need a new building block:
\[t_{c}:=\sum|y_{0},y_{0}y_{1}\oplus y_{1},y_{0}y_{1}\rangle\!\langle y_{1}|\]
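Unfolding the definition (a quick check we add for readability): for \(y_{1}=0\) the last two outputs are \(0\) while \(y_{0}\) stays free, and for \(y_{1}=1\) the outputs are \((y_{0},y_{0}\oplus 1,y_{0})\), so
\[\llbracket t_{c}\rrbracket=\big(|0,0,0\rangle+|1,0,0\rangle\big)\langle 0|+\big(|0,1,0\rangle+|1,0,1\rangle\big)\langle 1|\]
which is exactly the behaviour used in the proof below.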
**Proposition 7.7**.: _Let \(t_{1},t_{2}:n+1\to m\) be two controlled morphisms. We define:_
\[t:=(id_{1}\otimes\text{cp}_{m}^{\dagger})\circ(id_{1}\otimes t_{1}\otimes t_{ 2})\circ(id_{2}\otimes\sigma_{1,n}\otimes id_{n})\circ(t_{c}\otimes\text{cp} _{n})\]
_Then:_
\[\llbracket t\rrbracket=\langle 0|\otimes\llbracket H_{m+1}^{n}(1) \rrbracket+\langle 1|\otimes\Big{(}\,|0\rangle\otimes\llbracket t_{1}\circ(|1 \rangle\otimes id_{n})\rrbracket+|1\rangle\otimes\llbracket t_{2}\circ(|1 \rangle\otimes id_{n})\rrbracket\Big{)}\]
Again, \(t\) in this construction is a controlled morphism. Its controlee is \(|0\rangle\!\otimes\!(t_{1}\)'s controlee)\(+|1\rangle\otimes(t_{2}\)'s controlee).
Proof.: Notice first that \(t_{c}\circ\left|0\right\rangle\xrightarrow[\text{\tiny{\rm{HH}}}]{}\sum\left|y_{0},0,0\right\rangle\), \((\left\langle 0\right|\otimes id_{2})\circ t_{c}\left|1\right\rangle\xrightarrow[ \text{\tiny{\rm{HH}}}]{}\left|1,0\right\rangle\) and \((\left\langle 1\right|\otimes id_{2})\circ t_{c}\left|1\right\rangle\xrightarrow[ \text{\tiny{\rm{HH}}}]{}\left|0,1\right\rangle\).
Let \(t\) be defined as above. First, we have:
\[\llbracket t\circ(\left|0\right\rangle\otimes id_{n})\rrbracket\] \[=\left[\left(id_{1}\otimes\text{cp}_{m}^{\dagger}\right)\circ( id_{1}\otimes t_{1}\otimes t_{2})\circ(id_{2}\otimes\sigma_{1,n}\otimes id _{n})\circ\left(\left(\sum\left|y_{0},0,0\right\rangle\right)\otimes\text{cp }_{n}\right)\right]\] \[=\left[\left(\sum\left|y_{0}\right\rangle\right)\otimes\left( \text{cp}_{m}^{\dagger}\circ((t_{1}\circ(\left|0\right\rangle\otimes id_{n}) )\otimes(t_{2}\circ(\left|0\right\rangle\otimes id_{n})))\circ\text{cp}_{n} \right)\right]\] \[=\left[H_{1}^{0}(1)\right]\otimes\left(\llbracket\text{cp}_{m} \rrbracket^{\dagger}\circ(\llbracket t_{1}\circ(\left|0\right\rangle\otimes id _{n})\rrbracket\otimes\llbracket t_{2}\circ(\left|0\right\rangle\otimes id_{n })\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\right)\] \[=\left[H_{1}^{0}(1)\right]\otimes\left(\llbracket\text{cp}_{m} \rrbracket^{\dagger}\circ(\llbracket H_{m}^{n}(1)\rrbracket\otimes\llbracket H _{m}^{n}(1)\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\right)= \llbracket H_{1}^{0}(1)\rrbracket\otimes\llbracket H_{m}^{n}(1)\rrbracket\] \[=\left[H_{m+1}^{n}(1)\right]\]
Then:
\[\llbracket(\left\langle 0\right|\otimes id_{m})\circ t\circ(\left|1\right\rangle\otimes id_{n})\rrbracket\] \[=\llbracket\text{cp}_{m}^{\dagger}\circ(t_{1}\otimes t_{2})\circ(id_{1}\otimes\sigma_{1,n}\otimes id_{n})\circ(((\left\langle 0\right|\otimes id_{2})\circ t_{c}\left|1\right\rangle)\otimes\text{cp}_{n})\rrbracket\] \[=\llbracket\text{cp}_{m}\rrbracket^{\dagger}\circ(\llbracket t_{1}\circ(\left|1\right\rangle\otimes id_{n})\rrbracket\otimes\llbracket t_{2}\circ(\left|0\right\rangle\otimes id_{n})\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\] \[=\llbracket\text{cp}_{m}\rrbracket^{\dagger}\circ(\llbracket t_{1}\circ(\left|1\right\rangle\otimes id_{n})\rrbracket\otimes\llbracket H_{m}^{n}(1)\rrbracket)\circ\llbracket\text{cp}_{n}\rrbracket\] \[=\llbracket t_{1}\circ(\left|1\right\rangle\otimes id_{n})\rrbracket\]
and similarly:
\[\llbracket(\left\langle 1\right|\otimes id_{m})\circ t\circ(\left|1\right\rangle\otimes id_{n})\rrbracket=\llbracket t_{2}\circ(\left|1\right\rangle\otimes id_{n})\rrbracket\qed\]
### Controlling SOP-Morphisms
Critical to the previous two constructions is the existence, for every morphism \(t\) of another morphism that controls it.
First, we need to be able, in all generality, to control arbitrary complex scalars.
**Definition 7.8**.: Let \(s\in\mathbb{C}\). If \(s\neq 0\), let \(n:=\lceil\log_{2}(\left|s\right|+1)\rceil\), \(\alpha:=\arccos\left(\frac{\left|s\right|}{2^{n}-1}\right)\) and \(\theta=\arg(s)\). Then \(s=(2^{n}-1)\cos\alpha e^{i\theta}\). We hence define:
\[\Lambda^{\prime}s:=\begin{cases}\left\langle 0\right|&\text{ if }s=0\\ \frac{1}{4}\sum e^{2i\pi\left(\frac{y_{1}\ldots y_{n}y^{\prime\prime}}{2}+ \frac{y^{\prime\prime}(1+y_{0})}{2}+\frac{2\alpha}{2\pi}y_{0}y^{\prime}+\frac{ \theta-\alpha}{2\pi}y_{0}\right)}\left\langle y_{0}\right|&\text{ if }s\neq 0\end{cases}\]
**Proposition 7.9**.: _For any \(s\in\mathbb{C}\), \(\llbracket\Lambda^{\prime}s\rrbracket=\left\langle 0\right|+s\left\langle 1\right|\)._
Proof.: If \(s=0\), the result is obvious. Otherwise, notice that \(\sum\limits_{\vec{y}\in\{0,1\}^{n}}e^{2i\pi\frac{y_{1}\ldots y_{n}}{2}}=2^{n}-2\). Then:
\[\llbracket\Lambda^{\prime}s\circ\left|0\right\rangle\rrbracket=\left[ \left\|\frac{1}{4}\sum\limits_{\vec{y}\setminus\{y_{0}\}}e^{2i\pi\left(\frac {y_{1}\ldots y_{n}y^{\prime\prime}}{2}+\frac{y^{\prime\prime}}{2}\right)} \right\|=\left[\left\|\frac{1}{2}\sum\limits_{\vec{y}\setminus\{y_{0},y^{ \prime}\}}e^{2i\pi\left(\frac{y_{1}\ldots y_{n}y^{\prime\prime}}{2}+\frac{y^{ \prime\prime}}{2}\right)}\right\|\right]\]
\[=\left\llbracket\frac{1}{2}\sum_{\vec{y}\setminus\{y_{0},y^{\prime},y^{\prime\prime}\}}e^{2i\pi(0)}\right\rrbracket-\left\llbracket\frac{1}{2}\sum_{\vec{y}\setminus\{y_{0},y^{\prime},y^{\prime\prime}\}}e^{2i\pi\left(\frac{y_{1}\cdots y_{n}}{2}\right)}\right\rrbracket=\frac{1}{2}(2^{n}-(2^{n}-2))=1\]
\[\left\llbracket\Lambda^{\prime}s\circ\left|1\right\rangle\right\rrbracket=\left\llbracket\frac{1}{4}\sum_{\vec{y}\setminus\{y_{0}\}}e^{2i\pi\left(\frac{y_{1}\cdots y_{n}y^{\prime\prime}}{2}+\frac{2\alpha}{2\pi}y^{\prime}+\frac{\theta-\alpha}{2\pi}\right)}\right\rrbracket=\frac{1}{4}(2^{n+1}-2)e^{i(\theta-\alpha)}(1+e^{2i\alpha})=(2^{n}-1)\cos\alpha\,e^{i\theta}=s\qed\]
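As a small numerical instance of the decomposition of Definition 7.8 (ours): for \(s=2\) we get \(n=\lceil\log_{2}(3)\rceil=2\), \(\theta=0\) and \(\alpha=\arccos\frac{2}{3}\), and indeed
\[(2^{2}-1)\cos\alpha\,e^{i\theta}=3\cdot\tfrac{2}{3}=2=s.\]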
This construction is interesting as it requires a fairly small number of variables. However, scalars that we can find in the dyadic fragments (\(\mathbf{SOP}[\frac{1}{2^{n}}]\)) may require their control to be outside them. For instance, in \(\Lambda^{\prime}\frac{1}{2}\), we need \(\alpha=\frac{\pi}{3}\), which is not a dyadic multiple of \(\pi\). We hence give another definition for the controlled scalar:
**Definition 7.10**.: Let \(s\in\mathbb{C}\). If \(s\neq 0\), let \(n:=\lceil\log_{2}(\left|s\right|)\rceil\), \(\alpha:=\arccos\left(\frac{\left|s\right|}{2^{n}}\right)\) and \(\theta=\arg(s)\). Then \(s=2^{n}\cos\alpha e^{i\theta}\). We define:
\[\Lambda s:=\begin{cases}\left\langle 0\right|&\text{if }s=0\\ \frac{1}{2^{n+1}}\sum e^{2i\pi\left(\sum\limits_{i=1}^{n}\frac{y_{i}y^{\prime}_{i}(1+y_{0})}{2}+\frac{2\alpha}{2\pi}y_{0}y^{\prime}+\frac{\theta-\alpha}{2\pi}y_{0}\right)}\left\langle y_{0}\right|&\text{if }n\geq 0\\ 2^{2n-1}\sum e^{2i\pi\left(\sum\limits_{i=1}^{-n}\frac{y_{i}y^{\prime}_{i}y_{0}}{2}+\frac{2\alpha}{2\pi}y_{0}y^{\prime}+\frac{\theta-\alpha}{2\pi}y_{0}\right)}\left\langle y_{0}\right|&\text{if }n<0\end{cases}\]
**Proposition 7.11**.: _For any \(s\in\mathbb{C}\), \(\llbracket\Lambda s\rrbracket=\left\langle 0\right|+s\left\langle 1\right|\)._
Proof.: If \(s=0\), the result is obvious.
Then, if \(n\geq 0\):
\[\Lambda s\circ\left|0\right\rangle\underset{\text{HH}}{\longrightarrow}\frac{ 1}{2^{n+1}}\sum e^{2i\pi\left(\sum\limits_{i=1}^{n}\frac{y_{i}y^{\prime}_{i}} {2}\right)}\underset{\text{Elim}}{\longrightarrow}\frac{1}{2^{n}}\sum e^{2i \pi\left(\sum\limits_{i=1}^{n}\frac{y_{i}y^{\prime}_{i}}{2}\right)}\underset{ \text{HH}^{n}}{\longrightarrow}1\]
and:
\[\Lambda s\circ\left|1\right\rangle\underset{\text{HH}}{\longrightarrow}\frac{ 1}{2^{n+1}}\sum e^{2i\pi\left(\sum\limits_{i=1}^{n}0+\frac{2\alpha}{2\pi}y^{ \prime}+\frac{\theta-\alpha}{2\pi}\right)}\underset{\text{Elim}^{2n}}{ \longrightarrow}2^{n-1}\sum e^{2i\pi\left(\frac{2\alpha}{2\pi}y^{\prime}+ \frac{\theta-\alpha}{2\pi}\right)}\]
So \(\llbracket\Lambda s\circ\left|1\right\rangle\rrbracket=2^{n-1}e^{i(\theta- \alpha)}(1+e^{2i\alpha})=2^{n}\cos\alpha e^{i\theta}=s\).
Now, if \(n<0\):
\[\Lambda s\circ\left|0\right\rangle\underset{\text{HH}}{\longrightarrow}2^{2 n-1}\sum e^{2i\pi(0)}\xrightarrow{}1\]
and
\[\Lambda s\circ\left|1\right\rangle\underset{\text{HH}}{\longrightarrow}2^{2 n-1}\sum e^{2i\pi\left(\sum\limits_{i=1}^{n}\frac{y_{i}y^{\prime}_{i}}{2}+\frac{2 \alpha}{2\pi}y^{\prime}+\frac{\theta-\alpha}{2\pi}\right)}\xrightarrow{} \xrightarrow{}2^{n-1}\sum e^{2i\pi\left(\frac{2\alpha}{2\pi}y^{\prime}+\frac{ \theta-\alpha}{2\pi}\right)}\]
so again \(\llbracket\Lambda s\circ\left|1\right\rangle\rrbracket=s\).
This time, if \(s=\frac{1}{\sqrt{2^{p}}}=2^{\lceil-\frac{p}{2}\rceil}\cos\frac{\pi}{4}(p\bmod 2)\), controlling \(s\) gives a morphism in a dyadic fragment of \(\mathbf{SOP}\).
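Concretely (our check of this claim): for \(p=1\), i.e. \(s=\frac{1}{\sqrt{2}}\), Definition 7.10 gives \(n=\lceil\log_{2}\frac{1}{\sqrt{2}}\rceil=0\) and
\[\alpha=\arccos\frac{1}{\sqrt{2}}=\frac{\pi}{4},\qquad 2^{0}\cos\frac{\pi}{4}=\frac{1}{\sqrt{2}}=s,\]
and \(\frac{\pi}{4}\) is a dyadic multiple of \(\pi\), so \(\Lambda\frac{1}{\sqrt{2}}\) does lie in a dyadic fragment, in contrast with \(\Lambda^{\prime}\frac{1}{2}\) above.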
We now give a general construction for controlling an arbitrary \(\mathbf{SOP}\)-term.
**Definition 7.12**.: Let \(t=s\sum\limits_{\vec{y}}e^{2i\pi P}\left|\vec{G}\right\rangle\!\!\left\langle \vec{I}\right|\in\mathbf{SOP}\). We define:
\[\widetilde{\Lambda}t:=\frac{1}{2^{|\vec{y}|}}\sum\limits_{\begin{subarray}{c}\vec{y},x_{0},\\ \vec{x}_{1},\vec{x}_{2}\end{subarray}}e^{2i\pi x_{0}P}\left|x_{0}\vec{O}\oplus\vec{x}_{1}\oplus x_{0}\vec{x}_{1}\right\rangle\!\!\left\langle x_{0},x_{0}\vec{I}\oplus\vec{x}_{2}\oplus x_{0}\vec{x}_{2}\right|\]
where
\[x_{0}\vec{O}\oplus\vec{x}_{1}\oplus x_{0}\vec{x}_{1}:=(x_{0}O_{1}\oplus x_{11 }\oplus x_{0}x_{11},...,x_{0}O_{m}\oplus x_{1m}\oplus x_{0}x_{1m}),\]
and similarly
\[x_{0}\vec{I}\oplus\vec{x}_{2}\oplus x_{0}\vec{x}_{2}:=(x_{0}I_{1}\oplus x_{21 }\oplus x_{0}x_{21},...,x_{0}I_{m+n}\oplus x_{2n}\oplus x_{0}x_{2n}).\]
We finally define:
\[\Lambda t:=\left(\Lambda(s2^{|\vec{y}|-n-m})\otimes\widetilde{\Lambda}t\right) \circ(\operatorname{cp}_{1}\otimes id_{n})\]
**Proposition 7.13**.: \(\Lambda t\) _is a controlled morphism, that controls \(t\), i.e.:_
\[\llbracket\Lambda t\rrbracket=\langle 0|\otimes\llbracket H_{m}^{n}(1) \rrbracket+\langle 1|\otimes\llbracket t\rrbracket\]
Proof.: We have
\[\llbracket\Lambda t\rrbracket\circ(|0\rangle\otimes id_{n}) =\left\llbracket\Lambda(s2^{|\vec{y}|-n-m})\circ|0\rangle \right\rrbracket\otimes\left\llbracket\widetilde{\Lambda}t\circ(|0\rangle \otimes id_{n})\right\rrbracket=1\otimes\frac{1}{2^{|\vec{y}|}}\sum\limits_{ \vec{y},\vec{x}1,\vec{x}_{2}}e^{2i\pi 0}\left|\vec{x}_{1}\right\rangle\!\!\left\langle \vec{x}_{2}\right|\] \[=\sum\limits_{\vec{x}1,\vec{x}_{2}}|\vec{x}_{1}\rangle\!\!\left\langle \vec{x}_{2}\right|=\llbracket H_{m}^{n}(1)\rrbracket\]
and
\[\llbracket\Lambda t\rrbracket\circ(|1\rangle\otimes id_{n}) =\left\llbracket\Lambda(s2^{|\vec{y}|-n-m})\circ|1\rangle \right\rrbracket\otimes\left\llbracket\widetilde{\Lambda}t\circ(|1\rangle \otimes id_{n})\right\rrbracket\] \[=\frac{s2^{|\vec{y}|}}{2^{n+m}}\otimes\frac{1}{2^{|\vec{y}|}}\sum \limits_{\vec{y},\vec{x}1,\vec{x}_{2}}e^{2i\pi P}\left|\vec{O}\right\rangle\! \!\left\langle\vec{I}\right|\] \[=\frac{s}{2^{n+m}}\sum\limits_{\vec{y},\vec{x}1,\vec{x}_{2}}e^{2i \pi P}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|=s\sum\limits_{ \vec{y}}e^{2i\pi P}\left|\vec{O}\right\rangle\!\!\left\langle\vec{I}\right|=t\]
Notice that if \(t\in\mathbf{SOP}[\frac{1}{2^{n}}]\), then \(\Lambda t\in\mathbf{SOP}[\frac{1}{2^{\max(3,n)}}]\). Notice also that, for \(t:n\to m\), only \(n+m+1\) additional variables are needed in \(\Lambda t\). However, the controlled scalar will require around \(2(|\vec{y}|-n-m+\log_{2}(|s|))\) variables, and the phase polynomial gets one degree higher. Hence, it may be useful to put additional information to use if we are provided some.
**Definition 7.14**.: Let \(t=s\sum\limits_{\vec{y}}e^{2i\pi P}\left|\vec{O}\right\rangle\!\!\left\langle \vec{I}\right|\in\mathbf{SOP}\). For any \(\vec{v}_{1}\in\{0,1\}^{m},\vec{v}_{2}\in\{0,1\}^{n}\) and \(\rho e^{i\theta}\neq 0\), we define:
\[\widetilde{\Lambda}_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho,\theta):=\]
\[\frac{1}{\rho 2^{n+m}}\sum e^{2i\pi\left(-\frac{\theta}{2\pi}+P+\frac{1}{2}\left(\vec{O}+\vec{x}_{1}+x_{0}\vec{x}_{1}+x_{0}\vec{v}_{1}\right)\cdot\vec{x}_{1}^{\prime}+\frac{1}{2}\left(\vec{I}+\vec{x}_{2}+x_{0}\vec{x}_{2}+x_{0}\vec{v}_{2}\right)\cdot\vec{x}_{2}^{\prime}\right)}\,|\vec{x}_{1}\rangle\!\langle x_{0}\oplus 1,\vec{x}_{2}|\]
and:
\[\Lambda_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho,\theta):=\left(\Lambda(s\rho e^ {i\theta})\otimes\widetilde{\Lambda}_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho, \theta)\right)\circ(\mathrm{cp}_{1}\otimes id_{n})\]
**Proposition 7.15**.: _For any \(\vec{v}_{1}\in\{0,1\}^{m},\vec{v}_{2}\in\{0,1\}^{n}\) and \(\rho e^{i\theta}\neq 0\) such that \(\langle\vec{v}_{1}|\,[\![t]\!]\,|\vec{v}_{2}\rangle=\rho e^{i\theta}\), \(\Lambda_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho,\theta)\) is a controlled morphism, that controls \(t\), i.e.:_
\[[\![\Lambda_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho,\theta)]\!]=\langle 0| \otimes[\![H_{m}^{n}(1)]\!]+\langle 1|\otimes[\![t]\!]\]
Proof.: Let \(t=s\sum_{\vec{y}}e^{2i\pi P}\left|\vec{O}\right\rangle\!\Big{\langle}\vec{I} \Big{|}\in\mathbf{SOP}\), \(\vec{v}_{1}\in\{0,1\}^{m},\vec{v}_{2}\in\{0,1\}^{n}\) and \(\rho e^{i\theta}\neq 0\) such that \(\langle\vec{v}_{1}|\,[\![t]\!]\,|\vec{v}_{2}\rangle=\rho e^{i\theta}\). We have:
\[\left\llbracket\widetilde{\Lambda}_{2}(\,t\,|\vec{v}_{1},\vec{v}_{2},\rho,\theta)\circ(|0\rangle\otimes id_{n})\right\rrbracket\]
_(the remainder of this verification is omitted in this extraction)_
In order to control \(t:n\to m\) in this version, we only need around \(2(n+m+\log_{2}(|s\rho e^{i\theta}|))\) additional variables, and the initial phase polynomial is simply added another polynomial. However, it requires some prior knowledge on the linear map that is represented by \(t\), namely, one of its coefficients. Another caveat of importance is that \(\frac{1}{\rho e^{i\theta}}\), which is needed in the definition of the control, will in general get us out of the dyadic fragments.
**Remark 7.16**.: In the definitions of \(\Lambda\) and \(\Lambda_{2}\) for **SOP**-terms, we used the control of scalars denoted \(\Lambda\). However, any other notion of control of scalars (like \(\Lambda^{\prime}\) for instance) would work.
## 8. Conclusion and Discussion
We have given a new rewrite system for the (balanced) Toffoli-Hadamard fragment of Sums-Over-Paths, and showed the induced equational theory to be complete. We then extended this rewrite strategy by adding a single new rewrite, which we then proved to be complete for any dyadic fragment of quantum computation. As expected from the universality of the fragments at hand, we do not get all the nice properties of the rewriting in the Clifford fragment. In particular, we showed that the rewrite strategies given above are not confluent, and that the size of the terms may grow exponentially when rules are applied carelessly. Whether one of the above two drawbacks can be removed by a different rewrite system remains an open question. In particular, we have exhibited two rules that, in order to be applied properly, require solving a hard but well studied problem, i.e. that of building Groebner bases. We wonder if this is a sufficient price to pay to get confluence. If not, it would be interesting to see if a completion of the rewrite strategy a la Knuth-Bendix is possible in this framework, and if such completion would eventually yield a finite set of rules (note that this would not necessarily make problems like circuit equivalence classically tractable, as the size of the phase polynomial may still grow exponentially quickly, and application of some rules may require solving hard problems).
On a more foundational note, we conjecture that the rewrite strategy \(\underset{\mathrm{TH}}{\longrightarrow}\) is minimal, i.e. that none of the rewrites can be derived from the others. The same question can be asked in **ZH**, but no proof (confirming or refuting it) has been provided yet. The graphical study of the rewrite rules of subsection 4.4 hint at directions to simplify the rules of **ZH** thanks to their interpretation in **SOP**, or to transport potential proofs of minimality between the two. The completeness result derived from that of the **ZH**, and this small study of how the rewrites translate as ZH transformations, really show how the two formalisms give different and complementary approaches to rewriting and simplifying representations of quantum processes.
Thanks to the proximity of **SOP** with the **ZH**-calculus and to the theory developed in the graphical setting to perform sums and concatenations of terms, we were able to transport these constructions in the **SOP** formalism. This proves very useful when analysing Hamiltonian-based quantum computation, which heavily relies on building large Hamiltonians by sums of smaller terms.
All the work presented here was in the case of _balanced_ sum-over-paths. Adding unbalancedness as in [1] allows for reductions of equivalent terms with fewer variables, at the cost of a larger space in which the morphisms live. While having fewer variables is crucial when performing simulation (which implies actually computing the term), the extra degree of freedom that amplitude polynomials offer also makes issues like confluence
potentially harder to tackle. We wonder if a proper unbalanced Sum-Over-Paths formalism can be stated for the Toffoli-Hadamard fragment and the dyadic fragments.
|
2306.03476 | Putting Humans in the Image Captioning Loop | Image Captioning (IC) models can highly benefit from human feedback in the
training process, especially in cases where data is limited. We present
work-in-progress on adapting an IC system to integrate human feedback, with the
goal to make it easily adaptable to user-specific data. Our approach builds on
a base IC model pre-trained on the MS COCO dataset, which generates captions
for unseen images. The user will then be able to offer feedback on the image
and the generated/predicted caption, which will be augmented to create
additional training instances for the adaptation of the model. The additional
instances are integrated into the model using step-wise updates, and a sparse
memory replay component is used to avoid catastrophic forgetting. We hope that
this approach, while leading to improved results, will also result in
customizable IC models. | Aliki Anagnostopoulou, Mareike Hartmann, Daniel Sonntag | 2023-06-06T07:50:46Z | http://arxiv.org/abs/2306.03476v1 | # Putting Humans in the Image Captioning Loop
###### Abstract
Image Captioning (IC) models can highly benefit from human feedback in the training process, especially in cases where data is limited. We present work-in-progress on adapting an IC system to integrate human feedback, with the goal to make it easily adaptable to user-specific data. Our approach builds on a base IC model pre-trained on the MS COCO dataset, which generates captions for unseen images. The user will then be able to offer feedback on the image and the generated/predicted caption, which will be augmented to create additional training instances for the adaptation of the model. The additional instances are integrated into the model using step-wise updates, and a sparse memory replay component is used to avoid catastrophic forgetting. We hope that this approach, while leading to improved results, will also result in customizable IC models.
## 1 Introduction
Image Captioning (IC) is the task of generating a natural language description for an image (Stefanii et al., 2021). State-of-the-art IC models are trained in the traditional offline setup, where large amounts of annotated training data are required (Zhou et al., 2020; Li et al., 2020; Wang et al., 2022). This requirement is impractical for models intended to caption user-specific images without large-scale annotations. Here, an _interactive_ framework can be used to efficiently adapt a model to new data based on user feedback (Ling and Fidler, 2017; Shen et al., 2019). By exploiting user feedback, models can be trained with less annotated data. Furthermore, interactivity renders models more user-friendly, and the interaction with the user often leads to more trust in the AI/ML-based system (Bussone et al., 2015; Guo et al., 2022).
In the following, we present our work-in-progress on extending an IC model to an interactive setup. Our approach is shown in figure 1. We start with a pretrained IC model (subsection 2.1), which is used to caption new images. The user provides feedback for these captions (subsection 2.2), which is then used to generate more training instances via data augmentation (subsection 2.3). These augmented instances are used to update the model incrementally. In order to retain past knowledge, we employ sparse memory replay (subsection 2.4).
In this project, we plan to address four research questions:
1. What type of user feedback is most useful and how can it be collected?
2. Which data augmentation strategies are most useful to maximize the effect of the user feedback on model performance?
3. How helpful is user interaction in the data augmentation process?
4. How can the feedback best be integrated into the training process?
## 2 Experimental setup
In this section we describe our benchmark strategy, as well as the work-in-progress on data augmentation, model update and evaluation methods, including the human-in-the-loop intersections. The modules described in sections 2.1 and 2.3 are implemented, while the implementation of the ones described in sections 2.2 and 2.4 is ongoing.
### Benchmark strategy
We experiment with a concrete implementation of the interactive approach outlined in Hartmann et al. (2022). As a starting point, we use a PyTorch implementation of the Show, Attend and Tell model (Xu et al., 2015). This architecture consists of a convolutional neural network (CNN) encoder, which is used to extract feature vectors, and a long-short term memory (LSTM) decoder, which generates a caption conditioned on these vectors with attention. The training strategy used is cross-entropy loss.
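To make the encoder-decoder structure concrete, below is a minimal PyTorch sketch of a single soft-attention decoding step in the spirit of Show, Attend and Tell; all module names, dimensions, and hyperparameters are illustrative assumptions rather than the exact implementation we build on.

```python
# Sketch of one soft-attention decoding step (Show, Attend and Tell style).
# Assumes pre-extracted CNN features of shape (batch, num_regions, feat_dim).
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    def __init__(self, feat_dim, hid_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hid_proj = nn.Linear(hid_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, h):
        # feats: (B, R, feat_dim), h: (B, hid_dim)
        e = self.score(torch.tanh(self.feat_proj(feats) + self.hid_proj(h).unsqueeze(1)))
        alpha = torch.softmax(e, dim=1)            # attention weights over image regions
        context = (alpha * feats).sum(dim=1)       # weighted feature (context) vector
        return context, alpha.squeeze(-1)

class DecoderStep(nn.Module):
    def __init__(self, vocab_size, emb=256, feat_dim=2048, hid=512, attn=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.attn = SoftAttention(feat_dim, hid, attn)
        self.lstm = nn.LSTMCell(emb + feat_dim, hid)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, prev_words, feats, h, c):
        context, _ = self.attn(feats, h)
        h, c = self.lstm(torch.cat([self.embed(prev_words), context], dim=1), (h, c))
        return self.out(h), h, c                   # logits trained with cross-entropy
```

At each step, the attention module re-weights the image regions, and the resulting context vector is concatenated with the previous word embedding before the LSTM cell; the output logits are compared to the ground-truth caption with cross-entropy loss, as described above.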
In addition, we consider an architecture which requires more supervision, namely the Meshed-Memory (M2) Transformer (Cornia et al., 2020). This model is based on the Transformer architecture proposed by Vaswani et al. (2017). In the M2 Transformer, the encoder is extended with "slots" for additional, a priori information (_memory_). Additionally, _meshed_ cross-attention is performed in the decoder, not only for the last encoding layer, but for all of them. This model requires the additional input of object detections. The model is trained on cross-entropy loss (for pre-training) and reinforcement learning (for fine-tuning).
To pre-train these base models, we use the MS COCO dataset (Lin et al., 2014). More specifically, we use the 2014 release, which contains 82,783 training and 40,504 validation images, with five captions per image. We make use of the Karpathy splits (Karpathy and Fei-Fei, 2017).
### Feedback collection
Given the prediction of the base model on a new image, we collect useful feedback from users for both modalities. This could refer to the correction of a _caption_, but also the drawing of a bounding box around the object on the _image_ that was described incorrectly. Additionally, we plan to experiment with feedback collection for the generated augmentations (see subsection 2.3).
Use case simulation: Ultimately, we want to apply this method in a real-life scenario where any user can provide feedback to adapt the model to their user-specific data. To simulate this feedback in this early stage, we currently work with an already available dataset, namely VizWiz (Gurari et al., 2020; Simons et al., 2020). VizWiz consists of 23,431 training images, 7,750 validation images and 8,000 test images (39,181 images in total). Each image is annotated with five captions. Since captions for the test set are not publicly available, we test on the validation set and use a small part of the training set as our validation set.
### Augmenting the feedback
Data augmentation, or synthetic data generation, is a family of techniques that take an initial dataset (often limited in size) and automatically generate more examples (Atliha and Sesok, 2020). Our plan is to augment user feedback in both modalities. Furthermore, we intend to create a novel joint augmentation method, as described below. Some example augmentations from the VizWiz dataset are shown in figure 2.
Text: For the captions, meaning-preserving operations will be used, since our goal is not to introduce noise to the data, but to end up with captions that are similar to the feedback provided by the user. We choose three methods:
1. Lexical substitution. We follow the EDA (Wei and Zou, 2019) implementation, which leverages WordNet. 10% of the tokens of each caption are substituted by a synonym. We generate three augmentations with this method.
2. Back-translation. We use the ArgosTranslate1 library, translating our English captions to Arabic or Spanish and back to English. As a result, two augmented examples are created.

3. Paraphrasing with a T5 model (Raffel et al., 2020), which is specifically fine-tuned for this purpose. With this method, we create five more augmented captions.

Identical augmentations are discarded. The first and third method could provide more augmented examples; we notice, however, that with more examples, the augmentation quality drops significantly, and the number of identical augmentations increases. The whole procedure results in about 10 augmentations per caption.

Figure 1: Our proposed pipeline. We pre-train our initial model on MS COCO. The model generates captions for new images, and the user gives feedback on the prediction (by correcting it or marking an area of interest on the image). This feedback is then augmented to create more training instances, and the user can evaluate the quality of the augmentations. The model is then updated accordingly, with a sparse memory replay in order to retain old knowledge.
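As an illustration of the first method, the following is a minimal sketch of WordNet-based lexical substitution in the spirit of EDA, assuming NLTK and its WordNet corpus are installed; the 10% substitution rate and the number of augmentations follow the description above, while function and variable names are illustrative.

```python
# Sketch of EDA-style synonym substitution; requires nltk.download("wordnet").
# Stop-word filtering is omitted here for brevity.
import random
from nltk.corpus import wordnet

def synonym_substitution(caption, ratio=0.1, n_aug=3):
    tokens = caption.split()
    n_swap = max(1, int(len(tokens) * ratio))
    augmentations = set()
    for _ in range(n_aug):
        new_tokens = tokens[:]
        positions = list(range(len(tokens)))
        random.shuffle(positions)
        swapped = 0
        for idx in positions:
            synonyms = {l.name().replace("_", " ")
                        for s in wordnet.synsets(tokens[idx])
                        for l in s.lemmas()} - {tokens[idx]}
            if synonyms:
                new_tokens[idx] = random.choice(sorted(synonyms))
                swapped += 1
            if swapped == n_swap:
                break
        candidate = " ".join(new_tokens)
        if candidate != caption:          # identical augmentations are discarded
            augmentations.add(candidate)
    return list(augmentations)

print(synonym_substitution("a man riding a bicycle down a busy street"))
```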
Image: We use the Albumentations (Buslaev et al., 2020) library for image augmentation. We use transformations like rotation, blur, optical distortion, grid distortion, and flip. An additional advantage of the Albumentations library is that it adjusts the bounding boxes with the augmentation. In this manner, feedback provided on the image can be retained.
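A minimal sketch of the image-side augmentation with Albumentations is given below; the specific transforms and probabilities are illustrative assumptions, while the bbox_params argument is what keeps a user-drawn bounding box aligned with the augmented image.

```python
# Sketch of image augmentation that preserves bounding-box feedback.
import albumentations as A
import numpy as np

transform = A.Compose(
    [
        A.Rotate(limit=15, p=0.5),
        A.Blur(blur_limit=3, p=0.3),
        A.OpticalDistortion(p=0.3),
        A.GridDistortion(p=0.3),
        A.HorizontalFlip(p=0.5),
    ],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["labels"]),
)

image = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)  # stand-in image
user_boxes = [(120, 80, 300, 260)]   # user-drawn box around the misdescribed object
augmented = transform(image=image, bboxes=user_boxes, labels=["object_of_interest"])
print(augmented["bboxes"])           # boxes are transformed together with the image
```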
Joint: We also plan to implement a joint augmentation method, which is based on CutMix (Yun et al., 2019) and proposed by Feng et al. (2021). The idea is to cut objects from different images and insert them in other images. Since this would change the content of the image, the caption should be adjusted accordingly - that is, by addition of a description of the inserted object.
Interaction with users: The examples resulting from the data augmentation step can be used as additional training examples right away. In addition, we consider user interaction with the augmented examples to assure their quality. More specifically, after user feedback for the prediction is processed and augmented, the user could choose to rank the augmentations (from best to worst), or evaluate them in terms of suitability (_good / bad_).
Figure 2: Augmentation examples for image, captions and the joint method. The original data points are from the VizWiz dataset.

### Model update and evaluation

In a real-life application scenario, the user will input images continuously, which means that the system has to be updated multiple times. In cases where a model is trained repeatedly on new data, _catastrophic forgetting_ (Kirkpatrick et al., 2017) can be observed, namely the degradation of model performance on older tasks when it is trained on new ones. We plan to tackle this problem with a continual or lifelong learning method, and more specifically with a sparse memory replay during training, adapting the idea of de Masson d'Autume et al. (2019): During training, some samples - experiences - are written into the memory. These past experiences are then sparsely replayed while the model is trained on new data.
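A minimal sketch of the sparse replay mechanism, loosely adapting the episodic memory idea of de Masson d'Autume et al. (2019); the write probability, replay interval, and replay batch size are illustrative assumptions.

```python
# Sketch of an episodic memory with sparse experience replay.
import random

class EpisodicMemory:
    def __init__(self, write_prob=0.2, replay_every=100, replay_batch=8):
        self.buffer = []
        self.write_prob = write_prob
        self.replay_every = replay_every
        self.replay_batch = replay_batch
        self.step = 0

    def maybe_write(self, example):
        # Write a fraction of the incoming (image, caption) pairs into memory.
        if random.random() < self.write_prob:
            self.buffer.append(example)

    def maybe_replay(self):
        # Every `replay_every` updates, return a small batch of past experiences
        # to be mixed into the current training step.
        self.step += 1
        if self.buffer and self.step % self.replay_every == 0:
            return random.sample(self.buffer, min(self.replay_batch, len(self.buffer)))
        return []
```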
In order to simulate the step-wise adaptation of the model to new data, we split the VizWiz dataset into parts of similar size, according to concepts contained in the captions. We follow a naive approach, collecting all noun phrases (NPs) from the captions, and grouping them according to their semantic similarity. We use k-means clustering (Hartigan and Wong, 1979) and pre-trained GloVe word embeddings (Pennington et al., 2014). All images with captions that contain NPs from the same cluster are then allocated to the same split.
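A minimal sketch of this concept-based splitting is shown below, assuming GloVe vectors are already loaded into a plain {word: vector} dictionary; the noun-phrase list and the number of clusters are illustrative assumptions.

```python
# Sketch of grouping noun phrases by semantic similarity with k-means over GloVe.
import numpy as np
from sklearn.cluster import KMeans

def embed_phrase(phrase, glove, dim=300):
    vecs = [glove[w] for w in phrase.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def split_by_concept(noun_phrases, glove, n_splits=5):
    X = np.stack([embed_phrase(p, glove) for p in noun_phrases])
    cluster_ids = KMeans(n_clusters=n_splits, n_init=10, random_state=0).fit_predict(X)
    splits = {k: [] for k in range(n_splits)}
    for phrase, cid in zip(noun_phrases, cluster_ids):
        splits[cid].append(phrase)
    return splits  # images whose captions contain NPs from the same cluster share a split
```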
We plan to train disjointly, namely treating each data split as a new task, which is one of the methods used both by Nguyen et al. (2019) (sequential class addition) and Del Chiaro et al. (2020) (disjoint procedure). Evaluation is carried out over individual classes or over all classes, each time the model is trained with a new class/task.
User evaluation: Apart from evaluating the approach with respect to model performance using automated performance metrics, we plan to evaluate its usefulness and usability for end-users in a human study.
## 3 Possible extensions
Beyond the proposed architecture, we consider some extensions for our pipeline. As mentioned in subsection 2.4, evaluation from users can potentially point to improvements or the need for the addition of extensions. While VizWiz, which we use as substitute data in the initial implementation and experimentation stages, constitutes a use case by itself, we do not adapt our implementation to this specific dataset (for example, by employing optical character recognition/detection), as we aim to develop an approach that is applicable to a broad range of user-specific data. Implementing such specific adaptations might further improve performance and can be added on top of our approach depending on the use-case.
Further work can focus on the choice of experiences stored in the memory. This can be done either by integrating the user in the sampling process, or by employing active learning techniques to find the most suitable experiences for future replay.
Last but not least, we can leverage the advantage of interactive systems, namely learning from fewer labeled instances, in cases where annotated data is limited. One such case is multilingual IC. For this reason, an extension of our architecture to support multiple languages looks promising.
## Acknowledgments
We thank the reviewers for their insightful comments and suggestions. The research was funded by the XAINES project (BMBF, 01IW20005).
|
2308.01915 | LOB-Based Deep Learning Models for Stock Price Trend Prediction: A
Benchmark Study | The recent advancements in Deep Learning (DL) research have notably
influenced the finance sector. We examine the robustness and generalizability
of fifteen state-of-the-art DL models focusing on Stock Price Trend Prediction
(SPTP) based on Limit Order Book (LOB) data. To carry out this study, we
developed LOBCAST, an open-source framework that incorporates data
preprocessing, DL model training, evaluation and profit analysis. Our extensive
experiments reveal that all models exhibit a significant performance drop when
exposed to new data, thereby raising questions about their real-world market
applicability. Our work serves as a benchmark, illuminating the potential and
the limitations of current approaches and providing insight for innovative
solutions. | Matteo Prata, Giuseppe Masi, Leonardo Berti, Viviana Arrigoni, Andrea Coletta, Irene Cannistraci, Svitlana Vyetrenko, Paola Velardi, Novella Bartolini | 2023-07-05T14:28:38Z | http://arxiv.org/abs/2308.01915v2 | # LOB-Based Deep Learning Models for Stock Price Trend Prediction: A Benchmark Study
###### Abstract
The recent advancements in Deep Learning (DL) research have notably influenced the finance sector. We examine the robustness and generalizability of fifteen state-of-the-art DL models focusing on Stock Price Trend Prediction (SPTP) based on Limit Order Book (LOB) data. To carry out this study, we developed LOBCAST, an open-source framework that incorporates data preprocessing, DL model training, evaluation and profit analysis. Our extensive experiments reveal that all models exhibit a significant performance drop when exposed to new data, thereby raising questions about their real-world market applicability. Our work serves as a benchmark, illuminating the potential and the limitations of current approaches and providing insight for innovative solutions.
## 1 Introduction
Predicting stock market prices is a complex endeavour due to myriad factors, including macroeconomic conditions and investor sentiment [1]. Nevertheless, professional traders and researchers usually forecast price movements by understanding key market properties, such as volatility or liquidity, and recognizing patterns to anticipate future market trends [2]. Effective mathematical models are essential for capturing complex market dependencies. The recent surge in artificial intelligence has led to significant work in using machine learning algorithms to predict future market trends [3; 4; 5]. Recent Deep Learning (DL) models have achieved over 88% in F1-score in predicting market trends in simulated settings using historical data [6]. However, replicating these performances in real markets is challenging, suggesting a possible _simulation-to-reality_ gap [7; 8].
In this paper, we benchmark the most recent and promising DL approaches to Stock Price Trend Prediction (SPTP) based on Limit Order Book (LOB) data, one of the most valuable information sources available to traders on the stock markets. Our benchmark evaluates their robustness and generalizability [9; 10; 11]. In particular, we assess the models' robustness by comparing the stated performance with our reproduced results on the same dataset FI-2010 [12]. We also assess their generalizability by testing their performance on unseen market scenarios using LOBSTER data [13]. We focus on novel data-driven approaches from Machine Learning (ML) and DL that analyze the market at its finest resolution, using high-frequency LOB data. In this work, we formally define the SPTP problem considering a ternary trend classification. Our findings reveal that while best models exhibit robustness, achieving solid F1-scores on FI-2010, they show poor generalizability, as their performance significantly drops when applied to unseen LOBSTER market data.
The main contributions of our work are the following:
* We release a highly modular open-source framework called **LOBCAST1**, to pre-process data, train, and test stock market models. Our framework employs the latest DL libraries to provide all researchers an easy, performing, and maintainable solution. Furthermore, to support future studies, we release two meta-learning models and a backtesting environment for profit analysis. Footnote 1: The code is included in the supplementary material and will be publicly available upon acceptance
* We evaluate existing LOB-based stock market trend predictors, showing that most of them overfit the FI-2010 dataset, with remarkably lower performance on unseen stock data.
* We survey and discuss the financial performance of existing methods under different market scenarios to guide model selection in real-world applications2. Footnote 2: The details are reported in the supplementary materials for space reasons
* We discuss the strengths and limitations of existing methodology and identify areas for future research toward more reliable, robust, and reproducible approaches to stock market prediction.
## 2 Related Work
The increasing interest in DL for price trend prediction motivated several researchers to collect and analyze State-Of-the-Art (SOTA) solutions in benchmark surveys. The study by Jiang et al. [4] analyzes papers published between 2017 and 2019 that focused on stock price and market index prediction. In their literature review, the authors studied the Neural Network (NN) structures and evaluation metrics used in selected papers, as well as implementation and reproducibility. This work was extended in [14], including an in-depth analysis of the data (i.e., market indices, input variables used for stock market predictions). Ozboyoglu et al. [15] and Sezer et al. [5] provide a comprehensive overview of the SOTA DL models used for financial predictions. The work in [16] surveys 86 papers on stock and foreign exchange price prediction. The authors review the datasets, variables, models, and performance metrics used in each surveyed article. In contrast to these works, in this paper, we run experiments to study the robustness and generalizability of the selected approaches. Nti et al. [17] conducted a systematic and critical review of 122 papers. Their study also compares the self-stated accuracy, error metrics, and software packages used in the selected papers by means of experiments. In contrast to this, we focus on papers that use LOB data and DL algorithms for price trend predictions. We also evaluate the generalizability of the models by driving tests on a different dataset. Other studies [18; 19] also analyze solutions based on sentiment analysis through Natural Language Processing (NLP) to investigate the impact of social media on the stock market, showing that this combination improves the accuracy of stock prediction models. In [20], the authors presented a comprehensive overview of traditional and ML-based approaches for stock market prediction and highlighted some limitations of traditional approaches, showing that DL models outperform them in terms of accuracy. Similar findings are reported in [21]. Lim et al. [22] discussed recent developments in hybrid DL models, which combine statistical and learning components for both one-step-ahead and multi-horizon time-series forecasting. Similarly, Shah et al. [23] discussed hybrid approaches in their work on the state-of-the-art algorithms commonly applied to stock market prediction. Additionally, they provided a taxonomy of computational approaches for stock market analysis and prediction. Finally, Olorunimme et al. [24] focused on exploring applications of DL in the stock market that involve backtesting, with a particular emphasis on research papers that meet the requirements for real-world use. They reviewed various scenarios in which DL has been employed in finance, with a focus on trade strategy, price prediction, portfolio management, and others.
Our work adds to this literature by providing a benchmark of recent deep learning approaches based on LOB data, evaluating their robustness and generalizability, and releasing an open-source framework for pre-processing data, training, and testing models.
## 3 Stock Price Trend Prediction
The common ground that unifies the models studied in this paper is the goal of solving the SPTP problem via Deep Neural Networks (DNNs) trained on LOB data. LOB data are particularly enlightening as they provide raw and granular information on stocks' trades. By observing the LOB in a fixed period of time, SPTP models return a distribution over the possible future market movements.
Limit Order Book: A stock exchange employs a matching engine for storing and matching the orders issued by the trading agents. This is achieved by updating the so-called Limit Order Book (LOB) data structure. Each security (tradable asset) has a LOB, recording all the outstanding bid and ask orders currently available on an exchange or a trading platform. The shape of the order book gives traders a simultaneous view of the market demand and supply.
There are three major types of orders. _Market orders_ are executed immediately at the best available price. _Limit orders_, instead, include the specification of a desired target price: a limit sell [buy] order will be executed only when it is matched to a buy [sell] order whose price is greater [lower] than or equal to the target price. Finally, a _Cancel order_ removes a previously submitted limit order.
Figure 1 depicts an example of a LOB snapshot, characterized by _buy_ orders (_bid_) and _sell_ orders (_ask_) of different prices. A _level_, shown on the horizontal axis, represents the number of shares with the same price either on the bid or ask side. In the example of Figure 1, there are three bid and three ask levels. The _best bid_ is the price of the shares with the highest price on the buy side; analogously, the _best ask_ is the price of the shares with the lowest price on the ask side. When the former exceeds or equals the latter, the corresponding limit ask and bid orders are executed. The LOB is updated with each event (order insertion/modification/cancellation) and can be sampled at regular time intervals.
We represent the evolution of a LOB as a time series \(\mathbb{L}\), where each \(\mathbb{L}(t)\in\mathbb{R}^{4L}\) is called a LOB record, for \(t=1,\ldots,N\), where \(N\) is the number of LOB observations and \(L\) the number of levels. In particular, \(\mathbb{L}(t)=\{P^{s}(t),V^{s}(t)\}_{s\in\{\tt ask,bid\}}\), where \(P^{\tt ask}(t),P^{\tt bid}(t)\in\mathbb{R}^{L}\) represent the prices of levels 1 to \(L\) of the LOB, on the _ask_ (\(s=\tt ask\)) side and _bid_ (\(s=\tt bid\)) side, respectively, at time \(t\). Analogously, \(V^{\tt ask}(t),V^{\tt bid}(t)\in\mathbb{R}^{L}\) represent the volumes. This means that for each \(t\) and every \(j\in\{1,\ldots,L\}\) on the _ask_ side, \(V^{\tt ask}_{j}(t)\) shares can be sold at price \(P^{\tt ask}_{j}(t)\). The _mid-price_ \(m(t)\) of the stock at time \(t\) is defined as the average of the best bid and the best ask, \(m(t)=\frac{P^{\tt ask}_{1}(t)+P^{\tt bid}_{1}(t)}{2}\). Mid-prices are synthetic values that are commonly used as indicators of the stock price trend. On average, if most of the executed orders are on the ask [bid] side, the mid-price increases [decreases] accordingly.
Trend Definition: We use a ternary classification for trends: U ("upward") if the price trend is increasing; D ("downward") for decreasing prices; and S ("stable") for prices with negligible variations. Thanks to their informativeness, mid-prices are well-suited to drive this classification. Nevertheless, because of the market's inherent fluctuations and shocks, they can exhibit highly volatile trends. For this reason, using a direct comparison of consecutive mid-prices, i.e., \(m(t)\) and \(m(t+1)\), for stock price labelling would result in a noisy labelled dataset. As a result, labelling strategies typically employ smoother mid-price functions instead of raw mid-prices. Such functions consider mid-prices over arbitrarily long time intervals, called _horizons_. Our experiments adopt the labelling proposed in [12] and repurposed in several other state-of-the-art solutions we selected for benchmarking. The adopted labelling strategy compares the current mid-price to the average mid-prices \(a^{+}(k,t)\) in a future _horizon_ of \(k\) time units, formally:
\[a^{+}(k,t)=\frac{1}{k}\sum_{i=1}^{k}m(t+i). \tag{1}\]
The average mid-prices are used to define a static threshold \(\theta\in(0,1)\) that is used to identify an interval around the current mid-price and define the class of the trend at time \(t\) as follows:
\[\texttt{U}\colon a^{+}(k,t)>m(t)(1+\theta),\ \texttt{D}\colon a^{+}(k,t)<m(t)(1- \theta),\ \texttt{S}\colon a^{+}(k,t)\in[m(t)(1-\theta),m(t)(1+\theta)]. \tag{2}\]
Figure 1: An example of LOB.

With this labelling, we mitigate the effect of mid-price fluctuations by considering their average over a desired horizon \(k\) and by considering a trend to be stable when the average mid-price variations do not change significantly, thus avoiding over-fitting. We highlight that time stamps \(t\) can come either from a homogeneous or an event-based process. In our experiments, we consider an event-based process.
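The following is a minimal sketch of the labelling defined by Equations (1) and (2), assigning a ternary trend class to each time step of a mid-price series; array and function names are illustrative.

```python
# Sketch of the trend labelling from Equations (1)-(2).
import numpy as np

def label_trends(mid_prices, k, theta):
    m = np.asarray(mid_prices, dtype=float)
    labels = []
    for t in range(len(m) - k):
        a_plus = m[t + 1 : t + k + 1].mean()        # Eq. (1): average future mid-price
        if a_plus > m[t] * (1 + theta):
            labels.append("U")
        elif a_plus < m[t] * (1 - theta):
            labels.append("D")
        else:
            labels.append("S")                       # Eq. (2)
    return labels

print(label_trends([100.0, 100.1, 100.3, 100.2, 100.6, 100.9], k=2, theta=0.002))
```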
Models I/O: Given the time series of a LOB \(\mathbb{L}\) and a temporal window \(T=[t-h,t]\), \(h\in\mathbb{N}\), we can extract _market observations_ on \(T\), \(\mathbb{M}(T)\), by considering the sub-sequence of LOB observations starting from time \(t-h\) up to \(t\). In Section 1 of the Supplemental Material (SUP), we give a representation of a market observation \(\mathbb{M}(T)\in\mathbb{R}^{h\times 4L}\). The market observation over the window \([t-h,t]\) is associated with the label computed through Equations 1 and 2 at time \(t\). An SPTP predictor takes as an input a market observation and outputs a probability distribution over the trend classes U, D, and S.
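A minimal sketch of how market observations \(\mathbb{M}(T)\in\mathbb{R}^{h\times 4L}\) can be built from the LOB time series and paired with the labels defined above; shapes and names are illustrative assumptions.

```python
# Sketch of building (h x 4L) market observations and their trend labels.
import numpy as np

def make_observations(lob, labels, h):
    # lob: (N, 4L) array of LOB records; labels[t]: trend class at time t (or None).
    X, y = [], []
    for t in range(h, len(lob)):
        if t < len(labels) and labels[t] is not None:
            X.append(lob[t - h : t])   # window of the last h LOB records
            y.append(labels[t])
    return np.stack(X), np.array(y)
```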
## 4 Experiments
We conducted an extensive evaluation to assess the _robustness_ and _generalizability_ of 15 DL models to solve the SPTP task, as presented in Section 3. Among these, 13 were SOTA models, and 2 DL baseline models commonly used in the literature. More details on the models are given in Section 4.2.
In line with many other studies, we adopt the definition of robustness and generalizability introduced by J. Pineau et al. in their work [9]. Robustness is evaluated by testing the proposed models on **FI-2010**, a benchmark dataset employed in all surveyed papers. In some cases, the authors of the considered works have not provided crucial information, such as the code or the hyperparameters of their models, making reimplementation and hyperparameter search necessary. We refer to Section 5.1 in SUP for a complete description of the hyperparameters search. To evaluate the generalizability, we created two datasets called **LOB-2021** and **LOB-2022**, extrapolated from the LOBSTER dataset [13]. We describe these datasets in Section 4.1.
Our experiments were carried out using **LOBCAST**[25], the open-source framework we make available online. The framework allows the definition of new price trend predictors based on LOB data. More details on the framework are given in Section 4.3.
### Datasets
LOB data are not often publicly available and very expensive: stock exchanges (e.g., NASDAQ) provide fine-grained data only for high fees. The high cost and low availability restrict the application and development of DL algorithms in the research community.
The most widely spread public LOB dataset is **FI-2010** which is licensed under _Creative Commons Attribution 4.0 International (CC BY 4.0)_ and was proposed in 2017 by Ntakaris et al. [12] with the objective of evaluating the performance of machine learning models on the SPTP task. The dataset consists of LOB data from five Finnish companies: Kesko Oyj, Outokumpu Oyj, Sampo, Rautaraukki, and Wartsila Oyj of the NASDAQ Nordic stock market. Data spans the time period between June 1st to June 14th, 2010, corresponding to 10 trading days (trading happens only on business days). About 4 million limit order messages are stored for 10 levels of the LOB. The dataset has an event-based granularity, meaning that the time series records are not uniformly spaced in time. LOB observations are sampled at intervals of 10 _events_, resulting in a total of 394,337 events. This dataset has the intrinsic limitation of being already pre-processed (filtered, normalized, and labelled) so that the original LOB cannot be backtracked, thus hampering thorough experimentation. Additionally, the labelling method employed is found to be prone to instability, as demonstrated by Zhang et al. in [26]. Moreover, the dataset is unbalanced at varying prediction horizons. Varying the horizon \(k\in\mathcal{K}=\{1,2,3,5,10\}\), the stationary class S is progressively less predominant in favour of the upward and downward classes. For instance, the class composition for different values of \(k\) is \(k=1\), U: 18%, S: 63%, D:19%; \(k=5\), U: 32%, S: 35%, D:33%; \(k=10\), U: 37%, S: 25%, D:38%.
To test the generalizability of the models in a more realistic scenario, we used data extracted from **LOBSTER**[13], an online LOB data provider for order book data, which is not available for free, as is often the case for critical applications such as health and finance [9]. The data are reconstructed from NASDAQ traded stocks and are publicly available for the research community with an annual fee. To compare the performance of the algorithms in a wide range of scenarios, we have created a
large LOB dataset, including several stocks and time periods. The chosen pool of stocks includes those from the top 50% more liquid stocks of NASDAQ. To create a challenging evaluation scenario, we selected six stocks, namely: SoFi Technologies (SOFI), Netflix (NFLX), Cisco Systems (CSCO), Wing Stop (WING), Shoals Technologies Group (SHLS), and Landstar System (LSTR). The periods in consideration are _July 2021_ (2021-07-01 to 2021-07-15, 10 trading days) making up **LOB-2021**, and _February 2022_ (2022-02-01 to 2022-02-15, 10 trading days) making up **LOB-2022**. The selection of these two periods aimed to capture data from periods with different levels of market volatility. February 2022 exhibited higher volatility compared to July 2021, largely influenced by the Ukrainian crisis. This allows for an assessment of models across varying market conditions. We describe in detail our stock selection procedure in Section 3 in SUP.
Datasets for the Generalizability Study: Due to copyright reasons, we are unable to release the LOB-2021 and LOB-2022 datasets. However, in Section 4 in SUP, we provide detailed insights into how they are generated, ensuring transparency and replicability in future research. The approach we adopt to generate both datasets closely follows the creation process presented for FI-2010 in [12]. In summary, for each considered stock \(s\), we construct a _stock time series_ of LOB records \(\mathbb{L}_{s}(t)\in\mathbb{R}^{4L}\), with \(L=10\). To resemble the FI-2010 structure, we sample the market observation every 10 events and split records into _training_, _validation_, and _testing_ sets using a 6-2-2 days split3. Normalization is performed on stock time series using a \(z\)-score approach, and the dataset is labelled by leveraging the trend definitions described in Equation (2). Lastly, both LOB-2021 and LOB-2022 contain prediction labels for each one of the considered horizons \(\mathcal{K}\).
Footnote 3: For the experiments on FI-2010 we followed the same data splitting procedure as the 13 SOTA papers. We split the dataset using the first 7 days for the train set and validation set (80% / 20%) and the last three days as the test set.
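A minimal sketch of the pre-processing just described (event-based sampling every 10 events, a 6-2-2 day split, and \(z\)-score normalization fitted on the training days only); variable names and the day-index convention are illustrative assumptions.

```python
# Sketch of the LOB-2021/2022 pre-processing pipeline.
import numpy as np

def preprocess_stock(lob_records, day_index, sample_every=10):
    # lob_records: (N, 4L) array of LOB records; day_index: (N,) trading-day id in 0..9.
    sampled = lob_records[::sample_every]                   # keep one record every 10 events
    days = day_index[::sample_every]
    train = sampled[days < 6]                               # first 6 trading days
    val = sampled[(days >= 6) & (days < 8)]                 # next 2 days
    test = sampled[days >= 8]                               # last 2 days
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + 1e-12                       # statistics fitted on training only
    return [(split - mu) / sigma for split in (train, val, test)]
```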
### Models
We have selected 13 SOTA models based on DL for the SPTP task. These models were proposed in papers published between 2017 and 2022 and utilized datasets LOB data for training and testing. In addition to the models proposed in the selected papers, we also included two classical DL algorithms, namely Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN), which were used as a benchmark in [27] and in [31], respectively. All proposed models are based on DNNs and were originally trained and tested on the FI-2010 dataset.
A comprehensive summary of the benchmarked models can be found in Table 1, while for additional details, we refer the reader to Section 2 in SUP. In Table 1, the _temporal shape_ represents the length of the input market observation for the model. The _features shape_ refers to the number of features used by the models to infer the trend in the original papers. In the Table, we also indicate whether the authors released the code, and if so, whether they have used PyTorch (PT) [38] or Tensorflow (TF) [39]. This is relevant because to ensure consistency and compatibility within our proposed framework, based on PyTorch Lightning, we found it necessary to re-implement models for which the code was not available or was only available in Tensorflow. To improve the reproducibility of the results, it is advisable for the research community to publish the code developed.
In High-Frequency Trading (HFT) and algorithmic trading in general, minimizing latency between model querying and order placement is of utmost importance [40]. To explore this aspect, we analyzed the inference time in milliseconds of all models, based on the experiments reported in Section 4.4. As shown in Table 1, DEEPLOB, DEEPLOBAT, AXIALLOB, TRANSLOB, and ATNBoF had inference times in the order of milliseconds, potentially unsuitable for HFT applications compared to other models with shorter times. Finally, we have reported the number of trainable parameters for each model. A noteworthy observation is that the average number of parameters is very low compared to other classical fields, such as computer vision [41] and natural language processing [42; 43]. This leads us to conjecture that current systems are inadequate in effectively handling the complexity of LOB data, as we will verify in the rest of this paper.
To explore the possibility of achieving new SOTA performance by combining the predictions of all 15 models, we have implemented two ensemble methods: _MAJORITY_, which performs a majority voting weighted by the F1-Score of the predictions made by all the models, and _METALOB_, which is trained with the predictions made by the individual models to learn the most appropriate aggregation function. A detailed description of these ensemble methods can be found in Section 2.1 in SUP.
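A minimal sketch of the MAJORITY aggregation described above, where each model votes for its predicted class with a weight equal to its F1-score; array shapes and names are illustrative assumptions.

```python
# Sketch of F1-weighted majority voting over the predictions of several models.
import numpy as np

def weighted_majority(predictions, f1_scores, n_classes=3):
    # predictions: (n_models, n_samples) integer class ids; f1_scores: (n_models,)
    n_samples = predictions.shape[1]
    votes = np.zeros((n_samples, n_classes))
    for model_preds, f1 in zip(predictions, f1_scores):
        votes[np.arange(n_samples), model_preds] += f1     # each model votes with weight F1
    return votes.argmax(axis=1)

preds = np.array([[0, 1, 2, 1],     # model A
                  [0, 2, 2, 1],     # model B
                  [1, 1, 2, 0]])    # model C
print(weighted_majority(preds, np.array([0.8, 0.6, 0.5])))  # -> [0 1 2 1]
```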
### LOBCAST Framework for SPTP
We present **LOBCAST4**[25], a Python-based framework developed for stock market trend forecasting using LOB data. LOBCAST is an open-source framework that enables users to test DL models for the SPTP task. The framework provides data pre-processing functionalities, which include normalization, splitting, and labelling. LOBCAST also offers a comprehensive training environment for DL models implemented in PyTorch Lightning [38]. It integrates interfaces with the popular hyperparameter tuning framework WANDB [44], which allows users to tune and optimize model performance efficiently. The framework generates detailed reports for the trained models, including performance metrics regarding the learning task (F1, Accuracy, Recall, etc.). LOBCAST supports backtesting for profit analysis, utilizing the Backtesting.py [45] external library. This feature enables users to assess the profitability of their models in simulated trading scenarios. We plan to add new features such as (i) training and testing with different LOB representations [46; 47], and (ii) test on adversarial perturbations to evaluate the representations' robustness [48]. We believe that LOBCAST, along with the advancements in DL models and the utilization of LOB data, has the potential to improve the state of the art on trend forecasting in the financial domain.
Footnote 4: [https://github.com/matteoprata/LOBCAST](https://github.com/matteoprata/LOBCAST)
### Performance, Robustness and Generalizability
To test robustness and generalizability, we conducted our experiments for each model using five different seeds to ensure reliable results and mitigate the impact of random initialization of network weights and training dataset shuffling. The training process involved training the 15 models for each seed on each of the considered prediction horizons (\(\mathcal{K}=\{1,2,3,5,10\}\)). More details on the setting of the experiments are provided in the SUP Section 5. On average over all 5 runs, the training process for all the models took approximately 155 hours for FI-2010 and 258 hours for LOB-2021/2022, utilizing a cluster comprised of 8 GPUs (1 NVIDIA GeForce RTX 2060, 2 NVIDIA GeForce RTX 3070, and 5 NVIDIA Quadro RTX 6000).
In Table 2, we summarize the results of our experiments. As the datasets are not well balanced, we focused on F1-score; other performance metrics are reported in the SUP. The Table compares the claimed performance of each system (column F1 Claim) with those measured in the robustness (FI-2010) and generalizability (LOB-2021 and 2022) experiments. For each dataset, we show the average performance and the standard deviation achieved by each model in all the horizons, along with its rank.
To evaluate the robustness and the generalizability of the models, we compute the **robustness** and the **generalizability scores**, a value \(\leq 100\) that is computed as \(100-(|A|+S)\), where \(A\) and \(S\) are defined as follows. \(A\) is the average difference between the F1-score reported in the original paper and the one that we observed in our experiments on FI-2010 for robustness, and on LOB-2021 and LOB-2022 for generalizability. \(S\) is the standard deviation of these differences. The score penalizes models that demonstrate higher variability in their performance by subtracting the standard deviation. The average and standard deviation were computed over the declared horizons for each model and considering all five seeds.
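A minimal sketch of this score computation; the example values are illustrative.

```python
# Sketch of the robustness/generalizability score: 100 - (|A| + S).
import numpy as np

def stability_score(claimed_f1, measured_f1):
    diffs = np.asarray(measured_f1) - np.asarray(claimed_f1)   # per-horizon/seed differences
    A, S = diffs.mean(), diffs.std()
    return 100 - (abs(A) + S)

print(stability_score(claimed_f1=[81.0, 81.0, 81.0], measured_f1=[82.5, 80.1, 81.7]))
```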
Table 2 clearly highlights the following:
1. Except for a few systems, there is a considerable difference between the claimed performances and those measured in both robustness and generalizability experiments. Note that while the performance gap is negative on average and considerably negative in the scenario of LOB-2021 and 2022, a few systems outperform the claimed results, as highlighted by the arrows in Table 2.
2. All models are very sensitive to hyperparameters, in fact, they diverged (F1-score \(\leqslant 33\%\)) during the hyperparameters search for about half of the runs.
3. The ranking of systems changes considerably if we compare the declared performances with those measured in our experiments. On the other hand, the best six systems in FI-2010 remain the same in LOB-2021 and 2022.
4. The best-ranked systems do not consistently hold the lead in terms of robustness and generalizability - except for BINCTABL. On the contrary, some of them obtained poor generalizability scores, suggesting that they overfitted the FI-2010 dataset.
5. Five of the best six models incorporate attention mechanisms. In particular, the best-performing model is BINCTABL, which enhances the original CTABL model by adding an Adaptive Bilinear Normalization layer, enabling joint normalization of the input time series along both temporal and feature dimensions. On average, BINCTABL improves the F1-score by up to \(9.2\%\) compared to DLA, i.e., the second-best model, and up to \(13\%\) compared to CTABL.
6. Regrettably, ensemble models (the last two rows in Table 2) do not exceed the performance of the top-performing models, which is probably due to the relatively high agreement rate among systems, as shown in Section 6 in SUP.
Robustness on FI-2010: As far as the robustness experiments are concerned, it is important to note that some models discussed in the literature incorporate additional market observation features for predictions. This is the case for models such as DAIN, CNNLSTM, TLOBOF, and DLA. To ensure a fair comparison among the models, we included them in our study but reduced their feature set to only the 40 raw LOB features. Due to the presence of these additional features, a strict robustness study could not be conducted for these models. However, the reduction of features did not necessarily cause a deterioration in performance: of particular interest is the case of CNNLSTM, for which the authors used stationary features derived from the LOB, stating that they were better than the raw ones. Impressively, CNNLSTM achieves the greatest average improvement of \(20.9\%\) among all the models, proving that, for this model, the raw LOB features are better suited to forecast the mid-price movement than the features proposed by the original authors. This is also the case for DLA, which originally uses 144 input features. In fact, with only the raw features, DLA exhibited remarkable performance, ranking second best in terms of F1-score.
Table 2: Robustness, generalizability, and performance scores of the models. For each model, the table reports the F1-score claimed in the original paper and the F1-score measured with LOBCAST on FI-2010, together with the corresponding rank and robustness score, and the measured F1-scores, ranks, and generalizability scores on LOB-2021 and LOB-2022 (the last two rows report the METALOB and MAJORITY ensembles). Arrows indicate whether the measured F1 of a system is higher or lower than stated in the original paper. Colour saturation highlights systems with best (green) and worst (red) robustness and generalizability scores.
Based on these experiments (summarized in Table 2), the BINCTABL model demonstrates the **highest F1-score** when averaged over the seeds and prediction horizons, achieving an average of \(82.6\%\pm 7.0\). Notably, the BINCTABL model also exhibits the strongest robustness score of \(99.7\), ranking as the best in terms of robustness. For a more comprehensive analysis, Figure 2 provides the confusion matrices of the BINCTABL model's predictions for two horizons (\(k=1\) and \(k=10\)). The confusion matrices demonstrate that the model is slightly biased toward the stationary class. This pattern is consistent across all the models, especially for the first three horizons, reflecting the imbalance of the dataset towards the stationary class, as specified in Section 4.1.
Remarkably, a significant number of models in our study failed to achieve the claimed performance levels. Two possible reasons are the lack of the original code and the missing hyperparameters declaration. Among the models, TRANSLOB and ATNBoF exhibit the largest discrepancies, ranking as the second and first worst performers, respectively. Notably, ATNBoF performs the poorest among all models, both in terms of robustness score and F1-score.
We observed that CNN1, CNN2, CNNLSTM, TLONBOF, and DLA are the most sensitive models in terms of network weight initialization and dataset shuffling; in fact, these models exhibit a standard deviation over the runs that exceeds 5 points, indicating a high degree of variability in their performance.
Finally, we highlight that none of the top three models in our study utilize \(h=100\) long market observations as input, despite it being a common practice in the literature [26, 27, 28, 32, 36], meaning that they are able to achieve good results without relying on a large historical context. This suggests that the most influential and relevant dynamics impacting their predictions tend to occur within a short time frame. In Section 6 in SUP, we analyze in more detail the robustness results of our benchmark study when varying the horizons.
Generalizability on LOB-2021/2022: When comparing the performance of models on the FI-2010 and LOB-2021/2022 datasets, we observe that models showing high performance on the FI-2010 dataset demonstrate a deterioration in performance. Conversely, some of the models that performed poorly on the FI-2010 dataset show an improvement in performance on the LOB-2021/2022 datasets. However, the overall performance of all models on the LOB-2021/2022 dataset is still significantly lower than on the FI-2010 dataset, ranging between 48% and 61% in F1-score. Furthermore, we conjecture that the overall performance is worse in LOB-2022 than in LOB-2021 due to the higher stocks' volatility. We mention two potential factors contributing to this observed phenomenon. Firstly, the LOB-2021/2022 datasets present a higher level of complexity than the FI-2010 dataset, despite having been generated with a similar approach. Indeed, NASDAQ is a more efficient and liquid market than the Finnish one, as evidenced by the fact that LOB-2021/2022 datasets have approximately three times the size of FI-2010 in terms of events for the same period length. Secondly, the best-performing models may overfit the FI-2010 dataset, leading to a decrease in their performance when applied to LOB-2021/2022 datasets. In particular, BINCTABL experiences an average decrease of approximately \(19.6\%\) in F1-Score across all horizons, resulting in a generalizability score of \(73.5\%\). For a more detailed analysis of our generalizability results, we refer to Section 6 in SUP, where we also illustrate the substantial performance variation across different stocks. Among the tested models, CSCO stands out as yielding the highest performance. This may be attributed to the high stationarity of CSCO (balance 18-65-17% in the train set), indicating more stable and predictable behaviour. This hypothesis is supported by the confusion matrices, which consistently show the best performance in the stationary class across all models; for reasons of space, we reported only those of BINCTABL in Figure 2, while for the complete study, we refer the reader to Section 7 in SUP. Finally, as a benchmark test, we conducted a trading simulation using LOB-2021. The results confirm the challenging nature of the task using the up-to-date LOB-2021 dataset, indicating that the models' profitability is far from guaranteed. For more detailed information about the simulation, please refer to Section 7 in SUP.

Figure 2: Confusion matrices for BINCTABL \((k=1,10)\) on FI-2010 and LOB-2021 datasets.
## 5 Discussion and Conclusions
Our findings highlight that price trend predictors based on DNNs using LOB data are not consistently reliable as they often exhibit non-robust and non-generalizable performance. Our experiments demonstrate that the existing models are susceptible to hyperparameter selection, randomization, and experimental context (stocks, volatility). In addition, the selection of datasets and the experimental setup fail to capture the intricacies of the real-world scenario. This lack of generalizability makes them inadequate for practical applications in real-world settings.
Models: Our results lead to a crucial observation: on the LOBSTER dataset, SOTA DL models for LOB data exhibit low generalizability. We suggest that this phenomenon is due to two factors: the higher complexity of the LOBSTER dataset compared to the FI-2010 dataset, and the overfitting of the best-performing models to the FI-2010 dataset, which lowers their performance on the LOBSTER dataset. Another key finding of this study is that the top models with the highest performance on both datasets employ attention mechanisms. This suggests that the attention technique enhances the extraction of informative features and the discovery of patterns in LOB data. However, in general, it appears that current models cannot cope with the complexity of financial forecasting with LOB data. Future investigations should consider state-of-the-art approaches to multivariate time series forecasting, such as [49, 50, 51], which have not yet been adopted in the financial sector.
Dataset: Financial movements can be influenced by geopolitical events, as political actions and decisions can significantly impact economic conditions, market sentiment, and investor confidence [1]. These factors are not captured by LOB data alone. For this reason, we believe that price predictors may benefit from integrating LOB data with additional information, for example, sentiment analysis relying on social media and press data, representing an easily accessible source of exogenous factors impacting the market [52]. This is particularly true for mid- and long-term price trend prediction, whereas it might not hold for HFT strategies [2]. We remark that micro and macroscopic market trends are fundamentally different, and the microscopic behaviour of the market is very much driven by HFT algorithms, making it almost exclusively dependent on financial movements rather than external factors. In this scenario, granular and raw LOBs may suffice to provide data for price trend prediction. Another weakness in dataset generation is the potential for training, validation, and test splits to have dissimilar distributions. This occurs due to the distinct characteristics of the historical periods covered by the stock time series. This can negatively affect the model's ability to generalize effectively and make reliable predictions on unseen data.
Labelling: As we discussed in Sections 2, 4.1 and 4.4, the choice of the threshold for class definition in Equation 2 plays a crucial role in determining the trend associated to a market observation. We believe that current solutions present room for improvement. As discussed in Section 4.1, in FI-2010, the parameter \(\theta\) was chosen to obtain a balanced dataset in the number of classes for the horizon \(k=5\) (which is the mean value of the considered interval in the set \(\mathcal{K}\)). Thus, \(\theta\) is not chosen in accordance with its financial implication but rather serves the purpose of balancing the dataset. We recall that the dataset is made of different stocks. With such a labelling system, fixed \(\theta\), stocks with low returns become associated with stable trends, as their behaviour is overshadowed by stocks exhibiting higher returns. Good practices that could be investigated are to use a weighted look-behind moving average to absorb data noise instead of mid-prices as in Equation 2 or to define a dynamically adapting \(\theta\) which accounts for changing trends of a stock's mid-price. Moreover, the labelling approach of Equation 2, used by all surveyed models, fails to leverage important aspects available in LOB data, including the volume, which directly influences stock volatility. Therefore, another possible improvement is the definition and use of other insightful features that can be extrapolated from a LOB in addition to the mid-price. Such values could encapsulate other peculiar and informative features, such as stocks' spread and volumes.
Profit: In the context of stock prediction tasks, it is of utmost importance to go beyond standard statistical performance metrics such as accuracy and F1 score and incorporate trading simulations to assess the practical value of algorithms. SPTP predictors' ultimate measure of success lies in their ability to generate profits under real market conditions. It is essential to conduct trading simulations
using real simulators that go beyond testing on historical data. Recent progress has been made in the context of reactive simulators [53, 54, 55, 56].
We acknowledge that our study is subject to some limitations, which should be considered when interpreting our findings. First, we conducted a grid hyperparameter search for the models which did not specify them. Since hyperparameter search is not exhaustive, our chosen best hyperparameters could potentially undermine the quality of the original systems. Secondly, due to computational resource limitations, we could not train the benchmarked models on LOB datasets spanning longer periods, e.g., years rather than weeks. We recognize that doing so could have improved our generalizability results.
## Disclaimer
This paper was prepared for informational purposes in part by the Artificial Intelligence Research group of JPMorgan Chase & Co. and its affiliates ("JP Morgan"), and is not a product of the Research Department of JP Morgan. JP Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful.
### Acknowledgements
This research was funded by JPMorgan Chase AI Research Faculty award "_Understanding inter-dependent market dynamics: vulnerabilities and opportunities_". We also thank Poste Italiane for funding a Ph.D. scholarship on Financial applications of Artificial Intelligence.
## Appendix A Market Observation
We represent the evolution of a Limit Order Book (LOB) as a time series \(\mathbb{L}\), where each \(\mathbb{L}(t)\in\mathbb{R}^{4L}\) is called a LOB record, for \(t=1,\ldots,N\), being \(N\) the number of LOB observations and \(L\) the number of levels. In particular, \(\mathbb{L}(t)=\{P^{s}(t),V^{s}(t)\}_{s\in\{\mathtt{ask,bid}\}}\), where \(P^{\mathtt{ask}}(t),P^{\mathtt{bid}}(t)\in\mathbb{R}^{L}\) represent the prices of levels 1 to \(L\) of the LOB, on the _ask_ (\(s=\mathtt{ask}\)) side and _bid_ (\(s=\mathtt{bid}\)) side, respectively, at time \(t\). Analogously, \(V^{\mathtt{ask}}(t),V^{\mathtt{bid}}(t)\in\mathbb{R}^{L}\) represent the volumes. This means that for each \(t\) and every \(j\in\{1,\ldots,L\}\) on the _ask_ side, \(V^{\mathtt{ask}}_{j}(t)\) shares can be sold at price \(P^{\mathtt{ask}}_{j}(t)\). Given the time series of a LOB \(\mathbb{L}\) and a temporal window \(T=[t-h,t]\), \(h\in\mathbb{N}\), we can extract _market observations_ on \(T\), \(\mathbb{M}(T)\), by considering the sub-sequence of LOB observations starting from time \(t-h\) up to \(t\). Figure 3 represents an observation \(\mathbb{M}(T)\in\mathbb{R}^{h\times 4L}\). The market observation over the window \([t-h,t]\) is associated with the label computed at time \(t\) through Equations 1 and 2 in the main paper. A Stock Price Trend Prediction (SPTP) Deep Learning (DL) model takes as an input a market observation and outputs a probability distribution over the trend classes U, D, and S.
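As a concrete illustration, a market observation can be obtained by slicing the LOB time series with a sliding window; the following sketch is our own and assumes the records are stored as a NumPy array of shape \((N,4L)\).

```python
import numpy as np

def extract_market_observations(lob: np.ndarray, h: int) -> np.ndarray:
    """Slice a (N, 4L) LOB time series into market observations of h records.

    Observation i covers rows [i, i + h) and is paired with the label
    computed at its last time step (Equations 1 and 2 of the main paper).
    """
    n_obs = lob.shape[0] - h + 1
    return np.stack([lob[i : i + h] for i in range(n_obs)])  # (n_obs, h, 4L)
```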
## Appendix B Models
In our experiments, we consider 13 State-Of-the-Art (SOTA) models based on DL for the SPTP task. These models were published in papers between 2017 and 2022. Two additional baselines, namely Multilayer Perceptron (MLP) and Long-Short Term Memory (LSTM) were also included in our analysis, in addition to two ensemble methods described in Section B.1. All models are trained, validated and tested with LOB data. In the remainder of this section, we briefly describe each selected model.
Tsantekidis et al. [27] (2017) use an LSTM to predict price directions considering moving averages of the mid-price over the past and the future \(k\) steps. In the same year, the same authors proposed in [28] a model based on a Convolutional Neural Network (CNN) (CNN1) for future mid-price movement predictions from large-scale high-frequency limit order data. The proposed architecture is composed of a series of convolutional and pooling layers followed by a set of fully connected layers that are used to classify the input. The parameters of the model are learnt by minimizing the categorical cross-entropy. In [31] (2020), the same research group proposed two new architectures. The first one (CNN2) uses a series of convolutional layers for capturing the temporal dynamics of time series extracted from a LOB and for correlating temporally distant features. In the last convolutional layer, CNN2 retains the temporal ordering by flattening only the dimensions of the convolution. The authors then propose an architecture that merges the described CNN with an LSTM, that we call
Figure 3: An example of market observation.
CNNLSTM. Initially, the CNN is used for feature extraction for the LOB time series. It produces a new time series of features with the same length as the original one, which is then passed to the LSTM module for classification.
Tran et al. [29] in 2018 introduced a new Neural Network (NN) architecture for multi-variate time series that incorporates an attention mechanism in the temporal mode. The authors call this architecture Temporal Attention-Augmented Bilinear (TABL), as it applies a bilinear transformation to the input, which consists of a set of samples at different time stamps. The Bilinear Layer (BL) is able to detect feature and time dependencies within the same sample and is augmented with a temporal attention mechanism to capture interactions between different time instances. The authors define three different network configurations, called A(TABL), B(TABL), and C(TABL), with 0, 1, and 2 hidden layers, respectively. In our experiments, we consider C(TABL), which outperforms the others. In [6] (2021), the same authors extended the solutions implemented in [29] by integrating a data-driven normalization strategy that takes into account statistics from both temporal and feature dimensions to tackle potential problems posed by non-stationarity and multimodalities of the input series. The new model is called BINCTABL.
Passalis et al. [30] (2019) introduce the DAIN (Deep Adaptive Input Normalization) three-step layer that adaptively normalizes data depending on the task at hand, instead of using some fixed statistics calculated beforehand as in traditional normalization approaches. DAIN works as follows: in the first layer, called the adaptive shifting layer, the mean of the current time series is scaled by the weight matrix of the first neural layer. The resulting vector is passed to the adaptive scaling layer, which first computes the standard deviation of the original feature vector with respect to the shifted one, and then scales this result using the weight matrix of the scaling layer. The last layer, called the adaptive gating layer, is meant to suppress features that are not relevant by applying a sigmoid function in order to neglect features with excessive variance, which could hinder network generalization. The authors integrate DAIN into three different architectures: an MLP proposed in [57], a CNN as in [28], and an RNN [58]. In our experiments, we consider the architecture with the highest performance, namely the MLP.
Zhang et al. [26] (2019) propose DEEPLOB. The authors propose a smooth data labelling approach based on mid-prices to limit noise and discard small oscillations. They propose a 3-block architecture composed of standard convolutional layers, an Inception Module, and a LSTM layer. The first two elements are used for feature extraction, whereas the LSTM layer captures time dependencies among the extracted features.
Wallbridge et al. [32] (2020) introduce TransLOB, a new DL architecture for mid-price trend prediction, composed of two main components: a convolutional module made up of five dilated causal convolutional layers and a transformer module, composed of two transformer encoder layers, each made up of a combination of multi-head self-attention, residual connections, normalization, and feedforward layers. Between the convolutions and the transformer module, the tensor is passed to a normalization layer and concatenated with a positional encoding.
Passalis et al. [33] (2020) propose a model for high-frequency limit order book data based on Temporal Logistic Neural Bag-of-Features formulation (TLoNBoF). Given a collection of time series, TLoNBoF extracts features with a 1-D convolutional layer to capture the temporal relationships between succeeding feature vectors. Then the features are transformed into vectors of constant length, i.e., their length must be invariant to the length of the input time series. To cope with this, the authors define a Temporal Logistic Neural Bag-of-Features formulation to aggregate the extracted feature vectors. A fine-grained temporal segmentation scheme is also proposed to capture the temporal dynamics of the time series. To this end, the transformed feature vectors are segmented into three temporal regions to capture the short-term, mid-term, and long-term behaviour of the time series.
In 2021, Zhang et al.[34] adopt Sequence-to-Sequence (Seq2Seq) [59, 58] and Attention [60] to recursively generate multi-horizon forecasts and build a predictor called DEEPLOBATT. A typical Seq2Seq model consists of an encoder that analyses the input time steps to extract meaningful features. Then, only the last hidden state from the encoder is used to make estimations, which penalizes the processing of long sequence input. To overcome this limitation, the Attention module accesses hidden states of the encoder and assigns a proper weight to each hidden state. Each input contains the most recent 50 updates, and each update includes information for both the ask and bid of a LOB. Therefore, a single input has the dimension (50, 40), and each output consists of a multi-horizon prediction of all 5 points of the FI-2010 dataset. As an encoder, they adapt a previous model, namely
DeepLob [26], to extract representative features from raw LOB data while they experiment with both Seq2Seq and Attention models for the decoder.
Guo et al. [35] (2022) propose a novel architecture for price trend prediction named Deep Learning Architecture (DLA). Firstly, the dataset is preprocessed and aggregated at different time windows. Once extracted, the features are given as input to the three-phase proposed architecture. The first phase uses Temporal Attention to adaptively assign attention weights to each moment of the sliding window. The processed data is passed to a stacked Gated Recurrent Unit (GRU) architecture to obtain an accurate representation of the analysed trends, which is complex and nonlinear. The GRU architecture consists of two hidden GRU layers to generate as output the hidden state at each time period. This is given to the second temporal attention stage, which is used to generate more accurate attention weights. The proposed solution is compared to several other models in the literature, including C(TABL) [29], DeepLOB [26] and TLo-NBoF [33]. The proposed solution achieves very high performance on the FI-2010 dataset and outperforms the other models. The authors analyse the performance of their model by varying several parameters, including label thresholds and the choice of the time step.
Tran et al. [36] extend the solution proposed in [61], which introduces a neural bag-of-features (N-BoF)-based method for building a _codeword_ that is eventually fed to a classifier. In [36], the neural bag-of-feature model was enhanced by incorporating a 2D-Attention (2DA) module that highlights important elements in the matrix data while discarding irrelevant ones by zeroing them out. The 2D-Attention function performs a linear interpolation between the input data matrix and input data matrix filtered by an attention mask matrix that encodes the importance of the columns of the original input. The proposed 2DA block can be applied to the features to highlight or discard the outputs of certain quantization neurons, whose results are considered equally important in the NBoF model for every input sequence (Codeword Attention). The resulting model is called ATNBoF. The 2DA function can also be applied to lend weight to salient temporal information, which is otherwise aggregated and equally contributing to the quantized features in the NBoF model (Temporal Attention).
Kisiel et al. [37] (2022) propose Axial-LOB, a model based on axial attention for price trend prediction. Unlike the naive attention mechanism, axial attention factorizes 2D attention into two 1D attention modules, one along the width (feature) axis, and a second one along the height (time) dimension. Raw values of the LOB are preprocessed and passed to the axial attention block: each layer of the attention block is preceded and followed by a module composed of \(1\times 1\) convolutions, batch normalization, and ReLU activation to adjust the number of channels in the intermediate layers of the network. For training the axial attention module, the authors use mini-batch Stochastic Gradient Descent (SGD) by minimizing the cross-entropy loss between the predicted class probabilities and the ground truth label. The authors compare the performance of the proposed model against the solutions adopted in [28; 29; 26] in terms of precision, recall, and F1 on the FI-2010 dataset. Axial-LOB proves to have improved performance with respect to these works while being simpler in terms of the number of parameters.
### Ensemble Methods
To explore the possibility of achieving new SOTA performance by combining the predictions of all 15 models, we have implemented two ensemble methods: _MAJORITY_ and _METALOB_.
The _MAJORITY_ ensemble assigns the class label that appears most frequently among the predictions of the classifiers. To account for variations in the performance of individual classifiers, we incorporate a weighting scheme based on their F1 scores. This ensures that predictions from higher-performing models carry more influence in the final decision.
The _METALOB_ meta-classifier is implemented as a multilayer perceptron (MLP) with two fully-connected layers. It is designed to learn how to effectively combine the outputs of the 15 DL models, which serve as the base classifiers to produce the final output. The input to the meta-classifier is a 1D tensor with a probability distribution over the trends (_up, stationary, down_), for each of the models, resulting in a tensor of \(3\cdot 15\) elements. The test set of LOB-2021/2022 is divided into three distinct subsets. We allocated 70% of the data for training, 15% for validating, and the remaining 15% for testing the meta-classifier.
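For illustration, a minimal PyTorch sketch of such a meta-classifier follows; the hidden-layer width is an assumption of ours, since it is not fixed above.

```python
import torch
import torch.nn as nn

class MetaLOB(nn.Module):
    """Two fully-connected layers mapping the concatenated probability
    vectors of the 15 base classifiers (15 * 3 = 45 inputs) to the three
    trend classes (up, stationary, down)."""

    def __init__(self, n_models: int = 15, n_classes: int = 3, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_models * n_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, n_models * n_classes)
        return self.net(x)
```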
By implementing these ensemble methods, our objective was to leverage the collective intelligence of ensemble models and potentially achieve performance that surpasses that of individual models.
Unfortunately, the ensemble models did not achieve the expected level of performance, as they failed to surpass the performance of the best individual models, as discussed in the main paper.
## Appendix C Stock Selection
For our generalizability study, in order to create a variegated evaluation scenario, we curated a pool of 630 stocks from the NASDAQ exchange with a market capitalization that ranged from \(\sim 2\) Billion to \(\sim 3\) Trillion dollars. Data was gathered from the NASDAQ Stock Screener [62]. From the pool of stocks, we generated 6 clusters with \(t\)-_distributed Stochastic Neighbor Embedding_ (\(t\)-SNE) to capture differences among stocks in the years 2021-2023. We used the following features: daily return, hourly return, volatility, outstanding shares, P/E ratio, and market capitalization. The P/E ratio indicates the ratio between the price of a stock (P) and the company's annual earnings per share (E). The analysis led to the identification of the 6 stocks that are nearest to the cluster centroids of the generated 3-dimensional latent space. The stocks are the following: SoFi Technologies (SOFI), Netflix (NFLX), Cisco Systems (CSCO), Wing Stop (WING), Shoals Technologies Group (SHLS), and Landstar System (LSTR), making up the set that we denote by \(\mathcal{S}=\{\)SOFI, NFLX, CSCO, WING, SHLS, LSTR\(\}\). Table 3 captures the main features of these stocks for the period of July 2021. The selected stocks have very variable average daily returns, the minimum being SHLS and the maximum being NFLX. Daily and hourly returns highlight that some stocks are more volatile than others. The market capitalization represents the total value of the outstanding common shares owned by stockholders. Stocks show different class balancing in the training set. CSCO is the stock with the largest imbalance toward the stable class, whereas NFLX and LSTR are more imbalanced towards the up and down classes, respectively. In Section 6, as well as in the main paper, we analyze the reasons behind the occurrence of class imbalance specific to individual stocks and discuss its impact.
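The selection procedure can be summarized with the following scikit-learn sketch; the clustering algorithm applied on top of the \(t\)-SNE embedding and the \(t\)-SNE hyperparameters are our own assumptions, as only the number of clusters is stated above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def select_representative_stocks(features: np.ndarray, tickers: list,
                                 n_clusters: int = 6) -> list:
    """Embed the per-stock features (daily/hourly return, volatility,
    outstanding shares, P/E ratio, market cap) into a 3-D latent space,
    cluster it, and return the ticker closest to each cluster centroid."""
    emb = TSNE(n_components=3, random_state=0).fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(emb)
    selected = []
    for c in range(n_clusters):
        dists = np.linalg.norm(emb - km.cluster_centers_[c], axis=1)
        selected.append(tickers[int(np.argmin(dists))])
    return selected
```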
## Appendix D Datasets
**FI-2010.** The most widespread public LOB dataset is **FI-2010**, which was proposed in 2017 by Ntakaris et al. [12] with the objective of evaluating the performance of machine learning models on the SPTP task. The dataset consists of LOB data from five Finnish companies: Kesko Oyj, Outokumpu Oyj, Sampo, Rautaruukki, and Wartsila Oyj (KESKOB, OUT1V, SAMPO, RTRKS, WRT1V, respectively) of the NASDAQ Nordic stock market. Data spans the time period from June 1st to June 14th, 2010, corresponding to 10 trading days (trading happens only on business days). About 4 million limit order messages are stored for ten levels of the LOB. The dataset has an event-based granularity, meaning that the time series records are not uniformly spaced in time. LOB observations are sampled at intervals of 10 _events_ (i.e., the submission of an order that causes a LOB update), resulting in a total of 394,337 events.
In FI-2010, the first 7 out of the 10 trading days are dedicated to the training set, while the remaining 3 days constitute the test set. We also extracted a validation set from the training set as the last 20% of the samples to perform hyper-parameter tuning, as in [26]. Moreover, as FI-2010 is already normalized, we selected the dataset with \(z\)-score normalization for our experiments.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|} \hline
**Stock** & **Daily Return** & **Hourly Return** & **Market Cap.** & **P/E Ratio** & **Train Set (\%)** \(k=5\) & **Dataset Share (\%)** \\ \hline
SOFI & \(-2.3\pm 3.1\) & \(-0.3\pm 1.2\) & \(4.26\cdot 10^{9}\) & \(-27.84\) & \(41-19-40\) & \(14.8\) \\ \hline
NFLX & \(0.6\pm 1.7\) & \(0.05\pm 0.6\) & \(1.58\cdot 10^{11}\) & \(38.28\) & \(45-5-50\) & \(21.7\) \\ \hline
CSCO & \(0.2\pm 0.7\) & \(0.02\pm 0.4\) & \(2\cdot 10^{11}\) & \(17.59\) & \(18-65-17\) & \(46.2\) \\ \hline
WING & \(-0.3\pm 3.2\) & \(-0.04\pm 0.9\) & \(6.06\cdot 10^{9}\) & \(96.87\) & \(44-7-49\) & \(6.1\) \\ \hline
SHLS & \(-2.4\pm 4.9\) & \(-0.3\pm 1.9\) & \(4.05\cdot 10^{9}\) & \(26.24\) & \(42-14-44\) & \(7.4\) \\ \hline
LSTR & \(0.1\pm 2.8\) & \(-0.03\pm 0.73\) & \(6.16\cdot 10^{9}\) & \(16.55\) & \(48-5-47\) & \(3.8\) \\ \hline
\end{tabular}
\end{table}
Table 3: Stats for the stocks (2021-07-01 – 2021-07-15).
Finally, the dataset is already provided with the labels for each horizon \(k\in\mathcal{K}=\{1,2,3,5,10\}\) by leveraging the trend definitions described in Equation 2 in the main paper. Such a labelling scheme is very sensitive to the threshold \(\theta\) regarding the resulting balancing between "upward", "downward" and "stable" trends. The authors of the dataset employed a single threshold \(\theta=0.002\) for all horizons, but it balances only the case of \(k=5\). Varying the horizon \(k\in\mathcal{K}\), the class imbalance occurs as shown in Table 4. Class imbalance is not addressed to guarantee a fair robustness evaluation since the considered works do not claim to have done so.
**LOB-2021 and LOB-2022.** To study the generalizability of the 15 models, we extracted the market observations (see Section C) from the LOBSTER dataset in two time periods: _July 2021_ (2021-07-01 to 2021-07-15, ten trading days) (see Table 3), making up **LOB-2021**, and _February 2022_ (2022-02-01 to 2022-02-15, ten trading days), making up **LOB-2022**. These two periods have proven to be different in terms of volatility, as the impact of the war in Ukraine has made the market more volatile and unstable. The mid-price trends for these two periods and for the selected stocks are depicted in Figure 4. We build the two datasets associated with these two time periods resembling the structure of the FI-2010 dataset, described in the previous section and proposed in [12].
To generate the LOB-2021/2022 datasets, we utilize the LOBSTER data, which consists of LOB records (i.e., \(\mathbb{L}(t)\) vectors) resulting from events caused by traders at the exchange. LOBSTER associates these records with the specific events that caused changes in the LOB. We isolated the following types of events: order _submissions_, _deletions_, and _executions_, which account for almost all the events in the markets.
For each stock in the set \(\mathcal{S}\) we construct a _stock time series_ of LOB records \(\mathbb{L}_{s}(t)\in\mathbb{R}^{4L}\), with \(L=10,s\in\mathcal{S}\), \(N_{s}\) being the amount of records of the stock \(s\) in the considered temporal interval (e.g., (2021-07-01, 2021-07-15) for LOB-2021), \(t\in[1,N_{s}]\). We recall that the \(4\cdot 10\) features represent the prices and volumes on the buy and sell sides for the ten levels of the LOB. We highlight that the time series \(\mathbb{L}_{s}\) are non-uniform in time since LOB events can occur at irregular intervals driven by traders' actions. We do not impose temporal uniformization. Instead, we sample the market observation every ten events, as for FI-2010. Furthermore, we do not account for liquidity beyond the 10th order level in the LOB. This approximation is necessary to ensure computational tractability while retaining the most influential levels. It is a commonly employed technique in stock market prediction models, also employed in FI-2010.
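The event filtering and sampling step can be sketched as follows; the event-type codes follow the standard LOBSTER message format, and the column naming is our own assumption.

```python
import pandas as pd

# LOBSTER message types: 1 = submission, 2 = partial deletion, 3 = total
# deletion, 4 = execution of a visible order, 5 = execution of a hidden order.
KEPT_EVENT_TYPES = {1, 2, 3, 4, 5}

def sample_lob_records(messages: pd.DataFrame, orderbook: pd.DataFrame,
                       every: int = 10) -> pd.DataFrame:
    """Keep only submission/deletion/execution events and retain one LOB
    record every `every` kept events, as done for FI-2010."""
    mask = messages["event_type"].isin(KEPT_EVENT_TYPES).to_numpy()
    kept = orderbook.loc[mask]
    return kept.iloc[every - 1 :: every]
```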
\begin{table}
\begin{tabular}{|c||c|c|c|} \hline Horizon \(k\) & Upward (\%) & Stationary (\%) & Downward (\%) \\ \hline
1 & 18 & 63 & 19 \\ \hline
2 & 25 & 50 & 25 \\ \hline
3 & 28 & 43 & 29 \\ \hline
5 & 32 & 35 & 33 \\ \hline
10 & 37 & 25 & 38 \\ \hline \end{tabular}
\end{table}
Table 4: Class balancing on FI-2010.
Figure 4: Selected stocks' mid-price normalized by the mid-price of the first day.
Each stock time series \(\mathbb{L}_{s}\) is split into _training_, _validation_, and _testing_ sets using a 6-2-2 days split. Normalization is performed on stock time series using a \(z\)-score approach, separately normalizing the prices and volumes. The mean and standard deviation are calculated from the union of the training and validation splits for all stock time series. These statistics are then used to normalize the entire dataset, including the test splits. The final dataset is constructed by vertical stacking (i.e., concatenating along the rows) the six training splits (i.e., one for each stock), six validation splits, and six test splits in this order.
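A minimal sketch of this normalization step is given below; the assumption that prices and volumes occupy alternating columns of the \(4\cdot 10\) features is ours and may differ from the actual storage layout.

```python
import numpy as np

def zscore_normalize(splits: dict) -> dict:
    """Normalize prices and volumes separately, using statistics computed on
    the union of the training and validation splits only."""
    ref = np.concatenate([splits["train"], splits["val"]], axis=0)
    price_cols = np.arange(0, ref.shape[1], 2)   # assumed: even columns are prices
    volume_cols = np.arange(1, ref.shape[1], 2)  # assumed: odd columns are volumes
    out = {}
    for name, data in splits.items():
        norm = data.astype(float)
        for cols in (price_cols, volume_cols):
            mu, sigma = ref[:, cols].mean(), ref[:, cols].std()
            norm[:, cols] = (data[:, cols] - mu) / sigma
        out[name] = norm
    return out
```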
The dataset is used to extract market observations with a sliding window approach, as explained in Section 3 of the main paper. Labelling market observations is accomplished by leveraging the trend definitions described in Equation 2 of the main paper, mapping market observations to the corresponding trend based on a predefined prediction horizon \(k\in\mathcal{K}\). It is important to note that for each prediction horizon \(k\in\mathcal{K}\), a new dataset is generated. Consequently, LOB-2021 and LOB-2022 consist of five (i.e., \(|\mathcal{K}|\)) distinct datasets, each corresponding to one of the five prediction horizons.
## Appendix E Hyperparameters Search
For evaluating the _robustness_ of the surveyed models, we used the hyperparameters reported in the original papers whenever they were available. However, we encountered cases where hyperparameters were not declared at all, such as in LSTM [27] and CNN1 [28], while in other cases, including CNNLSTM [31], AXIALLOB [37], ATNBOF [36] and DAIN [30] only partial information was provided. To address these gaps, we performed a grid search exploring different values for the **batch size**, including \(\{16,32,64,128,256\}\) and the **learning rate**, including \(\{0.01,0.001,0.0001,0.00001\}\).
Regarding the _generalizability_ experiment, we found that the majority of models using the hyperparameters from the robustness analysis performed poorly on the LOB-2021/2022 datasets. We conducted a comprehensive hyperparameter search on horizon \(k=5\) (which is the most balanced) using a grid search approach for all 16 models. For this search, we maintained the same number of epochs and optimizer used in the robustness analysis, while searching for batch size and learning rate using the same domains mentioned above. For a complete overview of the hyperparameters utilized in our experiments, refer to Table 5.
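The search can be expressed compactly as below; the `train_and_validate` callable is a placeholder for one full training run of a given model.

```python
from itertools import product

BATCH_SIZES = [16, 32, 64, 128, 256]
LEARNING_RATES = [0.01, 0.001, 0.0001, 0.00001]

def grid_search(train_and_validate):
    """Try every (batch size, learning rate) pair and keep the configuration
    with the best validation F1 score."""
    best_f1, best_cfg = -1.0, None
    for bs, lr in product(BATCH_SIZES, LEARNING_RATES):
        f1 = train_and_validate(batch_size=bs, lr=lr)
        if f1 > best_f1:
            best_f1, best_cfg = f1, (bs, lr)
    return best_cfg, best_f1
```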
## Appendix F Additional Experimental Results
**Robustness.** Figure 5 depicts the F1 score, accuracy, precision, and recall of the surveyed models obtained through our framework called LOBCAST, for the time horizons \(\mathcal{K}=\{1,2,3,5,10\}\). Most of the models show similar behaviour with respect to the prediction horizons. In particular, regarding the F1-score, the worst performance is obtained for \(k=2\), after which there is an increasing trend as the prediction horizon increases. This might sound counterintuitive, as it consists of forecasting the
\begin{table}
\begin{tabular}{c|ccccc|ccccc} \hline \hline
 & \multicolumn{5}{c|}{FI-2010 (Robustness)} & \multicolumn{5}{c}{LOB-2021/2022 (Generalizability)} \\ \hline
Model & Learning Rate & Optimizer & Batch Size & Epochs & Dropout & Learning Rate & Optimizer & Batch Size & Epochs & Dropout \\ \hline
LSTM & 0.001 & Adam & 32 & 100 & - & 0.0001 & Adam & 64 & 100 & - \\
MLP & 0.001 & Adam & 64 & 100 & - & 0.00001 & Adam & 64 & 100 & - \\
CNN1 & 0.001 & Adam & 64 & 100 & - & 0.0001 & Adam & 32 & 100 & - \\
CTABL & 0.01 & Adam & 256 & 200 & - & 0.001 & Adam & 64 & 200 & - \\
DAIN & 0.0001 & RMSprop & 32 & 100 & 0.5 & 0.0001 & RMSprop & 64 & 100 & 0.5 \\
DEEPLOB & 0.01 & Adam & 32 & 100 & - & 0.01 & Adam & 32 & 100 & - \\
CNNLSTM & 0.001 & RMSprop & 32 & 20 & 0.1 & 0.001 & RMSprop & 128 & 100 & 0.1 \\
CNN2 & 0.001 & RMSprop & 32 & 100 & - & 0.001 & RMSprop & 128 & 100 & - \\
TRANSLOB & 0.0001 & Adam & 32 & 150 & - & 0.001 & Adam & 128 & 100 & - \\
TLONBoF & 0.0001 & Adam & 128 & 100 & - & 0.00001 & Adam & 32 & 100 & - \\
BINCTABL & 0.001 & Adam & 128 & 200 & - & 0.001 & Adam & 32 & 200 & - \\
DEEPLOBATT & 0.001 & Adam & 32 & 100 & - & 0.0001 & Adam & 128 & 100 & - \\
AXIALLOB & 0.01 & SGD & 64 & 50 & - & 0.01 & SGD & 64 & 50 & - \\
ATNBoF & 0.001 & Adam & 128 & 80 & 0.2 & 0.00001 & Adam & 32 & 80 & 0.2 \\
DLA & 0.01 & Adam & 256 & 100 & - & 0.001 & Adam & 64 & 100 & - \\ \hline
METALOB & 0.0001 & SGD & 64 & 100 & - & 0.0001 & SGD & 64 & 100 & - \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Hyperparameters adopted in our experiments.
price trend in a more distant future. However, for very short horizons, the labelling system adopted may be susceptible to noise affecting the model's capability to extract relevant patterns.
Figure 6 shows a bar chart representing the F1-score of the 17 models reproduced using the LOBCAST framework for the five prediction horizons \(\mathcal{K}\). The plot shows black empty bars representing the declared performance in the corresponding paper, when applicable. In the figure, the number on the bar represents the obtained performance in LOBCAST on the FI-2010 dataset, and the value in brackets indicates the difference between the obtained performance and the originally declared performance in the respective paper. We highlight that not all papers declare their performance for all the horizons. The figure clearly highlights how robust the considered models are. Surprisingly, for CNN2 and CNNLSTM, our experiments achieved noticeably higher performance than the one declared in the original paper. We also observe that the BINCTABL model consistently emerges as one of the top-performing models across all the horizons. Moreover, its results closely align with the performance reported in the paper presenting it.
The largest discrepancy is observed for TRANSLOB and ATNBoF (or TNBoF-TA), whose average performances differ by 28% from the original results. On average, ATNBoF achieves an F1-score of only \(40.9\%\). This substantial deviation from the claimed performance highlights the challenges and limitations associated with this particular model.
Figure 7 shows the agreement matrix of the models for the horizon \(k=5\). As expected, the highest agreement (\(\approx\)80%) is among the best-performing models, namely BINCTABL, AXIALLOB, DEEPLOB, CTABL, DLA and DEEPLOBATT. The model that exhibits less correlation with the other models is MLP.
The best-performing model in our benchmark is BINCTABL reaching 92.1% of F1-Score on time horizon \(k=10\). Specifically, BINCTABL introduces an Adaptive Bilinear Normalization layer
Figure 5: Evaluation metrics on different horizons \(\mathcal{K}\) on FI-2010 dataset.
Figure 6: F1-score on FI-2010.
to CTABL, enabling joint normalization of the input time series along both temporal and feature dimensions. This enhancement yields a remarkable improvement, with an average increase of \(9.2\%\) in the F1-score compared to the second-best model (DLA). Interestingly, BINCTABL is composed of only 11,446 parameters, which makes it very fast at inference time (0.0005 s).
**Generalizability.** In this section, we provide additional results on the generalizability of the models. We evaluate the performance of the models on two different datasets: LOB-2021 and LOB-2022.
Figure 7: Agreement matrix FI-2010 in the horizon \(k=5\).
The evaluation metrics used include F1-score, accuracy, precision, and recall, which are displayed in Figure 8 for LOB-2021. The plot for LOB-2022 is omitted since it shows similar properties.
We observe that most models exhibit a similar trend in both LOB-2021 and LOB-2022 datasets. However, the performance curves in these generalizability tests differ from the results obtained on the FI-2010 dataset, shown in Figure 5. Specifically, for the LOB-2021/2022 datasets, the F1-score of most models shows an increasing trend as the prediction horizon increases up to \(k=3\), after which it starts to decrease.
To ease readability, in Table 6 we report the F1-score of all the models, horizons and periods.
The performance of the models, as reported by the authors of the selected paper, exhibits changes when evaluated on the LOB-2021 and LOB-2022 datasets. These changes show varying degrees of generalizability among the models.
Notably, the ATNBoF model demonstrates the most substantial improvement with respect to the declared performances, showing an average increase of 12.2% across all prediction horizons. A similar improvement is exhibited by MLP and TLONBoF. Despite this improvement, ATNBoF still exhibits the lowest overall performance with an average score of 53.1%. It is worth mentioning that ATNBoF is the most sensitive to random initialization.
In contrast, the other models experience a significant decline in performance when evaluated on the LOB-2021 and LOB-2022 datasets. For example, the previously best-performing model on the FI-2010 dataset, BINCTABL, shows an average decrease in F1-score of approximately 19.6% across all prediction horizons. This decline results in a generalizability score of 73.5% (as mentioned in Table 2 of the main paper). However, despite this decline, BINCTABL remains the top-performing model when evaluated on the LOB-2021 dataset on almost all the prediction horizons. On these datasets, it exhibits performance similar to the DEEPLOB and DEEPLOBATT models.
Figure 9 shows the agreement matrix on LOB-2022. Considering the more flattened performances of the models on the LOB-2021/2022 datasets compared to the FI-2010 dataset, the agreement percentages among the models are consistently high, and no distinct patterns are observed. Unlike FI-2010, where METALOB produced the same prediction as BINCTABL 82.8% of the time, on LOB-2022 (and also LOB-2021) METALOB showed no preference for any model, resulting in a balanced agreement rate (\(\approx 33\%\)) among all models. We decided not to include the agreement matrix of LOB-2021 because it was similar to that of LOB-2022.
In Figure 10, we present the results of our tests for the time horizon \(k=5\) on each individual stock from the LOB-2021 dataset. Among the tested stocks, CSCO stands out as yielding the highest
Table 6: F1-score of all the models for each prediction horizon \(k\in\mathcal{K}\) and evaluation setting: declared in the original papers, obtained with LOBCAST on FI-2010, and obtained on LOB-2021 and LOB-2022.
performance. This may be attributed to the high stationarity of CSCO (balance 18-65-17% in the train set), indicating more stable and predictable behaviour. This hypothesis is supported by the confusion matrices reported in Section 4.4 of the main paper, which show the best performance in the stationary class for the BINCTABL model. We highlight that it was impossible to extract the per-stock information on the FI-2010 dataset because it was already assembled, and the authors did not provide information on that procedure.
**Labelling.** The experiments shown above highlight that the models' performance does not exhibit a clear trend with respect to the prediction horizon. The labelling method is probably the cause of this phenomenon: classifying trends based on the raw mid-price tends to incorporate noise at the shorter horizons. This hypothesis is supported by the work of Zhang et al. [26]: they generated a dataset using an alternative labelling method that relies on the mean of the previous and next \(k\) mid-prices to identify trends. Interestingly, they observed an inverse trend in performance with respect to the horizons; the best performances were achieved at the shortest horizon and deteriorated as the horizon increased. While exploring various labelling techniques is beyond the scope of this benchmark, we provide an initial investigation in this direction. Specifically, focusing on \(k=5\) in LOB-2021, we select two stocks, NFLX and SOFI.
Based on Equations 1 and 2 of the main paper, we can define \(\theta_{N}\) and \(\theta_{S}\) as the thresholds that balance the occurrences of the classes for the stocks NFLX and SOFI, respectively. Similarly, we can define \(\theta_{0}\) as the threshold that balances the occurrences of the classes for the ensemble of the six stocks within the dataset.
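One simple way to compute such a balancing threshold, sketched below under the assumption that the quantity compared against \(\theta\) in Equation 2 is roughly symmetric around zero, is to pick the value below which one third of its absolute values fall.

```python
import numpy as np

def balancing_threshold(l_values: np.ndarray) -> float:
    """Return a threshold theta such that the stationary class (|l| <= theta)
    covers roughly one third of the observations, which balances the three
    classes when the distribution of l is approximately symmetric."""
    return float(np.quantile(np.abs(l_values), 1.0 / 3.0))
```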
Figure 11 shows the results of three different training settings: (i.) **ALL (\(\theta_{0}\))** represents the training of the models over the ensemble of all the six stocks using the threshold \(\theta_{0}\); (ii.) **NFLX (\(\theta_{0}\)) (SOFI (\(\theta_{0}\)))** represents the training of the models over NFLX (SOFI) stock using the threshold \(\theta_{0}\). (iii.)
Figure 8: Evaluation metrics on different horizons \(\mathcal{K}\) on LOB-2021.
Figure 9: Agreement matrix on LOB-2022.
Figure 11: Different labelling strategies on NFLX and SOFI stocks for \(k=5\).
Figure 10: F1-Score per stock, time horizon \(k=5\), on LOB-2021.
**NFLX** (\(\theta_{N}\)) (**SOFI** (\(\theta_{S}\))) represents the training of the models over NFLX (SOFI) stock using the threshold \(\theta_{N}\).
In the case of SOFI, all methods, except for BINCTABL, achieve the highest performance in the **ALL (\(\theta_{\mathbf{0}}\))** setting. This indicates that these models are able to extract useful signals from other stocks, reducing overfitting and improving overall performance. On the other hand, comparing the **SOFI** (\(\theta_{0}\)) and **SOFI** (\(\theta_{S}\)) settings does not provide significant insights. This suggests that the balancing of the three classes is not crucial for achieving higher performance. This is even more the case for NFLX in Figure 11(a), considering that the imbalance due to \(\theta_{0}\) is much higher (see Table 3).
These results indicate that the labelling mechanism should be revised from its current definition and be agnostic with respect to the balancing involved. Trends definitions should not solely depend on the magnitude of the future price shift relative to the current price. Other factors, such as persistence over time and volume considerations, should also be taken into account. A more comprehensive discussion of the limitations and challenges associated with the labelling mechanisms can be found in the main paper, particularly in the final discussion and conclusions section.
## Appendix G Profit Analysis
As a final benchmark test, we conducted a trading simulation using our framework, relying on the Backtesting.py Python library 5. As highlighted by [24], most of the existing literature in the SPTP field neglects backtesting, even though it is essential for evaluating the performance of algorithmic trading strategies and for potential real-world use.
Footnote 5: [https://kernc.github.io/backtesting.py/](https://kernc.github.io/backtesting.py/)
We performed backtesting using the same period as the test set of the LOB-2021 dataset, i.e., from 2021-07-13 to 2021-07-15. To perform backtesting, we generated an Open High Low Close (OHLC) time series with a 10 events period. The OHLC is an aggregation technique to summarize periods of a time series, e.g., minutes, hours, days, or a number of events (10 in this case). Each data point of the series represents four aggregates of the considered period. The _Open_ represents the first price of the period; _High_ is the highest price of the period; _Low_ is the lowest price of the period; _Close_ is the last price of the period.
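A small pandas sketch of this aggregation (our own) is:

```python
import numpy as np
import pandas as pd

def ohlc_every_n_events(mid_prices: pd.Series, n: int = 10) -> pd.DataFrame:
    """Aggregate a mid-price series into OHLC bars of n consecutive events."""
    groups = np.arange(len(mid_prices)) // n
    bars = mid_prices.groupby(groups).agg(["first", "max", "min", "last"])
    bars.columns = ["Open", "High", "Low", "Close"]
    return bars
```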
We base our trading simulation on the methodology of the seminal paper [26] in this field, in which the authors conducted a similar experiment. We established certain parameters for our simulation. Firstly, we set the number of shares per trade to a fixed value of 1, simplifying our analysis and assuming a negligible market impact. Furthermore, our simulated trader begins with an initial capital of $10,000, and we make the assumption of no transaction fees.
The _trading strategy_ relies on the models and operates by generating signals every 10 events to predict subsequent price movements. These signals, categorized as _up_, _stationary_, or _down_, determine the trading action. When the signal is _up_, the simulated trader places a buy order. Conversely, if the signal is _down_ and the trader currently holds a long position, he places a sell order. In cases where the signal is _stationary_, the trader takes no action. The orders are filled at the next open price.
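A minimal Backtesting.py sketch of this strategy follows; the `Signal` column (holding +1, 0, or -1 for _up_, _stationary_, _down_) is assumed to be precomputed by a trained model and attached to the OHLC frame.

```python
from backtesting import Backtest, Strategy

class ModelSignalStrategy(Strategy):
    """Buy one share on an 'up' signal, close any long position on a 'down'
    signal, and do nothing on a 'stationary' signal."""

    def init(self):
        pass

    def next(self):
        signal = self.data.Signal[-1]
        if signal == 1:
            self.buy(size=1)
        elif signal == -1 and self.position:
            self.position.close()

# ohlc is a DataFrame with Open/High/Low/Close and Signal columns,
# e.g. the 10-event bars built above:
# bt = Backtest(ohlc, ModelSignalStrategy, cash=10_000, commission=0.0)
# stats = bt.run()
```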
The results of the trading simulation for each stock are presented in Figure 12. The strongest correlation observed is between the daily returns of the stocks, as shown in Table 3, and the returns of the strategy described above. In fact, the two stocks with the highest positive daily returns (namely LSTR and NFLX) are the only ones for which the strategy is profitable. On the other hand, the two stocks with the highest negative daily returns (SOFI and SHLS) are the ones for which most models show a negative return. Another correlation, albeit less strong, is between the volatility of the stocks and the return of the models. Specifically, lower volatility is associated with higher model returns.
We recognize the limitations of this simulation. For instance, we do not perform portfolio optimization or position sizing, we assume trade execution at the mid-price, and we ignore transaction costs; however, a realistic and sophisticated algorithmic trading simulation is beyond the scope of this study and remains an interesting aspect for future research.
Figure 12: Distribution of returns on five seeds. |
2304.09044 | Construction of coarse-grained molecular dynamics with many-body
non-Markovian memory | We introduce a machine-learning-based coarse-grained molecular dynamics
(CGMD) model that faithfully retains the many-body nature of the
inter-molecular dissipative interactions. Unlike common empirical CG models,
the present model is constructed based on the Mori-Zwanzig formalism and
naturally inherits the heterogeneous state-dependent memory term rather than
matching the mean-field metrics such as the velocity auto-correlation function.
Numerical results show that preserving the many-body nature of the memory term
is crucial for predicting the collective transport and diffusion processes,
where empirical forms generally show limitations. | Liyao Lyu, Huan Lei | 2023-04-18T15:05:54Z | http://arxiv.org/abs/2304.09044v1 | # Construction of coarse-grained molecular dynamics with many-body non-Markovian memory
###### Abstract
We introduce a machine-learning-based coarse-grained molecular dynamics (CGMD) model that faithfully retains the many-body nature of the inter-molecular dissipative interactions. Unlike the common empirical CG models, the present model is constructed based on the Mori-Zwanzig formalism and naturally inherits the heterogeneous state-dependent memory term rather than matching the mean-field metrics such as the velocity auto-correlation function. Numerical results show that preserving the many-body nature of the memory term is crucial for predicting the collective transport and diffusion processes, where empirical forms generally show limitations.
Introduction
Accurately predicting the collective behavior of multi-scale physical systems is a long-standing problem that requires the integrated modeling of the molecular-level interactions across multiple scales [1]. However, for systems without clear scale separation, there often exists no such a set of simple collective variables by which we can formulate the evolution in an analytic and self-determined way. One canonical example is coarse-grained molecular dynamics (CGMD). While the reduced degrees of freedom (DoFs) enable us to achieve a broader range of the spatio-temporal scale, the construction of truly reliable CG models remains highly non-trivial. A significant amount of work [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13] (see also review [14]), including recent machine learning (ML)-based approaches [15; 16; 17; 18; 19], have been devoted to constructing the conservative CG potential for retaining consistent static and thermodynamic properties. However, accurate prediction of the CG dynamics further relies on faithfully modeling a memory term that represents the energy-dissipation processes arising from the unresolved DoFs; the governing equations generally become non-Markovian on the CG scale. Moreover, such non-Markovian term often depends on the resolved variables in a complex way [20; 21; 22; 23; 24; 25; 26] where the analytic formulation is generally unknown. Existing approaches often rely on empirical models such as Brownian motion [27], Langevin dynamics [28], and dissipative particle dynamics (DPD) [29; 30]. Despite their broad applications, studies [31; 32; 33] based on direct construction from full MD show that the empirical (e.g., pairwise additive) forms can be insufficient to capture the state-dependent energy-dissipation processes due to the many-body and non-Markovian effects. Recent efforts [34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48] model the memory term based on the generalized Langevin equation (GLE) and its variants (see also review [49]). While the velocity auto-correlation function (VACF) is often used as the target quantity for model parameterization, it is essentially a metric of the background dissipation under mean-field approximation. The homogeneous kernel overlooks the heterogeneity of the energy dissipation among the CG particles stemming from the many-body nature of the marginal probability density function of the CG variables. This limitation imposes a fundamental challenge for accurately modeling the local irreversible responses as well as the transport and diffusion processes on the collective scale.
This work aims to fill the gap with a new CG model that faithfully entails the state-dependent non-Markovian memory and the coherent noise. The model formulation can be loosely viewed as an extended dynamics of the CG variables joint with a set of non-Markovian features that embodies the many-body nature of the energy dissipation among the CG particles. Specifically, we treat
each CG particle as an agent and seek a set of symmetry-preserving neural network (NN) representations that directly map its local environments to the non-Markovian friction interactions, and thereby circumvent the exhausting efforts of fitting the individual memory terms with a unified empirical form. Different from the ML-based potential model [19], the memory terms are represented by NNs in form of second-order tensors that strictly preserve the rotational symmetry and the positive-definite constraint. Coherent noise can be introduced satisfying the second fluctuation-dissipation theorem and retaining consistent invariant distribution. Rather than matching the VACF, the model is trained based on the Mori-Zwanzig (MZ) projection formalism such that the effects of the unresolved interactions can be seamlessly inherited. We emphasize that the construction is not merely for mathematical rigor. Numerical results of a polymer molecule system show that the CG models with empirical memory forms are generally insufficient to capture heterogeneous inter-molecular dissipation that leads to inaccurate cross-correlation functions among the particles. Fortunately, the present model can reproduce both the auto- and cross-correlation functions. More importantly, it accurately predicts the challenging collective dynamics characterized by the hydrodynamic mode correlation and the van Hove function [50] and shows the promise to predict the meso-scale transport and diffusion processes with molecular-level fidelity.
## II Methods
Let us consider a full MD system consisting of \(M\) molecules with a total number of \(N\) atoms. The phase space vector is denoted by \(\mathbf{z}=[\mathbf{q},\mathbf{p}]\), where \(\mathbf{q},\mathbf{p}\in\mathbb{R}^{3N}\) represent the position and momentum vector, respectively. Given \(\mathbf{z}(0)=\mathbf{z}_{0}\), the evolution follows \(\mathbf{z}(t)=\mathrm{e}^{\mathcal{L}}\mathbf{z}_{0}\), where \(\mathcal{L}\) is the Liouville operator determined by the Hamiltonian \(H(\mathbf{z})\). The CG variables are defined by representing each molecule as a CG particle, i.e., \(\phi(\mathbf{z})=\left[\phi^{Q}(\mathbf{z}),\phi^{P}(\mathbf{z})\right]\), where \(\phi^{Q}(\mathbf{z})=[\mathbf{Q}_{1},\mathbf{Q}_{2},\cdots,\mathbf{Q}_{M}]\) and \(\phi^{P}(\mathbf{z})=[\mathbf{P}_{1},\mathbf{P}_{2},\cdots,\mathbf{P}_{M}]\) represent the center of mass and the total momentum of individual molecules, respectively. \(\mathbf{Z}(t)=[\mathbf{Q}(t),\mathbf{P}(t)]\) denote the map \(\phi(\mathbf{z}(t))\) with \(\mathbf{z}(0)=\mathbf{z}_{0}\). To construct the reduced model, we define the Zwanzig projection operator as the conditional expectation with a fixed CG vector \(\mathbf{Z}\), i.e., \(\mathcal{P}_{\mathbf{Z}}f(\mathbf{z}):=\mathbb{E}[f(\mathbf{z})|\phi(\mathbf{z}) =\mathbf{Z}]\) under conditional density proportional to \(\delta(\phi(\mathbf{z})-\mathbf{Z})\mathrm{e}^{-\beta\mathrm{H}(\mathbf{z})}\) and its orthogonal operator \(\mathcal{D}_{\mathbf{Z}}=\mathbf{I}-\mathcal{P}_{\mathbf{Z}}\).
Using Zwanzig's formalism [51], the dynamics of \(\mathbf{Z}(t)\) (see Appendix A) can be written as
\[\dot{\mathbf{Q}} =\mathbf{M}^{-1}\mathbf{P} \tag{1}\] \[\dot{\mathbf{P}} =-\nabla U(\mathbf{Q})+\int_{0}^{t}\mathbf{K}(\mathbf{Q}(s),t-s) \mathbf{V}(s)\,\mathrm{d}s+\mathbf{R}(t),\]
where \(\mathbf{M}\) is the mass matrix and \(\mathbf{V}=\mathbf{M}^{-1}\mathbf{P}\) is the velocity. \(U(\mathbf{Q})\) is the free energy under \(\phi^{Q}(\mathbf{z})\equiv\mathbf{Q}\). \(\mathbf{K}(\mathbf{Q},t)=\mathcal{P}_{\mathbf{Z}}[(\mathbf{e}^{\mathcal{Q}_{ \mathbf{Z}}\mathcal{L}_{t}}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P})( \mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P})^{T}]\) is the memory representing the coupling between the CG and unresolved variables, and \(\mathbf{R}(t)\) is the fluctuation force.
Eq. (1) provides the starting point to derive the various CG models. Direct evaluation of \(\mathbf{K}(\mathbf{Q},t)\) imposes a challenge as it relies on solving the full-dimensional orthogonal dynamics \(\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}t}\). Further simplification \(\mathbf{K}(\mathbf{Q},t)\approx\theta(t)\) leads to the common GLE with a homogeneous kernel. Alternatively, the pairwise approximation \([\mathbf{K}(\mathbf{Q},t)]_{ij}\approx\gamma(Q_{ij})\delta(t)\) or \(\gamma(Q_{ij})\theta(t)\) leads to the standard DPD (M-DPD) and non-Markovian variants (NM-DPD), respectively. However, as shown below, such empirical forms are limited in capturing the state-dependence that turns out to be crucial for the dynamics on the collective scale, which motivates the present model retaining the many-body nature of \(\mathbf{K}(\mathbf{Q},t)\).
To elaborate the essential idea, let us start with the Markovian approximation \(\mathbf{K}(\mathbf{Q},t)\approx-\mathbf{\Gamma}(\mathbf{Q})\delta(t)\), where \(\mathbf{\Gamma}(\mathbf{Q})=\mathbf{\Xi}(\mathbf{Q})\mathbf{\Xi}(\mathbf{Q}) ^{T}\) is the friction tensor preserving the semi-positive definite condition, and \(\mathbf{\Xi}(\mathbf{Q})\) needs to retain the translational, rotational, and permutational symmetry, i.e.,
\[\mathbf{\Xi}_{ij}(\mathbf{Q}_{1}+\mathbf{b},\cdots,\mathbf{Q}_{M }+\mathbf{b})=\mathbf{\Xi}_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{M}) \tag{2}\] \[\mathbf{\Xi}_{ij}(\mathcal{U}\,\mathbf{Q}_{1},\cdots,\mathcal{U} \,\mathbf{Q}_{M})=\mathcal{U}\,\mathbf{\Xi}_{ij}(\mathbf{Q}_{1},\cdots, \mathbf{Q}_{M})\mathcal{U}^{T}\] \[\mathbf{\Xi}_{\sigma(i)\sigma(j)}(\mathbf{Q}_{\sigma(1)},\cdots, \mathbf{Q}_{\sigma(M)})=\mathbf{\Xi}_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{ M}),\]
where \(\mathbf{\Xi}_{ij}\in\mathbb{R}^{3\times 3}\) represents the friction contribution of \(j\)-th particle on \(i\)-th particle, \(\mathbf{b}\in\mathbb{R}^{3}\) is a translation vector, \(\mathcal{U}\) is a unitary matrix, and \(\sigma(\cdot)\) is a permutation function.
To inherit the many-body interactions, we map the local environment of each CG particle into a set of generalized coordinates, i.e., \(\hat{\mathbf{Q}}_{i}^{k}=\mathbf{Q}_{i}+\sum_{l\in\mathcal{N}_{i}}f^{k}(Q_{il })\mathbf{Q}_{il}\), where \(\mathbf{f}\colon\mathbb{R}\to\mathbb{R}^{K}\) is an encoder function to be learned, and \(\mathcal{N}_{i}=\{l|Q_{il}<r_{c}\}\) is the neighboring index set of the \(i\)-th particle within a cut-off distance \(r_{c}\). Accordingly, \(\hat{\mathbf{Q}}_{ij}\in\mathbb{R}^{3\times K}\) represents a set of features that encode the intermolecular configurations beyond the pairwise approximation. The \(k\)-th column \(\hat{\mathbf{Q}}_{ij}^{k}=\hat{\mathbf{Q}}_{i}^{k}-\hat{\mathbf{Q}}_{j}^{k}\) preserves the translational and permutational invariance, by which we represent \(\mathbf{\Xi}_{ij}\) by
\[\mathbf{\Xi}_{ij}=\sum_{k=1}^{K}h_{k}(\hat{\mathbf{Q}}_{ij}^{T}\hat{\mathbf{Q} }_{ij})\hat{\mathbf{Q}}_{ij}^{k}\otimes\hat{\mathbf{Q}}_{ij}^{k}+h_{0}(\hat{ \mathbf{Q}}_{ij}^{T}\hat{\mathbf{Q}}_{ij})\mathbf{I} \tag{3}\]
where \(h:\mathbb{R}^{K\times K}\rightarrow\mathbb{R}^{K+1}\) are encoder functions which will be represented by NNs. For \(i=j\), we have \(\mathbf{\Xi}_{ii}=-\sum_{j\in\mathcal{N}_{i}}\mathbf{\Xi}_{ij}\) based on the Newton's third law. We refer to Appendix E for the proof of the symmetry constraint (2).
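A schematic NumPy implementation of this construction is given below; the encoders `f` and `h` stand for the trained networks, the indexing convention for the \(K+1\) outputs of `h` is ours, and the sketch is meant only to make the tensorial structure of Eq. (3) explicit.

```python
import numpy as np

def friction_blocks(Q, neighbors, f, h):
    """Assemble the 3x3 blocks Xi_ij of Eq. (3) from the generalized
    coordinates encoding each CG particle's local environment.

    Q         : (M, 3) array of CG coordinates.
    neighbors : dict mapping i to the neighbor indices within the cut-off.
    f         : callable, f(r) -> (K,) encoder weights.
    h         : callable, h(G) -> (K+1,) coefficients from the K x K Gram
                matrix, ordered here as (h_0, h_1, ..., h_K).
    """
    M = Q.shape[0]
    K = len(f(1.0))
    # generalized coordinates \hat{Q}_i^k, stored as (M, K, 3)
    Q_hat = np.repeat(Q[:, None, :].astype(float), K, axis=1)
    for i in range(M):
        for l in neighbors[i]:
            q_il = Q[i] - Q[l]
            Q_hat[i] += f(np.linalg.norm(q_il))[:, None] * q_il[None, :]

    Xi = {}
    for i in range(M):
        for j in neighbors[i]:
            D = Q_hat[i] - Q_hat[j]          # (K, 3) feature differences
            coeff = h(D @ D.T)               # (K+1,) scalar coefficients
            block = coeff[0] * np.eye(3)     # isotropic h_0 term
            for k in range(K):
                block += coeff[k + 1] * np.outer(D[k], D[k])
            Xi[(i, j)] = block
    # Newton's third law fixes the diagonal blocks Xi_ii = -sum_j Xi_ij
    for i in range(M):
        Xi[(i, i)] = -sum((Xi[(i, j)] for j in neighbors[i]), np.zeros((3, 3)))
    return Xi
```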
Eq. (3) entails the state-dependency of the memory term \(\mathbf{K}(\mathbf{Z},t)\) under the Markovian approximation. To incorporate the non-Markovian effect, we embed the memory term within an extended Markovian dynamics [35] (see also Ref. [47]). Specifically, we seek a set of non-Markovian features \(\mathbf{\zeta}:=[\mathbf{\zeta}_{1},\mathbf{\zeta}_{2},\cdots,\mathbf{\zeta}_{n}]\), and construct the joint dynamics of \([\mathbf{Z},\mathbf{\zeta}]\) by imposing the many-body form of the friction tensor between \(\mathbf{P}\) and \(\mathbf{\zeta}\), i.e.,
\[\dot{\mathbf{Q}} =\mathbf{M}^{-1}\mathbf{P} \tag{4}\] \[\dot{\mathbf{P}} =-\nabla U(\mathbf{Q})+\mathbf{\Xi}(\mathbf{Q})\mathbf{\zeta}\] \[\dot{\mathbf{\zeta}} =-\mathbf{\Xi}(\mathbf{Q})^{T}\mathbf{V}-\mathbf{\Lambda}\mathbf{\zeta}+\mathbf{ \xi}(t),\]
where \(\mathbf{\Xi}=\left[\mathbf{\Xi}^{1}\,\mathbf{\Xi}^{2}\cdots\mathbf{\Xi}^{n}\right]\) and each sub-matrix takes the form (3), constructed by \(\{\mathbf{f}^{i}(\cdot),\mathbf{h}^{i}(\cdot)\}_{i=1}^{n}\) respectively. \(\mathbf{\Lambda}=\mathbf{\hat{\Lambda}}\otimes\mathbf{I}\) represents the coupling among the \(n\) features, where \(\mathbf{I}\in\mathbb{R}^{3N\times 3N}\) is the identity matrix and \(\mathbf{\hat{\Lambda}}\in\mathbb{R}^{n\times n}\) needs to satisfy the Lyapunov stability condition \(\mathbf{\hat{\Lambda}}+\mathbf{\hat{\Lambda}}^{T}\geq 0\). Therefore, we write \(\mathbf{\hat{\Lambda}}=\mathbf{\hat{L}}\mathbf{\hat{L}}^{T}+\mathbf{\hat{\Lambda}}^{a}\), where \(\mathbf{\hat{L}}\) is a lower triangular matrix and \(\mathbf{\hat{\Lambda}}^{a}\) is an anti-symmetric matrix which will be determined later. By choosing the white noise \(\mathbf{\xi}(t)\) following
\[\left\langle\mathbf{\xi}(t)\mathbf{\xi}(t^{\prime})\right\rangle=\beta^{-1}(\mathbf{ \Lambda}+\mathbf{\Lambda}^{T})\delta(t-t^{\prime}), \tag{5}\]
we can show that the reduced model (4) retains the consistent invariant distribution, i.e., \(\rho(\mathbf{Q},\mathbf{P},\mathbf{\zeta})\propto\exp[-\beta(U(\mathbf{Q})+\mathbf{P}^{T}\mathbf{M}^{-1}\mathbf{P}/2+\mathbf{\zeta}^{T}\mathbf{\zeta}/2)]\) (see proof in Appendix C).
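To make the structure of the extended dynamics concrete, a simple Euler-Maruyama update of Eq. (4) can be sketched as follows; the function arguments are placeholders (the learned coupling, the free-energy force, and the matrices are assumed to be supplied), and a production integrator would likely use a scheme better suited to stiff friction.

```python
import numpy as np

def euler_maruyama_step(Q, P, zeta, mass, force, Xi_fn, Lam, beta, dt, rng):
    """One explicit step of the extended CG dynamics of Eq. (4).

    force(Q) returns -grad U(Q); Xi_fn(Q) returns the coupling matrix built
    from the learned blocks; Lam is the (flattened) matrix Lambda.
    """
    V = P / mass
    Xi = Xi_fn(Q)
    # white noise with covariance beta^{-1} (Lam + Lam^T) dt, cf. Eq. (5)
    cov = (Lam + Lam.T) / beta
    noise = rng.multivariate_normal(np.zeros(cov.shape[0]), cov * dt)
    Q_new = Q + dt * V
    P_new = P + dt * (force(Q) + Xi @ zeta)
    zeta_new = zeta + dt * (-Xi.T @ V - Lam @ zeta) + noise
    return Q_new, P_new, zeta_new
```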
Eq. (4) departs from the common CG models by retaining both the heterogeneity and non-Markovianity of the energy dissipation process. Rather than matching the mean-field metrics such as the homogeneous VACF, we learn the embedded memory \(\mathbf{\Xi}(\mathbf{Q}(t))\mathrm{e}^{\mathbf{\Lambda}(t-s)}\mathbf{\Xi}(\mathbf{Q}(s))^{T}\) based on the MZ form. However, directly solving the orthogonal dynamics \(\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}t}\) is computationally intractable. Alternatively, we introduce the constrained dynamics \(\tilde{\mathbf{z}}(t)=\mathrm{e}^{\mathcal{H}t}\mathbf{z}(0)\) following Ref. [32]. Based on the observation \(\mathcal{P}\mathcal{Q}=\mathcal{P}\mathcal{R}\equiv 0\), we sample the MZ form from \(\tilde{\mathbf{z}}(t)\), i.e., \(\mathbf{K}_{\text{MZ}}(\mathbf{Z},t)=\mathcal{P}_{\mathbf{Z}}[(\mathrm{e}^{\mathcal{H}t}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P})(\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P})^{T}]\) and the memory of the CG model reduces to \(\mathbf{K}_{CG}(\mathbf{Z},t)=\mathbf{\Xi}(\mathbf{Q})\mathrm{e}^{\mathbf{\Lambda}t}\mathbf{\Xi}(\mathbf{Q})^{T}\). This enables us to train the CG models in terms of the encoders \(\{\mathbf{f}^{i}(\cdot),\mathbf{h}^{i}(\cdot)\}_{i=1}^{n}\) and matrices \(\mathbf{\hat{\Lambda}}\) and \(\mathbf{\hat{\Lambda}}^{a}\) by minimizing the empirical loss
\[L=\sum_{l=1}^{N_{\text{s}}}\sum_{j=1}^{N_{\text{t}}}\left\|\mathbf{K}_{CG}( \mathbf{Z}^{(l)},t_{j})-\mathbf{K}_{\text{MZ}}(\mathbf{Z}^{(l)},t_{j})\right\| ^{2}, \tag{6}\]
where \(l\) represents the different CG configurations (see Appendix F for training details).
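A minimal sketch of this training objective is given below, assuming the friction tensor \(\mathbf{\Xi}(\mathbf{Q})\) has already been assembled from the encoder outputs for each sampled configuration; the decaying kernel \(\mathbf{\Xi}\mathrm{e}^{-\mathbf{\Lambda}t}\mathbf{\Xi}^{T}\) follows the convention above, and the function and variable names are hypothetical.

```python
import torch

def assemble_Lambda(hat_L, hat_Lambda_a, d):
    """Lambda = hat_Lambda (x) I with hat_Lambda = tril(hat_L) tril(hat_L)^T + antisym(hat_Lambda_a)."""
    hat_Lambda = torch.tril(hat_L) @ torch.tril(hat_L).T + (hat_Lambda_a - hat_Lambda_a.T) / 2.0
    return torch.kron(hat_Lambda, torch.eye(d))

def cg_memory(Xi_Q, Lambda, times):
    """K_CG(Z, t_j) = Xi(Q) exp(-Lambda t_j) Xi(Q)^T for each sampled time lag t_j."""
    return torch.stack([Xi_Q @ torch.matrix_exp(-Lambda * t) @ Xi_Q.T for t in times])

def memory_loss(Xi_batch, K_MZ_batch, Lambda, times):
    """Empirical loss (6): sum over configurations l and lags j of ||K_CG - K_MZ||^2.

    Xi_batch:   list of (d, n*d) friction tensors, one per CG configuration Z^(l)
    K_MZ_batch: list of (N_t, d, d) sampled Mori-Zwanzig kernels for the same configurations
    """
    loss = 0.0
    for Xi_Q, K_MZ in zip(Xi_batch, K_MZ_batch):
        loss = loss + ((cg_memory(Xi_Q, Lambda, times) - K_MZ) ** 2).sum()
    return loss
```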
## III Numerical results
To demonstrate the accuracy of the present model, we consider a full micro-scale model of a star-shaped polymer melt system similar to Ref. [32], where each molecule consists of 73 atoms. The atomistic interactions are modeled by the Weeks-Chandler-Andersen potential and the Hookean bond potential. The full system consists of 486 molecules in a cubic domain \(90\times 90\times 90\) with periodic boundary conditions. The Nosé-Hoover thermostat [52; 53] is employed to equilibrate the system with \(k_{B}T=4.0\), and a micro-canonical ensemble simulation is conducted during the production stage (see Appendix B for details). Below we compare different dynamic properties predicted by the full MD and the various CG models. For fair comparisons, we use the same CG potential \(U(\mathbf{Q})\) constructed by the DeePCG scheme [19] for all the CG models; the differences in dynamic properties solely arise from the different formulations of the memory term.
Let us start with the VACF which has been broadly used in CG model parameterization and validation. As shown in Fig. 1, the predictions from the present model (NM-MB) show good agreement with the full MD results. In contrast, the CG model with the memory term represented by the pairwise decomposition and Markovian approximation (i.e., the standard M-DPD form) yields apparent deviations. The form of the pairwise decomposition with non-Markovian approximation (NM-DPD) shows improvement at a short time scale but exhibits large deviations at an intermediate scale. Such limitations indicate pronounced many-body effects in the energy dissipation among the CG particles. Alternatively, if we set the VACF as the target quantity, we can parameterize an empirical model such as the GLE by matching the VACF predicted by the full MD. Indeed, the prediction from the constructed GLE recovers the MD results. However, as shown below, this form over-simplifies the heterogeneity of the memory term and leads to inaccurate predictions on the collective scales.

Figure 1: The VACF of the full MD and CG models with various memory formulations in (a) semi-log scale (b) original scale. “M” and “NM” represent Markovian and Non-Markovian; GLE, DPD, and MB represent state-independent, pairwise, and the present (NM-MB) model retaining the many-body effects, respectively.
Fig. 2 shows the velocity cross-correlation function (VCCF) between two CG particles, i.e., \(C^{\rm{xx}}(t;r_{0})=\mathbb{E}[\mathbf{V}_{i}(0)\cdot\mathbf{V}_{j}(t)|Q_{ij}(0)=r_{0}]\), where \(r_{0}\) represents the initial distance. Similar to the VACF, the present model (NM-MB) yields good agreement with the full MD results. However, the predictions from other empirical models, including the GLE form, show apparent deviations. Such limitations arise from the inconsistent representation of the local energy dissipation and can be understood as follows. The VACF represents the energy dissipation on each particle as a homogeneous background heat bath; it is essentially a mean-field metric and cannot characterize the dissipative interactions among the particles. Hence, the reduced models that only recover the VACF could be insufficient to retain the consistent local momentum transport and the correlations among the particles.
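The correlation estimators used in Figs. 1 and 2 can be computed from CG (or mapped atomistic) trajectories along the lines of the sketch below; the array shapes, the naive pair loop, and the minimum-image handling are illustrative assumptions.

```python
import numpy as np

def vacf(V, max_lag):
    """Velocity autocorrelation <V_i(0)·V_i(t)>, averaged over particles and time origins.

    V: trajectory of CG velocities, shape (T, M, 3).
    """
    T = V.shape[0]
    return np.array([np.mean(np.sum(V[:T - lag] * V[lag:], axis=-1)) for lag in range(max_lag)])

def vccf(V, Q, r_lo, r_hi, max_lag, box):
    """Conditional VCCF E[V_i(0)·V_j(t) | r_lo < Q_ij(0) < r_hi], naive O(T M^2) estimator.

    V, Q: trajectories of shape (T, M, 3); box: cubic box length for minimum-image distances.
    """
    T, M, _ = V.shape
    acc = np.zeros(max_lag)
    counts = np.zeros(max_lag)
    for t0 in range(T - max_lag):
        dQ = Q[t0][:, None, :] - Q[t0][None, :, :]
        dQ -= box * np.round(dQ / box)                  # minimum-image convention
        r = np.linalg.norm(dQ, axis=-1)
        i, j = np.nonzero((r > r_lo) & (r < r_hi))      # pairs with the requested initial distance
        for lag in range(max_lag):
            acc[lag] += np.sum(V[t0, i] * V[t0 + lag, j])
            counts[lag] += len(i)
    return acc / np.maximum(counts, 1)
```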
Figure 2: The VCCF \(C^{\rm{xx}}(t;r_{0})\) predicted by the full MD and different CG models with initial distance (a) \(10<r_{0}<11\) and (b) \(14<r_{0}<15\). Same line legend as Fig. 1.
Furthermore, the various empirical models for local energy dissipations can lead to fundamentally different transport processes on the collective scale. Fig. 3 shows the normalized correlations of the longitudinal and transverse hydrodynamic modes [54], i.e., \(C_{L}(t)=\langle\bar{u}_{1}(t)\bar{u}_{1}(0)\rangle\) and \(C_{T}(t)=\langle\bar{u}_{2}(t)\bar{u}_{2}(0)\rangle\), where \(\bar{\mathbf{u}}=1/M\sum_{j=1}^{M}\mathbf{V}_{j}\mathrm{e}^{i\mathbf{k}\cdot \mathbf{Q}_{j}}\), \(\mathbf{k}\) is the wave vector, and the subscripts \(1\) and \(2\) represent the directions parallel and perpendicular to \(\mathbf{k}\), respectively. Similar to the VCCF, the prediction from the present model (NM-MB) agrees well with the MD results while other models show apparent deviations. In particular, the prediction from the GLE model shows strong over-damping because it neglects the inter-molecule dissipations.
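A sketch of how such mode correlations can be estimated from a trajectory is given below; taking the real part of the correlation and a single transverse direction are simplifying assumptions.

```python
import numpy as np

def hydrodynamic_modes(V, Q, k_vec, max_lag):
    """Normalized correlations of the collective modes u_bar = (1/M) sum_j V_j exp(i k·Q_j).

    V, Q: trajectories of shape (T, M, 3); k_vec: wave vector.
    Returns (C_L, C_T): correlations of the components parallel / perpendicular to k_vec.
    """
    k = np.asarray(k_vec, dtype=float)
    e1 = k / np.linalg.norm(k)                           # longitudinal direction
    tmp = np.eye(3)[np.argmin(np.abs(e1))]
    e2 = np.cross(e1, tmp)
    e2 /= np.linalg.norm(e2)                             # one transverse direction
    phase = np.exp(1j * (Q @ k))                         # (T, M)
    u_bar = (V * phase[..., None]).mean(axis=1)          # (T, 3), complex collective mode
    uL, uT = u_bar @ e1, u_bar @ e2

    def corr(u):
        T = len(u)
        c = np.array([np.mean(np.real(u[lag:] * np.conj(u[:T - lag]))) for lag in range(max_lag)])
        return c / c[0]

    return corr(uL), corr(uT)
```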
Figure 3: (a) Longitudinal and (b) Transverse hydrodynamic modes predicted by MD and different CG models. Same line legend as Fig. 1.

Finally, we examine the diffusion process on the collective scale. Fig. 4 shows the van Hove function that characterizes the evolution of the inter-particle structural correlation defined by \(G(r,t)\propto\frac{1}{M^{2}}\sum_{j\neq i}^{M}\delta(\|\mathbf{Q}_{i}(t)-\mathbf{Q}_{j}(0)\|-r)\). At \(t=0\), \(G(r,t)\) reduces to the standard radial distribution function, and all the CG models recover this initial condition. However, for \(t>0\), predictions from the models with the pairwise decomposition (NM-DPD) and the GLE form show apparent deviations. Specifically, at an early stage near \(t=50\), the neighboring particles begin to artificially jump into the region near the reference particle, violating the fluid structure thereafter. In contrast, the present model (NM-MB) shows consistent predictions of the structure evolution over a long period until \(t=1000\), when the initial fluid structure ultimately diffuses into a homogeneous state.

Figure 4: The van Hove function predicted by (a) full MD (b) the present NM-MB model (c) NM-DPD model (d) GLE model.
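The distinct part of the van Hove function can be estimated from a trajectory with a histogram over particle pairs, as in the sketch below; the binning, time-origin stride, and ideal-gas normalization are illustrative choices.

```python
import numpy as np

def van_hove_distinct(Q, lags, box, r_max, n_bins=100, stride=10):
    """Distinct-part van Hove function G(r, t) built from sum_{j != i} delta(|Q_i(t) - Q_j(0)| - r).

    Q: trajectory of CG positions, shape (T, M, 3); box: cubic box length.
    Normalized by the ideal-gas shell count so that G -> 1 for an uncorrelated system;
    at lag 0 the estimator reduces to the radial distribution function.
    """
    T, M, _ = Q.shape
    edges = np.linspace(0.0, r_max, n_bins + 1)
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    rho = M / box ** 3
    G = []
    for lag in lags:
        origins = list(range(0, T - lag, stride))
        hist = np.zeros(n_bins)
        for t0 in origins:
            d = Q[t0 + lag][:, None, :] - Q[t0][None, :, :]
            d -= box * np.round(d / box)                  # minimum-image convention
            r = np.linalg.norm(d, axis=-1)[~np.eye(M, dtype=bool)]
            hist += np.histogram(r, bins=edges)[0]
        G.append(hist / (len(origins) * M * rho * shell))
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.array(G)
```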
## IV Summary
To conclude, we developed a CG model that faithfully accounts for the broadly overlooked many-body nature of the non-Markovian memory term. We show that retaining the heterogeneity and the strong correlation of the local energy dissipation is crucial for accurately predicting the cross-correlation among the CG particles, which, however, cannot be fully characterized by mean-field metrics such as the VACF. More importantly, the memory form representing the inter-molecule energy dissipations may play a profound role in the transport and diffusion processes on the collective scale. In particular, the present model accurately predicts the hydrodynamic mode correlation and the van Hove function where empirical forms show limitations, and therefore shows promise for studying challenging problems relevant to meso-scale transition and synthesis processes.
###### Acknowledgements.
The work is supported in part by the National Science Foundation under Grant DMS-2110981 and the ACCESS program through allocation MTH210005.
## Appendix A Dynamics of the coarse-grained variables
We consider a full MD system consisting of \(M\) molecules with a total number of \(N\) atoms. The phase space vector is denoted by \(\mathbf{z}=[\mathbf{q},\mathbf{p}]\), where \(\mathbf{q}\in\mathbb{R}^{3N}\) and \(\mathbf{p}\in\mathbb{R}^{3N}\) represent the position and momentum vector, respectively. The coarse-grained (CG) variables are defined by representing each molecule as a CG particle, i.e., \(\phi(\mathbf{z})=\left[\phi^{Q}(\mathbf{q}),\phi^{P}(\mathbf{p})\right]\), where \(\phi^{Q}=[\mathbf{Q}_{1},\mathbf{Q}_{2},\cdots\mathbf{Q}_{M}]\) and \(\phi^{P}=[\mathbf{P}_{1},\mathbf{P}_{2},\cdots\mathbf{P}_{M}]\) represent the centers of mass (COMs) and the total momenta of the individual molecules. Let \(\mathbf{Z}(t)=[\mathbf{Q}(t),\mathbf{P}(t)]\) denote the map \(\phi(\mathbf{z}(t))\) with \(\mathbf{z}(0)=\mathbf{z}_{0}\). Using the Koopman operator [55], \(\mathbf{Z}(t)\) can be mapped from the initial values, i.e.,
\[\mathbf{Z}(t)=\mathrm{e}^{\mathcal{L}t}\mathbf{Z}(0), \tag{10}\]
where \(\mathcal{L}\) is the Liouville operator determined by the full-model Hamiltonian \(H(\mathbf{z})\). Below we derive the reduced model by choosing CG variables \(\mathbf{Z}\) as a linear mapping of the full phase-space vector \(\mathbf{z}\) (see also Ref. [56]) and we refer to Refs. [57; 32] for discussions of the more general cases.
Following Zwanzig's approach, we define a projection operator as the conditional expectation with a fixed CG vector \(\mathbf{Z}\), i.e., \(\mathcal{P}_{\mathbf{Z}}f(\mathbf{z}):=\int\delta(\phi(\mathbf{z})-\mathbf{Z})\rho_{0}(\mathbf{z})f(\mathbf{z})\,\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z})\), where \(\rho_{0}(\mathbf{z})\propto\mathrm{e}^{-\beta\mathrm{H}(\mathbf{z})}\) represents the equilibrium density function and \(\Omega(\mathbf{Z})=\int\delta(\phi(\mathbf{z})-\mathbf{Z})\rho_{0}(\mathbf{z})\,\mathrm{d}\mathbf{z}\). Also, we define an orthogonal operator \(\mathcal{Q}_{\mathbf{Z}}=\mathbf{I}-\mathcal{P}_{\mathbf{Z}}\). Using Eq. (10), we have \(\dot{\mathbf{Z}}(t)=\mathrm{e}^{\mathcal{L}t}\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)+\mathrm{e}^{\mathcal{L}t}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)\). In particular, we choose \(\mathbf{Z}=\mathbf{Z}(0)\). Using the Duhamel-Dyson identity, we can
write the dynamics of \(\mathbf{Z}(t)\) as
\[\dot{\mathbf{Z}}(t)=\mathrm{e}^{\mathcal{L}t}\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)+\int_{0}^{t}\mathrm{d}s\,\mathrm{e}^{\mathcal{L}(t-s)}\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)+\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}t}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0). \tag{20}\]
Let us start with the mean-field term \(\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)\). For the present study, the CG variables are linear functions of \(\mathbf{z}\). Therefore, we have \(\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{Q}=\mathcal{L}\mathbf{Q}=\mathbf{ M}^{-1}\mathbf{P}\), i.e., \(\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{Q}\equiv 0\). For \(\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{P}\) associated with the \(i\)-th CG particle, we have
\[\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathbf{P}_{i} =\int\delta(\phi(\mathbf{z})-\mathbf{Z})\rho_{0}(\mathbf{z}) \mathcal{L}\mathbf{P}_{i}\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z}) \tag{21}\] \[=\int\delta(\phi(\mathbf{z})-\mathbf{Z})\rho_{0}(\mathbf{z})(- \sum_{j\in\mathcal{N}_{i}}\nabla_{\mathbf{q}_{j}}H(\mathbf{z}))\,\mathrm{d} \mathbf{z}/\Omega(\mathbf{Z})\] \[=\int\delta(\phi(\mathbf{z})-\mathbf{Z})(\beta^{-1}\sum_{j\in \mathcal{N}_{i}}\nabla_{\mathbf{q}_{j}})\rho_{0}(\mathbf{z})\,\mathrm{d} \mathbf{z}/\Omega(\mathbf{Z})\] \[=\beta^{-1}\nabla_{\mathbf{Q}_{i}}\int\delta(\phi^{Q}(\mathbf{q} )-\mathbf{Q})\rho_{0}(\mathbf{q})\,\mathrm{d}\mathbf{q}/\int\delta(\phi^{Q}( \mathbf{q})-\mathbf{Q})\rho_{0}(\mathbf{q})\,\mathrm{d}\mathbf{q}\] \[=-\nabla_{\mathbf{Q}_{i}}U(\mathbf{Q}),\]
where \(\mathcal{N}_{i}\) represents the index set of the atoms that belong to the \(i\)-th molecule, and \(U(\mathbf{Q})\) represents the free energy defined by \(U(\mathbf{Q})=-\beta^{-1}\ln\big{[}\int\delta(\phi^{Q}(\mathbf{q})-\mathbf{Q} )\rho_{0}(\mathbf{q})\,\mathrm{d}\mathbf{q}\big{]}\).
For the memory term \(\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}} \mathcal{L}s}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P}\) associated with the \(i\)-th CG particle, we have
\[\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathrm{e}^{\mathcal{Q}_{ \mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P}_{i} =\int\rho_{0}(\mathbf{z})\delta(\phi(\mathbf{z})-\mathbf{Z}) \mathcal{L}\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{ \mathbf{Z}}\mathcal{L}\mathbf{P}_{i}\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z}) \tag{22}\] \[=\int\rho_{0}(\mathbf{z})(\mathcal{L}\phi(\mathbf{z})\cdot \nabla_{\mathbf{Z}})\delta(\phi(\mathbf{z})-\mathbf{Z})\mathrm{e}^{\mathcal{Q }_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P}_{i} \mathrm{d}\mathbf{z}/\Omega(\mathbf{Z})\] \[=\int\rho_{0}(\mathbf{z})(\mathcal{Q}_{\mathbf{Z}}\mathcal{L} \mathbf{P}\cdot\nabla_{\mathbf{P}})\delta(\phi(\mathbf{z})-\mathbf{Z}) \mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{\mathbf{Z}} \mathcal{L}\mathbf{P}_{i}\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z})\ (\text{by }\mathcal{Q}_{ \mathbf{Z}}\mathcal{L}\mathbf{Q}\equiv 0)\] \[=\nabla_{\mathbf{P}}\cdot\int\rho_{0}(\mathbf{z})\delta(\phi( \mathbf{z})-\mathbf{Z})(\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P})\otimes \mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q}_{\mathbf{Z}} \mathcal{L}\mathbf{P}_{i}\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z})\] \[=\nabla_{\mathbf{P}}\cdot\underbrace{\left(\int\rho_{0}(\mathbf{z} )\delta(\phi(\mathbf{z})-\mathbf{Z})(\mathcal{Q}_{\mathbf{Z}}\mathcal{L} \mathbf{P})\otimes\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}s}\mathcal{Q} _{\mathbf{Z}}\mathcal{L}\mathbf{P}_{i}\mathrm{d}\mathbf{z}/\Omega(\mathbf{Z}) \right)}_{\mathbf{K}_{i}(\mathbf{Z},s)}\] \[-\tilde{\mathbf{K}}_{i,}(\mathbf{Z},s)\cdot\nabla_{\mathbf{P}} \left(1/\Omega(\mathbf{Z})\right)\Omega(\mathbf{Z}).\]
Furthermore, we assume that the memory kernel only depends on the positions of the CG particles \(\mathbf{Q}\), i.e., \(\nabla_{\mathbf{P}}\cdot\tilde{\mathbf{K}}(\mathbf{Z},s)\equiv 0\). Also, similar to the derivation in Eq. (21), we note that
\[\Omega(\mathbf{Z})\propto\int\delta(\phi^{Q}(\mathbf{q})-\mathbf{Q})\rho_{0}( \mathbf{q})\delta(\phi^{P}(\mathbf{p})-\mathbf{P})e^{-\beta\mathbf{P}^{T} \mathbf{M}^{-1}\mathbf{P}/2}\,\mathrm{d}\mathbf{z}\propto e^{-\beta\mathbf{P}^ {T}\mathbf{M}^{-1}\mathbf{P}/2}. \tag{23}\]
Therefore, Eq. (22) can be further simplified as
\[\mathcal{P}_{\mathbf{Z}}\mathcal{L}\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}} \mathcal{L}s}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{P}_{i}=-\beta\tilde{ \mathbf{K}}_{i,}(\mathbf{Q},s)\cdot\mathbf{M}^{-1}\mathbf{P}. \tag{24}\]
Combining the results above, we can show that the dynamics of \(\mathbf{Z}=[\mathbf{Q},\mathbf{P}]\) can be written as
\[\dot{\mathbf{Q}} =\mathbf{M}^{-1}\mathbf{P} \tag{20}\] \[\dot{\mathbf{P}} =-\nabla U(\mathbf{Q})-\int_{0}^{t}\mathbf{K}(\mathbf{Q}(t-s),s) \mathbf{V}(t-s)\,\mathrm{d}s+\mathbf{R}(t),\]
where \(\mathbf{K}(\mathbf{Q},s)=\beta\tilde{\mathbf{K}}(\mathbf{Q},s)\) and \(\mathbf{R}(t)=\mathrm{e}^{\mathcal{Q}_{\mathbf{Z}}\mathcal{L}t}\mathcal{Q}_{\mathbf{Z}}\mathcal{L}\mathbf{Z}(0)\) is modeled as a random process representing the dependence on the different initial conditions \(\mathbf{z}_{0}\) with \(\phi(\mathbf{z}_{0})=\mathbf{Z}\).
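To make the structure of the GLE above concrete, the sketch below advances it by one explicit step with the memory integral discretized as a left Riemann sum over the stored history; the time stepping, the omission of the random force, and the placeholder kernel interface are simplifying assumptions rather than the integrator used in this work.

```python
import numpy as np

def gle_step(history_Q, history_V, K, grad_U, dt, mass=1.0):
    """One explicit step of the GLE, with the memory integral discretized as
    int_0^t K(Q(t-s), s) V(t-s) ds ~ dt * sum_m K(Q(t_m), t - t_m) V(t_m).

    history_Q, history_V: lists of past positions/velocities Q(t_0..t_k), V(t_0..t_k);
    K(Q, s): memory kernel returning a (d, d) array; grad_U: conservative force model.
    The random force R(t) is omitted here for clarity.
    """
    k = len(history_V) - 1
    Q, V = history_Q[-1], history_V[-1]
    drag = np.zeros_like(V)
    for m, (Qm, Vm) in enumerate(zip(history_Q, history_V)):
        drag += dt * K(Qm, (k - m) * dt) @ Vm
    P_new = mass * V + dt * (-grad_U(Q) - drag)
    Q_new = Q + dt * P_new / mass
    return Q_new, P_new / mass
```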
## Appendix B The micro-scale model of the polymer melt system
We consider the micro-scale model of a star-shaped polymer melt system similar to Ref. [32]. Each polymer molecule consists of a "center" atom connected by 12 arms with 6 atoms per arm. The potential function is governed by the pairwise and bond interactions, i.e.,
\[V(\mathbf{q})=\sum_{i\neq j}V_{p}(q_{ij})+\sum_{k}V_{b}(l_{k}), \tag{21}\]
where \(V_{p}\) is the pairwise interaction between both the intra- and inter-molecular atoms except the bonded pairs. \(q_{ij}=\|\mathbf{q}_{i}-\mathbf{q}_{j}\|\) is the distance between the \(i\)-th and \(j\)-th atoms. \(V_{p}\) takes the form of the Lennard-Jones potential with cut-off \(r_{c}\), i.e.,
\[V_{p}(r)=\begin{cases}V_{\text{LJ}}(r)-V_{\text{LJ}}(r_{c}),\ r<r_{c}\\ 0,\ r\geq r_{c}\end{cases}V_{\text{LJ}}(r)=4\epsilon\left[\left(\frac{\sigma}{ r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6}\right], \tag{22}\]
where \(\epsilon=1.0\) is the dispersion energy and \(\sigma=2.415\) is the hardcore distance. Also we choose \(r_{c}=2^{1/6}\sigma\) so that \(V_{p}\) recovers the Weeks-Chandler-Andersen potential. \(V_{b}\) is the bond interaction between the neighboring particles of each polymer arm and \(l_{k}\) is the length of the \(k\)-th bond. The bond potential \(V_{b}\) is chosen to be the harmonic potential, i.e.,
\[V_{b}(l)=\frac{1}{2}k_{s}(l-l_{0})^{2}, \tag{23}\]
where \(k_{s}=1.714\) and \(l_{0}=1.615\) represent the elastic coefficient and the equilibrium bond length, respectively. The atom mass is chosen to be unity. The full system consists of \(M=486\) polymer molecules in a cubic domain \(90\times 90\times 90\) with periodic boundary conditions imposed along each direction. The Nosé-Hoover thermostat is employed to equilibrate the system with \(k_{B}T=4.0\), and a micro-canonical ensemble simulation is conducted during the production stage.
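The two interaction terms can be written down directly; the sketch below evaluates the truncated-and-shifted (WCA) pair potential and the harmonic bond potential with the parameters quoted above. The `arm_energy` helper is included only as a hypothetical usage example.

```python
import numpy as np

EPS, SIGMA = 1.0, 2.415
R_CUT = 2.0 ** (1.0 / 6.0) * SIGMA     # cutoff that turns the shifted LJ into the WCA form
K_S, L0 = 1.714, 1.615                 # bond stiffness and equilibrium bond length

def v_lj(r):
    sr6 = (SIGMA / r) ** 6
    return 4.0 * EPS * (sr6 ** 2 - sr6)

def v_pair(r):
    """Truncated and shifted Lennard-Jones pair potential V_p(r)."""
    r = np.asarray(r, dtype=float)
    return np.where(r < R_CUT, v_lj(np.minimum(r, R_CUT)) - v_lj(R_CUT), 0.0)

def v_bond(l):
    """Harmonic bond potential V_b(l)."""
    return 0.5 * K_S * (l - L0) ** 2

def arm_energy(q):
    """Hypothetical helper: potential energy of a single arm, q of shape (n_beads, 3).

    Consecutive beads interact through V_b; all other intra-arm pairs through V_p.
    """
    bonds = np.linalg.norm(np.diff(q, axis=0), axis=1)
    non_bonded = [np.linalg.norm(q[i] - q[j]) for i in range(len(q)) for j in range(i + 2, len(q))]
    return v_bond(bonds).sum() + v_pair(np.array(non_bonded)).sum()
```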
## Appendix C Invariant density function of the CG model
The reduced model takes the following form
\[\dot{\mathbf{Q}} =\mathbf{M}^{-1}\mathbf{P} \tag{10}\] \[\dot{\mathbf{P}} =-\nabla U(\mathbf{Q})+\mathbf{\Xi}(\mathbf{Q})\mathbf{\zeta}\] \[\dot{\mathbf{\zeta}} =-\mathbf{\Xi}(\mathbf{Q})^{T}\mathbf{V}-\mathbf{\Lambda}\mathbf{\zeta}+\mathbf{ \xi}(t),\]
where \(\mathbf{\Xi}=\left[\mathbf{\Xi}^{1}\mathbf{\Xi}^{2}\cdots\mathbf{\Xi}^{n}\right]\) couples the momentum \(\mathbf{P}\) to a set of non-Markovian features \(\mathbf{\zeta}\). It resembles the extended dynamics for the GLE proposed in Ref. [47] except that the coupling between \(\mathbf{P}\) and the features \(\mathbf{\zeta}\) is represented by the state-dependent friction tensor \(\mathbf{\Xi}(\mathbf{Q})\) retaining the many-body nature. By properly choosing the white noise \(\mathbf{\xi}(t)\), we can show that model (10) retains the invariant density function consistent with the full MD model.
**Proposition C.1**.: _By choosing the white noise \(\mathbf{\xi}(t)\) following_
\[\left\langle\mathbf{\xi}(t)\mathbf{\xi}(t^{\prime})\right\rangle=\beta^{-1}(\mathbf{ \Lambda}+\mathbf{\Lambda}^{T})\delta(t-t^{\prime}), \tag{11}\]
_Model (10) retains the consistent invariant distribution_
\[\rho_{\rm eq}(\mathbf{Q},\mathbf{P},\mathbf{\xi})\propto\exp[-\beta(U(\mathbf{Q} )+\mathbf{P}^{T}\mathbf{M}^{-1}\mathbf{P}/2+\mathbf{\zeta}^{T}\mathbf{\zeta}/2)] \tag{12}\]
Proof.: Let \(\tilde{\mathbf{Z}}=[\mathbf{Q},\mathbf{P},\mathbf{\zeta}]\) denote the resolved variables and \(W(\tilde{\mathbf{Z}})=U(\mathbf{Q})+\mathbf{P}^{T}\mathbf{M}^{-1}\mathbf{P}/2 +\mathbf{\zeta}^{T}\mathbf{\zeta}/2\) the free energy of the extended dynamics. Model (10) can be written as the following gradient dynamics
\[\frac{\mathrm{d}\tilde{\mathbf{Z}}}{\mathrm{d}t}=\underbrace{\left(\begin{array} []{ccc}0&\mathbf{I}&0\\ -\mathbf{I}&0&\mathbf{\Xi}(\mathbf{Q})\\ 0&-\mathbf{\Xi}(\mathbf{Q})^{T}&-\mathbf{\Lambda}\end{array}\right)}_{\mathbf{G}( \mathbf{Q})}\nabla_{\tilde{\mathbf{Z}}}W(\tilde{\mathbf{Z}})+\tilde{\mathbf{\xi}}(t),\]
where \(\tilde{\mathbf{\xi}}(t)=[0,0,\mathbf{\xi}(t)]\). Accordingly, the Fokker-Planck equation takes the form
\[\frac{\partial\rho(\tilde{\mathbf{Z}},t)}{\partial t}=\nabla\cdot\left(- \mathbf{G}(\mathbf{Q})\nabla W(\tilde{\mathbf{Z}})\rho(\tilde{\mathbf{Z}},t)- \frac{1}{2}\beta^{-1}(\mathbf{G}(\mathbf{Q})+\mathbf{G}(\mathbf{Q})^{T}) \nabla\rho(\tilde{\mathbf{Z}},t)\right).\]
Plugging Eq. (12) into the above equation, we have
\[\nabla\cdot\left(\beta^{-1}\mathbf{G}(\mathbf{Q})\nabla\rho_{ \rm eq}(\mathbf{z},t)-\frac{1}{2}\beta^{-1}(\mathbf{G}(\mathbf{Q})+\mathbf{G }(\mathbf{Q})^{T})\nabla\rho_{\rm eq}(\mathbf{z},t)\right) =\beta^{-1}\nabla\cdot\left(\tilde{\Lambda}^{A}\nabla\rho_{\rm eq }(\mathbf{z},t)\right) \tag{13}\] \[\equiv 0,\]
where \(\tilde{\Lambda}={\rm diag}(0,0,\mathbf{\Lambda})\) and \(\tilde{\Lambda}^{A}\) is anti-symmetric.
## Appendix D Conservative free energy of the CG model
The equilibrium density distribution of the CG model needs to match the marginal density distribution of the CG variables of the full model. Due to the unresolved atomistic degrees of freedom, the conservative CG potential \(U(\mathbf{Q})=-\beta^{-1}\ln\left[\int\delta(\phi^{Q}(\mathbf{q})-\mathbf{Q})\rho_{0 }(\mathbf{q})\,\mathrm{d}\mathbf{q}\right]\) (up to a constant) generally encodes the many-body interactions even if the full MD force field is governed by two-body interactions. As shown in the previous study [31; 32], accurate modeling of this many-body potential \(U(\mathbf{Q})\) is crucial for predicting the static/equilibrium structure properties such as the radial distribution, angle (i.e., three-body) distribution, and the equation of state. It provides the starting point for the present study focusing on constructing reliable reduced models that accurately predict the non-equilibrium processes on the collective scale.
To establish a fair comparison among the various CG models, we use the _same_ conservative CG potential \(U(\mathbf{Q})\) constructed by the DeePCG method [19] for all the CG models. As shown in Fig. 5, all the CG models can accurately recover the radial distribution function (RDF) of the full MD model, whereas the standard pairwise approximation of \(U(\mathbf{Q})\) shows limitations. This result validates the accuracy of the constructed \(U(\mathbf{Q})\). Therefore, the different non-equilibrium properties predicted by the various CG models (presented in the main manuscript) arise from the different formulations of the memory term \(\mathbf{K}(\mathbf{Q},t)\), which is the main focus of the present study.

Figure 5: The radial distribution function (RDF) of the full MD and various CG models with the same conservative CG potential \(U(\mathbf{Q})\) constructed by the DeePCG model.
## Appendix E Symmetry-preserving neural network representation
Preserving the physical symmetry constraints is crucial for both the accuracy and the generalization ability of the constructed ML-models. Besides the conservative potential \(U(\mathbf{Q})\), the constructed memory term needs to satisfy the translation- and permutation-invariance, as well as the rotation-symmetries. Let \(\mathcal{T}_{\mathbf{b}}\), \(\mathcal{R}_{\mathcal{U}}\), and \(\mathcal{P}_{\sigma}\) denote the translation, rotation, and permutation operators, whose actions on a general function \(\mathcal{F}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{M})\) are defined by
\[\begin{split}\mathcal{T}_{\mathbf{b}}\mathcal{F}(\mathbf{Q}_{1}, \cdots,\mathbf{Q}_{M})&:=\mathcal{F}(\mathbf{Q}_{1}+\mathbf{b}, \cdots,\mathbf{Q}_{M}+\mathbf{b}),\\ \mathcal{R}_{\mathcal{U}}\mathcal{F}(\mathbf{Q}_{1},\cdots, \mathbf{Q}_{M})&:=\mathcal{F}(\mathbf{Q}_{1}\mathcal{U},\cdots, \mathbf{Q}_{M}\mathcal{U}),\\ \mathcal{P}_{\sigma}\mathcal{F}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_ {M})&:=\mathcal{F}(\mathbf{Q}_{\sigma(1)},\cdots,\mathbf{Q}_{ \sigma(M)}),\end{split} \tag{10}\]
where \(\mathbf{b}\in\mathbb{R}^{3}\) is a position vector, \(\mathcal{U}\in\mathbb{R}^{3\times 3}\) is an orthogonal matrix and \(\sigma\) is an arbitrary permutation of the set of indices. The components of the constructed memory will need to satisfy the
symmetry constraints
\[\begin{split}\mathcal{T}_{\mathbf{b}}\Xi_{ij}(\mathbf{Q}_{1},\cdots, \mathbf{Q}_{M})&=\Xi_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{M})\\ \mathcal{R}_{\mathcal{U}}\Xi_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q }_{M})&=\mathcal{U}\,\Xi_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q}_{M })\mathcal{U}^{T}\\ \mathcal{P}_{\sigma}\Xi_{ij}(\mathbf{Q}_{1},\cdots,\mathbf{Q }_{M})&=\Xi_{\sigma(i)\sigma(j)}(\mathbf{Q}_{\sigma(1)},\cdots, \mathbf{Q}_{\sigma(M)}),\end{split} \tag{10}\]
**Proposition E.1**.: _The representation preserves the symmetry conditions (10), where \(\mathbf{\hat{Q}}_{i}^{k}=\mathbf{Q}_{i}+\sum_{l\in\mathcal{N}_{i}}f^{k}(Q_{il} )\mathbf{Q}_{il}\) represents the local environment-determined features (generalized coordinate) for the i-th particle, \(\mathbf{f}:\mathbb{R}\rightarrow\mathbb{R}^{K}\) and \(\mathbf{h}:\mathbb{R}^{K\times K}\rightarrow\mathbb{R}^{K+1}\) are two encoder functions._
Proof.: We note that \(\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{ij}=\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{i} -\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{j}=\mathbf{Q}_{ij},\ \mathcal{T}_{\mathbf{b}}Q_{ij}=\left\| \mathcal{T}_{\mathbf{b}}\mathbf{Q}_{i}-\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{j} \right\|=Q_{ij}\), \(\mathcal{R}_{\mathcal{U}}\mathbf{Q}_{ij}=\mathcal{U}\mathbf{Q}_{ij}\), \(\mathcal{R}_{\mathcal{U}}Q_{ij}=Q_{ij}\), \(\mathcal{P}_{\sigma}\mathbf{Q}_{ij}=\mathbf{Q}_{\sigma(i)\sigma(j)}\), and \(\mathcal{P}_{\sigma}Q_{ij}=Q_{\sigma(i)\sigma(j)}\). Therefore, for arbitrary indices
\(i\) and \(k\), the feature \(\hat{\mathbf{Q}}_{i}^{k}\) satisfies the following symmetry conditions
\[\begin{split}\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{i}^{k}& =\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{i}+\sum_{l\in\mathcal{N}_{i} }f^{k}(\mathcal{T}_{\mathbf{b}}Q_{il})\mathcal{T}_{\mathbf{b}}\mathbf{Q}_{il}= \hat{\mathbf{Q}}_{i}^{k}+\mathbf{b}\\ \mathcal{R}_{\mathcal{U}}\hat{\mathbf{Q}}_{i}^{k}&= \mathcal{R}_{\mathcal{U}}\mathbf{Q}_{i}+\sum_{l\in\mathcal{N}_{i}}f^{k}( \mathcal{R}_{\mathcal{U}}Q_{il})\mathcal{R}_{\mathcal{U}}\mathbf{Q}_{il}= \mathcal{U}\,\hat{\mathbf{Q}}_{i}^{k}\\ \mathcal{P}_{\sigma}\hat{\mathbf{Q}}_{i}^{k}&= \mathcal{P}_{\sigma}\mathbf{Q}_{i}+\sum_{l\in\mathcal{N}_{\sigma(i)}}f^{k}( \mathcal{P}_{\sigma}Q_{il})\mathcal{P}_{\sigma}\mathbf{Q}_{il}=\hat{\mathbf{Q }}_{\sigma(i)}^{k},\end{split} \tag{10}\]
where, for the last equation, we have used the fact that \(\sum_{l}f(r_{l})\mathbf{r}_{l}\) is permutationally invariant.
Therefore, we have \(\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{ij}=\mathcal{T}_{\mathbf{b}}\hat{ \mathbf{Q}}_{i}-\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{j}=\hat{\mathbf{Q}} _{ij}\), \(\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{ij}=\|\mathcal{T}_{\mathbf{b}}\hat{ \mathbf{Q}}_{i}-\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{j}\|=\hat{Q}_{ij}\), \(\mathcal{R}_{\mathcal{U}}\hat{\mathbf{Q}}_{ij}=\mathcal{U}\,\hat{\mathbf{Q}}_ {ij}\), \(\mathcal{R}_{\mathcal{U}}\hat{\mathbf{Q}}_{ij}=\hat{Q}_{ij}\), \(\mathcal{P}_{\sigma}\hat{\mathbf{Q}}_{ij}=\hat{\mathbf{Q}}_{\sigma(i)\sigma( j)}\), and \(\mathcal{P}_{\sigma}\hat{Q}_{ij}=\hat{Q}_{\sigma(i)\sigma(j)}\). Thus, for arbitrary indices \(i,j\) and \(k\), the encoder functions \(h_{k}(\hat{\mathbf{Q}}_{ij}\hat{\mathbf{Q}}_{ij}^{T})\) satisfy the following symmetry condition
\[\begin{split}\mathcal{T}_{\mathbf{b}}h_{k}(\hat{\mathbf{Q}}_{ij} ^{T}\hat{\mathbf{Q}}_{ij})&=h_{k}((\mathcal{T}_{\mathbf{b}}\hat{ \mathbf{Q}}_{ij})^{T}\mathcal{T}_{\mathbf{b}}\hat{\mathbf{Q}}_{ij})=h_{k}( \hat{\mathbf{Q}}_{ij}^{T}\hat{\mathbf{Q}}_{ij})\\ \mathcal{R}_{\mathcal{U}}h_{k}(\hat{\mathbf{Q}}_{ij}^{T}\hat{ \mathbf{Q}}_{ij})&=h_{k}((\mathcal{R}_{\mathcal{U}}\mathbf{Q}_{ij} )^{T}\mathcal{R}_{\mathcal{U}}\hat{\mathbf{Q}}_{ij})=h_{k}(\hat{\mathbf{Q}}_{ ij}^{T}\hat{\mathbf{Q}}_{ij})\\ \mathcal{P}_{\sigma}h_{k}(\hat{\mathbf{Q}}_{ij}^{T}\hat{ \mathbf{Q}}_{ij})&=h_{k}(\hat{\mathbf{Q}}_{\sigma(i)\sigma(j)}^{T }\hat{\mathbf{Q}}_{\sigma(i)\sigma(j)}).\end{split} \tag{11}\]
Plugging Eq. (11) into the definition of \(\Xi_{ij}\) yields (10).
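The feature construction underlying these symmetry properties is easy to prototype. The sketch below builds \(\hat{\mathbf{Q}}_{i}^{k}\) from an encoder \(f\) and assembles one friction block from the invariant Gram matrix \(\hat{\mathbf{Q}}_{ij}^{T}\hat{\mathbf{Q}}_{ij}\); the specific assembly in `xi_block` is only an example consistent with the conditions (10), not a reproduction of the form (3).

```python
import numpy as np

def features(Q, i, neighbors, f):
    """Generalized coordinates hat_Q_i^k = Q_i + sum_{l in N_i} f^k(|Q_il|) Q_il, k = 1..K.

    Q: (M, 3) CG positions; neighbors: indices of particles within the cutoff of i;
    f: callable mapping a distance to a length-K vector (the encoder network).
    Returns an array of shape (K, 3).
    """
    Q_il = Q[i] - Q[neighbors]                       # (|N_i|, 3) relative vectors
    r_il = np.linalg.norm(Q_il, axis=1)              # (|N_i|,)
    weights = np.array([f(r) for r in r_il])         # (|N_i|, K)
    return Q[i] + weights.T @ Q_il                   # (K, 3)

def xi_block(Q, i, j, nbrs_i, nbrs_j, f, h):
    """A hypothetical assembly of Xi_ij from the invariant matrix hat_Q_ij^T hat_Q_ij.

    Not the paper's Eq. (3) verbatim: the K+1 outputs of h weight the K rank-one tensors
    hat_q^k hat_q^k^T plus an isotropic term, which respects translation, rotation, and
    permutation symmetry by construction.
    """
    hat_Qij = features(Q, i, nbrs_i, f) - features(Q, j, nbrs_j, f)   # (K, 3)
    gram = hat_Qij @ hat_Qij.T                                        # (K, K), invariant
    w = h(gram)                                                       # (K + 1,)
    Xi = w[-1] * np.eye(3)
    for k in range(hat_Qij.shape[0]):
        Xi += w[k] * np.outer(hat_Qij[k], hat_Qij[k])
    return Xi
```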
## Appendix F Training Details
Starting from the equilibrated configurations obtained as described in Appendix B, we use constrained dynamics to collect samples of the instantaneous force \(\mathbf{F}(t)\) on individual molecules with a fixed CG configuration \(\mathbf{Z}:=[\tilde{\mathbf{Q}},\tilde{\mathbf{P}}]\), where \(\tilde{\mathbf{Q}}\) and \(\tilde{\mathbf{P}}\) represent the COMs and total momenta of the individual molecules. As these are linear functions of the full phase space vector \(\mathbf{z}=[\mathbf{q},\mathbf{p}]\), the constrained dynamics (see Ref. [32]) for the \(j\)-th atomistic particle associated with the \(i\)-th molecule follows
\[\begin{split}\dot{\mathbf{q}}_{j}&=m^{-1}\mathbf{p}_{j}-\dot{\tilde{\mathbf{Q}}}_{i}\\ \dot{\mathbf{p}}_{j}&=-\nabla_{\mathbf{q}_{j}}V( \mathbf{q})+\frac{1}{N_{m}}\sum_{k\in\mathcal{N}_{i}}\nabla_{\mathbf{q}_{k}}V (\mathbf{q})\end{split} \tag{12}\]
where \(V(\mathbf{q})\) is the potential function of the full MD model and \(N_{m}\) is the number of atoms per molecule. With \(\mathbf{Z}(0)=\mathbf{Z}\), we have \(\mathbf{Z}(t)\equiv\mathbf{Z}\) for \(t>0\) under (12). The memory kernel can be sampled from the time correlation as
\[\mathbf{K}_{\text{MZ}}(\mathbf{Z},t)=\left\langle\delta\mathbf{F}(t)\delta\mathbf{ F}(0)^{T}\right\rangle_{\mathbf{Z}}, \tag{13}\]
where \(\delta\mathbf{F}=\mathbf{F}-\mathcal{P}_{\boldsymbol{Z}}(\mathbf{F})\) is the fluctuation force on individual molecules and \(\mathcal{P}_{\boldsymbol{Z}}(\mathbf{F})\) is the mean force obtained from the many-body potential \(U(\mathbf{Q})\) discussed in Appendix D. We collect two configuration samples, each consisting of 486 molecules. For each configuration, 5000 independent ensemble simulations are conducted with a production stage of 500000 steps to compute the correlation function.
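A sketch of the corresponding estimator is shown below for a single molecule (one diagonal block of the kernel); the full \(\mathbf{K}_{\text{MZ}}\) stacks such blocks over molecule pairs, and the array layout is an assumption.

```python
import numpy as np

def sample_memory_kernel(dF, max_lag):
    """Estimate K_MZ(Z, t) = <dF(t) dF(0)^T>_Z from constrained-dynamics samples, Eq. (13).

    dF: fluctuation forces on one molecule, shape (n_ensembles, T, 3), collected while the
    CG configuration Z is held fixed. Returns an array of shape (max_lag, 3, 3).
    """
    n_ens, T, _ = dF.shape
    K = np.zeros((max_lag, 3, 3))
    for lag in range(max_lag):
        # Average the outer product over ensembles and time origins.
        K[lag] = np.einsum('eti,etj->ij', dF[:, lag:], dF[:, :T - lag]) / (n_ens * (T - lag))
    return K
```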
The encoder functions \(f\) and \(h\) are parameterized as 4-layer fully connected neural networks. Each hidden layer consists of 10 neurons. The number of state-dependent features is set to be \(K=10\) and the number of non-Markovian features \(n=5\).
The NNs are trained by Adam [58] for 100000 steps. For each step, 5 targeted CG particles and their neighbors within the cutoff are selected as one training set. The initial learning rate is \(1\times 10^{-3}\) and the decay rate is 0.5 per 100000 steps.
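The sketch below wires up the encoder networks, the trainable \(\mathbf{\hat{L}}\) and \(\mathbf{\hat{\Lambda}}^{a}\), and the Adam schedule quoted above in PyTorch; the activation function, the stand-in loss on random inputs, and the shortened loop are placeholders for the kernel-matching loss (6).

```python
import torch
import torch.nn as nn

K_FEATURES, N_MEMORY = 10, 5            # K state-dependent features, n non-Markovian features

def mlp(in_dim, out_dim, width=10, depth=4):
    """4-layer fully connected network with 10 neurons per hidden layer, as quoted above."""
    layers, d = [], in_dim
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.Tanh()]           # Tanh is an assumed activation
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# One (f, h) encoder pair per non-Markovian feature.
f_nets = nn.ModuleList([mlp(1, K_FEATURES) for _ in range(N_MEMORY)])
h_nets = nn.ModuleList([mlp(K_FEATURES * K_FEATURES, K_FEATURES + 1) for _ in range(N_MEMORY)])
hat_L = nn.Parameter(torch.eye(N_MEMORY))                    # lower-triangular factor of hat_Lambda
hat_La = nn.Parameter(torch.zeros(N_MEMORY, N_MEMORY))       # anti-symmetric part of hat_Lambda

params = list(f_nets.parameters()) + list(h_nets.parameters()) + [hat_L, hat_La]
opt = torch.optim.Adam(params, lr=1e-3)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=100_000, gamma=0.5)   # 0.5 decay / 100000 steps

for step in range(1000):                                     # the paper trains for 100000 steps
    opt.zero_grad()
    # Stand-in loss on random inputs; in practice this is the kernel-matching loss (6)
    # evaluated on 5 sampled CG particles and their neighbors per step.
    r = torch.rand(32, 1)
    gram = torch.rand(32, K_FEATURES * K_FEATURES)
    hat_Lambda = torch.tril(hat_L) @ torch.tril(hat_L).T + hat_La - hat_La.T
    loss = sum(f(r).square().mean() for f in f_nets) \
         + sum(h(gram).square().mean() for h in h_nets) \
         + hat_Lambda.square().sum()
    loss.backward()
    opt.step()
    sched.step()
```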